The product has changed names over the years, and all references to interMedia Text refer to what is currently known as Oracle Text.
Index by release:
interMedia integrates all of the features and functions of the former Oracle ConText product with image, audio and video, and geographic location services for Web Content Management applications built with the Oracle8i Internet Platform.
interMedia Text has been completely re-architected, enhanced and tightly integrated with Oracle8i to provide up to an order of magnitude better search performance, greater ease of use and new capabilities like thematic searches. Please refer to the documentation for detailed information.
interMedia Text extends Oracle8i by indexing any text or documents stored in Oracle8i, in operating system flat files or URLs. It also enables content-based queries (e.g., find text and documents which contain this word) using familiar, standard SQL. It enables file-based text applications to use Oracle8i to manage text and documents in an integrated fashion with traditional relational information.
The easiest way to grasp what it does is to see it in action...
Here is a quick example. We'll start by creating a table to hold our documents. For now, we'll use a small varchar2 column to hold the document text. We'll also need a primary key to identify each document.
create table docs (id number primary key, text varchar2(80));
Nothing special about that. Let's populate it with two example documents:
insert into docs values (1, 'first document');
insert into docs values (2, 'second document');
commit;
Now comes the fun part -- we build a Text index:
create index doc_index on docs(text)
indextype is ctxsys.context;
This is, for the most part, familiar DDL -- the difference is the indextype clause, which instructs Oracle to build a Text index instead of a regular, b-tree index.
Once the index has been created, we can run content-based queries on our huge database of two documents using the contains function:
select id from docs
where contains(text, 'first') > 0;
This will find all rows in docs where the text column contains the word "first" (which is document 1, for those not following closely). The > 0 part is necessary to make it legal Oracle SQL, which does not support boolean return values for functions (at this time).
interMedia Text is more tightly integrated with Oracle8i. For those of you familiar with previous versions of ConText, notice that in the example above no ctxsrv was required. The result is a specialized "domain index" which works like any b-tree index. It's all transparent to the user -- Oracle8i is simply extended to support text. Unlike previous versions of Oracle ConText, there are no servers to start up, there is no query rewrite, and index creation is done through familiar SQL rather than through a custom PL/SQL interface.
Due to significant architectural changes, interMedia Text is not backward-compatible with previous versions of Oracle ConText. Existing installations and applications need to be migrated to work with Oracle8i. There is a manual which details the steps of this process. Please remember that some of those steps need to be completed on the existing system before you install Oracle8i.
By default, the Oracle interMedia installation procedure installs interMedia Text. interMedia Text can also be installed separately, if you choose a custom install.
| ?/bin/ctxsrv | This may be renamed ctxsrv8 or some such. Oracle8i still supports the ctxsrv server. However, the only valid personality mask is M. You need to run it only when you want background DML. Batch DML (sync) does not require a running server. We'll go into DML in more depth later. |
| ?/ctx/admin | Data dictionary scripts. See below for details. |
| ?/ctx/admin/defaults | Language-specific default preferences. See below, and an upcoming issue on the preference system for more details. |
| ?/ctx/data | This directory has data files needed by interMedia Text, which includes the linguistic lexicon files, the Xerox stemming files, and the Korean lexer dictionary. If you get strange internal errors in linguistics, stemming, etc. check this directory. |
| ?/ctx/bin | This directory is for user-defined filters, which we'll talk about later. This directory should also contain the ctxhx program which is used by the INSO filter. |
| ?/ctx/lib | This directory has platform-specific shared libraries and template files used by the INSO filter. It also contains the safe callout used by indexing and document services -- libctxx8.so (name will change from platform to platform -- look for "ctxx") |
interMedia Text is integrated with the Oracle Database Configuration Assistant (DBCA), so the ctxsys data dictionary should be installed with this tool. If a manual installation is required, please follow these steps.
sqlplus internal @dr0csys <password> <def_tblspc> <tmp_tblspc>
sqlplus ctxsys/<password> @dr0inst <ctxx_library>
The argument is the full path to the ctxx library, for example:
sqlplus ctxsys/<password> @dr0inst /some_directory/app/oracle/product/8.1.5/ctx/lib/libctxx8.so
On Unix, you must not use the environment symbol $ORACLE_HOME. You have to use the actual physical path.
To install the US defaults, for instance:
sqlplus ctxsys/<password> @defaults/drdefus.sql
We'll talk more about this in the upcoming preference system issue.
After these steps, interMedia Text should be installed and working.
If this database was an existing ConText site, make sure to remove text_enable from the init.ora. It is no longer used in Oracle8i, and will actually prevent Oracle8i from operating properly -- you'll get errors like "cannot find package DR_REWRITE".
Finally, ensure that the Net8 listener is running and is configured to invoke external procedures. A brief description of the process is below; complete details are in the Oracle8i Server Administrator's Guide.
extproc_connection_data =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = ipc)
(KEY = DBSID))
(CONNECT_DATA = (SID = ep_agt1)))
DBSID is the database SID. ep_agt1 can be named anything. extproc_connection_data
should not be changed.
(SID_DESC = (SID_NAME = ep_agt1)
(ORACLE_HOME = /oracle)
(ENVS = LD_LIBRARY_PATH=/oracle/ctx/lib)
(PROGRAM = extproc))
ep_agt1 matches the CONNECT_DATA SID for extproc_connection_data in the tnsnames.ora.
A quick way to test the Net8 configuration is to do:
exec ctx_output.start_log('log')
from SQL*Plus. If you get the error:
DRG-50704: Net8 listener is not running or cannot start external procedures
then things aren't set up correctly. Some of the things to check:
The first step in using interMedia Text is to create a Text index. Without a b-tree index, value queries are slower; without a Text index, contains queries are simply not possible. As we've seen, the index is created using the create index command:
create index INDEXNAME on TABLE(COLUMN) indextype is ctxsys.context
Unlike previous versions of ConText, there is no separate policy creation step. We'll talk more about this when we discuss the preference system. View indexing is not allowed in Oracle8i, consistent with regular b-tree indexes. Parallel index creation is also not supported in this first Oracle8i version.
Composite indexes are not supported -- only one column is allowed in the column list. This column must be one of the following types: CHAR, VARCHAR, VARCHAR2, LONG, LONG RAW, BLOB, CLOB, BFILE. Date, number, and nested table columns cannot be indexed. Object columns also cannot be indexed, but their attributes can be, provided they are atomic datatypes.
The table must also have a primary key constraint. This is needed mainly for identifying the documents for document services, and may be used in the future for other purposes. Composite primary keys are supported, up to 16 columns.
The issuing user does not need the ctxapp role to create an index. If the user has Oracle grants to create a b-tree index on the column, then they have sufficient permission to create a Text index.
Unlike previous versions of ConText, the issuing owner, table owner, and index owner can all be different users, just like regular b-tree indexes.
IMPORTANT: If a syntax error occurs in the create index statement, the index is still created. This is different from regular b-tree indexes; before you reissue the corrected statement, you must drop the failed index first.
If an error occurs during actual indexing (e.g. you run out of tablespace) then you can pick up where you left off (after correcting the problem, of course) using alter index:
alter index INDEXNAME rebuild parameters ('resume')
Storage clauses to the create index statement are not used. Index storage parameters are provided using the STORAGE object, which we will discuss later. Partitioning clauses are also not supported at this time, although you can manually partition the index tables if you wish, again using the STORAGE object.
Once the index is created, any export will include the index definition. At import time, imp will re-create the index by issuing the create index statement.
Oracle8i detects that this is a Text index and performs the text indexing. The index is created using a pipeline of steps - "The Indexing Pipeline".
The diagram below shows the indexing pipeline:
+-----------+ +----------+ filtered +-----------+
| | doc data | | doc text | |
| datastore |---------->| filter |----------->| sectioner |---------+
| | | | | |--+ |
+-----------+ +----------+ +-----------+ | |
^ | |
| section offsets | |
|column data +---------------------------------+ |
| | |
| V |
+==========+ +----------+ +-----------+ |
| | | | | | |
| database |<-----------| engine |<-----------| lexer |<--------+
| | index data| | tokens | | plain
+==========+ +----------+ +-----------+ text
Let's step through this pipeline, stage-by-stage. Each stage will be covered in
depth later, so they'll be described here briefly.
Start with the datastore. This stage loops over the rows of the table and reads the data out of the column. Unlike previous versions, the rows are not read in any particular order. The datastore passes out document data. Usually, this will simply be the column data, but some datastores use the column data as a pointer to the document data. The URL_DATASTORE, for instance, uses the column data as a URL, does a GET, and passes out the returned data.
The filter takes the document data and transforms it to some kind of text representation. This is needed when storing binary documents such as Word or Acrobat files. Unlike previous versions, the output of the filter does not have to be plain text -- it can be a text format such as XML or HTML.
The sectioner, new for Oracle8i, takes the output from the filter, and converts it to plain text. There are different sectioners for different text formats, including XML and HTML. Conversion to plain text includes detecting important section tags, removing "invisible" information, and reformatting the text.
The lexer takes the plain text from the sectioner and splits it into discrete tokens. We have lexers for whitespace-delimited languages, and specialized lexers for Asian languages, where segmentation is quite a bit more complex. The basic lexer also includes theme functionality to build unified text/theme indexes.
Finally, Oracle8i takes all the tokens from the lexer, the section offsets from the sectioner, and a list of low-information words called the stoplist, and builds an inverted index. An inverted index stores tokens, and the documents in which these tokens occur. For instance, our example from issue #1:
insert into docs values (1, 'first document');
insert into docs values (2, 'second document');
would produce an inverted index like this:
DOCUMENT ---> doc 1 position 2, doc 2 position 2
FIRST    ---> doc 1 position 1
SECOND   ---> doc 2 position 1
Each document is assigned an identifier called a docid, which is used in the inverted index. The document primary key isn't stored.
The time needed to create an index is a function of the amount of text being indexed. Oracle8i can create a log file which can be monitored during indexing. start_log starts the logging, and should be called before issuing the create index, in the same session:
ctx_output.start_log('mylog');
This will create a mylog file in ?/ctx/log which can be used to monitor indexing
progress. The directory can be changed by ctxsys, using the system parameter LOG_DIRECTORY.
See documentation for details.
IMPORTANT: In this first release of interMedia, ?/ctx/log is not created at install time. You may get errors like "unable to open file". Simply have the DBA create this directory.
Logging is halted when the session exits or when end_log is called:
ctx_output.end_log;
Processing errors encountered during indexing -- filter errors, file cannot be read, etc. -- do not bring indexing to a halt. Instead, the error is logged and indexing continues to the next file.
You can see these logged errors in the ctx_user_index_errors view. It will tell you the rowid of the failed document, and the error which was encountered. You need to manually empty this table when you are finished reviewing the errors:
delete from ctx_user_index_errors;
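Before clearing the view, you can list what failed and why with an ordinary query. A minimal sketch -- the column names err_timestamp, err_textkey (the rowid), and err_text are assumed from the shipped view:
select err_timestamp, err_textkey, err_text
  from ctx_user_index_errors
 order by err_timestamp;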
The preference system allows you to customize Text indexing. Each stage of the indexing pipeline is really an interface into which customized objects can be plugged in. These stages are called "classes", and they can be found in the ctx_classes view:
| DATASTORE | Reads column data and returns document data |
| FILTER | Reads document data and returns formatted document text |
| SECTION_GROUP | Reads formatted document text and returns section offsets and plain document text |
| LEXER | Reads plain document text and breaks it into tokens |
| WORDLIST | Contains query expansion defaults |
| STOPLIST | Contains a list of low-information words |
| STORAGE | Index table storage parameters |
Each class has one or more "objects" which are like behavioral templates. The DATASTORE class, for instance, has the following objects:
| DIRECT_DATASTORE | Passes column data directly as document data |
| DETAIL_DATASTORE | Concatenates multiple rows of a detail table to construct document data |
| FILE_DATASTORE | Interprets column data as a filename. Reads file and passes file contents as document data |
| URL_DATASTORE | Interprets column data as a URL. Performs a GET and passes return as document data |
| USER_DATASTORE | Invokes a stored procedure to synthesize document data |
Objects can be found in the ctx_objects view.
But objects are only templates, and cannot be used directly in an index. Instead, you create a "preference" from one of these object templates, customize it by setting "attributes", then use the preferences to create the index. Let's take a closer look:
You create a preference using ctx_ddl.create_preference, specifying an object to use as template:
ctx_ddl.create_preference('mypref', 'FILE_DATASTORE');
This creates the preference mypref, using the FILE_DATASTORE object. If you are
familiar with previous versions, the object names have been changed, and there
is no comment argument.
This is done in PL/SQL, so surround it with "begin" and "end", or use "exec" if you are in SQL*Plus. Also, in order to create preferences, the issuing owner must have the ctxapp role.
Once the preference is created, we can customize it using set_attribute. For our example, let's say all our files are in /docs. We can simply set the PATH attribute to communicate this to interMedia:
ctx_ddl.set_attribute('mypref', 'PATH', '/docs');
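Put together in SQL*Plus, the two calls from this example can be wrapped in one anonymous PL/SQL block:
begin
  ctx_ddl.create_preference('mypref', 'FILE_DATASTORE');
  ctx_ddl.set_attribute('mypref', 'PATH', '/docs');
end;
/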
If you are familiar with previous versions, set_attribute now comes *after* create_preference, which allows us to do attribute-level validation, and necessitates the first argument, which is the preference name.
This attribute is a text attribute, so it is set to a text value. There are also number attributes and boolean attributes, which should be set to 'TRUE' or 'FALSE' (or 'YES'/'NO'). The attributes for each object and their types can be found in view ctx_object_attributes or in the documentation.
Some attributes have a list of allowed values. The allowed values can be found in the view ctx_object_attribute_lov. Unlike previous versions, LOV attributes can be set using a descriptive "tag" rather than a numeric value.
Preferences are used for all classes except the SECTION_GROUP and STOPLIST classes. These classes use specialized objects and have their own API. We'll discuss this in the respective upcoming issues.
Once you've created your preferences, you can build all kinds of customized indexes by mixing and matching them. Because all objects of a class adhere to a common interface, any filter can be hooked up to any datastore, etc. Got a bunch of Korean Word 97 files? No problem -- hook up a FILE_DATASTORE preference, an INSO_FILTER preference, and a KOREAN_LEXER preference.
You plug in your preferences using the parameters clause of create index:
create index doc_index on docs(text)
indextype is ctxsys.context
parameters ('datastore mypref')
This tells create index to use mypref as the datastore. Each of the classes has
a parameters keyword to name a preference. The keyword is the same as the class
name, except for SECTION_GROUP, whose keyword is the phrase "section group". You
can specify multiple classes by simply tacking the keyword-preference pairs on
the end of the string:
parameters('datastore mypref filter myfilter')
Any user can use any preference. To specify a preference in another user's schema, simply add the owner name in the parameters string:
parameters('datastore kenny.mypref')
Unlike previous versions, the index makes a copy of the preference settings, which means that after use in an index, the preference can be modified or dropped without affecting the index. Note that only the preference owner can modify or drop it.
In the example above, only the datastore class is set. Since preferences were not specified for the other classes in the parameters string, they get their values from the default system.
The default system uses system parameters. System parameters are name- value pairs which apply to the interMedia Text installation as a whole -- kind of our version of init.ora. They can be seen in the view ctx_parameters.
The values for the parameters used by the default system are preference names. For instance, in our example we have not specified a lexer preference. interMedia gets the value of the system parameter DEFAULT_LEXER, and uses the preference named there. This allows the administrator to set the default behavior for the system.
The set of parameters used by the default system are:
| DEFAULT_DATASTORE | The default datastore preference. At install time (the default default) it is set to CTXSYS.DEFAULT_DATASTORE, a preference which uses the DIRECT_DATASTORE object. |
| DEFAULT_FILTER_BINARY | This is the filter preference to use when indexing binary datatypes such as RAW, LONG RAW, or BLOB. At install time it is set to CTXSYS.INSO_FILTER, which uses the INSO_FILTER object. |
| DEFAULT_FILTER_FILE | This is the filter preference to use when indexing files, either BFILE or the FILE_DATASTORE. At install time it is set to CTXSYS.INSO_FILTER. |
| DEFAULT_FILTER_TEXT | This is the filter preference to use when indexing text datatypes such as CHAR, VARCHAR2, or CLOB. At install time it is set to CTXSYS.NULL_FILTER, which does no filtering. |
| DEFAULT_SECTION_HTML | This is the section group to use when the filter class uses the INSO filter, or the datastore uses the URL_DATASTORE. The INSO filter filters binary files to HTML (see filters). At install time it is set to CTXSYS.HTML_SECTION_GROUP, which merely converts the HTML to plain text. |
| DEFAULT_SECTION_TEXT | This is the section group to use in all other cases. At install time it is set to CTXSYS.NULL_SECTION_GROUP, which does no sectioning. |
| DEFAULT_LEXER | This is the default lexer preference. At install time it is set to CTXSYS.DEFAULT_LEXER. The definition of that preference is language-specific. See below for more details. |
| DEFAULT_WORDLIST | This is the default wordlist preference. At install time it is set to CTXSYS.DEFAULT_WORDLIST. The definition of that preference is language-specific. See below for more details. |
| DEFAULT_STOPLIST | This is the default stoplist. At install time it is set to CTXSYS.DEFAULT_STOPLIST. The contents of that stoplist are language-specific. See below for more details. |
| DEFAULT_STORAGE | This is the default storage preference. At install time it is set to CTXSYS.DEFAULT_STORAGE, which has no settings. |
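To see which defaults are in effect on a given instance, you can query the ctx_parameters view -- a sketch, assuming the par_name and par_value column names:
select par_name, par_value
  from ctx_parameters
 where par_name like 'DEFAULT%'
 order by par_name;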
The LEXER, WORDLIST, and STOPLIST classes are the most affected by the language of the documents. Every installation gets DEFAULT_LEXER, DEFAULT_WORDLIST, and DEFAULT_STOPLIST preferences, but the definition of these preferences will depend on the language chosen at install time.
?/ctx/admin/defaults holds language-specific scripts for each supported language. Based on the language chosen, DBCA runs the matching language-specific script, which creates the default preferences most appropriate for that language.
The result is that a German instance gets a DEFAULT_LEXER which is case- sensitive and does German decompounding, a DEFAULT_STOPLIST which has common German words, and a DEFAULT_WORDLIST with German stemming. A Japanese instance gets different defaults, more tailored to Japanese.
Here is a list of views useful for browsing the interMedia Text data dictionary:
| ctx_classes | Available classes |
| ctx_objects | Available objects for each class |
| ctx_object_attributes | Attribute names and types for each object |
| ctx_object_attribute_lov | Allowed values for each LOV object attribute |
| ctx_preferences | Preferences |
| ctx_preference_values | Attribute values of each preference |
| ctx_user_index_objects | Objects used by the user's indexes |
| ctx_user_index_values | Attribute values for the user's indexes |
The datastore class is responsible for reading the column data from the database, getting the document data based on the column data, and passing that up to the filter.
It is important to keep in mind that some form of the document data must be in a table. Datastore objects which access external resources such as files and web pages still require a table of pointers to those resources.
Datastore preferences are plugged into indexes using the datastore keyword:
create index
...
parameters ('datastore mydstore');
The datastore class has five objects.
The direct datastore is the simplest case -- it assumes that document data is stored in the indexed column. It is so simple it has no attributes to customize. There should be no reason to create a preference based on the DIRECT_DATASTORE object -- CTXSYS.DEFAULT_DATASTORE can be used for any index.
IMPORTANT: If you are indexing BFILEs, make sure the user ctxsys has READ permission on the BFILE directory.
The file datastore reads the column data as a filename. It opens the file and returns the contents as the document data. The indexed column cannot be LONG or LOB types. The files must be accessible from the database host machine.
This object has one attribute: PATH. PATH is optional, and specifies the directory where the documents are stored.
IMPORTANT: If PATH is specified, the column data must be simple file names; if not specified, the column data must be full path names. You cannot mix and match -- for instance, with PATH of /A, and a column of B/C.TXT it will NOT find /A/B/C.TXT.
On some platforms, PATH can specify multiple directories. If you do this, make sure that your documents have unique names -- if a document C.TXT is in directories /A and /B, and /A is in the PATH before /B, only /A/C.TXT will be indexed.
There is not much of a difference between this object and BFILEs; it's a matter of choice. Like BFILEs, when the file changes, the row will not be automatically reindexed. You should update the column value to itself to trigger a reindex.
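For example, to force a reindex of a single row (table, column, and index names here are hypothetical), update the column to itself and then sync the index as described in the DML section below:
update mydocs set myfile = myfile where id = 42;
commit;
alter index myidx rebuild online parameters ('sync');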
The URL datastore reads the column data as a URL. It performs a GET and returns the contents as the document data. The URLs for the documents must be in the table -- the URL datastore does not crawl. The indexed column cannot be LONG or LOB types.
http, ftp, and file protocols are supported. The URL datastore also has multi-threaded read (on most platforms) to maximize throughput. It is very customizable, and has several attributes:
| TIMEOUT | Read request timeout in seconds |
| MAXTHREADS | Maximum number of threads to use for multithreaded read |
| URLSIZE | Maximum length of a URL |
| MAXURLS | Maximum number of URLs to read at one time |
| MAXDOCSIZE | Maximum length of document |
| HTTP_PROXY | HTTP proxy |
| FTP_PROXY | FTP proxy |
| NO_PROXY | Domain for proxy exclusion |
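A sketch of a URL datastore preference using a few of these attributes -- the preference name, proxy host, and values are illustrative:
begin
  ctx_ddl.create_preference('myurl', 'URL_DATASTORE');
  ctx_ddl.set_attribute('myurl', 'TIMEOUT', '30');
  ctx_ddl.set_attribute('myurl', 'HTTP_PROXY', 'www-proxy.example.com');
  ctx_ddl.set_attribute('myurl', 'NO_PROXY', 'example.com');
end;
/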
Like the file datastore, if the web page changes, then you should manually update the column value to itself to trigger the reindex.
Many legacy applications store text line-by-line, in multiple rows of a detail table. The detail datastore constructs documents by concatenating the text of these lines into a single, coherent document.
The detail datastore must be told the specifics of the detail table:
| DETAIL_TABLE | Name of the detail table |
| DETAIL_KEY | The foreign key column(s) in the detail table |
| DETAIL_LINENO | The sequence column in the detail table |
| DETAIL_TEXT | The text column in the detail table |
For instance, let's say the master and detail tables look like this:
the_master
| ID | TITLE |
| 1 | Grate Expectations |

my_detail
| M_ID | SEQ | LINETEXT |
| 1 | 1 | It was the best of times |
| 1 | 2 | it was the worst of times. |

"That's G-R-A-T-E Expectations, also by Edmund Wells."
The attributes of the preference should be set like this:
| DETAIL_TABLE | my_detail |
| DETAIL_KEY | m_id |
| DETAIL_LINENO | seq |
| DETAIL_TEXT | linetext |
There is one final attribute: BINARY. By default, the detail datastore treats each row as a separate line, and, when concatenating the contents, will automatically stick a newline at the end of the text for each row. BINARY set to TRUE suppresses this. In our example, we should set BINARY to FALSE or simply leave it unset.
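Putting the settings above into a preference -- named my_detail here only so it matches the parameters string in the create index below -- might look like this:
begin
  ctx_ddl.create_preference('my_detail', 'DETAIL_DATASTORE');
  ctx_ddl.set_attribute('my_detail', 'DETAIL_TABLE',  'my_detail');
  ctx_ddl.set_attribute('my_detail', 'DETAIL_KEY',    'm_id');
  ctx_ddl.set_attribute('my_detail', 'DETAIL_LINENO', 'seq');
  ctx_ddl.set_attribute('my_detail', 'DETAIL_TEXT',   'linetext');
end;
/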
The index itself should be built on the master table:
create index myidx on master(somecolumn)
indextype is ctxsys.context
parameters ('datastore my_detail')
The master table must have a primary key column, just like any other table indexed
by interMedia Text. This primary key column is used to find the
corresponding detail rows, where detail.fk = master.pk.
The indexed column can be any column of allowable type -- the contents are not used by interMedia. If you can afford the space, you can add a dummy char(1) column called "text" or "body", to make the queries more readable:
select author, title from master
where contains(text, 'best & worst') > 0;
If detail rows are added, removed, or changed without changing the master text column, you should manually update the text column to itself to trigger a reindex.
For those of you familiar with previous versions of ConText, this is similar to the Master Detail New datastore. The old Master Detail datastore is no longer supported.
The user datastore is new for Oracle8i. You write a stored procedure which, given a rowid, synthesizes a document. interMedia calls this procedure whenever document data is required. Let's look at an example. Say we have a table like this:
articles
id number
author varchar2(80)
title varchar2(120)
text clob
and we want to automatically have author and title be part of the indexed document
text. We can write a stored procedure following the user datastore interface:
create procedure myproc(rid in rowid, tlob in out NOCOPY clob) is
offset number := 1;
begin
for c1 in (select author, title, text from articles
where rowid = rid)
loop
append_varchar_to_lob(tlob, c1.title, offset);
append_varchar_to_lob(tlob, 'by '||c1.author, offset);
dbms_lob.append(tlob, c1.text);
end loop;
end;
This procedure takes in a rowid and a temporary clob locator, and concatenates all the articles columns into the temp clob. This assumes a helper procedure append_varchar_to_lob has been written.
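The helper itself is not shown here; a minimal sketch, assuming it simply writes the string plus a newline at the current offset and advances that offset, could be:
create or replace procedure append_varchar_to_lob(
  tlob   in out nocopy clob,
  str    in     varchar2,
  offset in out number
) is
  buf varchar2(32767);
begin
  buf := str || chr(10);                           -- terminate each piece with a newline
  dbms_lob.write(tlob, length(buf), offset, buf);  -- write at the current offset
  offset := offset + length(buf);                  -- advance for the next call
end;
/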
Note the use of the NOCOPY in the procedure signature. This is very important to achieve best performance and to reduce TEMP space usage.
To ensure that the DBA has signed off on the code, only ctxsys-owned stored procedures are allowed for the user datastore. Thus, we need to do something like this as ctxsys:
create procedure s_myproc(rid in rowid, tlob in out NOCOPY clob) is
begin
appowner.myproc(rid, tlob);
end;
And, we need to make sure that the index owner can execute the stub procedure,
so:
grant execute on s_myproc to appowner
Now, back as the app owner, we create the preference, setting the PROCEDURE attribute to the name of the ctxsys stub procedure:
ctx_ddl.create_preference('myud', 'user_datastore');
ctx_ddl.set_attribute('myud', 'procedure', 's_myproc');
When we then create the index on articles(text) using this preference, interMedia
indexing will see author and title in the document text.
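For completeness, the create index statement would look something like this (the index name is illustrative):
create index art_idx on articles(text)
  indextype is ctxsys.context
  parameters ('datastore myud');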
The user datastore can be used for any on-the-fly document synthesis, including more complex master-detail relationships, nested tables, text preprocessing, or multicolumn concatenation, like the example.
There are four constraints on the procedures used in user datastores:
When using the USER_DATASTORE, you may run into OCI_INVALID_HANDLE errors. This is bug 881904, and is fixed in patch release 8.1.5.1.
The filter class takes the document data provided by the datastore class, and filters it to readable text, passing it to the sectioner. With Oracle8i, this does not have to be plain text -- it can be a text format such as HTML.
Filter preferences are plugged into indexes using the filter keyword:
create index
...
parameters ('filter myfilter');
The filter class has four objects.
The null filter is used when the document contents are not in a binary format. It simply passes text from the datastore to the sectioner. The default CTXSYS.NULL_FILTER preference can be used in any index to employ the null filter.
The charset filter is new for Oracle8i. It converts documents from a foreign character set to the database character set. This is most useful for Japanese customers, who have to deal with two widely-used and incompatible character sets (and one infrequently-used incompatible character set).
The charset filter has one attribute, CHARSET, which takes the NLS name of the source character set. A list of NLS charsets can be found in the Oracle National Language Support Guide. The destination character set is always the database character set, so does not need to be specified.
Additionally, you can specify JAAUTO, which is a custom setting for Japanese character set auto-detection. Oracle8i will automatically detect Shift-JIS, JIS7, or EUC for each document and convert it to the database charset if needed.
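A sketch of a charset filter preference set up for that auto-detection (the preference name is illustrative):
begin
  ctx_ddl.create_preference('jafilter', 'CHARSET_FILTER');
  ctx_ddl.set_attribute('jafilter', 'CHARSET', 'JAAUTO');
end;
/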
The user filter is a method for plugging in your own filters. You write a filtering program, place it in ?/ctx/bin, and the indexing engine will invoke it to filter each document. Here's an example -- we'll create an uppercase filter which will uppercase every word.
We start by creating a program to do the filtering -- in this case, we'll write a perl script:
#!/usr/local/bin/perl
# this perl script converts a file to uppercase
open(IN, $ARGV[0]);
open(OUT, ">".$ARGV[1]);
while (<IN>) {
  tr/a-z/A-Z/;
  print OUT;
}
close(IN);
close(OUT);
User filter programs like this take two arguments. The first argument is the name of the input file. The second argument is the name of the output file. Our filter reads the contents of the input file, filters it, and writes the output to the output file.
The program (called upcase.pl) is placed in ?/ctx/bin. Ensure that it's executable by the oracle operating-system user. Now the preference is created:
ctx_ddl.create_preference('mypref', 'USER_FILTER');
ctx_ddl.set_attribute('mypref', 'EXECUTABLE', 'upcase.pl');
When this preference is used in an index, the indexing engine will invoke the
user filter program for each document when the text is required.
The INSO filter automatically recognizes and filters over a hundred different formats, including Word 97 and Acrobat. The full list can be found in the documentation.
The INSO filter does not have any attributes at this time, so the CTXSYS.INSO_FILTER preference can be used in any index needing filtering.
IMPORTANT: This filter outputs HTML, not plain text. Make sure you employ the HTML section group or all these tags will be indexed. The default system will employ the HTML section group when the INSO filter is detected.
The INSO filter uses code from Inso, "the world leader in filtering and viewer technology," and Adobe. This code is not integrated directly; instead, it follows the user filter architecture. The user filter executable for the INSO filter is ctxhx.
ctxhx requires shared libraries and other files (.flt and .tpt files) located in ?/ctx/lib. The installer should copy the correct platform-specific files to this directory. Additionally, ensure that the external procedure agent has ?/ctx/lib in the LD_LIBRARY_PATH environment variable (PATH on NT). This can be done using the ENVS section in the listener.ora. On NT you may need to have ORACLE_HOME set in the ENVS section, as well. See the Installation section above for details.
If you are encountering problems using this filter, it can be run by hand. First get a formatted binary file. Let's call it testfile.doc. You do:
ctxhx testfile.doc testfile.txt
This should create the file testfile.txt with the HTML representation. The error messages you get, if any, should help you determine what's wrong.
The INSO filter is supported only on Solaris, HP-UX, AIX, and NT. Other platforms are not able to use this filter at this time.
Some of you familiar with previous versions of ConText may be wondering about the HTML filter. In 8i, this code has been moved to the new sectioner class, which allows any filter to spit out HTML if it so desires.
The section group class is new for Oracle8i, and incorporates functionality that was part of the wordlist and filter classes in ConText. It takes a text format, such as XML or HTML, as input. It is unique in having two outputs -- the section boundaries, which go to the engine, and the plaintext, which goes to the lexer.
Section groups are not created using create_preference -- they have their own API in ctx_ddl:
ctx_ddl.create_section_group('mygroup','html_section_group');
The first argument is the name of the new section group. The second argument is
the section group type. This specifies the input text format, and tells the sectioner
the rules for detecting section boundaries and transforming to plaintext. Each
section group type is covered briefly below.
An empty section group can transform formatted text to plaintext, but will not index any section boundaries. You must first tell it which sections to index by adding sections to the group.
Sections have three important attributes: TAG, NAME, and TYPE. TAG tells the section group how to recognize the section. Because the section group already knows the format, you do not need to specify start and end tags; a section in an HTML section group with TAG of B knows that the section starts with <b> and ends with </b>, for instance. Tags are unique across the sections of a section group.
NAME is how you refer to the section in queries. You may want to name the B tag BOLD, for readability. Multiple tags can be mapped to the same name -- they are then treated as instances of the same section. H1, H2, H3 can all be treated as instances of the HEADLINE section. We recommend avoiding non- alphanumeric characters -- such as underscore -- in section names. Using these characters will force you to escape the names in queries.
TYPE is totally new for 8i. There are three different types of sections: ZONE, SPECIAL, and FIELD.
If a ZONE section repeats, each instance is treated separately in query semantics. See Examples, below. ZONE sections can enclose other sections, including themselves, and can be enclosed by other sections. ZONE sections are added like this:
ctx_ddl.add_zone_section('groupname', 'sectionname', 'tag');
SPECIAL sections are so named because they are not recognized by tags. There are
two special sections in Oracle8i -- SENTENCE and PARAGRAPH -- both recognized
by punctuation in the lexer. They index the start and end of sentences and paragraphs,
respectively. You add special sections like this:
ctx_ddl.add_special_section('groupname', 'sectionname');
No tag argument is needed. There are only two allowed values for section name:
SENTENCE, and PARAGRAPH.
ZONE and SPECIAL sections index section start and end word offsets, but do nothing to the words in the section. FIELD sections, on the other hand, extract their contents and index them separately from other words in the document. WITHIN queries are run on this separate, smaller index. This makes field section query faster than zone section query -- up to three times as fast in tests we've conducted -- especially when the section tags occur in every document.
This speed comes at the cost of flexibility. FIELD sections are meant for non-repeating, non-overlapping sections. If a field section repeats, it is treated as a continuation of the section, not a distinct instance. If a field section is overlapped by itself or by another field section, it is implicitly closed at the point where the other section starts. Also, there is a maximum of 64 field sections in any section group. This is 64 section names, not tags. Remember that you can map multiple tags to the same section name. You add field sections like this:
ctx_ddl.add_field_section('groupname', 'sectionname', 'tag');
Let's work through a couple of examples to illustrate the different types of sections and their impact on query semantics. We'll use the following document as an example:
<A>rat</A><A>ox</A>
<B>tiger rabbit</B>
<C>dragon<C>snake</C></C>
This is an XML-style markup, but without a DTD, so we will use the basic section
group type:
ctx_ddl.create_section_group('mygroup','basic_section_group');
Let's start with ZONE sections:
ctx_ddl.add_zone_section('mygroup', 'asec', 'a');
ctx_ddl.add_zone_section('mygroup', 'bsec', 'b');
ctx_ddl.add_zone_section('mygroup', 'csec', 'c');
This tells the section group to recognize A tags as the section ASEC, B tags as
the section BSEC, etc. To do section queries, we use the WITHIN operator. Remember
to use the section name and not the tag:
contains(text, 'rat within asec') > 0
This finds the document.
contains(text, 'tiger within asec') > 0
This does not find the document. Although it has "tiger", it does not occur in the ASEC. If instead of the original setup we had mapped A and B to the same section name:
ctx_ddl.add_zone_section('mygroup', 'asec', 'a');
ctx_ddl.add_zone_section('mygroup', 'asec', 'b');
Then both:
contains(text, 'rat within asec') > 0
contains(text, 'tiger within asec') > 0
would find the document, because both A and B are treated as ASEC.
An important facet of ZONE section query semantics is that each instance is treated distinctly. That is, a query like this:
contains(text, '(tiger and rabbit) within bsec') > 0
finds the document, but a query like this:
contains(text, '(rat and ox) within asec') > 0
does not find the document. Although the document has "rat" and has "ox", and "rat" is in ASEC, and "ox" is in ASEC, "rat" and "ox" are not within the SAME ASEC. Note that
contains(text, '(dragon and snake) within csec') > 0
finds the document, since they are both in the outer CSEC, even though the inner CSEC contains only "snake".
Special sections follow the same semantics, so a query like this:
contains(text, '(jumbo and shrimp) within sentence') > 0
is looking for documents with "jumbo" and "shrimp" in the SAME sentence.
Now let's look at field sections. What if in setup we made ASEC a field section instead of a zone section:
ctx_ddl.add_field_section('mygroup', 'asec', 'a');
Unlike zone sections, each instance of a field section is considered a continuation
of the section. The query
contains(text, '(rat and ox) within asec') > 0
finds the document, although it didn't when ASEC was a zone section. The field section simply stitches the two instances together. Field sections work best on sections like BSEC, which does not repeat like ASEC nor nest like CSEC.
One last subtlety with field sections. If ASEC is a zone section, then
contains(text, 'rat') > 0
finds the document -- it contains the word rat. However, remember that field sections work by extracting the document contents and indexing them separately. Thus, if ASEC were a field section, this query would not find the document. "rat" is in ASEC, which is separate from the document contents.
You can, however, change this by making the field section visible. There is an optional boolean fourth argument to add_field_section. If this boolean argument is set to true:
ctx_ddl.add_field_section('mygroup', 'asec', 'a', TRUE);
then the field section is visible, and the section contents are visible to non-within
queries, just like zone sections. This is accomplished by double- indexing the
word -- once as part of the extracted section, and once as part of the body, so
this option has index space cost.
Section groups are plugged into indexes using the section group keyword:
create index
...
parameters ('section group mygroup');
Unlike previous versions, you do not need to set an attribute in the wordlist
object. You also no longer need to set STARTJOINS and ENDJOINS in the lexer. However,
the WHITESPACE, NEWLINE, and PUNCTUATIONS settings in the basic lexer affect sentence
and paragraph boundary recognition, which then impact special sections. There
are five section group types.
The null section group is used when there is no section information to be extracted -- it simply passes the text through to the lexer. Only special sections can be added to the null section group.
The basic section group is for simple XML-style markup without going full-bore on the DTD stuff. It can be used in custom situations where full XML is unnecessary. Tags are in the form <tagname>, with a matching </tagname> end tag.
The HTML section group is for HTML. Good name, huh? It knows the rules for HTML, including ISO Latin-1 entity translation and HTML to plaintext conversion. It knows HTML 4.0 tags, and has code to deal with unknown tags. SCRIPT and STYLE contents, and comments, are removed. Contents of the TITLE section are preserved. META tag information indexing is a known customer requirement for a future version.
For those familiar with previous versions, there is no need for KEEP_TAG anymore -- this is automatic when you add the tag to the section group.
The XML section group is for XML. It does no processing to plaintext except for entity translation and tag removal. It can handle non-system, non-parameter entities in the internal DTD. It cannot access external DTDs in this version.
The news section group handles newsgroup-style postings -- RFC-1036 in particular. This format is used in newsgroups and is similar to the one used for e-mail. Note that while RFC-1036 defines the set of allowable header fields, Oracle does not require this -- any header fields can be used.
Messages in this format consist of a header section, a blank line, and a body section. The news section group recognizes <beginning of line>TAG: as the start of sections, and <end of line> as the end of the section. It stops looking for this pattern when the blank line is encountered.
Translation to plaintext consists of removing all header lines which are not important to section searching. A message like this:
From: me
To: you
X-ref: 5!oowj

hello! How are you!
with only the from and to tags in the section group, will look like this to the indexing engine:
<from section start>me<from section end>
<to section start>you<to section end>
hello! How are you!
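A sketch of a section group that would produce the behavior above -- the group name is illustrative, and field sections are used here, though zone sections would also work:
begin
  ctx_ddl.create_section_group('mynews', 'news_section_group');
  ctx_ddl.add_field_section('mynews', 'from', 'from');
  ctx_ddl.add_field_section('mynews', 'to',   'to');
end;
/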
The lexer class gets plaintext from the sectioner and splits it into words.
Lexer preferences are plugged into indexes using the lexer keyword:
create index
...
parameters ('lexer mylex');
The lexer class has five objects.
The basic lexer can be used for most European languages. It is programmed with default rules for splitting whitespace-delimited text into words. You can modify its parsing behavior to some extent with the lexical attributes.
Ordinarily, non-alphanumeric characters end a word. SQL*Plus is lexed as "SQL" and "Plus". The JOINS attributes allow you to modify this behavior by declaring sets of non-alphabetic characters to treat as valid word letters.
PRINTJOINS are non-alphanumeric characters which are treated as valid characters, and become part of the word. Declaring PRINTJOINS of "*" lets SQL*Plus be lexed as "SQL*Plus".
SKIPJOINS are like PRINTJOINS, but they do not become part of the word. Declaring SKIPJOINS of "*" lets SQL*Plus be lexed as "SQLPlus".
STARTJOINS are like PRINTJOINS, but they only come at the start of the word. If they occur in the middle of a word, a new word is started. Having "*" as STARTJOINS lets SQL*Plus be lexed as "SQL" and "*Plus". Multiple STARTJOINS are allowed at the start of a word.
ENDJOINS are like STARTJOINS, but for the end of the word. They implicitly end a word, too. So, "*" as ENDJOINS lexes SQL*Plus as "SQL*" and "Plus". Multiple ENDJOINS are allowed at the end of a word. STARTJOINS and ENDJOINS used to be important for section searching in previous versions, but with the new sectioner class, they are no longer needed for that purpose.
Each of these four JOINS attributes is a set of characters. Setting PRINTJOINS to "_%*" means that "_", "%", and "*" are all PRINTJOINS.
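As a concrete sketch, a lexer preference carrying those PRINTJOINS could be created like this (the name mylex matches the parameters example above):
begin
  ctx_ddl.create_preference('mylex', 'BASIC_LEXER');
  ctx_ddl.set_attribute('mylex', 'PRINTJOINS', '_%*');
end;
/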
NUMJOIN is a single character which is the numeric join character. NUMGROUP is the single numeric group character. They are defaulted based on your NLS_LANG setting. For US, NUMJOIN is "." and NUMGROUP is ",".
Finally, CONTINUATION is the set of characters which indicate line continuation. These and the following newline are removed from the text. The default value is hyphen and backslash.
Then there are three attributes important for sentence/paragraph indexing. PUNCTUATIONS are the set of punctuation marks (?!. by default). WHITESPACE is a set of additional whitespace characters other than space and tab, which you get for free and cannot change. NEWLINE can be set to NEWLINE or CARRIAGE_RETURN. This controls whether lines are ended by \n or \r\n.
A sentence is then recognized as a PUNCTUATION followed by one or more WHITESPACE. A paragraph is a PUNCTUATION followed by a NEWLINE. There are other patterns which are variations on this, but that's the basic idea.
The third set of basic lexer attributes controls term normalization -- the process of converting words to a standard form. BASE_LETTER is a boolean attribute which, if set to YES, will convert accented characters to their unaccented forms.
By default, each word is uppercased during indexing so that queries are case-independent. MIXED_CASE is a boolean, which, if set to YES, does not do this. This makes indexing and queries case-sensitive.
ALTERNATE_SPELLING is an attribute which, if set, uses letter transformation rules for common alternate spellings. It can be set to GERMAN, DANISH, or SWEDISH. In GERMAN mode, for instance, A-umlaut gets transformed to AE. Since the lexer is used at both indexing and query time, query for a word containing A-umlaut will find the word spelled with A-umlaut or AE.
Finally, COMPOSITE controls word decompounding. In German, for instance, multiple words are often combined into a single string. This makes lexing difficult, because the words are no longer whitespace-delimited. If COMPOSITE is set, then the lexer breaks up these compound words, which allows interMedia to index and query as usual. COMPOSITE can be set to GERMAN or DUTCH. Unset or DEFAULT means no decompounding.
The last set of basic lexer attributes control text and theme indexing. New for 8i, the basic lexer can generate and index themes for each document, in addition to splitting it up into words. This merges the functionality found in the separate THEME LEXER and BASIC LEXER of previous versions.
INDEX_TEXT is a boolean which controls word indexing. INDEX_THEMES controls theme indexing. Setting INDEX_THEMES to NO makes it a text-only index. Setting INDEX_TEXT to NO makes it a theme-only index. They cannot, of course, be both NO. Indexing themes takes longer and uses a bit more index space, but improves the efficacy of ABOUT queries. We'll talk more about this when we discuss ABOUT query.
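For instance, a sketch of a combined text/theme lexer preference (the preference name is illustrative):
begin
  ctx_ddl.create_preference('mythemelex', 'BASIC_LEXER');
  ctx_ddl.set_attribute('mythemelex', 'INDEX_THEMES', 'YES');
  ctx_ddl.set_attribute('mythemelex', 'INDEX_TEXT',   'YES');
end;
/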
The CTXSYS.DEFAULT_LEXER preference has theme indexing ON for the English language-specific scripts.
Japanese is not a whitespace-delimited language (except in textbooks) so it is very difficult for computers to pick out individual words. The Japanese V-gram lexer gets around this problem by indexing overlapping clusters of characters. The Japanese word ABC might be decomposed into AB and BC, for instance. Query terms are also decomposed this way. Thus, the contains query is not looking for words, but for patterns of characters.
The Japanese lexer does not have any attributes in this version.
Chinese is also not whitespace-delimited, so a similar solution has been applied for Chinese, as well. The Chinese v-gram lexer also does not have any attributes.
New for Oracle8i is a segmenting Chinese lexer, which can actually recognize some Chinese words as whole entities. The rest of the text is still broken into v-grams, but this should be more space-efficient and have faster query than the pure v-gram method. The Chinese segmentation lexer does not have any attributes.
Korean is whitespace-delimited, but has problems with verbs, which can have thousands of different forms. Our Korean lexer is a lexicon-driven engine (using a third-party 3-soft lexicon) which simply eliminates verbs from indexing. New for Oracle8i is the ability to eliminate adverbs and adjectives, do various form conversions and perform morphological and segmentation decomposition.
Not every word in a document is worth indexing. Linguistic lubricants like prepositions and conjunctions are important for language understanding, but are not very useful for information retrieval -- they are very common and so convey very little information about the document by their presence. Most of the time, it's not worth the space to index these words.
The stoplist class holds a list of these words, called stop words. During indexing, the engine consults the stoplist and filters out the stop words. The stoplist class does not use the create_preference API. Instead, it has its own API:
ctx_ddl.create_stoplist('mylist');
Words are added using add_stopword:
ctx_ddl.add_stopword('mylist', 'the');
Here the article "THE" has been added to the stoplist. You can see the words in
a stoplist using the ctx_stopwords view. A stoplist can have a maximum of 4095
stopwords. Each can be up to 64 bytes in length.
Stopwords are case-sensitive in a stoplist, but if the index is not case- sensitive, the stopwords will not be case-sensitive. So, you could add "THE" and "the" to a stoplist, but if the index is not case-sensitive, there would be no difference -- you'd just be wasting a stopword slot.
Stoplists are plugged into indexes using the stoplist keyword:
create index
...
parameters ('stoplist mylist');
New for Oracle8i, stopwords can be added to the index without re-indexing:
alter index myidx rebuild parameters ('add stopword AND');
This adds "AND" as a new stopword after indexing. Remember that the index makes
its own copy of everything, so this does not affect the stoplist used to create
the index.
The default stoplist, CTXSYS.DEFAULT_STOPLIST, is language-specific, and default stoplists are provided for most European languages.
Stoplists can also hold two other kinds of objects: stop themes and stop classes. Stop classes are classes of words, rather than individual words. The only stop class available in Oracle8i is NUMBERS, which stops all numbers from being indexed. Alphanumeric words, such as 123.456.789 and abc231 are still indexed. This behavior depends on the lexer to recognize numbers.
Stop themes are used only during theme indexing. The entries added as stop themes are themed, and these themes AND THEIR CHILDREN are not indexed. For instance, say you enter "biology" as a stop theme. This will prevent themes like "Physical Sciences:Biology:Genetics" and "Physical Sciences: Biology:Botany" from being indexed. This is just an example, and does not actually reflect the knowledge base.
Stop themes are used only during theme indexing, so adding "biology" as a stop theme will not stop the word "biology" or biology-associated words from being indexed.
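For example, the NUMBERS stop class and the "biology" stop theme discussed above could be added to the earlier stoplist like this:
begin
  ctx_ddl.add_stopclass('mylist', 'NUMBERS');
  ctx_ddl.add_stoptheme('mylist', 'biology');
end;
/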
The wordlist class is an odd class because it has no effect on indexing. Instead, it holds fuzzy and stem expansion settings used at query time.
Wordlist preferences are plugged into indexes using the wordlist keyword:
create index
...
parameters ('wordlist mylist');
There is only one wordlist object: BASIC_WORDLIST, with the following attributes:
| STEMMER | This attribute is set to the default stemming type. Stemming is an expansion
of a word to different forms. The stem expansion of GO might include GO,
GOING, WENT, and GONE. The rules for this vary from language to language.
(See expansion query operators).
STEMMER can be set to one of the following: ENGLISH (inflectional), (English) DERIVATIONAL, DUTCH, FRENCH, GERMAN, ITALIAN, SPANISH, and NULL, which means no stemming. The CTXSYS.DEFAULT_WORDLIST setting is language-specific. |
| FUZZY_MATCH | This attribute is set to the default type of fuzzy match. Fuzzy match
is an expansion technique designed to find words close in form, such as
mistyped or mis-OCR'ed versions.
FUZZY_MATCH can be set to one of the following: GENERIC, JAPANESE_VGRAM, KOREAN, CHINESE_VGRAM, ENGLISH, DUTCH, FRENCH, GERMAN, ITALIAN, SPANISH, and OCR. The CTXSYS.DEFAULT_WORDLIST setting is language-specific. |
| FUZZY_SCORE | This is the default score floor for fuzzy expansions. New for Oracle8i is the ability to limit the fuzzy expansion to the best matches. The fuzzy score is a measure of how close the expanded word is to the query word -- higher is better. Setting fuzzy score means that fuzzy expansions below this score will not be produced. You can set this from 0 to 80. |
| FUZZY_NUMRESULTS | This is the default maximum number of fuzzy expansions. Setting fuzzy numresults limits the fuzzy expansion to a certain number of the best matching words. You can set this up to 5000. |
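A sketch of a wordlist preference using these attributes -- the name and values are illustrative, chosen within the documented ranges:
begin
  ctx_ddl.create_preference('mywordlist', 'BASIC_WORDLIST');
  ctx_ddl.set_attribute('mywordlist', 'STEMMER',          'ENGLISH');
  ctx_ddl.set_attribute('mywordlist', 'FUZZY_MATCH',      'GENERIC');
  ctx_ddl.set_attribute('mywordlist', 'FUZZY_SCORE',      '60');
  ctx_ddl.set_attribute('mywordlist', 'FUZZY_NUMRESULTS', '100');
end;
/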
The storage class holds storage parameters for the index tables. The Text index is made up of five oracle objects, so the single storage clause in the create index statement is insufficient to specify storage. Storage preferences are plugged into indexes using the storage keyword:
create index
...
parameters ('storage mystore');
There is only one storage object: BASIC_STORAGE, with the following attributes:
| I_TABLE_CLAUSE | This attribute is the storage parameters for the I table, which is the
main index table. There will be at least one row per unique word in your
index -- probably multiple rows per word. Each row has a BLOB which tries
to stay inline, making the row a bit large. The value is simply tacked on
to the end of the create table statement, so you could set it like this:
ctx_ddl.set_attribute('mystore', 'I_TABLE_CLAUSE',
'tablespace foo storage (initial 1K)');
|
| K_TABLE_CLAUSE | This attribute is the same, but for the K table, which is an IOT for rowid to docid conversion. There will be one row per row in your base table. Each row consists of a number and a rowid. |
| R_TABLE_CLAUSE | This attribute is the same, but for the R table, which is a BLOB table for docid to rowid conversion. There will be only a few rows in this table, and will have a large BLOB in each row. |
| N_TABLE_CLAUSE | This attribute is the same, but for the N table, which has invalid docids waiting for garbage collection. The number of rows in this table will depend on the frequencies of update and delete DML, and of garbage collection optimization. Each row is short, holding two numbers. |
| I_INDEX_CLAUSE | This attribute is the same, but for the unique index on the I table. |
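A sketch of a storage preference setting these clauses -- the tablespace name is illustrative:
begin
  ctx_ddl.create_preference('mystore', 'BASIC_STORAGE');
  ctx_ddl.set_attribute('mystore', 'I_TABLE_CLAUSE', 'tablespace foo storage (initial 1K)');
  ctx_ddl.set_attribute('mystore', 'K_TABLE_CLAUSE', 'tablespace foo');
  ctx_ddl.set_attribute('mystore', 'R_TABLE_CLAUSE', 'tablespace foo storage (initial 1K)');
  ctx_ddl.set_attribute('mystore', 'N_TABLE_CLAUSE', 'tablespace foo');
  ctx_ddl.set_attribute('mystore', 'I_INDEX_CLAUSE', 'tablespace foo');
end;
/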
Those of you familiar with previous versions may be wondering where the index memory setting is. For new users, interMedia builds an inverted index, so documents are not added to the index one at a time, like a b-tree index. Instead, information from multiple documents is buffered up and periodically flushed to the index during indexing. In previous versions, you could set the size of this buffer in the analogue of the storage class. Setting this buffer higher reduces index fragmentation but uses more memory.
In Oracle8i, index memory can be set directly in the parameters string. This allows you to set index memory on a per-operation basis. The keyword used is MEMORY:
create index
...
parameters ('memory 2M');
Here a paltry 2 megabytes of memory is used for indexing. If not specified, the system parameter DEFAULT_INDEX_MEMORY is consulted for the systemwide default index memory. At install time, this is 12M.
The amount of memory cannot exceed system parameter MAX_INDEX_MEMORY. This allows the system administrator to disallow outrageous amounts of index memory.
The MEMORY keyword is also allowed in the alter index parameter string for batch DML (background DML uses DEFAULT_INDEX_MEMORY). This allows you to have different memory sizes for create index and DML, something which is not possible in previous versions. You simply specify a different memory amount in the create index parameter string than in the alter index parameter string.
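For instance, a hedged sketch (the index, table, and sizes are made up) of using a larger memory for the initial build and a smaller one for batch DML:
create index myindex on mytab(text)
  indextype is ctxsys.context
  parameters ('memory 20M');

alter index myindex rebuild online
  parameters ('sync memory 4M');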
Most text search engines have asynchronous document updates and additions -- changes to the index are usually done in batches, after and separately from the document changes. This stems from the nature of inverted indexing. interMedia handles the three DML operations as follows:
| INSERT | The document rowid is placed into a queue, dr$pending, for later addition to the text index. Queries before this DML is processed will not find the new document contents. |
| UPDATE | The old document contents are invalidated immediately, and the document rowid is placed into the dr$pending queue for later reindexing. Queries before this DML is processed will not find the old contents, but neither will they find the new contents. |
| DELETE | The old document contents are invalidated immediately. No further DML processing is required -- queries will no longer find the document. |
Thus, interMedia has synchronous invalidation but asynchronous addition. This extends to transactions, as well:
delete from mytab where contains(text, 'delword') > 0;
select * from mytab where contains(text, 'delword') > 0;
  --> you get no rows returned
rollback;
select * from mytab where contains(text, 'delword') > 0;
  --> you get the rows returned
and:
insert into mytab values (1, 'insword');
select * from mytab where contains(text, 'insword') > 0;
  --> you do not get the row
commit;
select * from mytab where contains(text, 'insword') > 0;
  --> you still do not get the row
The synchronous invalidation is new to this version. Also new is that the DML notification is done through integrated kernel code, and not through triggers as in previous versions.
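To convince yourself that the insert really is just queued, you can count the pending queue (a sketch; it assumes you are connected as ctxsys or otherwise have select privilege on the internal dr$pending table):
-- one row per queued insert/update awaiting indexing
select count(*) from ctxsys.dr$pending;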
So how do you process your inserts and updates when they are sitting in the queue? interMedia provides two methods: sync and background.
Sync is the manual way -- you control when and how it's invoked. Each invocation processes all pending inserts and updates for an index:
alter index myindex rebuild online parameters ('sync')
The ONLINE keyword is very important. Without this, during the sync operation
queries are blocked and DML on the base table fails. Make sure to use it for sync
operations.
Background DML requires you to start a ctxsrv background daemon in the operating system:
ctxsrv -user ctxsys/<password>
Once started, the daemon polls the dr$pending queue for DML, and automatically processes additions as they become available.
You can use either or both -- it's largely a matter of your application requirements. Background DML scans for DML constantly. This leads to new additions being indexed automatically and quickly. However, it also tends to process documents in smaller batches, which increases index fragmentation. Sync DML is done at the user's request, so usually the batches are larger and thus there's less index fragmentation. A happy compromise could possibly be reached by invoking sync through dbms_sql in a dbms_job which runs at longer intervals.
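Here is a minimal sketch of that compromise -- a nightly dbms_job which issues the sync through dbms_sql. The procedure name, index name, and interval are made up; note that parsing DDL through dbms_sql executes it immediately:
create or replace procedure sync_myindex is
  c integer;
begin
  c := dbms_sql.open_cursor;
  -- for DDL, the parse call itself executes the statement
  dbms_sql.parse(c,
    'alter index myindex rebuild online parameters (''sync'')',
    dbms_sql.native);
  dbms_sql.close_cursor(c);
end;

declare
  job number;
begin
  -- run the sync once a day
  dbms_job.submit(job, 'sync_myindex;', sysdate, 'sysdate + 1');
  commit;
end;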
The Text index is an inverted index, so it essentially consists of words and, for each word, a list of the documents which contain it. When new documents are added, these lists have to be extended. Instead of appending to existing lists, interMedia creates more rows for the word. This can lead to many rows with sub-optimal, short lists -- called "index fragmentation". For instance, say we have the following in our index:
DOG DOC 1 DOC 3 DOC 5
Now a new document with "dog" comes along. Simply add it to the index as a new row:
DOG DOC 1 DOC 3 DOC 5
DOG DOC 7
Subsequent DML will also create new rows:
DOG DOC 1 DOC 3 DOC 5
DOG DOC 7
DOG DOC 9
DOG DOC 11
This is why background DML generally produces more index fragmentation than spot
invocation of sync: processing DML less frequently in larger batches allows newly
added rows to have longer lists, which reduces the number of rows in the index
table.
Keep in mind that a certain amount of fragmentation is expected in 8i. interMedia will try to keep index table rows below 4000 bytes long, to inline the LOB and to speed up index row merges used during phrase search, AND, section search, etc. However, frequent DML will probably result in much more fragmentation than this.
During an update or delete, the old document information must be removed from the index so that queries will no longer hit it. However, because the inverted index consists of words and rows of occurrences, this would entail finding all words in the old version of the document (which may not be available), and removing an occurrence from the lists of those rows in the index table. This is too much work to do synchronously, yet in the integrated model deletes must take effect synchronously.
Instead, interMedia marks the old document contents as invalid and does not touch the index. The mark signals queries to remove the document from any query result sets. However, this leaves the old information in the index, taking up space in the index tables.
To solve these potential problems in the index, you run optimization. Optimization has two modes: FAST and FULL. FAST optimization targets fragmentation only:
alter index myindex rebuild online parameters ('optimize fast');
The ONLINE keyword is important -- without it, queries are blocked and DML fails
with an error. FAST optimization runs through the whole index table and glues
fragmented rows together, to a maximum of 4000 bytes for a single row. This reduces
the number of rows in the index table.
FULL optimization does both defragmentation and garbage collection, which removes the old information left over after document invalidation:
alter index myindex rebuild online parameters ('optimize full');
It loops through each row of the index table, but, unlike fast optimization, cracks
open each row's list, removing old document information. It also glues separate
rows together where possible.
Because garbage collection is more involved and time-consuming than defragmentation, FULL optimization does not have to be run on the whole table at one time. Instead, you can run it for a limited period of time:
... parameters ('optimize full maxtime 5')
which means run for a maximum of 5 minutes. At the end of 5 minutes, it will stop.
The next time you run optimization, it will pick up where it left off. This lets
you do a little bit of optimization each night during off times, and ensures that
it is not loading the system when the off time ends.
Garbage collection is needed only after document invalidation, which happens only for delete and updates. If your document table is append-only, meaning that documents are inserted and never changed or removed, then it should suffice to run only defragmentation.
Unlike previous versions, optimization (either mode) does not block background DML processing. However, because Oracle prevents alter index operations from running concurrently, sync invocations will be prevented.
At this point all aspects of building an index and keeping it up to date have been discussed. However, an index is only a means to an end: content-based queries.
IMPORTANT: Unlike a value query, which is slower without a b-tree index, contains queries are completely disallowed without a Text index.
You do a query using the contains operator:
select id
from texttab
where contains(textcol, 'query') > 0
The first argument to contains is the name of the text column. The second is the
text query, which is limited to 2000 bytes in length. It returns a number, which
indicates the strength of match. 0 means not a match, so it is constrained to
> 0. The result is that this query finds the id's of all rows in texttab where
the textcol has the word "query".
Of course, now you're wondering how to use that return value in the select list and order by clause. An operator is provided just for that purpose:
select id, score(1)
from texttab
where contains(textcol, 'query', 1) > 0
order by score(1) desc
The "1" in the score and contains operators is called the "contains label" and
it's a number which links the score in the select list to the contains in the
where clause -- it can be any number, so long as it matches.
Score can be between 0 and 100, but the top-scoring document in the query will not necessarily have a score of 100 -- scoring is relative, not absolute. This means that scores are not comparable across indexes, or even across different queries on the same index. Score for each document is computed using the standard Salton formula:
3f(1+log(N/n))
where f is the frequency of the search term in the document, N is the total number of rows in the table, and n is the number of rows which contain the search term. This is converted into an integer in the range 0 - 100. For example, assuming a base-10 log, a term occurring 3 times in a document, in a table of 10,000 rows of which 100 contain the term, gives a raw score of 3*3*(1+log(100)) = 27 before conversion.
Query operators like AND and OR operate on the scores of their operands.
Unlike the transparent query rewrite of previous versions, you can include a contains clause in any SQL statement, using any tool which can issue Oracle SQL.
There is no longer a limitation on the length of the contains SQL statement, although there is a limit on the query term length.
For those of you familiar with previous versions, there are no two-step queries or text cursors (in-memory queries). If you absolutely must have result tables, use insert as select. Instead of text cursors, use real PL/SQL cursors. To sort by score, use an order by clause with a FIRST_ROWS hint. The hint will avoid an Oracle SORT ORDER BY.
However, interMedia still supports count_hits. count_hits is a fast way to get the hit count for a text query, in PL/SQL:
num_hits := ctx_query.count_hits('indexname','query',TRUE);
The first argument is the name of the index. This should be a string, so use quotes if you are using a literal value. The second argument is the query string, and the third is a boolean toggle called "exact". If exact is TRUE, then the result is accurate. If exact is FALSE, then the result is only an upper bound. The actual count could be lower. exact FALSE is faster, because it doesn't screen out invalid documents (see issues on DML and optimization for more details).
You could also do "select count(*)" with a contains clause, and this is recommended when you have structured conditions, but count_hits is faster for text-only queries.
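For instance, a sketch of the mixed case (the mydocs table and its author column are made up for illustration):
-- structured condition plus text condition: use an ordinary count(*)
select count(*)
  from mydocs
 where contains(text, 'dog & cat') > 0
   and author = 'SMITH';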
The simplest query is a single word:
contains(text, 'dog') > 0
You can escape your word with curlies:
contains(text, '{dog}') > 0
This is useful when you want to query on a reserved word or your word has special
characters which are query operators. The list of reserved words and the query
operator characters are in the documentation.
You can query on a phrase just as easily:
contains(text, 'dog my cat') > 0
Unlike internet search engines, this searches for the phrase "dog my cat", not just any document which has these words.
If your phrase contains a stopword:
contains(text, 'dog the cat') > 0
then the stopword ("the" in this case) is treated as a wildcard, matching any word. This query would match all the following documents:
dog the cat
dog my cat
dog cat cat
but not this document:
dog cat frog
In other words, the stopword must match something. Stopwords alone disappear from the query. The query:
contains(text, 'the & cat') > 0
is reduced to
contains(text, 'cat') > 0
Queries on just stopwords will return no rows.
Boolean operators are used to combine the results of subqueries using boolean algebra. interMedia has three boolean operators:
| AND (&) OR (|) |
AND and OR are the basic boolean operators. However, the operands return numeric scores, rather than boolean values, so we've implemented AND as the minimum of its operand scores, and OR as the maximum of its operand scores. This results in expected behavior -- if any operand of AND scores 0 (the word is not in the document), then the AND returns 0 (the document is not returned). You can use either word or symbol: 'dog and cat' is the same as 'dog & cat'. |
| NOT (~) | NOT is not the unary negator. Instead, it is "AND NOT". 'dog NOT cat' returns all documents which have "dog" except those which also have "cat". The score returned is the score of the left child. |
You can use parentheses for subqueries and precedence, as in:
(dog my cat AND fido) OR horse
Without parentheses, the operators follow a precedence hierarchy which can be found in the documentation. Since AND is higher precedence than OR, the parens are not actually needed in the example above. However, they are needed in this query:
dog my cat AND (fido OR horse)
in order to override the precedence hierarchy.
The scoring operators operate on the scores of their operands. There are four scoring operators:
| WEIGHT (*) | The weighting operator multiplies a search term's score to make it more
or less important in the query. The multiplier can be from .1 to 10:
contains(text, '(dog*2) AND cat') > 0
This query looks for documents with "dog" and "cat", but the score for "dog" is multiplied by 2. This makes documents with "dog" more likely to have a higher score in the result set than those with "cat". WEIGHT does not have a word equivalent -- you must use the character *. |
| THRESHOLD (>) | The threshold operator filters out documents below a particular score.
'dog > 10' will return only those documents containing "dog" which score
higher than 10. You can also do it this way:
contains(text, 'dog') > 10
but threshold is implemented at a lower level, so it will be faster. THRESHOLD does not have a word equivalent -- you must use >. |
| MINUS (-) | The minus operator takes the left operand score and subtracts the right operand score. 'dog - cat' looks for documents with "dog", but those documents which also have "cat" will probably score lower than those without. This is not the same as NOT, which completely filters out the document if the right operand search term is present. |
| ACCUM (,) | Accumulate groups several words or phrases, and scores higher when more of its operands are hit. 'dog, cat, frog' will score any document with all three words higher than any document with two of the three, which will score higher than any document with only one of the three. This is changed from previous versions, where ACCUM merely added the scores of its operands. You can either use the symbol , or the word ACCUM. |
For those of you familiar with previous versions, the MAXDOC (:) and FIRST/NEXT (#) operators have been removed. You can use cursor fetching loops to get the same effect.
Word expansion operators expand a word to find similar forms of the word in the document set. For Oracle8i, the limit on maximum number of expansions has been removed. However, the more words a term expands to, the slower the query will run. There are five word expansion operators:
| WILDCARD (% _) | You can use the SQL like wildcards % and _ to find words matching a pattern. 'do%', for instance, finds all documents with words beginning with do, such as dog, door, etc. This is done via a like query on the index table, so a wildcard query like '%do%' will be slow, as a full table scan of the index table is required. |
| FUZZY (?) | Fuzzy query finds words with similar form, using a proprietary algorithm. This is useful for finding mis-typed or mis-OCR'd words. The fuzzy operator is ?, as in '?dog'. You can change the rules used for fuzzy and limit the expansion using the wordlist class. FUZZY has no word equivalent -- you must use the ? operator. |
| STEM ($) | Stem query finds words with the same stem form, using integrated Xerox linguistic code. This is useful for finding "GOING" and "WENT" from "GO", for instance. You can change the settings for stem expansion using the wordlist class preferences. STEM has no word equivalent -- you must use the $ operator, as in '$go'. |
| SOUNDEX (!) | Soundex query finds words which sound alike. This no longer uses a separate table, but is instead a specialized fuzzy expansion. SOUNDEX has no word equivalent -- you must use the ! operator, as in '!dog'. |
| EQUIV (=) | Equivalence is for manually inputting the various forms of a word, as in 'dog = cat', searching for documents with 'dog' or 'cat', treating them as different forms of the same word. Equiv only works on simple words -- you cannot equiv phrases. You can either use the symbol = or the word EQUIV. |
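As a quick illustrative sketch (the mydocs table is made up), several of these expansions can be combined with the boolean operators in a single query:
-- fuzzy (?), wildcard (%), and stem ($) expansions combined with AND and OR
select id, score(1)
  from mydocs
 where contains(text, '(?dogg or do%) and $run', 1) > 0
 order by score(1) desc;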
Other than phrases, interMedia has one proximity operator -- NEAR, which finds documents where the input phrases are close to each other. The closer the words are to each other, the higher the score. NEAR has two forms. The shorthand form uses the ; character:
dog ; cat ; boat
This query finds documents which have dog, cat, and boat, and scores the document higher the closer they are to each other.
The function form has options to set range and directionality:
NEAR((dog,boat), 10, TRUE)
The first argument to NEAR is the list of words. In the example, we are looking for dog and boat. The second argument is the maximum allowed span. In the example, we are constraining it to within a 10-word span. That means there cannot be more than 10 words between "dog" and "boat". This range can be up to 100. It is optional -- if omitted, then it defaults to 100. The third argument is directionality. If TRUE, as in the example, then the words have to appear in the order specified in the list. If FALSE, the words can appear in any order, as long as they fit within the specified word span. This can be omitted, in which case FALSE is the default.
The within operator limits a subquery to a particular section:
dog within title
will find all documents with "dog", but only if it occurs in the "title" section. The name of the section is used rather than the tag. The "title" section has to have been defined in your section group -- you cannot specify arbitrary tags. If your section name has non-alphanumeric characters, you should enclose the name in curlies:
dog within {my_title}
which is why it isn't recommended to use non-alphanumeric characters in section
names. Within is high on the precedence hierarchy, so if you have a subquery,
it is safer to use parentheses to group it:
(dog and cat) within title
If you don't use parentheses, it will search for documents with "dog" anywhere and with "cat" in the title section:
dog and cat within title == dog and (cat within title)
which is a different query.
If the section is a zone or special section, the subquery is constrained to a particular instance of the section:
(dog and cat) within sentence
means dog and cat within the SAME sentence. This is different from:
(dog within sentence) and (cat within sentence)
for instance -- which means dog in any sentence, and cat in any sentence. Section types and query semantics were covered in issue #7.
Within does not nest, so a query like '(dog within sentence) within paragraph' is not allowed.
ABOUT is an advanced query operator. We take the input to ABOUT and do our best to increase precision and recall. If your index has a theme component (INDEX_THEMES is YES for BASIC_LEXER) then ABOUT does a theme query. It determines the theme of the input term, and finds documents with the same theme. This allows a query like this:
contains(text, 'about(canines)')
to find relevant documents even if they don't have the word "canines" in them -- they might have "dogs" or "wolves" instead, for instance. It will also filter out documents which have the word "canines" but are not significantly about canines, such as an article on dentistry, for instance.
The input to ABOUT does not have to be a single term. It can be an unstructured phrase:
about(japanese banking investments in indonesia)
This phrase will most likely have multiple themes -- banking, Japan, etc. The documents retrieved by this query will have all those themes. This allows you to use plainly-worded descriptive phrases instead of rigid query syntax and still find relevant documents.
If your index does not have a theme component, then ABOUT will do a query expansion to increase recall -- it will split the input into words, stem each, then string it back together using accumulate:
about(go home now)
is transformed to
$go,$home,$now
Note that the purpose of ABOUT is to have interMedia do what it can to find the best-fitting documents. It does not have a formulaic definition like the other operators.
ABOUT can be combined with other operators:
contains(text, 'about(dogs) and $cat')
for instance. This allows you to do a combined theme and text search, which was not as easy to do in previous versions.
The thesaurus operators allow you to expand a query term using a thesaurus. The SYN operator, for instance, expands a word into synonyms:
SYN(dog) == {dog} | {mutt} | {canine}
The expansion uses a previously loaded thesaurus. This is usually a thesaurus
of your own creation -- interMedia does not install a thesaurus by default,
although a sample thesaurus is included in ?/ctx/sample/thes. You can see the
expansion by using the ctx_thes package functions:
declare
output varchar2(80);
begin
output := ctx_thes.syn('dog');
dbms_output.put_line(output);
end;
Thesaurus operators take simple thesaurus terms as input -- they cannot nest or
take expansions as input:
SYN($dog) <-- BAD
SYN(BT(dog)) <-- BAD
SQE's are Stored Query Expressions -- a macro operation. You store a query string and give it a short name, using the ctx_query PL/SQL package:
ctx_query.store_sqe('cats', 'cat = feline = kitty');
The first argument is the name of the SQE -- "cats". The second argument is the
stored expression. When I use it in a query:
contains(text, 'SQE(cats)')
interMedia expands it to the stored query expression, making this equivalent to:
contains(text, 'cat = feline = kitty')
SQE text can even refer to other SQE's:
ctx_query.store_sqe('animals', 'frog | hog | sqe(cats)');
If you set up a circular reference, however, you will get an error at query time.
There are fundamental changes to SQE in 8i. First, stored query expressions are
no longer tied to an index -- you can use any SQE in any query on any index. You
can even use other users' SQE's by prepending the owner name:
SQE(bob.cats)
Second, SQE partial results are no longer stored. Instead, SQE works as a macro operation, replacing lexically during query parsing. Finally, there is no longer the notion of session-level SQE -- store_sqe does a commit.
For those of you familiar with previous versions, the PL/SQL operator (@) has been removed. This is due mainly to stricter transactional restrictions resulting from tighter integration with the kernel.
interMedia provides a program to load formatted, file-based thesauri into ctxsys-owned tables in the database. These thesauri can then be used at query time to expand query terms for improved recall. interMedia also provides PL/SQL functions for browsing the thesaurus while it's in the database, and a program for exporting the thesaurus back out to a file.
These thesauri are usually customer-defined. interMedia provides a sample thesaurus in ?/ctx/sample/thes, but this is not imported by default.
The first step in thesaurus import is construction of the thesaurus file. This is a list of terms in the thesaurus, each followed by their relationship to other terms. For instance:
canine
  SYN mutt
  BT mammal
  NT wolf
lizard
  BT reptile
The first term in this thesaurus is "canine". The second line declares that "canine"
has a synonym, "mutt". The following lines declare that the broader term of "canine"
is "mammal", and a narrower term of "canine" is "wolf".
Terms, such as "canine" or "lizard" in our example above, must be on their own line, and must be at the start of the line -- no whitespace is allowed between the beginning of the line and the term.
Relationship lines, such as "SYN mutt", must have whitespace before them. The relationship word (SYN) is not case-sensitive. You cannot have more than one relationship word / word pair on a single line -- "SYN mutt SYN hound", for instance, is not allowed. Relationships also cannot be on the same line as the term, so "canine SYN mutt" on one line is not allowed either.
These rules are required for proper parsing of the thesaurus file. The more relaxed rules in previous versions were not able to correctly parse certain cases, especially when terms had common substrings with relationship words, such as "NT 4.0". The thesaurus file supports the following set of relationship words:
| SYN | Synonymity. A SYN B means that A and B mean the same thing. A SYN B implies B SYN A. A SYN B and B SYN C implies A SYN C. |
| UF | Use For. Same as SYN. |
| PT | Preferred term. A PT B means that B is the preferred term for A. There can be only one preferred term for any thesaurus term. If multiple PT lines are seen, latter PT lines override previous lines. |
| USE | Same as PT. |
| SEE | Same as PT. |
| BT | Broader Term. A BT B means that B is a broader term of A. A BT B implies
B NT A (A is a narrower term of B). Terms can have multiple broader terms.
Circular references should not be introduced.
You can specify whole hierarchies under a single term using BTn:
dog
BT1 canine
BT2 mammal
BT3 vertebrate
BT4 animal
This isn't declaring multiple broader terms of dog, but instead the whole
hierarchy above dog -- canine is the first-level BT of dog, then mammal
is the second-level BT -- meaning that mammal is the BT of canine. BTn goes
up to BT16. |
| BTP | Broader Term Partitive. A BTP B means that B is a broader term of A in the sense that A is a part of B. Hard drive BTP Computer, for instance -- Computer is a broader term of hard drive in the sense that a hard drive is a part of a computer (NCA notwithstanding). A BTP B implies B NTP A, but does not imply A BT B -- the partitive hierarchy is separate from the normal hierarchy. This relationship is sometimes called "meronymity", although not by anyone with a life. |
| BTG | Broader Term Generic. A BTG B means that B is a broader term of A in the sense that B is a generic name for A. Rats BTG Rodents, for instance. A BTG B implies B NTG A, but does not imply A BT B. This relationship is sometimes called "hypernymity". |
| BTI | Broader Term Instance. A BTI B means that B is a broader term of A in the sense that A is an instance of B. Cinderella BTI Fairy Tale, for instance. A BTI B implies B NTI A, but does not imply A BT B. I don't know any fancy name for this relationship. |
| NT | Narrower Term. The opposite of BT. You can use NTn, just like BTn. |
| NTP | Narrower Term Partitive. The opposite of BTP. |
| NTG | Narrower Term Generic. The opposite of BTG. |
| NTI | Narrower Term Instance. The opposite of BTI. |
| RT | Related Term. A RT B means that A and B are associated, but are not synonyms and cannot be arranged into a hierarchy. A RT B implies B RT A. A RT B and B RT C does not imply A RT C. |
| <lang>: | Foreign language translation. <lang> can be any label you wish to
use for the language, 10 characters or less. This applies to the immediately
preceding term, even if it is a relationship word:
cat
french: chat
RT hat
french: chapeau
Here "chapeau" is the french translation for "hat", not "cat". |
| SN | Scope note. You can attach a little note to a term as a comment. This
can be up to 2000 characters. If you need multiple lines, simply repeat
the SN keyword -- the text is concatenated:
cat
SN I had a cat once. He was brown and
SN grey and well-behaved.
|
Our thesaurus functionality also supports homographic disambiguation using parentheses:
mercury (god)
BT Greek mythology
mercury (element)
SYN quicksilver
as well as compound terms using the + sign:
thermometer
SYN temperature + instrument
Once you have the file, you need to import it into the interMedia data dictionary. From the OS command line:
ctxload -user ctxsys/ctxsys -thes -name mythes -file mythes.txt
The login user can be any CTXAPP user -- it does not have to be ctxsys. -thes is the toggle for thesaurus import. mythes is the name for the new thesaurus, and mythes.txt is the file to import.
If a thesaurus of this name already exists, you will get an error and you must drop the existing thesaurus before importing this one. Thesaurus names are database-global.
Optionally, you can specify the thescase argument to make the thesaurus terms case-sensitive:
ctxload ... -thescase Y
The default is N, meaning that the thesaurus is not case-sensitive. Mercury (the god) and mercury (the element) are the same term if found in the thesaurus file.
Once the thesaurus is loaded, any user can use it in any query on any index using the thesaurus operators. These mimic the relationship names for the most part. For instance:
contains(text, 'SYN(dog, mythes)') > 0
will search for documents which contain any of the synonyms of "dog" defined in the "mythes" thesaurus. The actual expansion uses OR like this:
{canine}|{doggie}|{mutt}|{dog}
Homographic disambiguators are not included in the expansion:
mercury (element)
SYN quicksilver
SYN(quicksilver) === {quicksilver}|{mercury}
Compound phrases are treated as a conjunctive:
thermometer
SYN temperature + instrument
SYN(thermometer) == {thermometer}|({temperature}&{instrument})
Here are the specific thesaurus functions:
| SYN( term [, thesname] ) | The input term is always part of the output; if no synonyms are found then term is the expansion. thesname is the name of the thesaurus to use. If omitted, thesaurus DEFAULT is consulted. There is no DEFAULT thesaurus installed automatically -- it is up to the user to load a thesaurus named DEFAULT. |
| PT( term [, thesname] ) | Preferred term. Only the preferred term is returned in the expansion. |
| BT / NT / BTP / NTP / BTG / NTG / BTI / NTI ( term [, level [, thesname]] ) | Broader/Narrower terms of all types. The optional second argument is
a numeric scope indicating how many levels of hierarchy you want. A BT with
a level of 2, for instance, would search for the term, all broader terms
of the term, and all broader terms of those terms.
NOTE: the syntax demands that if you want to specify thesname
then you MUST specify level too, e.g. BT(dog, 2, mythes) |
| TT( term [, thesname] ) | Top term. Travel up the BT hierarchy to the root term. Only this root term is searched. |
| RT( term [,thesname] ) | Related term. Search for term and all related terms. |
| TR( term [, lang [, thesname]] ) | Foreign language translation of the term. Lang argument should match label used in the input thesaurus. If omitted all foreign language translations will be searched. Note that the foreign language translation must have been entered in the input thesaurus -- we don't do automatic translation. |
| TRSYN( term [, lang [, thesname]] ) | Foreign language translation of term synonyms. This is functionally equivalent
to taking SYN of the term, then TR of each resulting term. Like TR, if the
target language is omitted, all available translations will be searched.
NOTE: the syntax demands that if you want to specify thesname
then you MUST specify lang too, e.g. TRSYN(dog, french, mythes) |
There are numerous functions in the ctx_thes package for browsing the thesaurus through PL/SQL. For the most part, they have the same names and arguments as the query functions, but they return the expansion text. For instance:
declare
exp varchar2(4000);
begin
exp := ctx_thes.syn('dog','mythes');
end;
Here the synonyms of "dog" in the "mythes" thesaurus are returned. The expansion
is returned as a string, so exp will get the value:
{canine}|{doggie}|{mutt}|{dog}
The return can be up to 32000 characters in length.
Thesaurus export is also done through ctxload:
ctxload -thesdump -user ctxsys/ctxsys -name mythes -file mythes.txt
The exported version will be structurally similar to the imported version -- all the words will be there in the right places, but it may not be exactly the same -- in particular, it will probably be more verbose, specifying explicitly relationships which were implied in the input thesaurus.
The phrase "ISO-2788 or ISO-5964 thesaurus standard" is somewhat misleading. The computing industry considers a "standard" to be a specification of behavior or interface. These standards do not specify anything. If you are looking for a thesaurus function interface, or a standard thesaurus file format, you won't find it here. Instead, these are guidelines for thesaurus compilers -- compiler being an actual human, not a program. They give general advice on how to build and maintain a thesaurus, such as "Adverbs should not be used alone as indexing terms".
What Oracle has done is take the ideas in these guidelines and in ANSI Z39.19 -- an American version of ISO-2788 -- and use them as the basis for a specification of its own creation. Therefore interMedia can handle all the inter-word relationships mentioned in Z39.19, and even uses the same abbreviations, but the interface is Oracle's own. The interMedia file format looks a bit like "Figure 1" of ISO-2788, but Oracle imposed its own rules to make it parseable. interMedia can implement foreign language lookups like ISO-5964, but the input thesaurus looks nothing like their examples. So, it is more accurate to say that Oracle supports ISO-2788 relationships and can work with ISO-2788 compliant thesauri.
In Oracle8i, tuning Text queries is not very different from tuning regular SQL queries. This is demonstrated with an extended example, using this table of magazine articles:
create table articles (
id number primary key, -- article id
issue number, -- issue number
title varchar2(80), -- article title
published date, -- publication date
text clob -- article text
);
Assume issue has a non-unique b-tree index called art_issx, and text has a Text
index called art_textx.
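A sketch of those two index definitions (preferences omitted for brevity):
create index art_issx on articles(issue);

create index art_textx on articles(text)
  indextype is ctxsys.context;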
The first step in query tuning is analysis of the query plan. This is done using explain plan. Note that the SQL explain plan command is different from ctx_query.explain. explain plan works on a SQL query, and prints the SQL query plan executed by Oracle. ctx_query.explain works on the input to the contains function, and prints out the Text query plan executed by interMedia.
The Oracle Server Tuning manual details how to set up and run SQL explain plan, so check there if you do not know how this is done. You don't need to do anything special for interMedia queries -- thanks to tighter integration with the kernel, the optimizer now understands what the contains clause means and can generate plans for Text queries -- it's all transparent to the end user. Here is a simple query:
select title from articles -- query 1
where contains(text, 'iced lolly') > 0
This will normally generate a query plan like this:
1.1 TABLE ACCESS BY INDEX ROWID ARTICLES
2.1 DOMAIN INDEX ART_TEXTX
The DOMAIN INDEX part is our equivalent of an INDEX RANGE SCAN -- it means that
the index (the Text index, in this case) is producing a stream of rowids for the
kernel. This is called rowsource invocation, and, like an index range scan, is
a fairly efficient way of finding matching rows.
Now add score to the select list:
select score(1) score, title from articles -- query 2
where contains(text, 'iced lolly', 1) > 0
This generates the same query plan as above -- score does not appear in the query
plan. However, score does make query 2 execute a bit more slowly than query 1.
The SCORE function is called "ancillary data", and it is delivered separately
from the rowid stream. The kernel will ask for some rowids, then, in a separate
call, will give a rowid and ask for that row's score. This second call imposes
a bit of overhead, which is why query 2 is slower than query 1.
But how about a query like this:
select score(1) score, title from articles -- query 3
where contains(text, 'iced lolly', 1) > 0
order by published desc
1.1 SORT ORDER BY
2.1 TABLE ACCESS BY INDEX ROWID ARTICLES
3.1 DOMAIN INDEX ART_TEXTX
The kernel classifies the SORT ORDER BY as a "blocking operation". The presence
of a blocking operation means that Oracle8i cannot guarantee that it will
ask for scores in the same order as the produced rowid stream. It could produce
rowid ABC first, but get asked for its score last. This means that the scores
for all rows must be kept in memory, called "batch processing". If the hitlist
is large, this could use disk, and be slower than query 2 score resolution. Of
course, the order by would normally make query 3 slower anyway.
The presence of any blocking operation switches interMedia to batch processing, even something like this:
select score(1) score, title from articles -- query 4
where contains(text, 'iced lolly', 1) > 0
order by score(1) desc
Since the sort is by score, this query could in principle use incremental processing, but it still uses batch processing.
Now, query 4 is important in information retrieval, because it is very common to show only the first few top-scoring hits in the hitlist. Web search engines work this way, for instance. This case puts a premium on response time -- the time to get the first page is critical, the time to get the second page is not as important. Unfortunately, in the database world, the order by decreases response time, because all the hits have to be produced and sorted before the first row is displayed.
Because this is a common situation, interMedia has an alternative query method which improves response time for score sorting:
select /*+ FIRST_ROWS */ score(1) score, title -- query 5
from articles
where contains(text, 'iced lolly', 1) > 0
order by score(1) desc
The addition of the FIRST_ROWS hint causes the kernel to notify interMedia
of the sort by score. interMedia produces the rowid stream in score-sorted
order, and the kernel eliminates the SORT ORDER BY -- similar to a b-tree index
range scan ordered by the key prefix:
1.1 TABLE ACCESS BY INDEX ROWID ARTICLES
2.1 DOMAIN INDEX ART_TEXTX
This is called "pushing down" the sort. Also, the elimination of the SORT ORDER
BY allows us to use incremental score processing instead of batch score processing.
The payoff of FIRST_ROWS is improved response time; the cost is decreased throughput. interMedia cannot produce the hits in score-sorted order naturally. Instead, it runs the Text portion internally, and remembers the top-scoring 500 or so rows. Because the set size is small and because we keep it all in memory rather than disk, Oracle8i can sort it faster than a SORT ORDER BY. However, when it runs out of hits in this batch, it has to re-execute the query to get another batch. Thus, if you're fetching all the results, do the SORT ORDER BY; if you just want the first few hits, use the FIRST_ROWS hint.
The FIRST_ROWS hint will not just affect interMedia -- the entire query will be optimized for response time. This is not usually a problem, since you want the whole query optimized for response time or throughput. However, if you want to optimize the overall query one way and interMedia another way, you can use the DOMAIN_INDEX hints. To have interMedia score-sorting, but optimize the overall query for throughput:
select /*+ ALL_ROWS DOMAIN_INDEX_SORT */
Conversely, if you want the overall query optimized for response time, but you don't want interMedia score-sorting:
select /*+ FIRST_ROWS DOMAIN_INDEX_NO_SORT */
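For instance, a hedged sketch of that second combination, reusing query 5: the overall query is optimized for response time, but the score sort is left to the kernel rather than pushed down to the Text index:
-- response-time optimization overall, no score-sorted rowid stream from the index
select /*+ FIRST_ROWS DOMAIN_INDEX_NO_SORT */ score(1) score, title
  from articles
 where contains(text, 'iced lolly', 1) > 0
 order by score(1) desc;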
So far, the discussion has looked at rowsource invocation for contains -- Oracle8i asks for a stream of rowids which match the contains. However, there is an alternate invocation called "function invocation". In function invocation, Oracle8i produces the rowid stream some other way -- a b-tree index, a full table scan, etc. -- and then asks, for each rowid in that stream, whether the row matches the contains:
select /*+ FULL(articles) */ title -- query 6
from articles
where contains(text, 'face flannel') > 0
1.1 TABLE ACCESS FULL ARTICLES
Here a full table scan is forced, which makes Oracle8i go through the table
row by row, calling function contains for each one. Note that the contains function
invocation is not shown in the plan, just as a structured condition filter would
not be shown.
For the same number of rowids, function invocation is much slower than rowsource invocation. So why have it? Consider a query like this:
select title from articles -- query 7
where contains(text, 'face flannel') > 0
and issue = 7
Recall that issue has a non-unique b-tree index. rowsource invocation could be
used - get a rowid stream from the Text index, and check each rowid to see if
issue = 7:
1.1 TABLE ACCESS BY INDEX ROWID ARTICLES -- plan 7
2.1 DOMAIN INDEX ART_TEXTX
On the other hand, the b-tree index could be used on issue - find all rows where
issue = 7, and call function contains individually for each of those:
1.1 TABLE ACCESS BY INDEX ROWID ARTICLES -- plan 8
2.1 INDEX RANGE SCAN ART_ISSX NON-UNIQUE
The query plan is driving from the index on issue, which implies function invocation
of contains. Why do that? Function invocation is slower than rowsource for the
same number of rowids.
If the number of articles in issue 7 is very small, and the number of articles with "face flannel" in the text is very large (they were quite the rage in the 80's) then the number of rowids is not the same. It may be faster to drive from the b-tree index and use function contains on a small number of rows than to drive from the Text index and do a TABLE ACCESS for a large number of rows.
Of course, it is usually impossible to have a priori knowledge of the selectivities, which makes it impractical to use hints to specify which index to use. Thanks to integration with the cost-based optimizer, you don't have to hard-code it -- let the CBO decide which method is more efficient.
In order to choose correctly, the cost-based optimizer needs to know the selectivities of the structured and Text conditions. You provide this information by generating statistics:
analyze table articles
compute statistics
This will (eventually) generate stats for the table and all indexes, including
the Text index. You can use estimate statistics as well -- it makes no difference
with respect to the Text index.
When interMedia is asked to compute statistics, it analyzes term frequencies in the Text index. For the most common terms, we record the number of documents which contain that term. It also calculates the average number of documents per term to use for all other terms.
Given a Text query, and using these stats, it can compute an estimate of the query selectivity. Because it doesn't record exact term frequency for every term, and because a Text query can combine terms using AND, phrase, WITHIN, etc. this selectivity is only a rough estimate, not an exact number.
The CBO, confronted with a query like query 7, can then use the b-tree stats and the Text index stats and -- without hints -- can choose the best index to use to drive the query.
If your b-tree data is not uniformly distributed, then histograms for the data can help:
analyze table articles
compute statistics for columns issue size 127
See the Oracle SQL Reference and Server Tuning manuals for more information. Nothing
special is done for histograms -- they simply provide more accurate stats to the
CBO. Like any SQL query, you can use hints to override the CBO when it's wrong.
The CBO can choose a third option for query 7 -- use both indexes, and perform a rowid bitmap merge of the two results. You can force this using the INDEX_COMBINE hint:
select /*+ INDEX_COMBINE(articles art_issx art_textx) */ -- query 9
title
from articles
where contains(text, 'face flannel') > 0
and issue = 7
which produces the following plan:
1.1 TABLE ACCESS BY INDEX ROWID ARTICLES -- plan 9
2.1 BITMAP CONVERSION TO ROWIDS
3.1 BITMAP AND
4.1 BITMAP CONVERSION FROM ROWIDS
5.1 SORT ORDER BY
6.1 DOMAIN INDEX ART_TEXTX
4.2 BITMAP CONVERSION FROM ROWIDS
5.1 INDEX RANGE SCAN ART_ISSX NON-UNIQUE
This is the best performer when the result lists for the structured and contains
are roughly equal, somewhat large, and the intersection is small. In comparison,
plan 8 would be hobbled by function contains overhead, and plan 7 would require
unnecessary TABLE ACCESSes to check the structured condition. A similar plan is
used when bitmap indexes are employed.
Bitmap merge, like SORT ORDER BY, is a blocking operation, which, if you are selecting SCORE, will force batch processing instead of incremental processing.
Text queries rarely give you exactly what you want. Consider web search engines like AltaVista. Sometimes there are too many documents. Sometimes you wonder why a certain document turned up in the hitlist. Sometimes the document you wanted to find isn't in the results at all.
Similar results can occur with interMedia, so it provides two query feedback methods which allow you to refine your Text query.
Explain is the interMedia analogue of explain plan, and lets you see exactly what your query looks like. This lets you trim unintended wildcard expansions, for instance, or add parentheses to correct precedence errors.
You need a result table, just like explain plan, with the following columns, in the following order:
create table xres (
explain_id varchar2(30),
id number,
parent_id number,
operation varchar2(30),
options varchar2(30),
object_name varchar2(64),
position number
)
| explain_id | A label which distinguishes one query plan from another. |
| id | A step sequence number. This is an ascending sequence number assigned to each step which ignores hierarchy. The root step of each plan has id 1, the second step has id 2, etc. |
| parent_id | The id of the parent of this query plan step. The root step has parent_id = 0. |
| operation | The type of operation for this step. This can include WORD, AND, WITHIN, etc. |
| options | The expansion operators (fuzzy, wildcard, etc.) are transformed into equivalence operators. The options column tells you what the original expansion function was. options is also used to show arguments to NEAR. |
| object_name | The operand of the query step. For WORD operations, this is the actual word being searched. It also holds section names for SECTION operations, weight values, etc. |
| position | A sequence number to sequence children of the same parent. The first child step will have position 1, the second will have position 2, etc. |
You then run explain for your query on your index:
ctx_query.explain('myindex', 'dog and ($run spot)', 'xres');
The first argument is the name of the index. The second argument is the query
string, and the third is the name of the result table. The query is explained
and the results inserted into the plan table. These results are not committed,
which mimics the behavior of SQL explain plan. You can then print it out in explain
plan style using this SQL:
select lpad(' ',2*(level-1))||level||'.'||position||' '||
operation||' '||
decode(options, null, null, options || ' ') ||
object_name plan
from xres
start with id = 1
connect by prior id = parent_id;
This produces the following plan:
1.1 AND
2.1 WORD DOG
2.2 PHRASE
3.1 EQUIVALENCE ($) RUN
4.1 WORD RAN
4.2 WORD RUN
4.3 WORD RUNNING
4.4 WORD RUNS
3.2 WORD SPOT
The optional fourth and fifth arguments to explain, sharelevel and explain_id,
allow multiple explains to use the same result table. When sharelevel is 0, the
default, then the result table is automatically truncated before the explain is
run. When sharelevel is 1, then any rows in the result table with the same explain_id
are deleted before the explain is run, but other rows are not touched. Thus, multiple
explains can use the same result table as long as they use distinct explain_id's.
When using shared result tables, the start with and connect by clauses of the example sql have to be modified to limit them to a specific explain_id:
start with id = 1 and explain_id = 'YOURID'
connect by prior id = parent_id and explain_id = 'YOURID'
Explain lets you see exactly how your query is run, which allows you to adjust your query to refine the results. Hierarchical query feedback goes one step further and actually suggests other search terms you might want to try.
You start by creating a result table. This result table looks a bit like the explain result table, but with the addition of nested tables for suggested alternative terms:
create table fres (
feedback_id varchar2(30),
id number,
parent_id number,
operation varchar2(30),
options varchar2(30),
object_name varchar2(64),
position number,
bt_feedback ctxsys.ctx_feedback_type,
rt_feedback ctxsys.ctx_feedback_type,
nt_feedback ctxsys.ctx_feedback_type
)
nested table bt_feedback store as fres_bt
nested table rt_feedback store as fres_rt
nested table nt_feedback store as fres_nt
The bt_feedback, rt_feedback, and nt_feedback store suggested broader terms, related
terms, and narrower terms, respectively. Each is a nested table of ctx_feedback_type
objects, which stores the suggested term in the attribute "text". The other attributes
are not currently used, but are reserved for future use.
Now you run hfeedback on your query:
ctx_query.hfeedback('myindex', 'felines', 'fres');
hfeedback will then draw on the linguistics knowledge base to suggest broader,
narrower, and related terms. The results are inserted into the result table. Like
explain and SQL explain plan, these inserts are not committed implicitly. You
can select from the nested tables to get feedback:
select text from the(select nt_feedback from fres
where object_name = 'felines')
TEXT
----
cat
leopards
tiger
Be aware that when getting feedback for queries with ABOUT clauses, the input
is broken into themes, so you cannot simply select back. For instance, if your
query were "ABOUT(African safari)", there would be no row with object_name = 'African
safari' -- more likely, there would be separate rows for "Africa" and "safaris".
The feedback terms are obtained from the built-in linguistics knowledge base. If the knowledge base has been extended, feedback can also draw on these extensions.
The results will depend on the contents of your index -- only worthwhile terms are shown. "Lion" is a narrower term of feline, but my index does not have any documents with "Lion" in them, so it does not appear in the narrower term table. Note that for performance reasons, we do not screen out invalid documents, so it is possible (but unlikely) to get feedback terms which would not produce query hits.
The feedback for non-ABOUT portions of the query can contain alternate forms of words, as well -- narrower terms for "learning" might include "education", "educating", "educational", etc.
If the index is case-sensitive, then case-sensitive feedback terms are provided. For instance, narrower terms for "databases" might include both "PL/SQL" and "Pl/sql".
When the index is case-insensitive, queries are case-insensitive -- a contains query for "oracle" will pull up the same records as "Oracle". However, feedback uses the linguistic knowledge base, which is case-sensitive. So, even if your index is case-insensitive, the feedback you get depends on the case of the query -- the feedback for "oracle" will not be the same as the feedback for "Oracle".
You can also give hfeedback multiple terms in a query string:
ctx_query.hfeedback('myindex', 'felines and canines', 'fres');
hfeedback will generate query plan steps as in explain, but phrases are not broken
into individual words and expansion operators such as fuzzy, wildcard, soundex
and stem are ignored. Each word or phrase step will have bt, nt, and rt tables.
Operator steps like AND will not have anything in these tables.
Like explain, hfeedback has sharelevel and feedback_id arguments for result table sharing.
Highlighting and linguistic extraction have been combined into a unified document services package called ctx_doc. A document service is any ad hoc operation on the document text.
The filter command fetches the document contents and filters it to readable text. This allows you to see what the indexing engine sees. You first make a result table:
create table fres ( query_id number, document clob )
Now request a filter:
ctx_doc.filter('myindex', '123', 'fres');
The first argument is the
name of the index -- you can use USER.INDEX, but the issuing user must have
SELECT access to the base table. The second argument is the primary key value
of the document you wish to filter. If you have a composite primary key, use
commas to separate the key fields:
ctx_doc.filter('myindex', 'ABC,123', 'fres');
Alternatively, you can use the function ctx_doc.pkencode:
ctx_doc.filter('myindex', ctx_doc.pkencode('ABC','123'), 'fres');
The third argument to filter is the
name of the result table. interMedia will translate the primary key to
rowid, fetch the document contents by rowid, and filter to readable text. It
will insert a new row into the result table and the readable text output will
be placed in the document clob column.
There is an optional fourth argument -- query_id -- which can be any integer. This will be the value of the query_id column for the row in the result table. If you are sharing result tables across several filter requests, you can have each request use a different query_id, and distinguish the outputs by this column. filter does not pre-truncate or pre-delete rows in the result table, regardless of the setting of query_id. The default value of query_id is 0.
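For instance, a sketch tagging one filter request with query_id 5 so its output can be picked out of a shared result table later:
ctx_doc.filter('myindex', '123', 'fres', query_id => 5);

select document from fres where query_id = 5;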
Recall part of the indexing pipeline for a moment:
+-----------+ +----------+ filtered +-----------+ plain
| | doc data | | doc text | | text
| datastore |---------->| filter |----------->| sectioner |----->
| | | | | |
+-----------+ +----------+ +-----------+
There are two "readable text" outputs in this segment of the indexing chain --
the filtered text, output from the filter, and the plain text, output from the
sectioner. When using the INSO filter, for instance, the filtered text is HTML
and might have text like this:
<BODY>
<P>This is the first
<B>sentence.</B>
</BODY>
The HTML section group will transform this into plain text, removing tags and
reformatting whitespace, much like a browser might save to text file:
This is the first sentence.
By default, filter will provide the filtered text. An optional boolean parameter "plaintext" (the fifth argument) allows you to request the output from the sectioner instead:
ctx_doc.filter('myindex', '123', 'fres', plaintext=>TRUE);
The plaintext output can be used for human readers, while the filtered text output
can be used if you are sending the results to a browser. Note that if you are
using the null section group, the plain text and filter text are the same, so
the plaintext argument would have no effect.
The default value of plaintext is FALSE, meaning filter text is produced.
The highlight service takes a query string, fetches the document contents, and shows you which words in the document cause it to match the query. The result table for highlight looks like this:
create table hres (
query_id number,
offset number,
length number
)
Now request the highlight:
ctx_doc.highlight('myindex', '123', 'dog | cat', 'hres');
Like filter, the first argument is the name of the index and the second is the
document primary key. The third argument is the text query, and the fourth is
the name of the result table.
interMedia will parse the query and determine which words in the document match the query. It then creates rows in the result table, showing the character offset and length of each matching word. Let's say that our document looks like this for the highlight call above:
I have a black cat and a white dog.
"I" is character 1, per Oracle convention, so "cat" is at character 16 and has length 3. "dog" is at character 32 and has length 3. If you are using a multi-byte character set, remember that these are character offsets and lengths, not byte offsets and lengths. The hres table will look like this:
QUERY_ID OFFSET LENGTH
---------- ---------- ----------
0 16 3
0 32 3
A single row in the result table can cover multiple words in the source document
-- the example simply uses single words.
Highlight also accepts query_id and plaintext arguments, working just like they do in the filter service. Offset and length in the result table can apply to either the plain text or filter text version of the document; this is controlled by the plaintext argument.
You can use the filter service output in conjunction with the highlight output to display the matching words:
select dbms_lob.substr(fres.document, hres.length, hres.offset)
from fres, hres
If your documents are in a varchar or clob you can do something similar with the
base table instead of the filter output.
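For instance, a sketch against the articles table from the tuning example, assuming its text clob is what was highlighted and that 123 is the id of the document in question:
-- pull each matching word straight from the base table clob
select dbms_lob.substr(a.text, h.length, h.offset) matched_word
  from articles a, hres h
 where a.id = 123;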
Markup takes the highlight service one step further, and produces a text version of the document with the matching words marked up. The markup result table is the same format as the filter result table:
create table mres (
query_id number,
document clob
)
Now call markup:
ctx_doc.markup('myindex', '123', 'cat | dog', 'mres');
And the result is:
I have a black <<<cat>>> and a white <<<dog>>>.
Like the other highlight services, markup has a query_id argument for result table sharing, and a plaintext argument to specify markup of filter text or plain text.
You can also override the text used to markup the matching words. This is broken into start and end tags. Start tag is the text which goes before the matching word, end tag is the text which goes after the matching word. By default, start tag is <<< and end tag is >>>. You can override this using the starttag and endtag arguments:
ctx_doc.markup('myindex', '123', 'cat | dog', 'mres',
starttag => '-->', endtag => '<--');
I have a black -->cat<-- and a white -->dog<--.
The length limit on tags is 30 characters.
Highlight navigation is a new feature for Oracle8i, and allows you to markup your text so that an interpreting program can navigate between highlights. Let's use as an example HTML output being sent to a browser. The start and end tags could be overridden to display highlighted words in bold:
ctx_doc.markup('myindex', '123', 'cat | dog', 'mres',
starttag => '<b>', endtag => '</b>');
<b>cat</b> ... <b>dog</b>
Now you might want to have links next to each highlighted word which take you
to the previous or next highlight. You would first have to anchor and name each
highlight using the A tag:
<A NAME=h1><b>cat</b></a> ... <A NAME=h2><b>dog</b></a>
Note that the start tags for these two words are different -- cat has h1 and dog has h2. You can do this using markup macros. Each highlighted word is numbered in document occurrence order. If %CURNUM is in a tag, it is replaced with this number. This lets you do the anchoring above using start and end tag definitions like so:
ctx_doc.markup('myindex', '123', 'cat | dog', 'mres',
starttag => '<a name=h%CURNUM><b>',
endtag => '</b></a>');
This alone does not give clickable links for highlight navigation. Links would
need to be added before and after each highlight, like this:
<a name=h1><b>cat</b></a><a href=#h2>></a> ...
<a href=#h1><</a><a name=h2><b>dog</b></a>
The first problem is that start and end tag cannot be used for this. For highlight
navigation, the first highlight will not have a prev link, and the last highlight
will not have a next link. Start and end tags are added for every highlight, so
those can't be used.
Luckily, markup has prevtag and nexttag specifically for this. Prevtag is added before starttag for every highlight except the first one. Nexttag is added after endtag for every highlight except the last one.
The second problem is that it is necessary to reference the next highlight number in nexttag, and the previous highlight number in prevtag. For this, two other markup macros, %PREVNUM and %NEXTNUM, are used.
Putting it all together, highlight navigation is obtained like this:
ctx_doc.markup('myindex', '123', 'cat | dog', 'mres',
starttag => '<a name=h%CURNUM><b>',
endtag => '</b></a>',
prevtag => '<a href=#h%PREVNUM><</a>',
nexttag => '<a href=#h%NEXTNUM>></a>');
This gives highlighted words in bold, with < and > links next to each one to take
the reader to the previous or next highlighted word.
The call is complex so three sets of common definitions for starttag, endtag, prevtag, and nexttag have been defined. You can use these sets by using the tagset argument of markup:
ctx_doc.markup('myindex', '123', 'cat | dog', 'mres',
tagset => 'HTML_NAVIGATE')
This will give you similar functionality to the call above. Here are the defined
tagsets:
TAGSET
-------------
TEXT_DEFAULT
starttag = <<<
endtag = >>>
HTML_DEFAULT
starttag = <b>
endtag = </b>
HTML_NAVIGATE
starttag = <A NAME=ctx%CURNUM><b>
endtag = </B></A>
prevtag = <A HREF=#ctx%PREVNUM><</A>
nexttag = <A HREF=#ctx%NEXTNUM>></A>
You cannot define your own tagsets in this version. The tagset default is TEXT_DEFAULT.
Any specified values for starttag, endtag, prevtag, and nexttag override the values
for the tagset, so something like this:
ctx_doc.markup('myindex', '123', 'cat | dog', 'mres',
tagset => 'TEXT_DEFAULT',
endtag => '<--')
would highlight like this:
<<<dog<--
That is, the endtag has been overridden, but the starttag definition from the tagset is still valid.
With the INDEX_THEMES attribute of the basic lexer, and the ABOUT clause, you can easily enable thematic indexing and searching for your document table. The linguistic extraction routines in the document services enable you to generate document themes or theme summaries, on-demand and per-document.
The big change from previous versions is the elimination of the services queue. In ConText, linguistics was asynchronous -- you submitted a linguistic request, then eventually the answer would appear in the result table. Vast improvements in linguistic engine performance have allowed us to make these requests synchronous in Oracle8i. Now you invoke the linguistic function, and when the function returns, you know the data is in the result table.
Linguistic extraction works only on English documents.
The themes call allows you to generate the themes of a document. First you need a result table like this:
create table tres (
query_id number,
theme varchar2(80),
weight number
)
Then invoke the themes request:
ctx_doc.themes('myindex', '123', 'tres');
Like the highlighting calls, the first argument is the name of the index, the
second is the document primary key, and the third is the name of the result table.
interMedia will fetch the document text, filter it, run it through the sectioner, then parse it for thematic content. It will generate up to 50 themes, and place the results in the result table, in the theme column, one row per theme. The weight column is a number showing relative importance of the theme. It is only comparable to other themes of the same document. You cannot compare theme weights from different documents.
An optional fourth query_id argument lets you share result tables. Like highlighting, themes will not pre-truncate or pre-delete the result table.
The optional fifth argument, full_themes, allows you to generate themes with their full knowledge catalog hierarchy. The call above might generate a theme of "cats", for instance. Setting full_themes on:
ctx_doc.themes('myindex', '123', 'tres', full_themes => TRUE);
would generate
:science and technology:hard sciences:life sciences:biology:zoology:
vertebrates:mammals:carnivores:felines:cats:
instead. If you are using full themes, the length of the theme column in your
result table should be increased to 2000 or so.
Gist allows you to generate a theme summary of the document text. This does not generate an original abstract like a human would. Instead, it is composed of selected paragraphs or sentences which contribute most to the themes of the document. The gist result table looks like this:
create table gres (
query_id number,
pov varchar2(80),
gist clob
);
and the gist function looks like this:
ctx_doc.gist('myindex', '123', 'gres');
interMedia will fetch the document text, filter it, run it through the sectioner, then parse it for thematic content. It will generate the themes, then generate gists for those themes, one row per gist, with the gist column consisting of those paragraphs which contributed to the theme. The theme is contained in the pov (point-of-view) column. Additionally, it might generate a GENERIC gist, which consists of those paragraphs which best represent the overall themes of the document.
The optional fourth argument to gist is query_id, which allows you to share result tables, like all other calls in document services.
The optional fifth argument, glevel, is a flag which allows you to specify the gist level. If glevel is 'P', the default, the gists are composed of whole paragraphs from the document text. This is called the paragraph-level gist. If glevel is set to 'S', the gists are composed of individual sentences from the document text. This is called the sentence-level gist.
The optional sixth argument, pov, allows you to constrain output to just one point-of-view. Usually, this is done to generate only the GENERIC gist. This is case-sensitive, so specify GENERIC in uppercase. This is not a way to force the gist to a particular point-of-view; you cannot specify pov's here which would not normally get produced. For instance, if you have a document in which "dogs" and "cats" are pov's, and you specify "frogs", you will get no output.
The optional seventh and eighth arguments, numparagraphs and maxpercent, allow you to constrain the size of the generated gist. numparagraphs specifies the maximum number of paragraphs (or sentences, if you are doing a sentence-level gist) which can make up each generated gist. maxpercent is similar, but limits it to a percentage of the source document. If both are specified, the smaller is the effective limit.
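Putting several of the optional arguments together, a call might look like the sketch below. This is illustrative only, and assumes the formal parameter names match the argument names described above:
ctx_doc.gist('myindex', '123', 'gres',
             glevel => 'S', pov => 'GENERIC', numparagraphs => 3);
This would produce a sentence-level gist, for the GENERIC point-of-view only, limited to three sentences.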
All theme based features in interMedia -- themes, ABOUT queries, gists, ABOUT query highlighting, and hierarchical query feedback -- depend on an internal knowledge base (KB) that is supplied with interMedia. The knowledge base consists of a lexicon of English words and phrases organized in a hierarchy of categories. For the first time in Oracle8i, you can extend and customize this knowledge base by adding new terms or redefining existing ones.
Consider a simple example in the domain of coffee futures. Documents in this domain may mention the term ICO (International Coffee Organization). interMedia can identify the theme "ICO" in such documents, but it does not exist in the supplied KB -- it is known that the documents are talking about the ICO, but not what ICO is. ICO is not classified in the concept hierarchy, and is not associated with terms such as "coffee" or "International Coffee Organization". ICO would look like this in a full-themes listing:
:ICO:
if it showed up at all. Because it is an isolated term with no parents, it won't be judged a strong theme.
What is needed is to tell interMedia what ICO is all about. The first step is to create a simple thesaurus for the term:
international trading organizations
BT international trade and finance
International Coffee Organization
BT international trading organizations
RT coffee
ICO
USE International Coffee Organization
Now load the thesaurus using ctxload, just like any other thesaurus. Make sure
the thesaurus is loaded case-sensitive, since case matters for theme generation:
ctxload -user ctxsys/ctxsys -thes -thescase Y -name COFFEE -file <thesaurus file>
and run the KB compiler, ctxkbtc (a new executable in 8i):
ctxkbtc -user ctxsys/ctxsys -name COFFEE
This will generate a new, extended KB. ctxkbtc would interpret the thesaurus above as the following commands:
add "international trading organizations" to the KB under
"international trade and finance"
add "International Coffee Organization" under
"international trading organizations"
make "coffee" a related term to "International Coffee Organization"
add "ICO" as an alternate form for "International Coffee Organization"
Note that "international trade and finance" is an existing category in the supplied
KB. All supplied categories are listed in the documentation. It is recommended
that new terms be linked to one of these categories for best results in theme
extraction. If new terms are kept completely disjoint from existing categories,
they might not be strong enough to qualify as themes.
There can be only one user extension per installation -- each run of ctxkbtc affects all users. However, you can specify multiple thesauri (up to 16) in the -name argument of ctxkbtc to add them all to the extended KB:
ctxkbtc -user ctxsys/ctxsys -name COFFEE TEA MILK
Because ctxkbtc affects the entire installation, we recommend that only interMedia administrators extend the knowledge base.
Nothing special needs to be done to use the new KB, other than to close and re-establish open sessions -- iMT will use the new KB automatically. If you generate themes now, this time you will see:
:business and economics:commerce and trade:international trade and
finance:international trading organizations:International Coffee
Organization:
Note that ICO has been normalized to "International Coffee Organization", it now
has parents, and its weight is higher because it is a known term.
There are some size limits and other restrictions on thesaurus terms for extending
the KB. See the Oracle8i interMedia documentation for details.
This is a technical overview of the improvements found in interMedia Text,
version 8.1.6. This is intended for an audience familiar with version 8.1.5. If you are
new to interMedia Text, please start with Oracle8i interMedia Text 8.1.5 - Technical Overview. Please refer to the
documentation for detailed information.
While 8.1.6 has features which improve index performance and ease-of-use, most of the new features are targeted at two key areas: structured documents and multi-lingual databases. The improvements in structured document support include indexing and searching of attribute text, nested within queries for more sophisticated section searches, doctype-limited tag detection,
dynamic add section for existing indexes, and a new AUTO sectioner which requires no
section pre-definition. Multi-lingual databases can now store documents of different languages in a single
column, thanks to the new MULTI lexer. Furthermore, improvements in INSO and charset
filters allow documents of different formats and character sets in a single column.
UNICODE support has been added to the Japanese and Korean lexers. Other interesting features include limited parallel indexing, more flexible stem and
fuzzy operators, and in-memory document services.
For an automated upgrade from 8.1.5 to 8.1.6, use Oracle's Database Migration Assistant tool (ODMA) -- this will run all needed scripts for interMedia.
There's no easy way to verify that the upgrade completed successfully. However, you can, after the upgrade, issue: as ctxsys. If this reports 8.1.6, the upgrade script has been run. It's probably wise to check for any invalid ctxsys objects anyway.
In the event that the automatic upgrade fails, the manual process is below. This process assumes that you are starting with a valid 8.1.5 ctxsys data dictionary, and that the system is not being used for index, DML, or query.
1. run ?/ctx/admin/upgrade/s0801060.sql as SYS
2. run ?/ctx/admin/upgrade/u0801060.sql as CTXSYS
3. run ?/ctx/admin/dr0pkh.sql as CTXSYS
4. run ?/ctx/admin/dr0plb.sql as CTXSYS
5. run ?/ctx/admin/dr0type.plb as CTXSYS
6. check for any invalid ctxsys objects
Nothing needs to be done to upgrade existing 8.1.5 context indexes. These will still
work without any problems in 8.1.6. The user datastore in 8.1.5 is limited to CLOB output for the generated documents. A
new attribute, OUTPUT_TYPE, allows you to change the output type to BLOB or VARCHAR2. For
instance, a user datastore preference that leaves OUTPUT_TYPE unset mandates a foo procedure that writes its output into a CLOB parameter. If instead the preference sets OUTPUT_TYPE to VARCHAR2, then interMedia Text will expect the foo procedure to write its output into a VARCHAR2 parameter. If the synthesized document is expected to be less than 32000 bytes, then a varchar2 output function is probably easier to write, easier to test, and faster to execute than the CLOB version.
You can also set the output_type to BLOB, for formatted documents or documents in a different character set. Note, however, that it is still not possible to use or call an external procedure in the user datastore. If output_type is not specified, CLOB is assumed, for backward compatibility.
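As a rough sketch of the setup -- the preference name, procedure name, and parameter names here are illustrative, not taken from the original example:
ctx_ddl.create_preference('myud', 'user_datastore');
ctx_ddl.set_attribute('myud', 'procedure', 'foo');
ctx_ddl.set_attribute('myud', 'output_type', 'varchar2');
-- foo would then be declared along these lines:
-- procedure foo (rid in rowid, doc in out varchar2)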
The new NESTED_DATASTORE object allows indexing the rows of a nested table as parts of a single document -- it is very much like the detail datastore, but here the detail table is a nested table.
For example, consider a table mytab, where the document text is stored in a nested table, one line per row of the nested table. (If you are running the nested table DDL in SQL*Plus, you'll need a forward slash after each statement to execute it.) For each row of mytab, interMedia Text needs to concatenate the rows of the
nested doc table into a single document. In 8.1.5, this has to be done using a user
datastore. 8.1.6 provides this as built-in functionality.
We start by creating a nested table datastore preference. We need to provide the name of the nested table column in the base table, and the type of the nested table. The type should include the type owner schema; it cannot be determined automatically because the table name is not known until create index time. Finally, we need to specify the names of the line number and text columns within the nested table record type. The extensible indexing framework does not allow creating an index on a nested table type, so we create the index on a dummy column, just like the detail datastore.
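A sketch of what this setup might look like. The attribute names used here (nested_column, nested_type, nested_lineno, nested_text) and all object names are assumptions for illustration; check the 8.1.6 reference for the exact spellings:
create type myline as object (lno number, ltext varchar2(80));
create type mylines as table of myline;
create table mytab (id number primary key, dummy char(1), doc mylines)
  nested table doc store as mytab_doc;

ctx_ddl.create_preference('my_nest', 'nested_datastore');
ctx_ddl.set_attribute('my_nest', 'nested_column', 'doc');
ctx_ddl.set_attribute('my_nest', 'nested_type', 'myowner.mylines');
ctx_ddl.set_attribute('my_nest', 'nested_lineno', 'lno');
ctx_ddl.set_attribute('my_nest', 'nested_text', 'ltext');

create index mytabx on mytab(dummy)
  indextype is ctxsys.context
  parameters ('datastore my_nest');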
For each record of the base table, interMedia Text will select the rows of the nested table, ordered by the line number column, and concatenate them into a single document. Thus, a contains search will return the one base table row, even though the hits may be in different rows of the nested table.
Like the detail datastore, the nested table datastore has a BINARY attribute to control the concatenation of the nested table lines. If BINARY is TRUE, interMedia Text will simply concatenate the text together. If BINARY is FALSE, interMedia Text will insert a newline in between each nested table line.
Given the SQL used to fetch the nested table rows, a unique index on the nested table, on the columns (nested_table_id, <lineno>), should improve indexing performance.
Because the index is created on a dummy column, any DML on the nested table must be manually propagated to the dummy column -- interMedia Text cannot automatically detect such changes and mark rows for reindex.
In 8.1.5, the INSO filter outputs ISO8859-1 text, which is then converted to the
database character set. However, not all documents can be expressed in ISO8859-1, so the
INSO filter cannot be used in all situations. 8.1.6 uses a newer revision of the INSO libraries which features UNICODE output. Since
UNICODE is designed to be able to express any text, the INSO filter can now be used on any
database, as long as the document text can be expressed in the database character set. For instance, in 8.1.5, a Japanese document in MS Word on a JA16EUC database cannot be
sent to the INSO filter, because the plain text output is forced to ISO8859-1, which is
incapable of expressing the document. In 8.1.6, the plain text is UNICODE, and everything
works fine. Sending the same document through on a US7ASCII database, however, will not
work since the Japanese text cannot be converted to the database character set.
Another improvement in the INSO filter is the addition of optional bypasses for heterogeneous docsets. In 8.1.5, if you employ the INSO filter in your index, every document in the base table is sent through the INSO filter. This works fine when all the documents are formatted documents, but the docset may also have HTML and plain text documents. Filtering these wastes time and sometimes produces unexpected results.
8.1.6 adds a mechanism to bypass INSO filtering for selected rows. This requires a column in the base table -- called the format column -- which specifies whether the data is BINARY or TEXT. For instance, consider a table with a format column named 'fmt': a Word document is inserted with the value 'BINARY' in the format column, and an HTML document with the value 'TEXT'. When we create the index, we specify the name of the format column, using the new 'format column' clause of the parameters string.
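A sketch of how this might look (the table and column names are illustrative; the 'format column' clause is the documented parameters syntax):
create table mydocs (id number primary key, fmt varchar2(10), text blob);

insert into mydocs values (1, 'BINARY', :word_doc);  -- formatted Word document
insert into mydocs values (2, 'TEXT',   :html_doc);  -- HTML, no INSO filtering needed

create index mydocsx on mydocs(text)
  indextype is ctxsys.context
  parameters ('filter ctxsys.inso_filter format column fmt');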
Since we have specified a format column, the INSO filter will examine the value in 'fmt' for each row. If this value is 'BINARY', as it is for row 1, then it will send the document through the INSO filter. If the value is 'TEXT', it bypasses the INSO filter and sends the text straight through.
The format column must be in the same table as the text column, and it must be CHAR,
VARCHAR, or VARCHAR2 type. The value is case-insensitive, so 'BINARY' and 'binary' are the
same. If the column value is null, or is neither 'BINARY' nor 'TEXT', then the document is assumed
to be binary, and it is filtered by the INSO filters. The format column does not have to be specified when using the INSO filter. This bypass
mechanism is optional. The format column can be specified when not using the INSO filter,
but it has no effect.
If the document set is even more heterogeneous, the TEXT documents may not all be in the same character set, and may not be in the database character set. The INSO filter can do per-row character set translation for such documents, if the character set is specified in another column, called the charset column. For instance, expanding our example, we can insert one plain text Japanese document in EUC and another in Shift-JIS, and then name both the format and charset columns when we create the index.
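Continuing the sketch above (the 'cset' column name is an assumption; the 'charset column' clause is the documented parameters syntax):
alter table mydocs add (cset varchar2(30));

insert into mydocs values (3, 'TEXT', :euc_text,  'JA16EUC');   -- plain text, EUC
insert into mydocs values (4, 'TEXT', :sjis_text, 'JA16SJIS');  -- plain text, Shift-JIS

create index mydocsx on mydocs(text)
  indextype is ctxsys.context
  parameters ('filter ctxsys.inso_filter format column fmt charset column cset');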
The INSO filter first looks at the format column. If the format column value is TEXT,
then it looks at the charset column for the character set of the document. The document
text is converted from this character set to the database character set. The charset column must be in the same table as the text column, and it must be CHAR,
VARCHAR, or VARCHAR2 type. The value is case-insensitive, but should be an NLS character
set string. If the column value is null or is not recognized, then the document is passed
through without any conversion. Charset column does not have to be specified when using
the INSO filter. If the charset column is specified and the format column is not specified, then all
documents are assumed to be text documents, and the INSO filter works like the CHARSET
filter, described below. The CHARSET filter can also take advantage of the charset column, which allows you to
index a table of text documents in different character sets. When using the charset
column, the CHARSET attribute becomes the default value -- when the charset column is null
or the value is not recognized, then the document is assumed to be in the character set
named in the attribute, and is converted from that to the database character set. Like the INSO filter, you do not need to specify the charset column -- it works just
like the 8.1.5 version, and assumes all documents are in the CHARSET attribute character
set. The tagged section groups (basic, html, xml) have all been re-written to share a common
parser. As a result, the BASIC section group can now parse tags with attributes, although
currently it is not able to index those attributes. The HTML section group can now index META tags. META tags in HTML have two forms. The
first is used for embedding http header data, and is not interesting to information
retrieval. The second form, however, is used for encoding document meta-data in name/value
pairs: 8.1.5 is unable to make use of this because the information is in attribute values,
which were ignored. Furthermore, even if the product had attribute value searching, the
name part of the name/value pair is actually an attribute value rather than an attribute
name. Instead, the HTML section group has been enhanced with special detection of the META
tag. Like any other section, the META pairs which should be indexed are declared using
add_section. However, the tag portion has a special syntax: META@<name>: With this tag syntax, the HTML section group will interpret as: and index "Mr. Roboto" as the contents of the field section
"author". The query looks like any other section query: to find the document. META name/value pairs which have not been added to the section group are ignored and
One problem with the XML section group in 8.1.5 is the inability to distinguish between
tags in different DTD's. For instance, perhaps you have a DTD for storing contact
information, with an address tag mapped to an address section. Which works out fine... until you start getting a different kind of document in the same table -- mail documents, say, where the address tag holds an email address. Now your address section, originally intended for street addresses, starts picking up email addresses, because of tag collision.
8.1.6 allows you to specify doctype limiters to distinguish between these tags across
doctypes. You simply specify the doctype in parentheses before the tag. Now when the XML section group sees an address tag, it will index it as the address section when the document type is contact, or as the email section when the document type is mail. If you have both doctype-limited and unlimited tags in a section group, then the limited tag applies within its doctype, and the unlimited tag applies in all other doctypes.
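A sketch of how the declarations might look -- the group name and exact calls are illustrative, built from the parenthesized-doctype convention described above:
-- street addresses, only in contact documents
ctx_ddl.add_zone_section('xmlgroup', 'address', '(contact)address');
-- email addresses, only in mail documents
ctx_ddl.add_zone_section('xmlgroup', 'email', '(mail)address');
-- an unlimited tag, applying in all other doctypes
ctx_ddl.add_zone_section('xmlgroup', 'title', 'title');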
Querying is unaffected by this -- the query is done on the section name, not the tag, so querying for an email address is just a within query on the email section, which, since we have mapped two different kinds of tags to the same section name, finds documents independent of which tags are used to express the email address.
Another commonly-requested feature in the 8.1.6 XML section group is the ability to
index and search within attribute values. For instance, take a document containing a comment tag whose author attribute names the comment's author: in 8.1.5, the author information, like all attributes, is ignored and cannot be indexed
or searched. In 8.1.6, the XML section group offers a new type of section called an
attribute section, which nominates interesting attribute values to index. The syntax is very similar to the other add_section calls -- the first argument is the name of the section group, the second is the name of the section, and the third is the tag, in the form <tag_name>@<attribute_name>. This tells interMedia Text to index the contents of the author attribute of the comment tag as the section "author".
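A sketch, assuming the call is CTX_DDL's add_attr_section (group name and document are illustrative):
ctx_ddl.add_attr_section('xmlgroup', 'author', 'comment@author');

-- document: <comment author="jeeves">what ho!</comment>
select id from docs where contains(text, 'jeeves within author') > 0;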
Query syntax is just like any other section query -- a within query on the author section finds the document. Attribute text is considered to be invisible, though, so a query on the attribute words without the within clause does NOT find the document, somewhat like field sections. Unlike field sections, however, attribute section within searches can distinguish between occurrences. Given a document with two comment tags, one whose author attribute contains "jeeves" and another whose author attribute contains "bertram", a query requiring both words within the author section will not find the document, because "jeeves" and "bertram" do not occur within the SAME attribute text.
Attribute section names cannot overlap with zone or field section names, although you
can map more than one tag@attr to a single section name.
Attribute sections do not support default values. If an attribute value is supplied only as a default and not spelled out in the document, a query on that value does not hit the document, even though in XML semantics the "bar" element has a default value for its "rev" attribute.
Attribute sections allow you to search the contents of attributes. They do not allow
you to use attribute values to specify sections to search. For instance, given the
document: You can find this document by asking: which is equivalent to "find me all documents which have a comment element whose
author attribute's value includes the word jeeves". However, there is no way to ask for something like: in other words, "find me all documents where interMedia appears in a comment
element whose author is jeeves". This feature -- attribute value sensitive section
searching -- is planned for future versions of the product. Because the section group is defined before creating the index, 8.1.5 is limited in its
ability to cope with changing structured document sets; if your documents start coming
with new tags, or you start getting new doctypes, you have to re-create the index to start
making use of those tags. 8.1.6 allows you to add new sections to an existing index without rebuilding the index,
using alter index and the new add section parameters string syntax. For instance, you can add a new zone section named tsec using the tag title, or a new field section named asec using the tag author. A field section added this way would be invisible by default, just like when using add_field_section, but it can also be added as a visible field section.
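A hedged sketch of what the alter index statements might look like -- the exact parameters string keywords should be checked against the 8.1.6 documentation:
alter index myidx rebuild parameters ('add zone section tsec tag title');
alter index myidx rebuild parameters ('add field section asec tag author');
alter index myidx rebuild parameters ('add field section asec tag author visible');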
Dynamic add section only modifies the index meta-data, and does not rebuild the index in any way. This means that these sections take effect for any document indexed after the
operation, and do not affect any existing documents -- if the index already has documents
with these sections, they must be manually marked for re-indexing (usually with an update
of the indexed column to itself). This operation does not support addition of special sections. Those would require all
documents to be re-indexed, anyway. This operation cannot be done using rebuild online,
but it should be a fairly quick operation. The ultimate in coping with changing structured document sets is the new AUTO section
group. The AUTO section group is like the XML section group, but requires no
pre-definition of sections. Instead, any non-empty tag is automatically indexed as a zone
section with the section name the same as the tag. For instance, given a document marked up with book, author, title, and description tags, the auto sectioner will create zone sections named book, author, title, and description, and a within query on any of those section names will find the document.
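For illustration only (this table, document, and query are not from the original text):
ctx_ddl.create_section_group('autogroup', 'auto_section_group');

create index bookx on books(text)
  indextype is ctxsys.context
  parameters ('section group autogroup');

-- a document such as
--   <book><author>Wodehouse</author><title>Right Ho, Jeeves</title>
--         <description>a comic novel</description></book>
-- can then be found with
select id from books where contains(text, 'wodehouse within author') > 0;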
Because sections are created dynamically from the tags, the auto sectioner is ideal when the structure of the documents is not known to the index
creator, or when they are expected to change so that pre-definition with dynamic add is
impractical. The only quirk to watch out for is that section names are case-insensitive, but tags in
XML are case-sensitive. For instance, given the above document, a within query on the description section finds the document whether the section name is written in the query as "description" or as "DESCRIPTION", and both forms of the query would also find a document whose tag is DESCRIPTION -- even though "description" and "DESCRIPTION" are different tags in XML.
You cannot add zone, field, or attribute sections to an AUTO_SECTION_GROUP in this
version. Furthermore, because the AUTO_SECTION_GROUP may encounter documents using the
tags PARAGRAPH or SENTENCE, it also does not support special sections. Even when using the AUTO section group, there may be some tags which are so common that
indexing them would be a waste of time, since they are not useful for searching. For such
tags, the AUTO section group has stop sections, which indicate tags to ignore -- much like
stop words indicate words to ignore. Stop sections use the new add_stop_section call, which indicates to the named AUTO section group that <tag> should be ignored and
not indexed as a section. add_stop_section can be used only on AUTO section groups. The
number of stop sections per AUTO group is unlimited. The tag is case-sensitive, so stopping "description" is different from stopping "DESCRIPTION" -- they stop different things. If you stop only one of the two, there may still be a section named "DESCRIPTION", if another document uses "Description" or "DESCRIPTION" as the tag. For this reason, a query
that does "within <stop section>" does not error. The tag also supports doctype limiters. So, for instance, if you wanted to stop tag
"foo", but only in document type "bar": Stop sections can be added to an existing index dynamically, using the parameters
string syntax: as in: Unlike stop words, adding a stop section dynamically does not apply to documents
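A hedged sketch (exact keyword spellings should be verified against the documentation):
-- ignore <copyright> tags entirely
ctx_ddl.add_stop_section('autogroup', 'copyright');
-- ignore <foo> tags, but only in documents of type bar
ctx_ddl.add_stop_section('autogroup', '(bar)foo');

-- adding a stop section to an existing index
alter index bookx rebuild parameters ('add stop section copyright');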
Unlike stop words, adding a stop section dynamically does not apply to documents already indexed. This is partly to be consistent with the other dynamic add section calls, but mainly because there could be other tags using that section name, due to the case-insensitivity of section names.
The AUTO section group also automatically indexes attribute values as attribute
sections named <tag>@<attr>. For instance, given a comment tag with an author attribute, you can find the document with a within query on comment@author, since the author attribute is automatically indexed as an attribute section.
8.1.6 also adds nested within queries -- also known as hierarchical section searching. For instance, given a document with two kinds of title sections at different levels, you can distinguish between the two types of titles using a nested within, which is not allowed in 8.1.5. 8.1.6 provides this feature with two
important caveats. First, this only works with zone sections. Attribute or field sections
-- even visible field sections -- cannot be used in a nested within. Second, because of the way interMedia Text indexes zone sections, equal sections
cannot be distinguished. For instance, in a document where one section exactly encloses another with no other text, a nested within query finds the document with the two sections nested in either order. This applies only when the sections bound exactly the same range. Any intervening text allows interMedia Text to distinguish between the sections.
Another aspect of this is that nested within does not imply an exact parent-child
relationship. In XQL, a direct-child path such as A/B means that the B section is a first-level child of the A section. This is not equivalent to nested within: a nested within does not mean that the A section is a direct child of the B section -- it could be a grandchild or great-grandchild. Nested within is more like the XQL descendant form.
The BASE_LETTER function of the BASIC lexer is designed to translate any accented
characters to their unaccented representation for indexing and query. In 8.1.5, this
feature uses an NLS function which is sensitive to the language part of the NLS_LANG
setting. This means, for instance, that BASE_LETTER does not work for an AMERICAN
database, because the NLS function does not work in this language. For 8.1.6, based on our
input, NLS has provided a generic base lettering function which works in all languages.
This means that the BASE_LETTER function now works no matter what language your database
uses, which is ideal for multi-lingual and foreign-language databases. This new function requires the NLS file lx30044.nlb to be installed, so if base
lettering is not working in 8.1.6, check for this file. The French knowledge base and lexicon is now public in 8.1.6. This allows indexing and
generation of themes for French documents. Normally, the BASIC lexer will choose which
knowledge base to use (French or English) based on the NLS_LANG setting for the session.
If, however, your NLS_LANG setting does not match your documents -- your documents are in
French and your environment is English -- then this will choose the wrong lexicon, or give an
error (if your environment is neither English nor French). For such situations, the BASIC_LEXER has a new THEME_LANGUAGE setting to override
environment detection. This attribute can be set to 'ENGLISH' or 'FRENCH' to choose a
lexicon explicitly; a lexer with THEME_LANGUAGE set to 'FRENCH' produces French themes independent of the environment. This attribute can also be set to 'AUTO', or left unset. In such cases, the lexicon is chosen based on the environment.
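A sketch of such a lexer preference (the preference name is illustrative):
ctx_ddl.create_preference('my_lexer', 'basic_lexer');
ctx_ddl.set_attribute('my_lexer', 'index_themes', 'yes');
ctx_ddl.set_attribute('my_lexer', 'theme_language', 'french');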
8.1.5 theme indexing produces a maximum of 50 themes per document. This means that long documents may be difficult to find with a theme query, since lesser themes tend not to
make it into the top 50. 8.1.6 solves this by removing the 50-theme limit for theme indexing -- all document
themes are produced and indexed. Theme generation, via document services, however, is not
changed, and is still subject to the 50-theme maximum. In 8.1.5, the JAPANESE_VGRAM_LEXER works only if your database character set is JA16EUC
or JA16SJIS. The KOREAN_LEXER works only in KSC5601. These lexers are enhanced in 8.1.6 to
also work in UTF-8. This allows multi-lingual databases to finally use the Japanese and
Korean lexers. The biggest 8.1.6 addition in the lexer area is the new MULTI lexer, which allows
documents of different languages to be stored in a single table. The MULTI lexer chooses a
language-specific lexer for each row, based on a language column. The first step with the MULTI lexer is the creation of language-specific lexer
preferences for each language in the multi-lingual table. As an example, let's create
English, French, and German lexers. Once the language-specific lexer preferences are created, they need to be gathered together under a single MULTI lexer preference. First, we create the MULTI lexer preference, using the MULTI_LEXER object. Then we add the language-specific lexers to the multi lexer preference using the new add_sub_lexer call. For our example, this nominates the german_lexer preference to handle german documents, the french_lexer
preference to handle french documents, and the english_lexer preference to handle
everything else, using 'DEFAULT' as the language. The MULTI lexer decides which lexer to use for each row based on a language column.
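A sketch of the preference setup (the specific lexer attributes chosen here are illustrative):
ctx_ddl.create_preference('english_lexer', 'basic_lexer');
ctx_ddl.set_attribute('english_lexer', 'index_themes', 'yes');

ctx_ddl.create_preference('french_lexer', 'basic_lexer');
ctx_ddl.set_attribute('french_lexer', 'base_letter', 'yes');

ctx_ddl.create_preference('german_lexer', 'basic_lexer');
ctx_ddl.set_attribute('german_lexer', 'alternate_spelling', 'german');

ctx_ddl.create_preference('global_lexer', 'multi_lexer');
ctx_ddl.add_sub_lexer('global_lexer', 'default', 'english_lexer');
ctx_ddl.add_sub_lexer('global_lexer', 'french',  'french_lexer');
ctx_ddl.add_sub_lexer('global_lexer', 'german',  'german_lexer');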
This is a character column in the table which stores the language of the document in the
text column. For instance, our table looks like: To index this table, we need to use the MULTI lexer preference, but we also need to
specify the name of the language column: The MULTI lexer then reads the value of the language column for each row, and uses the
associated language-specific lexer for that language. The language column is case-independent, and can contain either the NLS name or
abbreviation of the language. If the table uses some other value for the language, you
need to specify that alternate value as the fourth argument when adding the sub lexers: for instance, to use the ISO 639-2 string for german instead. Note that these alternate
values are in addition to the NLS names and abbreviations, so "D" and
"GERMAN" in the language column still select the german lexer. This means that
alternate values cannot be NLS language names or abbreviations. The DEFAULT lexer is used when the language of the document is unknown or is not mapped
to a language-specific lexer. The DEFAULT lexer is also used to lex stopwords. A DEFAULT
lexer MUST be specified, or you will get an error when using the MULTI lexer preference in
create index. You cannot specify an alternate value for the DEFAULT sub lexer -- this does
not make sense. At query time, the MULTI lexer chooses a language-specific lexer to lex the query
tokens. This is based on the NLS_LANG setting for the query session. Thus, a query session
in the FRENCH language will use the lexer for FRENCH. A more explicit method of specifying
the query language may appear in future versions of interMedia Text. The 8.1.5 stemmer and fuzzy_match attributes are set for a single, specific language at
create index time, making it unable to handle multi-lingual tables. 8.1.6 provides the new
AUTO setting for STEMMER and FUZZY_MATCH: During queries, when set to AUTO, the stemmer or fuzzy match expander will examine the
language setting of the session and choose the most appropriate setting for that language.
A user with NLS_LANG French, for instance, will get French stemming and fuzzy matching.
This mapping is not configurable in this release -- you cannot specify a particular
stemmer or fuzzy match cluster to use for each language. This setting is designed for use with the MULTI lexer, which switches lexers based on
the query session's language setting. Together, they allow multi- lingual tables to be
indexed and queried. All versions of interMedia Text and ConText offer wildcard searching -- a
contains for any word matching a pattern. This is done by expanding the pattern against
the main index table using LIKE, and searching on all the resulting words. ConText limited the size of the expansion to around 2500 words. interMedia Text
8.1.5 has no limit, but when the pattern is unselective, it uses up memory quickly.
8.1.6 re-establishes the expansion limit, but with a configurable maximum expansion -- the new WILDCARD_MAXTERMS attribute of the BASIC wordlist object. Wildcard patterns matching more words than this will receive a "query too complex" error. The default value, if unset, is 5000 words.
Patterns which start with a wildcard pose another problem for wildcard searching. Because wildcard searches are processed by expanding the pattern using a LIKE, they force
a full-table scan of the main index table. This type of query, sometimes called a
left-truncated search, takes a long time. For those applications which require faster left-truncated searching, 8.1.6 offers the
new SUBSTRING_INDEX option. This is set like any other boolean attribute: When set, the index will include a new pattern table ($P) which stores rotations of all
unique words in the main index table. Using the pattern table instead of the main index
table allows interMedia Text to process left- truncated searches much much faster. The query-time savings come at the cost of increased index time and space. Indexing
time can take as much as four times as long, and use up to 25% more space, so this option
should be used only when left-truncated wildcard searches are frequent and need to be
fast. Since we have a new index table when SUBSTRING_INDEX is set, the BASIC storage object
gets a new attribute -- P_TABLE_CLAUSE, which is tacked on the end of the create table
statement for the P table: We estimate that the P table should have approximately the same number of rows as the
$I table. ConText V2.x supported parallel index creation, using multiple ctxsrv processes. This
was removed in 8.1.5 because tighter integration with the kernel made it unfeasible. While
we continue to work for full support in future versions, 8.1.6 adds a temporary workaround
which allows parallel index creation with two caveats. First, in order to use 8.1.6 parallel indexing, the base table must be a partitioned
table -- each parallel indexer will work on its own partition, so the parallel degree
should not exceed the number of partitions. Second, because of the limitations of the kernel-interMedia Text interface, we
must use dbms_job to spawn the parallel indexers. This means that the database must be
running with job_queue_processes > 0, and that the parallel degree cannot exceed this
number. Otherwise, the syntax is just as you would expect -- a standard parallel clause on the create index statement, which creates the index with the given parallel degree. Note that because we use dbms_job to spawn parallel indexers, there is some lag before all indexers are running, proportional to job_queue_interval.
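For instance, a degree-3 parallel build might look like this sketch:
create index docsx on docs(text)
  indextype is ctxsys.context
  parameters ('lexer my_lexer')
  parallel 3;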
If logging is active, multiple log files will be created -- one for each parallel indexer. Those log files will be named <logfile>_<process#>. If, for instance, you started logging to a file named "mylog", you could expect files "mylog", "mylog_1", "mylog_2", etc.
PL/SQL functions for sync and optimize have been added. The preferred method for sync
and optimize is alter index rebuild online. Ideally, these could be called in a dbms_job.
However, separate sync and optimize jobs can conflict because two alter index operations
cannot be run at the same time for a single index. The new PL/SQL functions resolve both problems by proving an alternate PL/SQL API for
sync and optimize: idx_name is the name of the index to sync. The effective user must be the owner of the
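A sketch of the two calls, assuming they are CTX_DDL.SYNC_INDEX and CTX_DDL.OPTIMIZE_INDEX with the arguments described below:
ctx_ddl.sync_index('myindex');

ctx_ddl.optimize_index('myindex', ctx_ddl.optlevel_full, maxtime => 60);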
For sync, idx_name is the name of the index to sync. The effective user must be the owner of the index or ctxsys. If the effective user is ctxsys, idx_name can be NULL, which will cause this
call to examine all context indexes for pending DML. This uses the same code as ctxsrv. We
now recommend processing pending DML through a dbms_job -- an individual job for each
index, a ctxsys job for all indexes, or some combination of the two. This eliminates the
need for ctxsrv, and thus eliminates the problem of starting ctxsrv each time the database
starts. The ctxsrv program itself is now deprecated, and may be removed in some future
release.
For optimize, idx_name is the name of the index to optimize. optlevel is 'FAST' or 'FULL' for the
optimization level. There are also symbols CTX_DDL.OPTLEVEL_FAST and CTX_DDL.OPTLEVEL_FULL
for these values. maxtime is the maximum time, in minutes, to spend on optimization. This
applies only to FULL optimization. By default, maxtime is unlimited. You can also pass in
NULL or the symbol CTX_DDL.MAXTIME_UNLIMITED. 8.1.6 has a new algorithm for processing accumulate queries. The changes mostly resolve
scoring inconsistencies the 8.1.5 algorithm has in certain cases. The new algorithm also
handles weighted children better than its predecessor. 8.1.6 can process nested withins -- also known as hierarchical section searches --
which allows queries like this: Please refer above for details and caveats. Many IR systems include a feature called vocabulary browse. This takes an input word
and displays words in the text index which are alphabetically close. This feature is
useful for users to correct misspelled words in the query. By including a rough document
count for each word, it also allows the user to eliminate unselective words.
Administrators may also use it to correct misspellings in document text.
8.1.6 provides a new function for this -- browse_words. index_name is the name of the index to browse. seed is the start word; this must be a single word, and not a phrase. restab is the name of a result table. The results of the browse are placed in this table. Existing contents are not deleted
before browse. browse_id is a numeric identifier for the operation. Output rows in the result table
will have this value in the browse_id column. If not specified this defaults to 0. numwords is the length of the produced list, in words. This must be between 1 and 1000.
If not specified, this defaults to 10 words. direction specified the direction of the browse. 'BEFORE' means the seed word and words
alphabetically before the seed, 'AFTER' means the seed and words alphabetically after the
seed, and 'AROUND' means the seed and words both before and after. There are also symbols
CTX_QUERY.BROWSE_BEFORE, CTX_QUERY.BROWSE_AROUND, and CTX_QUERY.BROWSE_AFTER for these
values. This defaults to 'AROUND'. An example is sketched below.
Note that the DOC_COUNT is an upper bound which does not take into account old information from deleted or updated documents. FULL optimization removes this old information from the index, and makes these counts exact.
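A sketch of the call and a plausible result table (the result table columns shown -- browse_id, word, doc_count -- are an assumption based on the description above):
create table bres (browse_id number, word varchar2(64), doc_count number);

ctx_query.browse_words(index_name => 'myindex',
                       seed       => 'dog',
                       restab     => 'bres',
                       numwords   => 10,
                       direction  => ctx_query.browse_around);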
Browse also has a variant which delivers the result in a PL/SQL table instead of a result table. Here, resarr is a PL/SQL table of type ctx_query.browse_tab. The results are placed in the table, ordered alphabetically, with the alphabetically
first word at index 1. Existing elements of the table are deleted upon entry into the
function. All document services work on single documents. To specify the document, these
interfaces take in a primary key specification. This may not be convenient when the table
uses a composite primary key or you have only the rowid. 8.1.6 allows you to specify a
rowid instead. The document services interfaces have not been changed for this feature. The textkey
argument can now take either a rowid or a primary key spec. The key type of the document
services determines how the value is interpreted. You set the key type using the new set_key_type call: After which, all document services will expect rowid values. This setting has session
duration and scope. To set it back: there are also symbols (CTX_DOC.TYPE_ROWID and CTX_DOC.TYPE_PRIMARY_KEY) defined for
these values. The default is primary key mode, but you can change the default using the new system
parameter CTX_DOC_KEY_TYPE. For instance, if you want your system to default to ROWID
input: This has system scope and permanent duration. Note that this only sets the default --
sessions can always override this via set_key_type. In 8.1.5, the document services use result tables to store their results. If you don't
need to store the result of the operation, result table I/O is wasteful, and application
code must be written to manage these temporary rows. When you do need to store the
results, the schema of the result tables may not be sufficient for your needs -- you may
need to store a varchar2 primary key, for instance. 8.1.6 introduces in-memory variants of all document services which avoid result tables
and instead output to PL/SQL variables. This avoids unneeded I/O and allows the
application to more easily process the transient results. We'll look at these new
functions individually, starting with: The result argument is a CLOB locator. The document will be filtered and the filtered
text placed into this CLOB. If the locator passed in is NULL, a temporary CLOB is
allocated for you. It is up to the application to deallocate it.
For the in-memory markup call, the result argument is also a CLOB locator, just like in the filter call.
The highlight call can return multiple highlights, so a single PL/SQL variable isn't
sufficient. Instead, the restab argument is a PL/SQL table of type ctx_doc.highlight_tab: The first highlight will be placed at index 1. Existing contents of the table when
passed in are deleted. Like highlight, ctx_doc.themes can return multiple themes. restab is a PL/SQL table of
type ctx_doc.theme_tab: The first theme is placed at index 1. Existing contents of the table when passed in are
deleted. The themes are not guaranteed to be placed in the theme_tab in any particular
order. The in-memory version of gist is the only call which deviates from its result-table
version. Unlike the result-table version, which returns multiple gists -- one per
point-of-view -- the in-memory version returns a single gist, selected by the pov
argument. If no pov is specified, the GENERIC gist is generated. result is a CLOB locator
as in ctx_doc.filter or ctx_doc.markup. The thesaurus functionality in 8.1.5 supports file import and export, but once a
thesaurus is in the database, it's difficult to change -- the API allows only the addition
of new phrases and relations. 8.1.6 rounds out the API with new maintenance functions which allow changing, renaming,
and removal of phrases and relations. There are also calls for adding new phrases and
relations which are hopefully less confusing than the existing functions. Let's look at the new additions call-by-call. This call has two variations. To rename a thesaurus, op is set to 'RENAME' and operand
is set to the new name: To truncate a thesaurus -- that is, remove all phrases and relations -- op is set to
'TRUNCATE' and the operand is not needed: 'RENAME' and 'TRUNCATE' also have symbols defined -- CTX_THES.OP_RENAME and
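The call names themselves did not survive in this excerpt; assuming the thesaurus-level call is CTX_THES.ALTER_THESAURUS, the two variations might look like:
ctx_thes.alter_thesaurus('mythes', 'RENAME', 'newthes');
ctx_thes.alter_thesaurus('newthes', ctx_thes.op_truncate);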
The phrase-level call has three variations. To change a phrase, set op to 'RENAME'; this leaves all the relations intact -- it just changes the phrase. You can also use
this to add, change, or remove qualifiers: The second variation makes a phrase the preferred term in its synonym ring: If the phrase's synonym ring already has a preferred term, this operation overrides it.
The old preferred term becomes a non-preferred synonym and the named phrase becomes the
preferred term. The third variant changes the scope note on a phrase: The operand overwrites whatever scope note was on the phrase, so this can be used to
add, change, or remove a scope note. The operations all have symbols: CTX_THES.OP_RENAME, CTX_THES.OP_PT, and
CTX_THES.OP_SN. This call removes a phrase from a thesaurus. BT*/NT* relations are patched around the
dropped phrase. A relation is established between each narrower term and each broader
term. For instance, a simple example: and you do: then the resulting thesaurus looks like: For multiple broader and narrower terms: when C is removed, the thesaurus looks like this: This patching is done for all hierarchical relations -- BT, BTI, BTP, BTG, etc.
Associative (RT) relations are not patched. If A RT B and B RT C, and B is removed, then
no A RT C relation is created. Synonymity is also not affected. If the removed phrase is
the preferred term, no phrase in the ring is elevated to preferred term. This call creates a relation between phrase and relphrase. You can do this in 8.1.5
using create_phrase, but the usage is confusing. For instance, if you want to create the
thesaurus: It looks like: Two problems here. The first problem is that you need two calls to create a thesaurus
with one relation. The second problem is that in the second call, DOG and ANIMAL are in
opposite order of how they look in the thesaurus file. This is confusing and unintuitive. create_relation solves both by automatically creating unknown phrases or relphrases,
and adhering closely to thesaurus file order. The above thesaurus is created using
create_relation like this: create_relation also allows you to add translations to phrases, using <lang>: as
the relation: Again, very similar to how it would look in the thesaurus file. create_relation is the preferred syntax now, and the rel and relphrase arguments of
create_phrase are now deprecated and should not be used.
The next call is the opposite of create_relation: it removes a relation between two phrases,
leaving the phrases and their other relations untouched. For instance, the sequence: results in mythes having the phrases A and B, but no relations between them. You can remove BT*/NT* and RT relations. You can also use SYN to remove the phrase from
its synonym ring (if A, B, and C are synonyms, dropping the SYN relation on A leaves B and C as synonyms, but A is no longer in the ring), or PT to remove preferred term status while leaving the phrase in the ring (A, B, and C remain synonyms, but the ring no longer has any preferred term). Finally, you can use <lang>: to remove translations.
the phrase. For instance: drops all broader term relations of A. drops all french translations of A. One last security note for all the above functions -- only the owner of the thesaurus
and ctxsys is allowed to modify the thesaurus, although all users can use a thesaurus
during query. The thesaurus is really meant to be used during a contains query, with the thesaurus
operators. However, 8.1.5 does include a simple PL/SQL expansion API. The functions work
just like their query counterparts, and return the expansion as a single string. For
instance, given the thesaurus: and the PL/SQL: would print out something like: which allows you to see the expansion. However, the single string expansion must be
parsed if you want the individual terms, for use in a web interface or your own thesaurus
dumping mechanism. 8.1.6 adds PL/SQL table expansion for such applications. Expansions are placed into a
PL/SQL table like this: xrel holds the relation of the input phrase to the output phrase (BT, NT, SYN, etc).
The input phrase is also in the table, with xrel of PHRASE. For SYN and TRSYN expansions,
the preferred term has xrel of PT. xlevel tells you the distance from the input phrase to the output phrase. This is used
mainly for hierarchical (BT*/NT*) expansions. For instance, in a thesaurus like: and an NT expansion on A for 3 levels, the resulting table looks like: For non-hierarchical expansions, xlevel is 1 for all expanded terms, except the phrase
itself, which is always 0, and translations of synonyms in TRSYN, which always have xlevel
of 2. xphrase is the output phrase itself. This will include the qualifier in parentheses, if
one exists for the phrase. The table API is like the existing string API, but instead of returning the expansion,
it takes a table as the first argument. Our example above, using the new API, looks like: and prints out: To complete the API, 8.1.6 adds two new expansion functions: returns the scope note for the given phrase, and returns thesaurus top terms into a PL/SQL table. This is different from TT. TT returns
the top term for a given phrase. THES_TT searches the entire thesaurus and returns all top
terms. A top term for THES_TT is defined as a term which has narrower terms but no broader
terms. This is designed to be used in browsing applications, which may wish to start with
the thesaurus top terms. Note that because THES_TT searches the entire thesaurus, it may
take some time to complete for large thesauri. If the application needs the thesaurus top
terms frequently, we strongly suggest caching the results.
This is a technical overview of the improvements found in Oracle Text (the
product formerly known as interMedia Text) version 9.0.1. This is intended
for an audience fairly familiar with previous versions of the product.
If you are new to the product, you should familiarize yourself with the
product before reading this document. The official documentation, especially
the new Application Developer's Guide, is a good place to start, as are the
interMedia Text technical overviews for 8.1.5, 8.1.6, and 8.1.7, in order.
This document is only a summary of new features and internal changes. As
always, please refer to the official documentation for detailed information.
Three big new features stand out in the set of 9.0.1 improvements: local
partitioning, XML path searching, and document classification. With local
partitioning, large text indexes are easier to manage and faster to create,
maintain, and query. XML path searching enables sophisticated queries which
can reference and leverage the embedded structure of XML documents. The new
CTXRULE indextype is the inverse of the ConText indextype -- instead of
using a text query to find documents, you use a document to find queries.
The rest of the improvements are smaller changes, but were driven by user
feedback, and make using the product a little easier, a little faster, or a
little more robust. These include new, more sophisticated lexers for Japanese
and Korean, an INSO filter timeout, per-row logging during indexing to find
faulty documents, transportable tablespace support, and an on-demand filter.
Oracle Text is now integrated with the Database Migration Assistant (DBMA),
so the Text upgrade should happen as part of the overall database upgrade
procedure. This section details the manual upgrade process, in case something
goes wrong or you choose to not use DBMA.
This process assumes that you are starting with a valid 8.1.7 ctxsys data
dictionary, and that the system is not being used for index, DML, or query.
Use SQL*Plus to run all scripts.
1. run ?/ctx/admin/s0900010.sql as SYS
   this grants new, needed database privileges to ctxsys
2. run ?/ctx/admin/u0900010.sql as CTXSYS
   this upgrades the ctxsys schema to 9.0.1
It is not necessary to run anything else. The u0900010.sql script calls
other scripts to upgrade the indextypes (t0900010.sql), recompile all packages
(dr0pkh.sql and dr0plb.sql), and recreate the shared library (dr0ulib.sql).
The script is not expected to encounter errors other than ORA-955 ("name is
already used by an existing object", for public synonyms). After the script
is complete, all ctx-created objects owned by ctxsys should be VALID.
For iMT veterans who may be wondering, the path is not a typo. The upgrade
scripts, which used to be in their own sub-directory, are now found in the
?/ctx/admin directory.
interMedia Text 8.1.7 shifted all code to trusted callouts, eliminating the
need for SQL*Net listener configuration (and there was great rejoicing).
Sorry, extproc is back again. But just for one new document service
function -- ctx_doc.ifilter, the on-demand INSO filtering call (this is
covered below). If you don't use this function, you don't need to set up
the listener.
A brief description of setting up the listener is below; complete details
are in the Oracle 9i Server Administrator's Guide.
1. Add an entry to the tnsnames.ora:

     extproc_connection_data =
       (DESCRIPTION =
         (ADDRESS = (PROTOCOL = ipc)
                    (KEY = DBSID))
         (CONNECT_DATA = (SID = ep_agt1)))

   DBSID is the database SID. ep_agt1 can be named anything.
   extproc_connection_data should not be changed.

2. Add the following to the listener SID_LIST:

     (SID_DESC = (SID_NAME = ep_agt1)
                 (ORACLE_HOME = /oracle)
                 (ENVS = LD_LIBRARY_PATH=/oracle/ctx/lib)
                 (PROGRAM = extproc))

   ep_agt1 matches the CONNECT_DATA SID for extproc_connection_data in the
   tnsnames.ora. The PROGRAM section tells the Net8 listener to start the
   external procedure process. The ENVS section, which is shown here for UNIX,
   will ensure that the environment includes ?/ctx/lib in LD_LIBRARY_PATH.
   This is needed so that indexing can use the INSO filters.

3. Since the extproc_connection_data ADDRESS section specifies ipc, make sure
   that the ADDRESS_LIST of listener.ora accepts ipc connections.

A quick way to test the Net8 configuration is to do:

  exec ctx_adm.test_extproc;

from SQL*Plus, as ctxsys. If you get the error:

  DRG-50704: Net8 listener is not running or cannot start external procedures
then things aren't set up correctly. OTN and Oracle Support have helpful
debugging guides for extproc setup, compiled problem-by-problem, and paid for
in past headaches.
Nothing needs to be done to upgrade existing text indexes. These should
still work without any problems in 9.0.1. However, we made ctxcat DML a bit
more efficient, so you might choose to re-constitute the triggers for ctxcat
indexes. All you have to do is rename the index:

  alter index <indexname> rename to <newname>

You can, of course, rename it back to the original name if you choose.
A local partitioned index is a partitioned index on a partitioned table
with a one-to-one mapping of index partitions to table partitions. It's like
splitting up a large table and index into a set of smaller tables and indexes.
Local partitioned indexes have been around since partitioning support was
first added to the database. However, this support was limited to b-tree and
bitmap indexes -- domain indexes (including ConText indexes) could only be
global indexes. The kernel's extensible indexing framework has been enhanced in
9i, allowing local partitioned text indexes for the first time.
We won't discuss why table partitioning is good for large data sets --
that's covered pretty well in the general database Concepts manual. Instead,
we'll list some reasons why a local partitioned text index is better than a
global index:
Let's use the following partitioned table as an example:
To create a local partitioned index, simply specify the
The
are both OK. By default, the index partitions will have the same name as the
corresponding table partitions, so in our examples above the index partitions
are named
The partition clauses must come immediately after the
Each partition clause can optionally have a parameters clause:
These partition-level parameters clauses can be used to specify index memory
and storage attributes on a per-partition basis. We'll discuss the difference
between the index-level and partition-level parameters clauses below.
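To make the syntax concrete, here is a rough sketch assembled from the examples
referenced in this section (table, index, partition, and preference names are
illustrative):

  create table ptest (id number, text varchar2(80))
  partition by range (id)
   (partition p1 values less than (100),
    partition p2 values less than (200),
    partition p3 values less than (300));

  create index ptestx on ptest(text)
    indextype is ctxsys.context
    local
    (partition larry,
     partition moe parameters ('memory 5m'),
     partition curly)
    parameters ('filter ctxsys.inso_filter');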
There are a few limitations to be aware of when creating local partitioned
ConText indexes:
Now let's take a closer look at what happens under the covers when a local
partitioned index is created:
First, the index global meta-data is created. Like a regular ConText index,
it parses the index-level parameters clause, determines the preferences to use
for all classes, then deep-copies the preference values to index meta-data.
In this version, the index meta-data is shared by all index partitions. This
means that you cannot, for instance, specify a Japanese lexer for partition 1
and a Basic lexer for partition 2. The documents in the table are assumed to
be partitioned by a contiguous range value such as date or id, rather than
by document type. The existing heterogeneous data support (charset, format,
and language column) can be used to work around this limitation.
Since all the index partitions share common meta-data, the partition
parameters clause does not allow anything except
Once the index meta-data is created, we iterate over the table partitions
(parallel index creation is covered below).
For each table partition, an index
partition is created. Creating an index partition is not that different from
creating a regular ConText index. There are four stages: meta-data, internal
table creation, population, and b-tree index.
The partition meta-data is usually just a single row in
The second step in creating the index partition is internal table creation.
Each index partition gets its own set of DR$ tables. The naming scheme is
somewhat different --
The tablespace and storage for these internal tables are determined mainly
by the storage preference specified in the index-level parameters clause.
However, you can override this by using the partition parameters clause.
Modifying our example above:
The internal tables for
The third step in creating the index partition is population. This is the
real work -- reading the documents, inverting them, and dumping the index
data to the $I table. This is the same as a global index population, except
that the datastore will use a partition-extended tablename; we are indexing
only a particular partition of the base table here.
Index memory size determination works like storage preference determination.
First, we check if
The fourth and final stage of index partition creation is the creation
of the $X index on the $I table. The tablespace and storage for this index
comes from the same storage preference used for internal table creation.
After these four stages are complete, it goes to the next table partition
and starts all over. When all table partitions have been indexed, the index
creation completes, and you can do contains queries.
Normally the index partitions are created serially -- one at a time. More
powerful systems may be able to create the index faster by creating the partitions
in parallel (inter-partition parallelism). This is not as simple as adding a
The strategy for creating a local partitioned index with inter-partition
parallelism is:
Normally, step 1 is accomplished using the
This will create the index meta-data, then create partition meta-data, tables,
and $X index for each partition. The partitions are processed serially, but
since we're not doing index population, this should take only a few seconds
per partition.
For step 2, rebuilding the index partitions in parallel, the database has
a nifty utility --
Instead, you have to manually submit the rebuild jobs. The easiest way
to do this is to simply open multiple SQL*Plus sessions and, in each one,
run scripts that have statements like:
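For example, a sketch using the illustrative ptestx index and partition names
from the creation example above:

  -- script for session 1:
  alter index ptestx rebuild partition larry;

  -- script for session 2:
  alter index ptestx rebuild partition moe;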
Issuing alter index rebuild partition like this will drop the internal
tables, re-create them, do the indexing, then re-create the $X index. So,
there's a little extra work involved because we've created the tables already
during create index, but it's not that wasteful. You can specify an index
memory size like this:
We are working with extensible framework development to simplify parallel
local domain index creation for 9i Release 2.
There is no difference between local partitioned text index queries and
global text index queries -- the syntax is the same. However, local
partitioning can change the performance characteristics of certain types of
queries. We'll examine this in this section using the following test table:
populated with synthetic data. There are 100,000 rows. Every row has the
token
We will be comparing two runs -- one with a global ConText index on this
table, and one with a local partitioned index. The timing numbers given
are for comparison against each other only, and do not represent tuned,
publishable benchmark numbers.
The contains score is calculated from several statistics, including number of
documents containing the query token, and number of documents in the corpus.
In a local partitioned index, each partition is independent, so these numbers
are partition-scope. That is, it uses the number of documents in the partition
that contain the token, and the number of documents in the table partition. As
a result, the
Each partition of a local partitioned ConText index is an almost-independent
index. If many index partitions have to be scanned to fulfill a query, local
index query time can be much slower than global index query time on the same
data, because essentially you are scanning many indexes.
A plain text query like this:
with a global index shows a single domain index scan:
The local partitioned index, on the other hand, includes a partition iterator:
This shows that the query plan is looping over the index partitions. For each
partition, it is doing a separate domain index scan.
Each domain index scan incurs some fixed amount of overhead, plus a query
time which is proportional to the amount of indexed data and number of hits.
The partitions of the local index have much less data than the global index, so
that portion of the query time is lessened. However, the overhead is much
greater, because the same fixed overhead is incurred once per partition. This slowdown
increases as the number of partitions increases.
The following table illustrates this effect. We ran successive tests with
the same data at different levels of partitioning -- for instance, the first
test partitioned the 100,000 rows into 10 partitions of 10,000 rows, the second
test partitioned the 100,000 rows into 50 partitions of 2,000 rows, etc. up to
1000 partitions. We created a local ConText index each time, and ran the query
above, for
As the number of partitions increases, the overhead for scanning all those
index partitions swamps the query time -- at 1000 partitions, the local
index is about 250 times as slow.
Local partitioning may not be looking too good so far. Keep in mind, though,
that this is the worst case scenario -- where local partitioning is weakest.
After all, it makes no sense to split up your index unless you're going to
leverage that split -- and this is what we turn to next.
Also, these times are for selecting all 1000 hits. The runtimes would be
faster if you were interested in only the first 10 hits. In the
50 partition case, for instance, the query can get 10 hits from the first
partition alone, and would stop without scanning the other 49 partitions. This
would yield times very close to those of the global index.
We've seen that local partitioned indexes have the disadvantage when all the
partitions have to be scanned; the flip side of this is that they have a
distinct advantage when you only have to scan a handful of partitions.
The query class that most obviously benefits from local partitioning is
mixed text and structured, with structured condition on the partition key.
Finding news articles from this week that contain some phrase, for instance.
With global indexes, mixed queries work by getting all the text hits, then,
for each hit, evaluating the structured condition. If most of the text hits
don't match the structured condition, ConText does a lot of wasted work.
With local indexes, performance can be improved using partition pruning.
Partition pruning acts as a first-level filter, avoiding query on those
partitions that will never meet the structured criteria -- this results in
a higher text hit to structured hit ratio, and less wasted work in ConText.
For instance:
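A hedged sketch of such a query, using the illustrative ptest table
(range-partitioned on id) and an arbitrary search word:

  select id from ptest
   where contains(text, 'oracle') > 0
     and id < 100;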
With a global ConText index, the query plan is:
The ConText index produces a list of rowids in the table which contain the
token
With a local partitioned ConText index, the query plan does not look much
different from the global index:
And, in fact, it works pretty much the same -- ask the ConText index for a
list of rowids containing
The difference in this case is that partition pruning has occurred. Since
we have a local partitioned index, instead of one big index, we have 10 small
indexes. The optimizer is smart enough to know that 9 of them will never get
any hits (because of the structured condition on
Time for global index: 4.07 sec. Time for local partitioned index: 0.09 sec.
The example above prunes to a single index partition, but this technique can
still be useful with multiple partitions. The query plan simply iterates over
the selected index partitions. For instance:
Involves two partitions, using the query plan:
Time for global index: 3.59 sec. Time for local partitioned index: 0.20 sec.
We are scanning multiple partitions, so there's increased overhead, as the
section "Simple Text Queries" above illustrated. The query takes about twice
as long as the pruned-to-single-partition query above (which took 0.09).
However, scanning two index partitions is still a lot better than producing all
the text hits and filtering out the ones that don't match the range.
In order for true partition pruning to happen, the structured condition must
use literal values known when the query is parsed. If the structured condition
uses bind variables, the optimizer will be forced to produce a plan that
iterates over all index partitions. However, our informal experiments showed
that at execution time there was no performance difference between the
pruned query plan and the iterator query plan. Evidently, the SQL execution
engine's partition iterator recognizes that certain partitions will not yield
hits, and skips over them without opening an index scan. So, using bind
variables should not be cause for alarm. If you want to be sure, you can
use literal values, or use partition-extended tablenames in your select.
Score-sorted queries are generally slower on local partitioned indexes than
on global indexes. The need to scan multiple partitions vs. a single index
makes it slower just to produce the results, as we saw above in
"Simple Text Queries", but the major factor is
the sorting difference.
With a global index, ConText can "push down" the sort condition and return
the rows in sorted order:
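A hedged sketch of such a score-sorted query, using the illustrative ptest
table and an arbitrary search word (the FIRST_ROWS hint is commonly used to
encourage this push-down):

  select /*+ FIRST_ROWS */ id, score(1)
    from ptest
   where contains(text, 'oracle', 1) > 0
   order by score(1) desc;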
The contains produces the 10,000 rows containing
A local partitioned index, on the other hand, is split up, so we cannot
produce a list of all the hits and sort internally. Instead, the local
index scans every partition individually, then does a regular
The
Because of this effect, if your application does a lot of text-only, score-
sorted queries, local indexes are not a good choice. If you have score-sorted
mixed query that can do partition pruning, however, local indexes can offer
performance advantages even with the sort order by. For instance, taking
the same
The global index does a lot of wasted work producing and sorting rows that get
filtered out by the text condition. The global index here takes 3.61 sec,
while the local index finishes in only 1.31 sec, thanks to partition pruning.
Even better, if your structured condition prunes down to a single partition,
then the score sorting WILL be pushed down:
Note the absence of the
Another situation where local partitioning can improve performance is sort by
partition key. Selecting news articles that contain some phrase, with most
recent articles first, for instance. Or, in our example:
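A hedged sketch of such a query against the illustrative ptest table, which is
range-partitioned on id:

  select id from ptest
   where contains(text, 'oracle') > 0
   order by id desc;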
We've chosen
With a global index, the ConText index produces all the hits, then does a
This takes 2.34 sec for the first 50 rows.
With a local partitioned index, it is more efficient to do an ordered
partition iteration. The base table is range-partitioned, so the value of
The
On the other hand, with very selective queries, you may have to hit multiple
partitions, and we have already seen how this can put the local index at a
disadvantage. Changing the query to
One note of caution: do not use an inline view to do the top-n hit buffering.
Something like this:
Note that the
While local b-tree index queries can be run in parallel, parallel contains
queries are not supported in this version. To illustrate, let's modify our
test table with a numeric column
We can use the
Unfortunately, parallel query is not supported by the extensible indexing
framework in this version:
Note the lack of any parallel comments in the plan. This capability might be
added by the kernel's extensible indexing framework development in some future
version.
These tests show that local partitioned indexes are much faster than global
indexes as long as you are leveraging the partitioned nature of the base table,
either through significant partition pruning or ordered partition iteration.
Plain text queries are much slower because of the overhead incurred by scanning
multiple partitions -- especially if the number of partitions is large.
You might therefore conclude that, if your application does mostly plain text
queries or mixed queries that do not involve the partition key, global
indexes would be much better. However, do not overlook the maintenance
advantage of local partitioned indexes (discussed in the following sections).
Overall application performance (including admin and indexing time) might still
be better with local indexes, especially if the number of partitions is small.
A major benefit of using partitioned tables is easier maintenance. To quote
the concepts manual, "partitions enable data management operations like data
loads, index creation, and data purges at the partition level, rather than
on the entire table, resulting in significantly reduced times for these
operations".
But what about the indexes? When you do partition-level DDL, a global
ConText index built on the table is a monolithic entity, and knows nothing
about table partitions. The only option is for the whole index to become
unusable, so you have to rescan the entire table and rebuild the index from
scratch -- even though most of the data has not changed.
A local partitioned index, on the other hand, can take more limited and
appropriate action. If you truncate a table partition, for instance, the
index simply truncates the corresponding index partition's internal tables
automatically -- no extra rebuild time required. At worst, just the affected
index partition is marked unusable. That partition must be rebuilt (which
takes less time than a full rebuild because it has just a portion of the data)
but the other index partitions are still available and can be queried.
Rebuilding index partitions is covered below in the section
"Rebuilding UNUSABLE Index Partitions".
We'll now examine the effects of the table partition operations individually.
This renames a table partition. It has no effect on the index. In fact,
the index is not even notified when this happens. The index partition name
does not change to match the table partition name. To rename an index
partition, use alter index:
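For instance (index and partition names are illustrative):

  alter index ptestx rename partition p1 to part1;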
This removes all data in a table partition. The internal tables of the
affected index partition are truncated. The index partition remains valid
and usable.
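The triggering statement is ordinary table DDL, for example (table and
partition names are illustrative):

  alter table ptest truncate partition p1;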
This removes the data and definition of a table partition. The corresponding
index partition's meta-data and internal tables are dropped. The remaining
index partitions are not affected. The index partition id's (
This creates a new table partition. This new table partition will be empty,
because with
The name of the new index partition will be the same as the new table
partition name. If that name is already used by an existing index partition,
a system-generated name is assigned instead.
You cannot control the storage characteristics of the new index partition.
It will use the default storage values for the index. You can override this
by rebuilding the partition.
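For example, with the illustrative ptest table from earlier:

  alter table ptest add partition p4 values less than (400);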
This reconstructs the table partition's data. For example, the command above
moves
This takes an existing partition and splits it into two adjacent partitions.
The ConText index first drops the existing partition's meta-data and internal
tables, then creates two new index partitions. The internal tables for these
new index partitions will be created, but they will be empty.
After the operation, the space taken up by the old index partition is free,
and the two new index partitions are empty and
The two new partitions will have the same name as their table partitions.
If an existing index partition already has that name, a system-generated name
is assigned instead.
You cannot control the storage characteristics of the two new index
partitions. They will use the default storage values for the index. You can
override this when rebuilding the partitions.
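For example, splitting the illustrative ptest partition p3 into two adjacent
ranges (the split point and new partition names are illustrative):

  alter table ptest split partition p3 at (250)
    into (partition p3a, partition p3b);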
Oracle8i interMedia Text 8.1.6 - Technical Overview
Garrett Kaminaga, Product Development, Database and Application Server Technologies, Oracle HQ
Table of Contents
Overview of 8.1.6 Improvements
Upgrading 8.1.5 to 8.1.6
Automated Upgrade
select * from ctx_version
Manual Upgrade
this grants new, needed database privileges to ctxsys
this brings your ctxsys data dictionary to 8.1.6
this re-creates all public and private package headers
this re-creates all public and private package bodies
this re-creates the index type body
Upgrading Existing Indexes
Datastore Improvements
USER_DATASTORE
begin
ctx_ddl.create_preference('mystore','user_datastore');
ctx_ddl.set_attribute('mystore','procedure','foo');
end;
procedure foo (in rowid, in out clob)
ctx_ddl.set_attribute('mystore','output_type','varchar2');
procedure foo (in rowid, in out varchar2)
NESTED_DATASTORE
create type nt_rec as object (
lno number, -- line number
ltxt varchar2(80) -- text of line
);
create type nt_tab as table of nt_rec;
create table mytab (
id number primary key,
dummy char(1),
doc nt_tab
)
nested table doc store as myntab;
insert into mytab values (1, null, nt_tab());
insert into table(select doc from mytab where id = 1)
values (1, 'this is the first line');
insert into table(select doc from mytab where id = 1)
values (2, 'this is the second line');
ctx_ddl.create_preference('mynt','nested_datastore');
ctx_ddl.set_attribute('mynt','nested_column','doc');
ctx_ddl.set_attribute('mynt','nested_type','scott.nt_tab');
ctx_ddl.set_attribute('mynt','nested_lineno','lno');
ctx_ddl.set_attribute('mynt','nested_text', 'ltxt');
create index myidx on mytab(dummy)
indextype is ctxsys.context
parameters ('datastore mynt');
select <nested_text>
from table(select <nested_column>
from <base_table>
where rowid = <current_row_rowid>)
order by nested_table_id, <nested_lineno>
select id from mytab where contains(dummy, 'first and second')>0
Filter Improvements
INSO_FILTER Character Set Support
Format Column
create table hdocs (
id number primary key,
fmt varchar2(10),
text varchar2(80)
);
insert into hdocs values (1, 'binary', '/docs/myword.doc');
insert into hdocs values (2, 'text', '/docs/index.html');
create index hdocsx on hdocs(text)
indextype is ctxsys.context
parameters ('datastore file_datastore filter inso_filter
format column fmt');
Charset Column
create table hdocs (
id number primary key,
fmt varchar2(10),
cset varchar2(20),
text varchar2(80)
);
insert into hdocs values (1, 'text', 'JA16EUC', '/docs/tekusuto.euc');
insert into hdocs values (2, 'text', 'JA16SJIS', '/docs/tekusuto.sjs');
create index hdocsx on hdocs(text)
indextype is ctxsys.context
parameters ('datastore file_datastore filter inso_filter
format column fmt
charset column cset');
CHARSET_FILTER
Section Group Improvements
BASIC_SECTION_GROUP
HTML_SECTION_GROUP
<META NAME="author" CONTENT="Mr. Roboto">
ctx_ddl.create_section_group('mysg','html_section_group');
ctx_ddl.add_field_section('mysg','author','meta@author');
<META NAME="author" CONTENT="Mr. Roboto">
<META@AUTHOR>Mr.Roboto</META@AUTHOR>
Roboto within author
XML_SECTION_GROUP Doctype Limiter
<!DOCTYPE contact>
<contact>
<address>506 Blue Pool Road</address>
<email>dudeman@radical.com</email>
</contact>
ctx_ddl.add_field_section('mysg','email', 'email');
ctx_ddl.add_field_section('mysg','address','address');
<!DOCTYPE mail>
<mail>
<address>dudeman@radical.com</address>
</mail>
ctx_ddl.add_field_section('mysg','email','email');
ctx_ddl.add_field_section('mysg','address','(contact)address');
ctx_ddl.add_field_section('mysg','email','(mail)address');
ctx_ddl.add_field_section('mysg','sec1','(type1)tag1');
ctx_ddl.add_field_section('mysg','sec2','tag1');
radical within email
XML_SECTION_GROUP Attribute Sections
<comment author="jeeves">
I really like interMedia Text
</comment>
ctx_ddl.add_attr_section('mysg','author','comment@author');
jeeves within author
jeeves
<comment author="jeeves">
I really like interMedia Text
</comment>
<comment author="bertram">
Me too
</comment>
(cryil and bertram) within author
<!DOCTYPE foo [
<!ELEMENT foo (bar)>
<!ELEMENT bar (#PCDATA)>
<!ATTLIST bar
rev CDATA "8i">
]>
<foo>
<bar>whatever</bar>
</foo>
ctx_ddl.add_attr_section('mysg','barrev','bar@rev');
8i within barrev
Attribute Value Sensitive Section Search
<comment author="jeeves">
I really like interMedia Text
</comment>
jeeves within comment@author
interMedia within comment where (@author = "jeeves")
Dynamic Add Section
add zone section <section_name> tag <tag>
add field section <section_name> tag <tag> [ visible | invisible ]
alter index <indexname> rebuild
parameters ('add zone section tsec tag title')
alter index <indexname> rebuild
parameters ('add field section asec tag author')
alter index <indexname> rebuild
parameters ('add field section asec tag author visible')
AUTO_SECTION_GROUP
<book>
<author>Neal Stephenson</author>
<title>The Diamond Age</title>
<description>
Decades into our future ...
</description>
</book>
diamond within description
diamond within description
diamond within DESCRIPTION
<DESCRIPTION>Diamond Earring</DESCRIPTION>
AUTO_SECTION_GROUP Stop Sections
ctx_ddl.add_stop_section(<group name>, <tag>)
ctx_ddl.add_stop_section('mysg','description')
ctx_ddl.add_stop_section('mysg','DESCRIPTION')
ctx_ddl.add_stop_section('mysg','description')
ctx_ddl.add_stop_section('mysg','(bar)foo')
add stop section <tag>
alter index <indexname> rebuild
parameters ('add stop section author')
AUTO_SECTION_GROUP Attribute Sections
<book author="Edmund Wells">
<title>Rarnaby Budge</title>
</book>
wells within book@author
Nested Within
<book>
<title>Star Wars</title>
</book>
<video>
<title>Star Wars</title>
</video>
(Star Wars within title) within video
<A><B>foo</B></A>
(foo within A) within B
(foo within B) within A
<A><B>foo</B>bar</A>
A/B
(x within A) within B
A/*/B
Lexer Improvements
BASIC_LEXER Generic Base Lettering
BASIC_LEXER THEME_LANGUAGE
ctx_ddl.set_attribute('mylexer','theme_language','french');
BASIC_LEXER Theme Admissibility
JAPANESE/KOREAN_LEXER UTF-8 Support
MULTI_LEXER
ctx_ddl.create_preference('english_lexer', 'basic_lexer');
ctx_ddl.set_attribute('english_lexer','index_themes','yes');
ctx_ddl.create_preference('french_lexer', 'basic_lexer');
ctx_ddl.set_attribute('french_lexer','base_letter','yes');
ctx_ddl.create_preference('german_lexer', 'basic_lexer');
ctx_ddl.set_attribute('german_lexer','composite','german');
ctx_ddl.set_attribute('german_lexer','alternate_spelling','german');
ctx_ddl.set_attribute('german_lexer','mixed_case','yes');
ctx_ddl.create_preference('global_lexer','multi_lexer');
ctx_ddl.add_sub_lexer(<multi_lexer>, <language>, <lang-specific lexer>)
ctx_ddl.add_sub_lexer('global_lexer', 'german', 'german_lexer');
ctx_ddl.add_sub_lexer('global_lexer', 'french', 'french_lexer');
ctx_ddl.add_sub_lexer('global_lexer', 'default','english_lexer');
create table globaldoc (
doc_id number primary key,
lang varchar2(10),
text clob
);
insert into globaldoc values (1, 'german', 'Ich bin Berliner');
insert into globaldoc values (2, 'french', 'Je suis Parisian');
insert into globaldoc values (3, 'english', 'I''m from New Yawk');
create index globalx on globaldoc(text)
indextype is ctxsys.context
parameters ('lexer global_lexer
language column lang');
ctx_ddl.add_sub_lexer('global_lexer', 'german', 'german_lexer', 'ger');
Wordlist Improvements
AUTO Setting for STEMMER and FUZZY_MATCH
ctx_ddl.create_preference('mywl','basic_wordlist');
ctx_ddl.set_attribute('mywl', 'STEMMER', 'AUTO');
ctx_ddl.set_attribute('mywl', 'FUZZY_MATCH', 'AUTO');
WILDCARD_MAXTERMS
ctx_ddl.create_preference('mywl','basic_wordlist');
ctx_ddl.set_attribute('mywl', 'WILDCARD_MAXTERMS', '2500');
SUBSTRING_INDEX
ctx_ddl.create_preference('mywl','basic_wordlist');
ctx_ddl.set_attribute('mywl', 'SUBSTRING_INDEX', 'TRUE');
Storage Improvements
P_TABLE_CLAUSE for BASIC_STORAGE
create table DR$<indexname>$P (
pat_part1 varchar2(61),
pat_part2 varchar2(64),
primary key (pat_part1, pat_part2)
) organization index
<p_table_clause>
Index Creation and Maintenance
Parallel Indexing
create index textidx on doctab(text)
indextype is ctxsys.context
parallel 3
ctx_output.start_log('mylog')
PL/SQL Sync and Optimize
ctx_ddl.sync_index(<idx_name>)
ctx_ddl.optimize_index(<idx_name>, <optlevel>, <maxtime>)
Query Language Improvements
New Accumulate Scoring
Nested Within
(dog within title) within book
BROWSE_WORDS
ctx_query.browse_words(<index_name>, <seed>, <restab>,
<browse_id>, <numwords>, <direction>)
browse_id number
word varchar2(64)
doc_count number
exec ctx_query.browse_words('myidx','dog','myres', 5);
select word, doc_count from myres order by word;
WORD DOC_COUNT
-------- ---------
CZAR 15
DARLING 5
DOG 73
DUNK 100
EAR 3
ctx_query.browse_words(<index_name>, <seed>, <resarr>,
<numwords>, <direction>)
type browse_rec is record (
word varchar2(64),
doc_count number
);
type browse_tab is table of browse_rec index by binary_integer;
Document Services Improvements
Rowid Input
ctx_doc.set_key_type('ROWID');
ctx_doc.set_key_type('PRIMARY_KEY');
ctx_adm.set_parameter('CTX_DOC_KEY_TYPE', CTX_DOC.TYPE_ROWID);
In-Memory Document Services
ctx_doc.filter(<index_name>, <textkey>, <result>, <plaintext>)
ctx_doc.markup(<index_name>, <textkey>, <text_query>, <result>,
<plaintext>, <tagset>, <starttag>, <endtag>,
<prevtag>, <nexttag>)
ctx_doc.highlight(<index_name>, <textkey>, <text_query>,
<restab>, <plaintext>)
type highlight_rec is record (
offset number;
length number;
);
type highlight_tab is table of highlight_rec index by binary_integer;
ctx_doc.themes(<index_name>, <textkey>, <restab>, <full_themes>)
type theme_rec is record (
theme varchar2(2000);
weight number;
);
type theme_tab is table of theme_rec index by binary_integer;
ctx_doc.gist(<index_name>, <textkey>, <result>, <glevel>, <pov>,
<numparagraphs>, <maxpercent>)
Thesaurus Improvements
Thesaurus Maintenance
ALTER_THESAURUS
ctx_thes.alter_thesaurus(<tname>, <op>, <operand>)
ctx_thes.alter_thesaurus('oldthesname','RENAME','newthesname');
ctx_thes.alter_thesaurus('mythes','TRUNCATE');
ALTER_PHRASE
ctx_thes.alter_phrase(<tname>, <phrase>, <op>, <operand>)
ctx_thes.alter_phrase('mythes','old phrase','RENAME','new phrase');
ctx_thes.alter_phrase('mythes','crane','RENAME','crane(bird)');
ctx_thes.alter_phrase('mythes','crane(bird)','RENAME','crane(stork)');
ctx_thes.alter_phrase('mythes','crane(bird)','RENAME','crane');
ctx_thes.alter_phrase('mythes','laser','PT');
ctx_thes.alter_phrase('mythes','ships','SN', 'sea-going vessels');
DROP_PHRASE
ctx_thes.drop_phrase(<tname>, <phrase>)
A
BT B
B
BT C
ctx_thes.drop_phrase('mythes','B');
A
BT C
A B
\ /
C
/ \
D E
A B
|\/|
|/\|
D E
CREATE_RELATION
ctx_thes.create_relation(<tname>, <phrase>, <rel>, <relphrase>);
dog
BT animal
ctx_thes.create_phrase('mythes', 'dog');
ctx_thes.create_phrase('mythes', 'animal', 'BT', 'dog');
ctx_thes.create_relation('mythes', 'dog', 'BT', 'animal');
ctx_thes.create_relation('mythes', 'dog', 'french:', 'chien');
DROP_RELATION
ctx_thes.drop_relation(<tname>, <phrase>, <rel>, <relphrase>);
ctx_thes.create_relation('mythes', 'A', 'BT', 'B');
ctx_thes.drop_relation('mythes', 'A', 'BT', 'B');
ctx_thes.create_relation('mythes', 'A', 'SYN', 'B');
ctx_thes.create_relation('mythes', 'A', 'SYN', 'C');
ctx_thes.drop_relation('mythes', 'A', 'SYN');
ctx_thes.create_relation('mythes', 'B', 'SYN', 'C');
ctx_thes.create_relation('mythes', 'B', 'USE', 'A');
ctx_thes.drop_relation('mythes', 'A', 'PT');
ctx_thes.drop_relation('mythes', 'dog', 'french:', 'chien');
ctx_thes.drop_relation('mythes','A', 'BT');
ctx_thes.drop_relation('mythes','A', 'french:');
Maintenance Security
PL/SQL Table Expansion
animal
NT dog
NT cat
NT bird
declare
exp varchar2(80);
begin
exp := ctx_thes.nt('animal',1,'mythes');
dbms_output.put_line(exp);
end;
{animal}|{dog}|{cat}|{bird}
type exp_rec is record (
xrel varchar2(12),
xlevel number,
xphrase varchar2(256)
);
type exp_tab is table of exp_rec index by binary_integer;
A
NT B
B
NT C
C
NT D
xrel xlevel xphrase
------ ------ -------
PHRASE 0 A
NT 1 B
NT 2 C
NT 3 D
declare
exp ctx_thes.exp_tab;
begin
ctx_thes.nt(exp, 'animal', 1, 'mythes');
for i in 1..exp.count loop
dbms_output.put_line(lpad(' ',2*exp(i).xlevel)||
exp(i).xrel||' '||exp(i).xphrase);
end loop;
end;
PHRASE animal
NT dog
NT cat
NT bird
sn(<phrase>, [<tname>]);
thes_tt(<restab>, [<tname>])
Oracle8i interMedia Text 8.1.7 - Technical Overview
Garrett Kaminaga, Product Development, Database and Application Server Technologies, Oracle HQ
This is a technical overview of the improvements found in interMedia Text,
version 8.1.7. This is intended for an audience familiar with version 8.1.6.
If you are new to interMedia Text, please start with the 8.1.5, then 8.1.6
technical overviews.
This document is only a summary of new features and internal changes. As
always, please refer to the 8.1.7 interMedia Text documentation addendum
for detailed information.
==============================================================================
==============================================================================
Table of Contents
Overview of 8.1.7 Improvements
Upgrading 8.1.6 to 8.1.7
Catalog Index
Datastore Improvements
Filter Improvements
Lexer Improvements
Section Group Improvements
Stoplist Improvements
Wordlist Improvements
Index Creation and Maintenance
Query Improvements
Knowledge Base Improvements
Document Services Improvements
Thesaurus Improvements
==============================================================================
==============================================================================
Overview of 8.1.7 Improvements
The improvements in version 8.1.6 of interMedia Text were mainly in the
areas of structured document support and heterogeneous document sets. While
8.1.7 includes some improvements which extend heterogeneous document set
handling, the focus is performance and ease-of-use.
For performance, 8.1.7 offers a new indextype providing fast mixed
query for small, simple text fields, prefix indexing for faster wildcard
searching, and many internal tweaks -- suggested by exhaustive lab
analysis on real-world data -- that improve query performance.
For ease-of-use, 8.1.7 has migrated entirely to the oracle shadow process,
eliminating the need for extproc setup. It also provides a new datastore
for easy multi-column concatenation, primary-key-less indexing, and many
other changes designed to reduce the amount of work needed to create and
maintain an interMedia Text application.
Other interesting features include a fuzzy score passthrough, a new
tokenizing document service, and German composite query improvements.
==============================================================================
==============================================================================
Upgrading 8.1.6 to 8.1.7
Automated Upgrade
-----------------
interMedia Text is not integrated with Oracle's Database Migration Assistant
tool (DBMA) in this version. If you have a prior version of interMedia Text
installed, you must upgrade manually.
Manual Upgrade From 8.1.5
-------------------------
This process assumes that you are starting with a valid 8.1.5 ctxsys data
dictionary, and that the system is not being used for index, DML, or query.
Please use SQL*Plus to run all scripts, and NOT svrmgrl.
1. run ?/ctx/admin/upgrade/s0801060.sql as SYS
this grants new, needed database privileges to ctxsys
2. run ?/ctx/admin/upgrade/u0801060.sql as CTXSYS
this upgrades the ctxsys schema to 8.1.6
Now follow the steps below for upgrade from 8.1.6 to 8.1.7.
Manual Upgrade From 8.1.6
-------------------------
This process assumes that you are starting with a valid 8.1.6 ctxsys data
dictionary, and that the system is not being used for index, DML, or query.
Please use SQL*Plus to run all scripts, and NOT svrmgrl.
1. run ?/ctx/admin/upgrade/s0801070.sql as SYS
this grants new, needed database privileges to ctxsys
2. run ?/ctx/admin/upgrade/u0801070.sql as CTXSYS
this upgrades the ctxsys schema to 8.1.7
3. run ?/ctx/admin/dr0typec.pkh as CTXSYS
this creates the CTXCAT index type, which is new for 8.1.7
4. run ?/ctx/admin/dr0pkh.sql as CTXSYS
this re-creates all public and private package headers
5. run ?/ctx/admin/dr0plb.sql as CTXSYS
this re-creates all public and private package bodies
6. run ?/ctx/admin/dr0type.plb as CTXSYS
this re-creates the CONTEXT index type body
7. run ?/ctx/admin/dr0typec.plb as CTXSYS
this creates the CTXCAT index type body, new for 8.1.7
8. run ?/ctx/admin/dr0itypc.sql as CTXSYS
this creates the bindings and index type for CTXCAT, new for 8.1.7
9. check for any invalid ctxsys objects and alter compile as needed
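For step 9, a quick check could look something like the sketch below, using the
standard ALL_OBJECTS dictionary view; any invalid objects can then be
recompiled with the appropriate alter ... compile statement:

  select object_name, object_type
    from all_objects
   where owner = 'CTXSYS'
     and status = 'INVALID';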
Upgrading Existing Indexes
--------------------------
Nothing needs to be done to upgrade existing 8.1.6 context indexes.
These should still work without any problems in 8.1.7. However, you may
choose to re-build the $X index on the $I table. Indexes built with 8.1.7
will have a $X index comprised of 5 columns. Indexes that were built with
8.1.5 or 8.1.6 have a $X index comprised of 3 columns. While a $X with the
old definition will still work, we believe the new $X index offers better
performance for large data sets.
A utility script -- ?/ctx/sample/util/dr817rx.sql -- is provided to rebuild
existing $X indexes. It should be run as ctxsys, and should be passed the
index owner and index name, as in:
sqlplus ctxsys/ctxsys
...
SQL> @dr817rx scott myindex
Please ensure that ctxsys' TEMP tablespace has sufficient space to
re-create the $X index.
==============================================================================
==============================================================================
Catalog Index
What is the Catalog Index?
--------------------------
The context index type is designed for collections of news articles,
technical reports, resumes -- relatively large, coherent documents.
This does not describe the majority of database text, which is usually
short text fragments -- names, item descriptions, addresses, etc.
While a context index can be applied to these text fragments, it's usually
overkill. Filtering, section searching, relevance ranking, and the rest of
the rich feature set that makes sense when applied to document collections
are less useful when applied to text fragments. Furthermore, the maintenance
demands of a context index -- asynchronous DML, optimization -- are too
troublesome for documents that are just a few words long. As a result, these
text fragments are usually unindexed or b-tree indexed, limiting search to
exact equality or prefix searches with LIKE.
Text fragments need text searching. Since the context index is not
optimal for fragments, we designed one that is -- a catalog index.
Using a different index structure and sacrificing some of the context index's
flexibility and feature set, the catalog index provides text indexing that
is free from maintenance overhead and text searching that has structured
query support and faster response time than the context index.
The description of the catalog index type in this section is a high-level
overview. If, after reading this section, you find yourself wondering why
ctxcat has a particular limitation, or how it all works, please refer to
"Ctxcat Index Internals". If you would like to get an idea of ctxcat indexing
speed, required space, or query performance, please see "Ctxcat Performance
Overview". Both are separate documents which should be available from
wherever you got this document.
Creating a CTXCAT Index
-----------------------
As an example in this chapter of the overview, we'll use an on-line
auction site which holds its items in a table like this:
create table auction (
item_id NUMBER PRIMARY KEY, -- auction item identifier
item_desc VARCHAR2(80), -- free-form item description
price NUMBER, -- current price of the item
end_date DATE -- end time of the auction
);
We would like to catalog-index the ITEM_DESC field. The catalog index is a
domain index type just like the context index. This means that to create a
catalog index, simply use create index and specify the catalog index type
(CTXCAT):
create index auctionx on auction(item_desc)
indextype is ctxsys.ctxcat;
You can specify preferences using a parameters clause, just like the context
index. However, the keywords you can use in a ctxcat index parameters
string are limited to:
INDEX SET
LEXER
MEMORY
STOPLIST
STORAGE
WORDLIST
Index set is new and specific to ctxcat -- we'll discuss this a bit later.
ctxcat does not support the datastore, filter, or section group keywords,
so it is limited to plain-text, unstructured data stored directly in database
columns. Language, format, and charset columns are also not supported,
meaning that the data must be homogeneous. By extension, this also disallows
use of the multi-language stoplist and multi-language lexer.
The CTXCAT Query Language
-------------------------
Once the ctxcat index is created, you query using the operator catsearch
instead of contains:
select item_desc from auction
where catsearch(item_desc, 'oracle', null)>0;
This finds all rows in auction which have "oracle" in the item_desc text.
Like contains, the first argument is the text column, and the second is the
query string. The third argument we'll talk about below -- just ignore it
for now. There is no score label argument, because catsearch does not
provide scoring -- it is a simple boolean search.
The catsearch operator uses a much simpler query language than contains,
with the intent that web-style queries can be passed through to catsearch
without extra parsing. The query language rules are:
* Multiple words are treated as an AND.
* Vertical bar is used for OR.
* A leading plus sign on words is ignored and discarded
* A leading minus sign excludes words (the query must have at least one included word)
* Double quotes delimit phrases.
* Parentheses group operations.
Or, in comparison:
catsearch Query          Equivalent contains query
---------------          -------------------------
A B                      A & B
A | B                    A | B
+A -B                    A ~ B
"A B"                    A B
"A B" C                  (A B) & C
(A B) | C                (A & B) | C
AND, OR, AND-NOT, and PHRASE are the only functions currently available in the
catsearch query language. catsearch does not support any of the other
features found in the context query language such as accum, within, fuzzy,
stem, wildcard, thesaurus, near, etc.
Index Maintenance
-----------------
Unlike context indexes, ctxcat indexes require no special maintenance.
DML is completely transactional. On an insert, the new row is immediately
indexed and the information inserted into $I. Newly inserted rows can be
found by catsearch queries immediately -- even before they are committed.
Deletes and updates are also transactional. The transactional DML processing
means that a ctxcat index will cause DML on the base table to slow down. This
is just like the effect of a b-tree index, but the impact of a ctxcat index is
much larger.
Optimization is also unnecessary for ctxcat indexes. The index structure
ensures that it is always largely optimal.
Structured Query Support
------------------------
The ctxcat index also provides structured query support, so that query
criteria for other columns can be efficiently combined with the text search.
Let's continue our auction example. A plausible requirement is to allow
people to limit their auction search to a price range. We could certainly
add this criterion to the top-level query:
select item_desc from auction
where catsearch(item_desc, 'oracle', null) > 0
and price < 15
and it will work, but may be slow as the text results are individually post-
filtered to verify the price range. It would be much more efficient to
process both the text and range criteria at once - just like a concatenated
b-tree index on two columns is faster than two separate indexes when querying
on two column criteria. ctxcat indexes support something similar using
index sets.
We start by creating an index set:
ctx_ddl.create_index_set('aucset');
An index set, like a stoplist, is only a holding object. We have to add
content in order to define its behavior. Index sets hold indexes. Each
index is an ordered list of base table columns which will be used in mixed
query. In our case, we want to perform a mixed query with price criteria,
so we add an index on price to the index set:
ctx_ddl.add_index('aucset','price');
The index set can take up to 99 indexes, each made up of ordered lists of
columns from the base table. However, there are two important restrictions
on these columns:
1. only allowable types are NUMBER, DATE, and CHAR/VARCHAR2 with maximum
length <= 30 bytes
2. NULL values are not allowed. A NULL in a user column will cause an
index error and the row will not be indexed.
So far we have created our index set and added a "price" index. When we
use this index set in creating our index:
create index auctionx on auction(item_desc)
indextype is ctxsys.ctxcat
parameters ('index set aucset')
ctxcat includes the "price" column of the base table in the text index. This
is not the same as the widely-used context technique of concatenating multiple
columns' data using user_datastore. ctxcat does it quite differently -- see
the technical description document for more details.
Once price is part of the $I index, we can reference it in the third
parameter of catsearch:
select item_desc from auction
where catsearch(item_desc, 'oracle', 'price < 15') > 0
which allows the ctxcat index to process both the text query and structured
predicate at the same time.
Note that you can't put just any structured criteria in there. It is
restricted and simplified. The restrictions are:
* The left-hand side (the column name) must be a column named in
at least one of the indexes of the index set
* The left-hand side must be a plain column name -- no functions,
expressions, etc.
* The operator is limited to: <, <=, =, >=, >, BETWEEN, and IN
* The right-hand side must be composed of literal values -- no functions,
expressions, other columns, subselects, etc.
* Multiple criteria can be combined with AND. OR is not supported.
So all these are supported:
catsearch(text, 'dog', 'foo > 15')
catsearch(text, 'dog', 'bar = ''SMITH''')
catsearch(text, 'dog', 'foo between 1 and 15')
catsearch(text, 'dog', 'foo = 1 and abc = 123')
And these are NOT supported:
catsearch(text, 'dog', 'upper(bar) = ''A''')
catsearch(text, 'dog', 'bar LIKE ''A%''')
catsearch(text, 'dog', 'foo = abc')
catsearch(text, 'dog', 'foo = 1 or abc = 3')
Of course, this is still a database, so if you really need them you can
always move these more sophisticated criteria out of the catsearch and back
in the main select -- it may be a bit slower, but the query can be processed.
You can also use the third argument to order the results:
select item_desc from auction
where catsearch(item_desc, 'oracle', 'order by price') > 0
Note that this query does not have an order by clause. The rows will
come back from catsearch in price order without it. In fact, adding an order by
clause will actually slow the query down. The third catsearch argument is
not integrated with the oracle SQL query processor, so it will not be aware
that the rows coming back from catsearch are already in order. As a result,
it will go through an unneeded SORT ORDER BY.
You can combine structured criteria and order by, and also use the
ASC and DESC keywords, as long as all columns are going in the same
direction:
catsearch(text, 'dog', 'order by foo desc, bar desc')
catsearch(text, 'dog', 'foo = 15 order by bar')
Some combinations of structured criteria and order by might not be able to
be processed efficiently by the catsearch index. When this happens, the
query will fail with the error:
DRG-10844: index cannot execute this structured predicate efficiently
This can happen when, for instance:
* a column not in the index set is referenced
* order by columns are going in different directions
* the order by column list is not found in the index set
as well as other, more arcane occasions. When this error occurs, you can
simply move parts of the structured criteria or order by out of the
third argument and back in the main select.
Creating Catalog Indexes -- Further Notes
-----------------------------------------
The catalog index does not support document services (highlighting,
markup, themes, gists) or query services (explain, query feedback,
browse words). Support may be added in a future release.
The catalog index does not support theme indexing -- if given a lexer
preference with INDEX_THEMES set, it will ignore this setting.
In order to support Japanese, the catalog index must be built with prefix
indexing -- this feature is new for 8.1.7 and is covered later in this
document.
The catalog index has its own set of system parameters for setting
preference defaults. This allows an installation to have one set of
default preferences for context indexes, and another set for ctxcat
indexes. The new system parameters are:
DEFAULT_CTXCAT_LEXER
DEFAULT_CTXCAT_STOPLIST
DEFAULT_CTXCAT_WORDLIST
DEFAULT_CTXCAT_STORAGE
DEFAULT_CTXCAT_INDEX_SET
While you can specify a wordlist preference for ctxcat indexes, most of
the attributes do not apply, since the catsearch query language does not
support wildcarding, fuzzy, and stemming. The only attribute of the wordlist
preference that is useful is PREFIX_INDEX, for Japanese data.
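For example, a sketch of a wordlist preference for this case (the preference
name is illustrative; PREFIX_INDEX takes a boolean setting):

  ctx_ddl.create_preference('mycatwl', 'basic_wordlist');
  ctx_ddl.set_attribute('mycatwl', 'PREFIX_INDEX', 'TRUE');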
ctxcat uses a $I table to store the text information, just like context.
However, ctxcat $I tables have many more rows and take up much more space
than context $I tables for the same data. Ensure that the indexing session
has sufficient rollback and adequate storage space.
Index memory can be decreased to reduce rollback usage. ctxcat indexes have
a different structure than context indexes, and do not produce sub-optimal
indexes with a low index memory setting. ctxcat indexing time is also
unaffected by the indexing memory unless set very low.
The ctxcat index will create several regular (b-tree) indexes on its $I
table. Because they are created in a PL/SQL stored procedure owned by CTXSYS,
these b-tree indexes are created using the temporary tablespace of CTXSYS.
Ensure that this is set to a tablespace with sufficient working space.
For further technical and quantitative information, see the separate
documents "Ctxcat Index Internals" and "Ctxcat Performance Overview".
How to Choose
-------------
In most cases, the choice between ctxcat and context indexes will be easy to
make. The limitations of ctxcat preclude its use in many situations, while
its zero maintenance and structured query support make it clearly superior in
others. Use ctxcat for text fragments and context for information retrieval.
ctxcat and context are complements, not competitors.
There is possible overlap on medium-size (500 - 2000 byte) unstructured
plain text documents which need very simple text searching and structured
query support -- resumes which need to be keyword searched and ordered by
date, for example. The application needs to balance context's indexing speed
and storage efficiency against ctxcat's structured query support. Unless the
performance of the context index is unacceptable, the context index should
probably be preferred in such cases. While the application may be able to
bear ctxcat's limitations now, context offers more flexibility if the
application requirements expand in the future.
Finally, if you simply cannot decide, you can always elect to create both.
Because they are different index types, it is possible to create both a
context and a ctxcat index on the same column.
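For instance, a sketch of creating both on the auction table (the index names
are illustrative, and the aucset index set is the one defined earlier):

  create index auction_ctx on auction(item_desc)
    indextype is ctxsys.context;

  create index auction_cat on auction(item_desc)
    indextype is ctxsys.ctxcat
    parameters ('index set aucset');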
==============================================================================
==============================================================================
Datastore Improvements
Multi-Column Datastore
----------------------
Many customers use the user datastore to concatenate multiple table columns
into a single virtual document, for more efficient mixed query. Some cases
where user datastore is used in this way are good candidates for the ctxcat
index type, discussed above, but many will need to stick with the context
index for its more powerful query language and more extensive feature set.
(The question of ctxcat vs. context section search is addressed further in
"Ctxcat Performance Overview")
Since concatenation of multiple base table columns is such a popular use of
the user datastore, we've created a datastore specifically for this purpose:
the multi_column_datastore.
Using the multi column datastore is easier than doing the same thing in
the user datastore -- you simply create your preference:
ctx_ddl.create_preference('mymulti','MULTI_COLUMN_DATASTORE');
then set your column list:
ctx_ddl.set_attribute('mymulti', 'COLUMNS', 'column1, column2, column3');
create the index using the multi-datastore:
create index myindex on mytable(columnx)
indextype is ctxsys.context
parameters('datastore mymulti')
and at indexing time, the engine will create a virtual document from
all the specified columns like this:
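Roughly, each column's contents are wrapped in tags named after the column --
something like the sketch below (the precise tag form may differ):

  <column1>
  contents of column1
  </column1>
  <column2>
  contents of column2
  </column2>
  <column3>
  contents of column3
  </column3>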
Oracle Text 9.0.1
Technical Overview
Table of Contents
Overview of 9.0.1 Improvements
Upgrading 8.1.7 to 9.0.1
Manual Upgrade From 8.1.7
SQL*Net Listener Setup
Upgrading Existing Indexes
Local Partitioned Indexes
Introduction
Creating Local Partitioned Indexes
Local Partitioned Indexes
Why Local Partitioned Indexes?
With a global index, partition-level DDL invalidates the entire index. You
could truncate one partition, and spend days rebuilding the index, even
though the data in the other partitions hasn't changed. A local
partitioned index truncates the corresponding index partition and the rest
of the index doesn't even notice. This makes managing large sets of data
much easier. This is discussed further below in the section
"Partitioned Table Maintenance"
Oracle Text allows only one session to sync a given index at a time. Users
with large datasets have found this too limiting -- the system can't keep
up with huge amounts of incoming data. With local partitioning, the index
partitions are independent, and can be sync'ed in parallel. We discuss
this further in the section
"Local Partitioned Index Maintenance".
A local partitioned ConText index can take advantage of partition pruning
and partition iteration for faster mixed query and sorted query. In
contrast, a global index wastes time producing rowids in other partitions
which are simply filtered out by the query plan. Query is examined in the
section "Query Considerations".
Creating a Local Partitioned Index
create table ptest (id number, text varchar2(80))
partition by range (id)
(partition p1 values less than (100),
partition p2 values less than (200),
partition p3 values less than (300));
LOCAL
keyword:
create index ptestx on ptest(text)
indextype is ctxsys.context
local;
local keyword must be after the indextype clause.
You can stick a parameters clause before or after the
local keyword:
create index ptestx on ptest(text)
indextype is ctxsys.context
parameters ('filter ctxsys.inso_filter')
local
create index ptestx on ptest(text)
indextype is ctxsys.context
local
parameters ('filter ctxsys.inso_filter')
p1, p2, and p3. You can
override this by specifying individual
partition clauses:
create index ptestx on ptest(text)
indextype is ctxsys.context
local
(partition larry, partition moe, partition curly)
parameters ('filter ctxsys.inso_filter')
The list of partition clauses goes after the local keyword. The partition clauses correspond to the table partitions in order. For instance, our first table partition is p1, so, in the example above, its index partition will be named larry, because that's the first partition clause in the set. Because the mapping is done solely by order, you can't just name a few partitions here -- you must specify a partition clause for each table partition. If your table has 100 partitions, you must put 100 partition clauses here. Each partition clause can also take its own parameters clause for partition-specific settings:
create index ptestx on ptest(text)
indextype is ctxsys.context
local
(partition larry,
partition moe parameters ('memory 5m'),
partition curly)
parameters ('filter ctxsys.inso_filter')
Limitations
The base table must be RANGE partitioned. Other partitioning methods, such as HASH, COMPOSITE, or LIST, are not supported for local indexes. You can still create a global index on tables partitioned by these methods, though.
In this version, only the ConText indextype supports local partitioning.
You will just get an error message if you try to create a local
CTXCAT or CTXRULE index.
Due to the table naming scheme used for local partitioned indexes (more
on this below), the names of local partitioned indexes cannot exceed 21
bytes in length.
Again due to the table naming scheme, the index (and therefore the table)
cannot have more than 9999 partitions.
Local Partitioned Indexing Process
A partition-level parameters clause can specify only MEMORY and STORAGE -- use the index-level parameters clause to specify everything else.
Each index partition has meta-data stored in dr$index_partition, visible through the view CTX_USER_INDEX_PARTITIONS, which has the columns:
ixp_id with one
exception --
the ixp_id makes up part of the internal tablenames.
IXP_INDEX_NAME
user_ind_partitions. Use this name to identify the index
partition in
all index partition operations.
IXP_TABLE_NAME
IXP_TABLE_PARTITION_NAME
IXP_STATUS
These work like idx_docid_count and idx_status for normal text indexes, only now they apply at the partition level, so the values for those columns in CTX_USER_INDEXES can be ignored.
The internal tables for each index partition are named DR#<indexname><ixp_id><suffix>. For instance, index partition p1 from our example above will have a $I table named DR#PTESTX0001$I.
The # instead of $ avoids collisions with global ConText index tablenames.
The addition of ixp_id is the reason why local partitioned index
names are
limited to 21 bytes instead of the 25 bytes for global indexes, and the four-
digit decimal representation of the index partition id imposes the limit of
9999 index partitions.
create index ptestx on ptest(text)
indextype is ctxsys.context
local
(partition larry,
partition moe parameters ('storage p2_storage'),
partition curly)
parameters ('storage global_storage')
Here, larry and curly will be created using the global_storage preference, while moe will be created using p2_storage.
Specifying a per-partition storage preference uses a little more index meta-
data (stored in table dr$index_values).
Index memory works similarly. When building each index partition, we first check whether MEMORY was specified in the partition-level parameters clause. If not, we then check MEMORY in the index-level parameters clause. If it's not there either, we take the system parameter DEFAULT_INDEX_MEMORY.
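For example (a sketch; the 10m/40m/20m values are arbitrary), the following builds moe with 10m of index memory and larry and curly with 40m; an index with no memory setting at all would fall back to DEFAULT_INDEX_MEMORY:
create index ptestx on ptest(text)
indextype is ctxsys.context
local
(partition larry,
 partition moe parameters ('memory 10m'),
 partition curly)
parameters ('memory 40m')

-- the system-wide fallback can be changed with:
begin
  ctx_adm.set_parameter('default_index_memory', '20m');
end;
/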
Parallel Local Partitioned Index Creation
There is no point in adding a PARALLEL clause to the create index statement, principally because we currently ignore it for local partitioned text indexes. However, even if it were supported, it wouldn't create the partitions in parallel. With local b-tree indexes, specifying a PARALLEL degree still creates the index partitions serially, though each partition build is parallelized (intra-partition parallelism).
With b-tree indexes, you can create the index empty using the UNUSABLE keyword on create index. Unfortunately, this is not supported for domain indexes -- you'll just get syntax errors. Instead, you can use the Text way of creating an empty index -- the NOPOPULATE keyword:
create index ptestx on ptest(text)
indextype is ctxsys.context
local
parameters ('NOPOPULATE');
For b-tree indexes, you could then use DBMS_PCLXUTIL, which uses DBMS_JOB to kick off multiple sessions that rebuild the index partitions. Unfortunately, this does not work with domain indexes, because the jobs it submits include the PARALLEL clause, which is not supported for domain indexes and raises a syntax error. Instead, rebuild each index partition yourself:
alter index ptestx rebuild partition p1;
alter index ptestx rebuild partition p1 parameters ('replace memory 50M');
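To get inter-partition parallelism by hand, one approach (a sketch only, assuming the SNP job queue processes are running) is to submit one rebuild per partition through DBMS_JOB:
declare
  jobno binary_integer;
begin
  for p in (select partition_name
              from user_ind_partitions
             where index_name = 'PTESTX') loop
    dbms_job.submit(jobno,
      'execute immediate ''alter index ptestx rebuild partition ' ||
      p.partition_name || ''';');
  end loop;
  commit;  -- the jobs start running once committed
end;
/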
Local Partitioned Index
Query Considerations
Query Syntax
create table ptest(a number not null, b varchar2(128))
partition by range(a)
(
partition p1 values less than (10000),
partition p2 values less than (20000),
partition p3 values less than (30000),
partition p4 values less than (40000),
partition p5 values less than (50000),
partition p6 values less than (60000),
partition p7 values less than (70000),
partition p8 values less than (80000),
partition p9 values less than (90000),
partition p10 values less than (100000)
);
Every row contains the token EVRY. TM07 occurs in every tenth row, between 1 and 7 times within the row (to give a score range). HSPC occurs every 100 rows, and KSPC occurs every 1000 rows.
SCORE
Because each index partition is scored independently, the SCORE values for any given row that you get from a query on a local partitioned index may be different than if the index were a global index. However, we do not believe that the partition scope of the score will have noticeable adverse effects for most applications.
Simple Text Queries
select * from ptest where contains(b, 'HSPC')>0;
With a global index, the plan is a single domain index scan:
1.1 DOMAIN INDEX PTESTX
With a local index, the plan iterates over all index partitions:
1.1 PARTITION RANGE ALL
2.1 DOMAIN INDEX PTESTX
This query selects every row containing HSPC. HSPC occurs every 100th row, so there are 1000 hits overall, and at least 1 hit per partition. The times below are in minutes:seconds and are for selecting all 1000 rows:
| # of partitions in ptest | 10 | 50 | 100 | 250 | 500 | 1000 |
|---|---|---|---|---|---|---|
| global index | 1.14 | 1.11 | 1.14 | 1.11 | 1.12 | 1.18 |
| local index | 1.34 | 11.37 | 21.23 | 50.42 | 2:25.48 | 5:01.82 |
Partition Pruning
select /*+ FIRST_ROWS(50) */ 12345
from ptest
where contains(b, 'EVRY') > 0
and a between 90000 and 100000
and rownum < 50;
1.1 COUNT STOPKEY
2.1 TABLE ACCESS BY GLOBAL INDEX ROWID PTEST
3.1 DOMAIN INDEX PTESTX
With the global index, the domain index scan produces all the rowids for EVRY. Since every row contains this token, this results in 100,000 rowids, in docid order. For each of those rowids, the kernel does a table access by rowid to get the value of column A. In this testcase, the rowids returned by the ConText index happen to come back in ascending A order.
As a result,
the ConText index wastes time producing 90,000 rowids that are just thrown
away by the structured condition filter. (This is only for a domain index
scan. This is not true of a functional invocation, though that has its own
problems, discussed in "Sort by Partition Key"
below)
With the local index, the plan becomes:
1.1 COUNT STOPKEY
2.1 TABLE ACCESS BY LOCAL INDEX ROWID PTEST
3.1 DOMAIN INDEX PTESTX
The local index also scans for EVRY, then for each rowid does a table access to get the value of A and ensure that the value is between 90,000 and 100,000. However, the kernel knows that the structured condition is on A (the partition key), so it only asks p10. All of the rowids produced by p10 fulfill the structured condition, so none are thrown away.
select /*+ FIRST_ROWS(50) */ 12345
from ptest
where contains(b, 'EVRYS') > 0
and a between 80000 and 100000
and rownum < 50;
1.1 COUNT STOPKEY
2.1 PARTITION RANGE ITERATOR
3.1 TABLE ACCESS BY LOCAL INDEX ROWID PTEST
4.1 DOMAIN INDEX PTESTX
EVRYS is a special token just for this test, which occurs in every row from 1-79999, and then in every 400th row after that. This lopsided distribution is to force the partitioned query to hit two partitions.
Score Sorting
select /*+ FIRST_ROWS(50) */ 12345
from ptest
where contains(b, 'TM07', 1)>0
and rownum < 50
order by score(1) desc;
1.1 COUNT STOPKEY
2.1 TABLE ACCESS BY GLOBAL INDEX ROWID PTEST
3.1 DOMAIN INDEX PTESTX
With the global index, the ConText index produces the hits for TM07, sorts them internally by score, then returns them in order. With a local index, the kernel cannot push the score sort down into the domain index; instead, it must do a SORT ORDER BY on the amassed results:
select * from (
select /*+ FIRST_ROWS(50) */ 12345 from ptest
where contains(b, 'TM07', 1)>0
order by score(1) desc
) where rownum < 50
1.1 COUNT STOPKEY
2.1 VIEW
3.1 SORT ORDER BY STOPKEY
4.1 PARTITION RANGE ALL
5.1 TABLE ACCESS BY LOCAL INDEX ROWID PTEST
6.1 DOMAIN INDEX PTESTX
The SORT ORDER BY makes the query much slower: the global index query runs in 1.19 seconds, while the local index takes 7.87 seconds. Also, the query syntax has to be changed to incorporate a view, because of the different semantics introduced by using a sort instead of internal ordering.
Now the same TM07 text query, first with a two-partition structured condition, then with a single-partition condition:
select /*+ FIRST_ROWS(50) */ 12345
from ptest
where contains(b, 'TM07', 1)>0
and a between 80000 and 100000
and rownum < 50
order by score(1) desc;
select /*+ FIRST_ROWS(50) */ 12345
from ptest
where contains(b, 'TM07', 1)>0
and a between 90000 and 100000
and rownum < 50
order by score(1) desc;
1.1 COUNT STOPKEY
2.1 TABLE ACCESS BY LOCAL INDEX ROWID PTEST
3.1 DOMAIN INDEX PTESTX
When the query prunes to a single partition, the plan has no SORT ORDER BY. For this query you get the benefits of both partition elimination and ConText score sorting. The time for the global index is 3.51 seconds, while the local index takes only 0.56 seconds. Again, the order by score pushdown occurs only when you prune to a single partition.
Sort by Partition Key
select /*+ FIRST_ROWS(50) INDEX(ptest ptestx) */ 12345 from ptest
where contains(b, 'TM07') > 0
order by a
We use TM07 here as a medium-selectivity text query, which is the most problematic for this query class. A very selective text query produces few hits, which can be sorted very quickly, while a very unselective text query produces so many hits that it's faster to do an ordered b-tree scan, invoking contains as a function for each row -- you'll only have to do a few to fill the 50-row buffer. Medium-selectivity text queries produce too many rows to sort quickly, but too few rows to drive off the b-tree. (The example SQL includes an INDEX hint to force driving from the domain index scan.)
With a global index, the ConText index produces all the hits and the kernel does a SORT ORDER BY to sort them by the structured column. In this case, we have to produce and sort 10,000 rows:
1.1 SORT ORDER BY
2.1 TABLE ACCESS BY GLOBAL INDEX ROWID PTEST
3.1 DOMAIN INDEX PTESTX
With a local index, the kernel can instead iterate the partitions in order. The value of A for any row in partition p1 is guaranteed to be less than the value of A for any row in partition p2. Therefore, any hits from partition p1 are guaranteed to be first in the result set when you order by A, and so it is correct to produce, sort, and return the hits from partition p1 before touching any other partition. This is an ordered partition iteration, and the query plan looks like this:
1.1 PARTITION RANGE ALL
2.1 SORT ORDER BY
3.1 TABLE ACCESS BY LOCAL INDEX ROWID PTEST
4.1 DOMAIN INDEX PTESTX
Note that the SORT ORDER BY is under the partition iterator. This indicates that it is sorting at a partition level, rather than a global level. This is actually more efficient, because it is faster to sort smaller batches (sort is O(n lg n), if you can recall those dimly-remembered computer science lectures), but the primary savings in this case come from being able to saturate the 50-hit buffer with the results from the first partition alone. This query takes only 0.20 sec for the first 50 hits, something like 1200% faster than the global index's 2.34 sec.
A more selective query, on KSPC (1/1000 selectivity), forces the partition iterator to hit 5 of the 10 partitions, and its performance -- 0.34 sec -- is more than the global index's 0.30 sec. However, the ordered iteration may still be a good all-around plan: it's much faster on the queries that are now slow, and a little slower on the queries that are already fast, so you should see an overall improvement as long as the number of partitions is not large.
select * from (
select /*+ FIRST_ROWS(50) INDEX(ptest ptestx) */ 12345 from ptest
where contains(b, 'TM07') > 0
order by a
) where rownum < 50
This rewritten form produces a very different query plan:
1.1 COUNT STOPKEY
2.1 VIEW
3.1 SORT ORDER BY STOPKEY
4.1 PARTITION RANGE ALL
5.1 TABLE ACCESS BY LOCAL INDEX ROWID PTEST
6.1 DOMAIN INDEX PTESTX
Here the SORT ORDER BY is above the partition iterator. This means a global sort, and the timing for this -- 2.63 sec -- is slower than the global index. This seems to be a quirk in query plan parsing and optimization, and is currently being investigated (see bug 1800113).
Parallel Query
For comparison, we add a structured column C and its own local b-tree index:
alter table ptest add (c number);
begin for i in 1..100000 loop
update ptest set c = mod(i, 113) where a = i;
end loop; end;
create index ptestc on ptest(c) local;
We can use the PARALLEL_INDEX hint to force parallel execution of this query:
select /*+ INDEX(ptest ptestc) PARALLEL_INDEX(ptest, ptestc, 4) */ 12345
from ptest
where c = 111
and a between 20000 and 100000;
1.1 PARTITION RANGE ITERATOR PARALLEL_COMBINED_WITH_PARENT
2.1 TABLE ACCESS BY LOCAL INDEX ROWID PTEST PARALLEL_TO_SERIAL
3.1 INDEX RANGE SCAN PTESTC NON-UNIQUE PARALLEL_COMBINED_WITH_PARENT
The same hint on the local ConText index, however, does not parallelize the domain index scan -- the plan comes back serial:
select /*+ PARALLEL_INDEX(ptest, ptestx, 4) */ 12345
from ptest
where contains(b, 'KSPC')>0
and a between 20000 and 100000;
1.1 PARTITION RANGE ITERATOR
2.1 TABLE ACCESS BY LOCAL INDEX ROWID PTEST
3.1 DOMAIN INDEX PTESTX
Query Considerations Summary
Local Partitioned Index
Table Partition Operations
Partitioned Table Maintenance
Index Maintenance
Query Services
Document Services
Table Partition Operations
RENAME PARTITION
alter table ptest rename partition p5 to pnew;
alter index ptestx rename partition p5 to pnew;
TRUNCATE PARTITION
alter table ptest truncate partition p5;
DROP PARTITION
alter table ptest drop partition p5;
The corresponding index partition is dropped automatically. The index partition ids (ixp_id in the view ctx_user_index_partitions), which are normally contiguous, will be left with a hole. This doesn't have much effect, and the id will get reused if partitions are added (through ADD or SPLIT).
ADD PARTITION
alter table ptest add partition p11 values less than (110000);
With RANGE partitioning (the only method currently supported), you can add partitions only at the end of the range, where there will be no data. The ConText index automatically creates a new index partition, creating the meta-data and internal tables. The new index partition will be valid and can be queried, although there is no data, because the underlying table partition is empty.
MOVE PARTITION
alter table ptest move partition p5 tablespace testing;
This moves partition p5 to a new tablespace. Because ROWIDs and object ids might change because of this operation, the corresponding index partition will be marked as UNUSABLE. The internal tables will still exist and contain the same data, but the index partition cannot be queried until it is rebuilt. The index has to be rebuilt from scratch even though the data has not changed, because there is no way to determine the "old" rowid of each row, so it is not possible to simply rebuild the $R and $K tables.
SPLIT PARTITION
alter table ptest split partition p5 at (45000)
into (partition p5_1, partition p5_2);
This splits one partition into two. The two resulting index partitions are marked UNUSABLE -- they cannot be queried until they are rebuilt. (See Rebuilding UNUSABLE Index Partitions, below.)
MERGE PARTITION
alter table ptest merge partitions p2, p3
into partition p2and3;
This takes two neighbor partitions and merges them into a single partition.
It's the opposite of SPLIT. The local ConText index drops the
meta-data and
internal tables for the two old index partitions, then creates a new index
partition, with empty internal tables. The new index partition is
UNUSABLE
and cannot be queried until it is rebuilt. (see
Rebuilding UNUSABLE Index Partitions, below)
The new index partition will have the same name as the table partition. If an existing index partition already has that name, a system-generated name is assigned instead.
You cannot control the storage characteristics of the new index partition. It will use the default storage values for the index. You can override this when rebuilding the partition.
EXCHANGE PARTITION
alter table ptest exchange partition p7 with table p7_ext;
This is the most involved of the partition operations. It takes a specified
table partition and a non-partitioned table and swaps their data. If you do
not specify INCLUDING INDEXES, as above, or you explicitly
exclude indexes:
alter table ptest exchange partition p7 with table p7_ext excluding indexes;
then the corresponding index partition is marked as UNUSABLE and
it must be
rebuilt. If a ConText index exists on the non-partitioned table, then that
is also marked as UNUSABLE and must be rebuilt.
If you include indexes:
alter table ptest exchange partition p7 with table p7_ext including indexes;
Then the index partition will be automatically rebuilt. If there's a ConText index on the non-partitioned table, that will also be rebuilt. Since you will be re-creating two ConText indexes, this may take some time.
The local index and the index on the non-partitioned table are rebuilt with their existing preferences. It will not synchronize or reconcile divergent preferences, so you might find queries on the data return different results after the move.
Rebuilding UNUSABLE Index Partitions
Certain table partition operations cause associated index partitions to become UNUSABLE. When an index partition is UNUSABLE, it cannot be queried. Queries that involve that partition -- including global queries where no partition pruning occurs -- will fail. For example:
alter table ptest move partition p5 tablespace testing;

select * from ptest where contains(b, 'KSPC')>0;
--> gets ORA-29954: domain index partition is marked LOADING/FAILED/UNUSABLE
Partition-pruned queries will still work, though:
select * from ptest partition(p4) where contains(b, 'KSPC')>0; --> OK
You can find which partitions are UNUSABLE by querying
USER_IND_PARTITIONS:
select index_name, partition_name from user_ind_partitions where status != 'USABLE'
Once the UNUSABLE partitions have been identified, you must
rebuild them using
alter index:
alter index ptestx rebuild partition p5;
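If many partitions are unusable, a short PL/SQL loop (a sketch) can drive the rebuilds from USER_IND_PARTITIONS:
begin
  for p in (select partition_name
              from user_ind_partitions
             where index_name = 'PTESTX'
               and status = 'UNUSABLE') loop
    execute immediate
      'alter index ptestx rebuild partition ' || p.partition_name;
  end loop;
end;
/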
Note that multiple partitions of a given index can be queried or even rebuilt at the same time. Alternatively, you can use:
alter table ptest modify partition p5 rebuild unusable local indexes;
which will go out itself and find all UNUSABLE partitions in all
local indexes for partition p5, and rebuild them.
Partition rebuild is also useful to reset storage characteristics of partitions which are implicitly created through split, merge, or add; these use the default storage values for the index when first created:
alter index ptestx rebuild partition p5
parameters ('replace storage p5_store');
The amount of index memory to use when rebuilding the index can also be
specified in the parameters string, although the REPLACE keyword
must be used:
alter index ptestx rebuild partition p5
parameters ('replace memory 5m');
During a table partition operation, the entire domain index is unusable. For instance, if in session 1 we do:
alter table ptest move partition p5 tablespace testing;
and, while it is in progress, in session 2 we query:
select * from ptest where contains(b, 'KSPC')>0;
you would see the query plan:
1.1 TABLE ACCESS FULL PTEST
and the query itself would get an error:
DRG-10599: column is not indexed
Even partition-specific queries that do not touch the affected partitions will fail with this error. This is a quirk of the extensible indexing framework -- local b-trees and bitmap indexes do not have this same limitation. This is currently being investigated by extensible indexing framework development.
Most table partition operations are relatively quick, so the window where
queries are unavailable should be small. However, be aware of this effect
during potentially long-running partition operations -- especially
EXCHANGE PARTITION INCLUDING INDEXES, which will force two
ConText index rebuilds.
Local Partitioned Index Maintenance
Local partitioned ConText indexes are not fundamentally different from global ConText indexes -- they still have to be sync'ed to reflect new data, and should still be optimized once in a while to defragment. The only real difference is that a local index is made up of many sets of internal tables; each set is independent, so maintenance is done at the partition level.
CTX_DDL.SYNC_INDEX has been extended to take a partition name:
PROCEDURE sync_index(
  idx_name  in varchar2 default NULL,
  memory    in varchar2 default NULL,
  part_name in varchar2 default NULL
);
Actually, the memory parameter is also new, and lets you pass in
the index memory size to use for the sync.
For local indexes, you must specify both an index name and index partition
name in the idx_name and part_name parameters. The
ConText index will then
scan dr$pending for rowids covered by that index partition, and
update just
that index partition's internal tables.
part_name must be specified when idx_name is a
local index. This means
that, in this version at least, you must know which index partitions need to
be sync'ed, and you must issue one sync_index call for each of those. The
view CTX_USER_PENDING has been extended with a
PND_PARTITION_NAME column that
shows the index partition name for each row, so you can find which partitions
need sync with:
select distinct pnd_index_name, pnd_partition_name from ctx_user_pending;
We are investigating enhancing sync_index in future versions to
enable it to
find and sync all out-of-date partitions without user specification.
Only one session can sync a given index partition at a time. However, you can sync different partitions of the same index at the same time. Just open multiple sessions -- either separate SQL*Plus windows, or, even better, use several DBMS_JOBs and multiple SNP slaves. This can improve ingest rates if data flows to multiple partitions, but it won't help something like date partitioning, where new records are all going into the same partition.
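For example, a sketch that submits one job per index partition with pending DML (the '20M' memory value is arbitrary):
declare
  jobno binary_integer;
begin
  for p in (select distinct pnd_partition_name
              from ctx_user_pending
             where pnd_index_name = 'PTESTX') loop
    dbms_job.submit(jobno,
      'ctx_ddl.sync_index(''PTESTX'', ''20M'', ''' ||
      p.pnd_partition_name || ''');');
  end loop;
  commit;  -- SNP processes pick up the jobs and sync the partitions concurrently
end;
/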
Like sync, optimize is also now done at the partition level; so, like
SYNC_INDEX, OPTIMIZE_INDEX has been extended to
take a partition name:
PROCEDURE optimize_index(
  idx_name  in varchar2,
  optlevel  in varchar2,
  maxtime   in number   default null,
  token     in varchar2 default null,
  part_name in varchar2 default null
);
For local indexes, you must specify both an index name and index partition
name in the idx_name and part_name parameters.
Only one session can
optimize a given index partition at a time, but you can optimize different
index partitions of the same index at the same time.
Each index partition records its own optimization state (it is not in a user
view, but is stored in dr$index_partition) so running a time-
limited FULL
optimization on an index partition will not interfere with any other index
partition. When FULL optimize on partition 1 runs out of time,
the next
FULL optimize on that partition will pick up where it left off,
no matter what
you do to the other index partitions.
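For example (a sketch; the 60-minute maxtime and partition name are arbitrary):
begin
  ctx_ddl.optimize_index('PTESTX', 'FULL',
                         maxtime   => 60,
                         part_name => 'P1');
end;
/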
Query Services
All query services (CTX_QUERY package) that require an index
name --
COUNT_HITS, EXPLAIN, HFEEDBACK, and
BROWSE_WORDS -- have been extended with
a part_name argument. When specifying a local index, you must
specify an
index partition. The operation will be run against the internal tables of that
partition only. So, for instance, you can count the hits in a particular
partition, but to count hits in the whole table, you must call count_hits
multiple times -- once per partition -- and manually add up the individual
return values.
We are investigating multi-partition versions of these calls for future
release. The results can be calculated only on a per-partition level, so there
is a performance impact, but the main stumbling block is that, other than
COUNT_HITS, merging multiple partition results is somewhat
complicated.
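For example, a sketch that totals the hits for KSPC across all partitions of PTESTX:
set serveroutput on

declare
  total number := 0;
begin
  for p in (select partition_name
              from user_ind_partitions
             where index_name = 'PTESTX') loop
    total := total + ctx_query.count_hits('PTESTX', 'KSPC',
                                          part_name => p.partition_name);
  end loop;
  dbms_output.put_line('total hits: ' || total);
end;
/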
Document Services
There are no changes to document services (CTX_DOC package) for
local
partitioned indexes. The document services operate on a per-document basis,
specifying the document by rowid or primary key. The index is not even used,
except to find out the base table and indexed column, and to know which
indexing objects and attribute values to use. So, the document services don't
really care whether the index is local or global.
Every new version of Oracle Text/interMedia Text/ConText Option has added new structured document searching features, so we've built up a lot of functionality -- 8.1.7 has nested section search, doctype disambiguation, attribute value searching, automatic section indexing, and more. But with the industry embracing XML, demand is growing for even more sophisticated features which are beyond the capabilities of the current section architecture.
9.0.1 introduces a new section type and new query operators which support an XPath-like query language. ConText indexes with XML path searching are able to perform sophisticated section searches that were impossible in previous versions.
In order to use XML path searching, the index must be created with the new path section group:
exec ctx_ddl.create_section_group('mypathgroup','PATH_SECTION_GROUP');
create index myindex on mytable(doc)
indextype is ctxsys.context
parameters ('SECTION GROUP mypathgroup');
You can add only SPECIAL sections to a path section group; you
do not have to
add ZONE, FIELD, or ATTR sections, and
it does not support STOP sections in
this version. Most of the time, you won't even need to create your own -- just
use the default one:
create index myindex on mytable(doc)
indextype is ctxsys.context
parameters ('SECTION GROUP ctxsys.path_section_group');
The path section group is somewhat like the auto section group in that it
automatically indexes all tags and attributes it encounters. For query, you
can still use WITHIN, but the path section group also supports
the more
powerful INPATH and HASPATH operators.
Now we'll talk a little about how path sections are indexed, and how they differ from zone and attribute sections. You can skip this section if you're just interested in usage.
We'll illustrate the difference in index data with this simple doc:
<OUTER><INNER ATTR="attrvalue">text</INNER></OUTER>
The auto section group produces the following in $I:

| TOKEN_TEXT | TOKEN_TYPE | TOKEN_INFO |
|---|---|---|
| OUTER | 2 (zone section) | DOC 1 START 0 LENGTH 1 |
| INNER | 2 (zone section) | DOC 1 START 0 LENGTH 1 |
| INNER@ATTR | 5 (attribute section) | DOC 1 START 0 LENGTH 1 |
| ATTRVALUE | 4 (attribute value) | DOC 1 POS 1 |
| TEXT | 0 (plain text) | DOC 1 POS 1 |
A simple within query like:
select id from mytable where contains(doc, 'text WITHIN inner')>0
can be fulfilled by fetching the info for word TEXT and section
INNER, then
looping through each word offset of TEXT, and checking to see if
it is between
INNER START and START + LENGTH. A simple attribute
query like:
attrvalue WITHIN inner@attr
(we'll just write the text query for brevity) can be fulfilled in much the same way, but using type 4 and 5 tokens instead of types 0 and 2.
This section type cannot support more complex queries. For instance, equal
sections like INNER and OUTER are a problem. The
query:
(text WITHIN outer) WITHIN inner
matches this document. The index data records the same offsets for
INNER and
OUTER, so it is impossible to tell if INNER is
inside OUTER or vice versa.
Another problem is attribute value sensitive section search. For document:
<SHIPMENT>
  <MEDIA TYPE="dvd">Ice Blue Eyes</MEDIA>
  <MEDIA TYPE="book">The Monopoly Companion</MEDIA>
</SHIPMENT>
(We won't include the xml declaration and DOCTYPE stuff -- just
pretend that
they are there) If you want to find shipments that include the DVD "Ice Blue
Eyes", you'd have to include both section and attribute criteria:
find documents where "Ice Blue Eyes" occurs within a "MEDIA" section whose "TYPE" attribute has the value "dvd"
Unfortunately, attribute values and sections (types 4 and 5) are completely separate from normal text and sections (types 0 and 2). There is no way to link an attribute section to the particular zone section occurrence in which it occurs.
The path section group solves both problems by indexing the document like:
| TOKEN_TEXT | TOKEN_TYPE | TOKEN_INFO |
|---|---|---|
| OUTER | 7 (path section) | DOC 1 START 0 LENGTH 1 LEVEL 1 |
| INNER | 7 (path section) | DOC 1 START 0 LENGTH 1 LEVEL 2 |
| INNER@ATTR | 8 (path attr section) | DOC 1 START 0 LENGTH 1 OCC 1 |
| ATTRVALUE | 4 (attribute value) | DOC 1 POS 1 |
| TEXT | 0 (plain text) | DOC 1 POS 1 |
Zone sections and attribute sections (types 2 and 5) have been replaced with
path sections and path attribute sections (types 7 and 8). Each occurrence
of a path section has a LEVEL bit which indicates the nesting
level of the tag.
The root tag gets a level of 1, its children get a level of 2, their children
get a level of 3, etc. Including level information solves the equal section
problem, because we can now tell which tag is the outer tag.
Path attribute sections have an OCC part which links the attribute section occurrence to a path section occurrence. Here, INNER@ATTR has an OCC of 1 because it occurs in the first occurrence of INNER. The ability to correlate attributes and tags solves attribute value sensitive section search.
The query interface is through SQL selects, so your XML queries return entire
documents, not just selected parts of them. While we work on better extraction
features for future release, you can explore using the new
XMLType, which has
extraction methods. Just use extraction methods in the select list and
contains in the where clause.
The way path sections are indexed enables more complicated section searches,
but the WITHIN operator is not expressive enough to handle them. Instead of
<text query> WITHIN <section name> (e.g. Ice WITHIN MEDIA )
indexes with a path section group use the INPATH operator in queries:
<text query> INPATH(<path expression>) (e.g. Ice INPATH(//MEDIA) )
but it functions in much the same way, limiting the scope of the text query to certain parts of the indexed documents. The parentheses around the path expression are required. The path expression is more than just a simple section name -- it is a mini query, with a specialized query language. The next section explores the path query language in more detail.
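For instance, the //MEDIA example above as a complete statement (using the mytable/doc names from earlier):
select id from mytable
 where contains(doc, 'Ice INPATH(//MEDIA)') > 0;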
You can still use the WITHIN operator even if you are using the
path section
group. There should be no difference in behavior between the path section
group or auto section group when using WITHIN queries.
The Text path query language is based on XPath, and we will probably continue to use XPath as a guide for future development, but it is NOT XPath. Not all the XPath operators exist in the Text path query language, for instance. Also, the Text path query language operators are case-insensitive, while XPath's are strictly lower-case. There are other semantic differences covered below. Just don't make assumptions about the path language based on XPath expectations.
When specifying tags in path queries, you must write them exactly as they appear in the document in order for them to match. There are two common mistakes you should avoid.
First, tag names are case-sensitive so the query "title" does
not match the
tag <TITLE> or the tag <Title>. It
will match only <title>.
Second, there is no namespace support in this version. Take the fragments:
DOC 1  <A xmlns:ORCL="http://www.oracle.com/"><ORCL:B>
DOC 2  <A xmlns:ORACLE="http://www.oracle.com/"><ORACLE:B>
<ORCL:B> in DOC 1 is the same tag as
<ORACLE:B> in DOC 2, because their
namespace tags normalize to the same URI. However, when querying for these
tags, you must specify it as written in the document, so "ORCL:B"
to find the
tag in doc 1, and "ORACLE:B" to find it in doc 2.
"B" alone will not find
either tag, nor will something like "http://www.oracle.com/:B".
Future
versions will probably add more sophisticated namespace support.
The simplest INPATH query string is a single tag:
perro INPATH(TITLE)
Like a WITHIN query, this query finds perro where it occurs between
<TITLE> and
</TITLE>. However, unlike a WITHIN query,
<TITLE> must be the top-level tag. Take
these two documents:
DOC 1  <TITLE>Clifford El Gran Perro Colorado</TITLE>
DOC 2  <BOOK><TITLE>Años De Perro</TITLE></BOOK>
The query
perro WITHIN TITLE
will find both documents, but the INPATH query will find only
document 1. It
does not match document 2 because there the TITLE tag has a
level of 2.
What's really happening is that no level for the query node is specified, so
it uses the default context, which is always the top level for
INPATH queries.
You can explicitly specify the top level context with slash:
perro INPATH(/TITLE)
or explicitly specify the default context using dot:
perro INPATH(./TITLE)
both are equivalent to the query without the slash. All examples from here will include the top level slash for readability.
A double slash indicates "any number of levels down". So, the query:
perro INPATH(//TITLE)
is looking for perro inside a TITLE tag that occurs
at the top level or any
level below. In other words, this query is equivalent to:
perro WITHIN TITLE
and finds both DOC 1 and DOC 2.
A child tag is a tag which is enclosed within another tag. For instance, in:
DOC 2 <BOOK><TITLE>Años De Perro</TITLE></BOOK>
TITLE is a child of BOOK. We can find this
document using the any-level
tag searching, as in the previous section. But what if the corpus also
contained:
DOC 3 <MOVIE><TITLE>Mi vida como un perro</TITLE></MOVIE>
In order to find only books with perro in the title, we need to
limit the
search to title tags whose parent is a book tag:
perro INPATH(/BOOK/TITLE)
Reading the path right-to-left, we are looking for a top-level
BOOK tag with
a child TITLE tag, which matches only DOC 2.
The single slash is direct parentage. The query above will not find:
DOC 4 <BOOK><DESCRIPTION> <TITLE>Años De Perro</TITLE> </DESCRIPTION></BOOK>
Because here TITLE is not a direct child of BOOK.
TITLE's direct parent is
DESCRIPTION, whose parent is BOOK --
TITLE is a grand-child of BOOK.
To find this doc, you can use the any-level slashes:
perro INPATH(/BOOK//TITLE)
Reading the path right-to-left, we are looking for a top-level BOOK tag with some descendant TITLE tag. This query will match both DOC 2 and DOC 4.
Note that this is not the same as:
((perro WITHIN TITLE) WITHIN BOOK)
First, the INPATH query restricts BOOK to the
top-level. Second, equal
sections are not confused. That is, the query:
((perro WITHIN BOOK) WITHIN TITLE)
would match DOC 4, but the query:
perro INPATH(/TITLE//BOOK)
would not. Path sections know that TITLE is a child of
BOOK, even though
they occur at the same text offsets.
Finally, if you wanted to match only DOC 4 and not DOC 2 -- that is, you want to match TITLE only if it is a grandchild of BOOK, and not a child or great-grandchild, etc. -- you can use the single-level wildcard:
perro INPATH(/BOOK/*/TITLE)
The * matches exactly one level, so this path query filters out DOC 2.
You can combine these ancestor/descendant elements for even more complicated queries:
felis INPATH(//kingdom/*/*/order/family//genus)
You can search within an attribute value using the syntax
<tag>/@<attribute>:
perro INPATH(//MOVIE/@SPANISHTITLE)
matches:
DOC 5 <MOVIE SPANISHTITLE="Mi vida como un perro">My Life As A Dog</MOVIE>
and is equivalent to the query:
perro WITHIN MOVIE@SPANISHTITLE
One limitation resulting from how attributes are indexed is that all attributes must specify their direct-parent tags. The following:
perro INPATH(//@TITLE)
perro INPATH(A/*/@TITLE)
are not allowed, because the tag for the title attribute is not specified:
select * from doc where contains(text, 'perro INPATH(//@TITLE)')>0;
*
ERROR at line 1:
ORA-20000: Oracle Text error:
DRG-50951: Unable to resolve element name for attribute TITLE
The square brackets are used to impose a condition on a node without changing the path context. For instance, the query:
monopoly INPATH(/auction[image])
is looking for monopoly inside a top-level auction
tag which has an image
tag as a direct child. The search for monopoly occurs within the
entirety
of <auction> and </auction>, and not
just within <image> and </image>. This
document will match:
<auction>Sailing Monopoly <image src="...">pic</image></auction>
but will not match:
<auction>Sailing Monopoly</auction>
because there is no image element. The default context inside a
test element
is the tag to which it is applied, so
monopoly INPATH(/auction[image])
is actually the same as:
monopoly INPATH(/auction[./image])
You need the dot to reference the default context. Without the dot:
monopoly INPATH(/auction[/image])
it would mean top-level image tag. This is not supported, and will result in a syntax error.
The existence test for image will match only if image exists
and is a
direct child. It does not match:
<auction>Sailing Monopoly<desc><image src="...">pic</image></desc></auction>
because here image is not a direct child of auction.
You can match this
document using the any-level wildcard, instead:
monopoly INPATH(/auction[.//image])
You can also test for attribute existence:
monopoly INPATH(/auction[@reserve])
The test node can be combined with other operators for interesting searches:
monopoly INPATH(/auction[.//image]/title)
The test node does not change context, so the /title applies to
/auction
rather than /auction//image -- this query finds auctions where
monopoly
occurs inside a direct-child title tag, but only if the auction
has an
image tag in it somewhere. For instance, the doc:
<auction>
  <title>Sailing Monopoly</title>
  <description>
    New Sailing Monopoly with custom pewter tokens from USAOpoly
    <image src="...">here is a picture</image>
  </description>
</auction>
To test for non-existence, use the NOT operator:
monopoly INPATH(/auction[not(taxinfo)])
this query looks for monopoly within an auction element that does not have
a direct child taxinfo. The NOT operator is
case-insensitive in our path
query language. In XPath it only works in lowercase.
The test operator is capable of more than simple existence testing. More useful is attribute value testing, which constrains nodes by the value of their attributes. For instance, given a document like:
<MOVIE>
  <TITLE LANGUAGE="German">Tiger und Dragon</TITLE>
  <TITLE LANGUAGE="French">Tigre et Dragon</TITLE>
  <TITLE LANGUAGE="Spanish">Tigre y Dragón</TITLE>
  <TITLE LANGUAGE="Mandarin">Wo hu cang long</TITLE>
  <TITLE LANGUAGE="English">Crouching Tiger, Hidden Dragon</TITLE>
</MOVIE>
the query:
dragon INPATH(//TITLE)
will search all language titles. To limit the search to just English titles, you can add an attribute value equality test:
dragon INPATH(//TITLE[@LANGUAGE = "English"])
Only equality and inequality (using !=) are supported. Range
searches are
not supported in this version. The left-hand side must be an attribute or tag,
while the right-hand side must be a literal. The query:
gato INPATH(//A[@B = @C])
is not allowed, nor is something like:
gato INPATH(//A["dog" = "cat"])
Only string literals are allowed. Numeric literals, such as
tora INPATH(//MOVIE[@PRICE = 5])
will raise a syntax error. This means that numbers are not normalized. The query above will not match:
<MOVIE PRICE="5.0">Tora! Tora! Tora!</MOVIE>
because the string 5 is not equal to the string 5.0,
although numerically
they are equal.
The equality test is not strict equality -- it uses "contains-equality". Two text fragments are contains-equal if the lexer produces identical index info. Some of the significant ways that this deviates from strict equality are:
Case: unless the lexer has MIXED_CASE on, it considers foo and FOO to be equal strings.
Whitespace and punctuation: in the phrase WORD1 WORD2, the word offset of WORD2 is always 1 greater than the word offset of WORD1 -- it doesn't matter how many spaces or newlines there are between them. Also, any non-alphabetic, non-join character is converted to whitespace (and subsequently ignored). This can confuse names, with Chase Matthew being contains-equal to Chase, Matthew, or phrases, with fruit-plants being contains-equal to fruit, plants.
Stopwords match any word at the same position, so Paris in the spring would be contains-equal to Paris: avoid during spring.
The rules for contains equality seem complex, but it works the same as regular text queries hitting document text -- you've probably internalized these rules already. One significant difference between equality and contains, though, is that the equality test always makes sure that the number of words in the attribute value is the same as the number of words in the query string.
dragon INPATH(//TITLE[@LANGUAGE = "French"])
does not match any of these fragments:
<TITLE LANGUAGE="Canadian French">dragon</TITLE> <TITLE LANGUAGE="French Colloquial">dragon</TITLE> <TITLE LANGUAGE="Medieval French Colloquial">dragon</TITLE>
Although each LANGUAGE attribute value has the word
French, there are extra
words. These would match a contains in the attribute value, but they do not
meet the "same number of words" equality criteria.
While docu-head people use a lot of attributes in their DTD's, data-heads prefer child tags. For instance, a docu-head might write:
<MOVIE YEAR="2001" TITLE="Moulin Rogue">...
While a data-head would prefer:
<MOVIE>
  <YEAR>2001</YEAR>
  <TITLE>Moulin Rouge</TITLE>
  ...
To match the data-head version, you can use equality testing on tag values:
moulin INPATH(//MOVIE[YEAR = "2001"])
Tag value equality uses contains-equality just like attribute value testing.
Inequality is also supported in both attribute and tag value equality, using
the != operator:
moulin INPATH(//MOVIE[@YEAR != "2000"]) moulin INPATH(//MOVIE[YEAR != "2000"])
Note that inequality implies existence. The queries above do not match
<MOVIE>Moulin Rouge</MOVIE>
Because the
You can use boolean
and you want to find movies with
You can use parentheses for precedence:
Nested
will never hit any documents, because both
The
will return all documents where the top-level tag is a
Attribute value equality tests and
write:
is matched by the query:
Since we do not have real parent information, we cannot detect that
Highlighting with the
Oracle 9i introduces a new datatype for storing XML -- the
You can create a ConText index on this type, but you need a few database
privileges first:
1. the user creating the index must have
Without this privilege, the create index will fail with:
2.
or turn it on for the session:
Without these, queries will fail with:
These privileges are needed because under the covers a ConText index on an
When an
Other than the database privileges and the special default section group
system parameter, indexes on
Here is a simple example:
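A minimal sketch (the table, column, and document content are invented; it assumes the privileges described above are already in place):
create table xmldocs (id number primary key, doc sys.XMLType);

insert into xmldocs values (1,
  sys.XMLType.createXML('<BOOK><TITLE>El Perro</TITLE></BOOK>'));

create index xmldocsx on xmldocs(doc)
indextype is ctxsys.context;

select id from xmldocs
 where contains(doc, 'perro INPATH(/BOOK/TITLE)') > 0;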
ctxrule is a new indextype designed for classification, which is the inverse
of information retrieval. In traditional IR applications, for which the
ConText indextype is designed, you index a set of documents and find documents
with a text query; in classification, you index a set of queries, and find
queries with a document. This inversion enables a new kind of application.
Take news, for example. New articles come in off the wire all the time. Say
you want to get emailed any new news articles that mention Oracle. You could
do something like this:
There are several problems with this approach. First, there can be a lot of
lag time between when a document comes in and when it gets emailed out. You
have to first wait for the periodic sync to index the document, then wait for
the periodic query to run and find it. More importantly, though, this process is not scalable as the number of stored queries increases -- the system spends a lot of time running queries that don't hit any new documents.
With ctxrule, the system looks like this:
Instead of indexing the article text and periodically running stored queries,
index the query strings, and use incoming document text to query the queries.
The lag problem gets solved because we don't have to wait for a sync and for a
periodic query -- the classification gets done in one step instead, and as part
of a query. The scalability problem gets solved because you don't have to run
every query all the time -- the index on the queries allows you to find only
the ones that match the given document.
Now that you know what ctxrule is -- a classification engine that takes a
given document and matches it against stored queries -- you need to know what
it isn't.
Even if ctxrule is not a bleeding-edge, fancy learning system, it's still an
interesting indextype that enables new kinds of text applications. We'll now
examine the index and its use in the following sections.
To create a ctxrule index, specify the ctxrule type in the indextype clause:
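A minimal sketch (the table, column, and sample rules are invented):
create table queries (query_id number primary key, query_string varchar2(80));

insert into queries values (1, 'oracle');
insert into queries values (2, 'larry or ellison');

create index queryx on queries(query_string)
indextype is ctxsys.ctxrule;

-- classify an incoming document by running it against the stored queries
select query_id from queries
 where matches(query_string,
               'Oracle announced its quarterly results today...') > 0;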
The supported datatypes that you can index are
The parameters clause can be used to specify indexing objects:
ctxrule supports the same parameters clause options that ConText does, but with
a twist: the section group and filter preferences are inverted and applied at
query time rather than at indexing time. When you specify a section group in
a ConText index, it is applied to the document during indexing. In ctxrule's
case, the document during indexing is a query string, and it wouldn't make
sense to apply a section group. Instead, the section group is applied to the
document that you use in matches, at query time, since the query document could
be XML or some other structured format.
ctxrule has its own set of system parameters for defaulting some of the
preferences:
ctxrule does not support local partitioning in this version.
ctxrule indexes are queried using the matches operator, passing the entire
document text as the query string. Assuming your document text is stored in
a bind variable :doc_text, the query:
will find matching queries in the
Putting it all together for a simple example:
this query will return queries 1 (the word
The document text used in a matches query can be
The language used for the indexed queries is a simplified form of the ConText
query language which is used in
The expansion operators -- about, stem, thesaurus -- are index-time snapshots.
For instance, suppose that your default thesaurus looks like this:
when you index the query:
when the create index is complete, the matches query:
will hit that indexed rule. This is because at index time the thesaurus was
used to expand
The already-indexed rule will not be affected. The matches query:
is not going to hit the query. In order to reflect the new thesaurus, the
query has to be reindexed:
Now we'll discuss the internal details of how the ctxrule index works. If
you're just interested in usage, you can skip to the next section,
"Performance".
The ctxrule indexing process is:
The datastore delivers query strings to the lexer, which passes them whole
to the query parser. The parser -- in this case, a simplified version of the
parser used for ConText contains queries -- creates a parse tree and sends this
back to the lexer. The lexer normalizes the tokens, breaks the parse tree into
rules (more on what a rule is below), and sends these to the engine. The
engine builds up an inverted index of documents by rule, and stores it in the
index tables.
ctxrule uses $I, $R, $K, and $N tables just like the ConText index. The
major difference is in the structure of the $I table, which includes a new
The
Instead of indexing document lists by token, ctxrule indexes query lists by
rule. A rule is one of the tokens in the query string, called the filing term,
stored in
The idea is that if a document has token X, a select on $I by
We'll illustrate token_extra with a simple example, using the queries:
Query 1 is a single word query. A document is a full match if it contains
the word
Query 2 is an OR query. A document is a full match if it contains the word
Query 3 is an AND query. A document must have both
Documents that contain the word
Query 4 is a phrase. The engine will use the first word of the phrase as
the filing term, and the whole phrase as the
Putting it all together, our $I table looks something like this:
Now that we have our index let's take a look at the query process:
The document is passed through the filter, sectioner, and lexer to tokenize it.
The query engine then iterates over the unique tokens in the document. For
each token, it selects rows in $I which have that token as
Let's walk through the query:
First the filter, sectioner, and lexer tokenize the document:
Next the query engine iterates over each unique token. The first token is
This pulls up:
the
Next we move to the second token,
Again
On the fourth token,
When we get to
To verify that the phrase
On reaching TEXT, the rule:
is returned by the select. The engine quickly checks that
Once the engine has processed all tokens in the document, the merged result
list is resolved from docid to rowid (using the $R table just like ConText) and
the rows returned to the query session.
The typical ctxrule query does a lot more work than your typical contains
query. The document has to be filtered, sectioned, and lexed. The $I table
has to be accessed many more times -- once per unique word in the document.
Each
ctxrule query performance depends mainly on the size of the document. As the
size of the document increases, there are more unique words, each of which
results in a $I query. Performance is also affected by number of unique rules
indexed. As this increases, so does the number of partial matches per $I query
and the engine needs more time to evaluate all the
These timings are informal times done on a workstation. They are not tuned
benchmark numbers, and should be used for intra-chart comparison only. The
documents and rules used are derived from real-world corpora.
When looking to improve performance, there's not much you can do about the size of your document, so your ability to decrease response time for a particular document is limited. However, you can get overall throughput improvement through partitioning and parallelism. Instead of a single session doing a matches query on each document as it comes in, round-robin the documents to separate sessions -- or even separate databases on separate machines.
ctxrule indexes require sync and optimize, just like a ConText index. Simply
use the
Document (
The alter index interface for
You can no longer create CTXCAT indexes on
When the user datastore is employed with an
However, some applications use the user datastore as a multiplexer, where
each document is stored in exactly one of several locations (an arc relation,
for you ERD nuts). For instance, you might have separate tables to store
movies, images, and text so that you could record media-specific attributes:
but still want to be able to efficiently search across all media types -- for
movies, search the description, for images, search the annotation, and for
documents, search the text. You could do this with the user datastore. First
create a combined media table:
then create a multiplexing user datastore procedure, which reads the media
type of each row, and fetches the data from the appropriate location:
However, in this case, that last
For this class of application, the user datastore object has been extended
with two new
To use
and, in our preference creation, set output_type to
Now our user datastore is passing back locators instead of doing copies, so we
should see an improvement in indexing performance.
One limitation is that the
You can accomplish this to a limited extent in the user datastore procedure.
It can create a temporary lob and store the locator in a persistent place, such
as a package variable. When you need to use it, trim the temp lob, construct
the document content, and set the OUT variable to the temp lob locator.
Some customers would like to have a single repository table for all media.
This includes non-text data like images and soundfiles. When creating a
ConText index on these, at best INSO filters them to empty files and you waste
time. At worst, there's no filtering and you end up with a lot of garbage
tokens.
To deal with these, a new
Here's
The first document is marked as
The
The
The INSO filter will occasionally hang or loop on certain documents. In the
past, this would bring indexing to a halt. While we continually work with INSO
corporation to improve filter quality, we have added a timeout so that such
cases can fail individually, allowing the indexing to continue.
You can set timeout as an attribute of the inso filter:
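For example (a sketch; 120 seconds and the preference/index names are arbitrary):
begin
  ctx_ddl.create_preference('my_inso', 'INSO_FILTER');
  ctx_ddl.set_attribute('my_inso', 'TIMEOUT', '120');
end;
/

create index mydocsx on mydocs(doc)
indextype is ctxsys.context
parameters ('filter my_inso');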
The unit of the TIMEOUT attribute is seconds.
The timeout interval is not an upper limit on document filtering time -- a
document can take longer than the timeout interval.
Do not set timeout too low. There is currently no way to override the INSO
timeout on a per-document basis. If it turns out that INSO can complete
filtering a timed-out document, just not make any progress each interval, there
is no way to get that document indexed. The best you can do is manually filter
to text and replace the binary contents with text contents. We believe that
documents not progressing after 2 minutes are very rare.
The user datastore is often used to construct virtual structured documents
by concatenating several columns of the base table. However, this technique
can't really be used when one or more of those columns is binary data, such as
a Word document. The binary data must be converted to text in the datastore
procedure before concatenation.
Unfortunately, there is no way to do this in previous versions. The obvious
attempt -- calling
So for 9i we've created a new document service
No index is needed, since you pass in the
The text is always HTML in this version -- a plaintext option may be added
in the future. Because
The INSO and charset filters currently convert documents to the database
character set when needed. The conversion is done using Oracle NLS functions.
Since 9i NLS improvements add UTF-16 character set support (
However, we have also added UTF-16 endian detection. If the document
character set specification is
An example of using UTF-16 auto-detection:
Written Japanese is problematic for full-text indexing because words are not
space-delimited; sentences are written in one long string of kana and kanji.
Human readers can easily pick out the words, but this is much harder to do
mechanically.
The existing
While this allows full-text retrieval, it has some problems. Index space is
wasted on "tokens" that span word boundaries, because they might never get
queried -- and if they do get queried, it's a false hit. Query is slowed by
the need to combine index data from multiple tokens for each word, and these
tokens are somewhat unselective, because they are character sequences instead
of full words.
9i includes the new
The lexicon is an Oracle-modified version of the EDR lexicon, licensed from the
European Language Resource Association. This is not a case of poor American
geography skills -- just a funny consequence of licensing issues. The lexicon
is not user-modifiable in this version.
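To use it, create a lexer preference and name it in the parameters clause (a sketch; it assumes the new lexer type is named JAPANESE_LEXER, and the table/column names are invented):
begin
  ctx_ddl.create_preference('my_jlexer', 'JAPANESE_LEXER');
end;
/

create index jdocsx on jdocs(text)
indextype is ctxsys.context
parameters ('lexer my_jlexer');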
Written Chinese poses the same problem as Japanese: lack of whitespace.
Previous versions have two lexers for Chinese: the
Completing our tour of East Asia, we have a new Korean lexer. Unlike Chinese
or Japanese, Korean is mostly whitespace delimited. However, like German,
it has a large number of compound words. Since words change form when combined
in compounds, finding word boundaries is quite difficult. Furthermore, verbs
have a large number of inflectional forms -- literally thousands.
The new lexer,
The
The
The system dictionary should not be modified, but the grammar, user, and
stopword dictionaries can be user-edited for extension and customization. The
rules for editing each one can be found in the files themselves (albeit in
Korean). After modification, it is best to bounce the database or flush the
shared pool to ensure that the changes get reloaded into memory.
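Usage follows the same pattern as the other lexers (a sketch; it assumes the new lexer type is named KOREAN_MORPH_LEXER, and the table/column names are invented):
begin
  ctx_ddl.create_preference('my_klexer', 'KOREAN_MORPH_LEXER');
end;
/

create index kdocsx on kdocs(text)
indextype is ctxsys.context
parameters ('lexer my_klexer');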
Version 8.1.7 introduced the multi-language stoplist. One drawback to this
functionality was that stopwords that appear in multiple languages had to be
added multiple times. Foreign-language documents in the real world sometimes
contain passages in English (we refuse to learn anything else). In order to stop common words in every language, you had to add each stopword once per language, which quickly uses up the 4096 available stopword slots.
For this case, we've added a new ALL language for stopwords:
An ALL-language stopword works for any and all document languages, while taking
up just one stopword slot.
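For example (a sketch; the stoplist name is invented):
begin
  ctx_ddl.create_stoplist('my_stoplist', 'MULTI_STOPLIST');
  ctx_ddl.add_stopword('my_stoplist', 'the', 'ALL');
  ctx_ddl.add_stopword('my_stoplist', 'der', 'german');
end;
/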
Wildcard has been added to the CTXCAT query language, using asterisk. For
instance:
does the same thing as:
Left-truncated wildcards are not supported. They are stripped off and the
remainder of the pattern is expanded.
is the same as the above query for monopol*. Wildcards in the middle of a
token are supported:
CTXCAT queries do not perform well when the number of tokens in queries is
very large. If you expect to use CTXCAT wildcard a lot, we highly recommend
also using prefix indexing; wildcard queries will be one prefix token instead
of multiple normal tokens.
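For example, a sketch of a prefix-indexed CTXCAT index and a wildcard catsearch (the names and prefix lengths are arbitrary):
begin
  ctx_ddl.create_preference('wild_lexer', 'BASIC_LEXER');
  ctx_ddl.set_attribute('wild_lexer', 'PREFIX_INDEX', 'YES');
  ctx_ddl.set_attribute('wild_lexer', 'PREFIX_MIN_LENGTH', '3');
  ctx_ddl.set_attribute('wild_lexer', 'PREFIX_MAX_LENGTH', '8');
end;
/

create index itemsx on items(title)
indextype is ctxsys.ctxcat
parameters ('lexer wild_lexer');

select * from items
 where catsearch(title, 'monopol*', null) > 0;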
Transportable tablespace is now supported by all three indextypes, as long
as all internal objects are in the tablespace set. With transportable
tablespace, you can do your indexing on a large machine, then quickly push the
results to multiple query server databases for query and DML scalability. To illustrate, we'll create a table in tablespace datat, with a ConText index whose internal objects are all in tablespace indext.
Imagine that the table has a lot of data, and that the index takes a long time
to create and populate the internal tables. Now make the two tablespaces read
only:
then export the two tablespaces:
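A sketch of those steps (the password and file names are placeholders):
alter tablespace datat read only;
alter tablespace indext read only;

-- then, from the operating system:
exp system/password transport_tablespace=y tablespaces=datat,indext file=text_tts.dmp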
In order to do this, the text table and all secondary objects for domain
indexes have to be in the tablespace set. Since our text table is in datat
and all secondary objects for the text index are in indext, we meet this
requirement.
Now we use an OS copy or FTP to get the datafiles for datat and indext to the target system, and import:
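A sketch of the import (the datafile paths and password are placeholders):
imp system/password transport_tablespace=y file=text_tts.dmp datafiles=/u02/oradata/datat01.dbf,/u02/oradata/indext01.dbf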
The text table and text index are created on the target database very quickly.
The index does not have to be rebuilt -- all the data in the datafiles is
useable, since the table data rowids are not changing. All that needs to be
done is a little bit of meta-data cleanup.
User datastore procedures and preferences are not preserved by this, so
if you intend to do sync or indexing on the target system, be sure to manually
bring those over. Queries should be fine without them.
If you have rows in
If you are refreshing a target system, then the tablespaces already exist.
They must be dropped before importing. Normally you can do:
However, this will not work if the tablespace contains tables with domain
indexes, or domain index secondary objects -- either
then drop the tablespaces.
When something goes wrong during indexing -- a core dump or a hang, for
instance -- you can usually work around it by removing or
To make this easier, 9i has a new rowid printing event. When set, the rowid
of each row will be printed to the logfile. For instance:
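a sketch using CTX_OUTPUT (the logfile and index names are placeholders):

exec ctx_output.start_log('mylog');
exec ctx_output.add_event(ctx_output.event_index_print_rowid);
exec ctx_ddl.sync_index('myidx');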
the log shows:
The ROWID message is printed just after the datastore has selected the row from
the base table, but just before it is read or processed. This means that if a
problem occurs, the last ROWID printed in the log is the problem document, and
should be removed.
There is currently only one event. If you have a suggestion as to other
events that could be useful, let us know.
The view
This is a technical overview of the improvements found in Oracle Text version
9.2.0. This is intended for an audience fairly familiar with previous versions
of the product.
If you are new to the product, you should familiarize yourself with the
product before reading this document. The official documentation, especially
the Application Developer's Guide, is a good place to start, as are the
interMedia Text technical overviews for 8.1.5, 8.1.6, 8.1.7, and 9.0.1 in
order. (Someday I'll get around to creating an updated overall technical
overview which merges these together)
The goal of this overview is to supplement the documentation in three ways:
Oracle Text is integrated with the Database Migration Assistant (DBMA),
so the Text upgrade should happen as part of the overall database upgrade
procedure. This section details the manual upgrade process, in case something
goes wrong or you choose to not use DBMA.
This process assumes that you are starting with a valid 9.0.1 ctxsys data
dictionary, and that the system is not being used for index, DML, or query.
Use SQL*Plus to run all scripts.
1. run
2. run
It is not necessary to run anything else. The
The script is not expected to encounter errors other than ORA-955 ("name is
already used by an existing object", for public synonyms). After the script
is complete, all ctx-created objects owned by ctxsys should be
Oracle Text also uses the new
The URIType and its subclasses are now indexable. As an example:
URIType indexes use DEFAULT_FILTER_TEXT and DEFAULT_SECTION_HTML. So, with
the default defaults, the above index would be using the NULL_FILTER and the
HTML_SECTION_GROUP.
Behind the scenes, Text is using the GETBLOB method to fetch the contents
of the URI. If you run into problems creating the index, a good sanity check
is something like:
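perhaps a simple select that forces the URI content to be fetched -- a sketch, assuming a table uri_docs with a URIType column named doc:

select u.doc.getclob() from uri_docs u;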
which checks if the rows are readable outside of Text.
Version 9.0.1 added the TIMEOUT attribute to INSO_FILTER. This checked the
size of the output file every TIMEOUT seconds and killed the filter if
no progress was detected. If a file filtered very slowly, the inso
filter could run for a long time -- even exceeding TIMEOUT seconds, as long as
the output file was continually growing in size.
The new TIMEOUT_TYPE attribute allows specification of the timeout style as
HEURISTIC (old 9.0.1 behavior) or FIXED (kill after TIMEOUT seconds no matter
what). The default is HEURISTIC to maintain backward compatibility. While
using FIXED timeout is more predictable, it may cause some long-filtering
documents to be excluded from the index.
By default, stemming is a query expansion operation. At query time, it takes
the input word, generates all possible forms of the word, does a query to get
token_info for each form, then merges the results. This can be inefficient.
To ensure correctness, the stemmer must generate all forms of the input term
-- even strange or really uncommon forms. Since every form results in a query
on $I, each form that does not appear in the index just wastes time. The forms
which do appear in the index incur token info merging overhead.
The new INDEX_STEMS attribute of BASIC_LEXER provides a better method --
index-time stemming. During indexing, each word is stemmed to its base form.
This base form is added to the index (as a type 9 token) along with the
original form. For example:
We'll use the English derivational stemmer in the wordlist and lexer, and
create the index:
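A sketch of the setup (preference, table, and index names are illustrative):

exec ctx_ddl.create_preference('my_lexer', 'BASIC_LEXER');
exec ctx_ddl.set_attribute('my_lexer', 'INDEX_STEMS', 'DERIVATIONAL');

exec ctx_ddl.create_preference('my_wordlist', 'BASIC_WORDLIST');
exec ctx_ddl.set_attribute('my_wordlist', 'STEMMER', 'DERIVATIONAL');

create index stemx on stemtab(text) indextype is ctxsys.context
  parameters ('lexer my_lexer wordlist my_wordlist');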
The $I shows the base form indexed as type 9, and the original forms as
type 0:
A query for $word is then just a matter of reducing word to its base form and
returning the results for that one type 9 token. This makes stem query
faster at the cost of index time and space. To prove that it is not doing the
expansion here, we can remove the type 0 rows from the index:
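For this experiment only (never do this to a production index), using the stemx index from the sketch above, something like:

delete from dr$stemx$i where token_type = 0;
commit;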
Under certain circumstances, the stem operator is forced to fall back to
expansion behavior. If the stem operator output is the input to other
expansion operators, such as ?($(word)) then the stemming must be done as an
expansion in order to get the input to fuzzy. Another case is when the
query-time stemmer does not match the index-time stemmer. For instance, if
in the example above we had used the default wordlist (which uses English
stemmer) then the stem query would be done through expansion for correctness.
CTX_QUERY.EXPLAIN is the best way to examine the query plan and determine if
stem form or expansion is being used.
The USER_LEXER lets you write and plug in your own lexer to handle languages
or other indexing techniques that Text does not support. You specify two
stored functions -- one for index,
one for query -- that take in text, lex into tokens, and pass back the tokens
in an XML-esque document.
For our example, we'll create a very simple Hawaiian lexer. It will
uppercase the input and remove single quotes (glottal stop or okina).
First, we write the index lexer procedure:
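The procedure's shape is roughly as follows (a sketch only; the exact required signature is in the Text Reference):

create or replace procedure ul_idx_proc (
  text      in            varchar2,
  tokens    in out nocopy varchar2,
  locneeded in            boolean
) is
begin
  -- uppercase the input, strip okinas, and emit each word as a <word> element
  null;  -- body omitted here
end;
/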
(see Appendix B for actual code samples). Like user
datastore procedures, user lexer procedures must be owned by ctxsys and the
indexing user must have EXECUTE permission. For simplicity, our example will
be done entirely in the ctxsys schema.
Our example is also written entirely in PL/SQL. You can write lexers using
stored Java, or an external language like C or Perl, using the extproc
mechanisms (see the PL/SQL User's Guide).
We've used VARCHAR2 for "text" and "tokens" for simplicity, but CLOB is also
possible here for real-world implementations. You can name the parameters
whatever you want -- for readability, I'll refer to them by the parameter
names I've given them in this example, instead of "parameter 1".
Given input "imua i'olani", our user lexer will output in "tokens":
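roughly like this (assuming location data is not required; the exact element names and attributes are defined by the DTD mentioned below):

<tokens>
  <word>IMUA</word>
  <word>IOLANI</word>
</tokens>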
The Text Reference contains the DTD and restrictions for this output format.
It's XML-esque, in the sense that it is XML, but with additional restrictions.
Note that because it's based on XML, those tag names are case-sensitive.
"locneeded" is TRUE when "tokens" output needs to include location data -- the
character offset and length of each token. locneeded will be TRUE most of the
time -- only stopword lexing does not require location data.
Providing location data when locneeded is FALSE, or omitting location data
when locneeded is TRUE results in a lexer error.
ul_idx_proc will be called:
Next, we create a similar query lexer procedure:
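again as a sketch (the wildcard table type here is the CTX_ULEXER collection type documented in the Text Reference):

create or replace procedure ul_qry_proc (
  text      in            varchar2,
  wildcards in            ctx_ulexer.wildcard_tab,
  tokens    in out nocopy varchar2
) is
begin
  null;  -- same transformation as ul_idx_proc, leaving wildcard characters alone
end;
/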
"wildcards" is an input table of numbers which tells you which characters of
the passed-in text are wildcard characters. The output format of tokens is
similar to that of the index procedure. ul_qry_proc will be called at
query time, once per query term.
Now we create the user lexer:
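something like this (the preference name is up to you):

exec ctx_ddl.create_preference('hawaiian_lexer', 'USER_LEXER');
exec ctx_ddl.set_attribute('hawaiian_lexer', 'INDEX_PROCEDURE', 'ul_idx_proc');
exec ctx_ddl.set_attribute('hawaiian_lexer', 'QUERY_PROCEDURE', 'ul_qry_proc');
exec ctx_ddl.set_attribute('hawaiian_lexer', 'INPUT_TYPE', 'VARCHAR2');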
and use it in creating an index, like any other lexer:
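for instance, on a hypothetical table hdocs:

create index hdocsx on hdocs(text) indextype is ctxsys.context
  parameters ('lexer hawaiian_lexer');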
This is an extremely simple example. Your user lexer can also do composite
words, detect sentence and paragraph boundaries, and do custom number parsing.
See the Text Reference for full details.
Version 9.2.0 implements support for create index online, which allows DML on
the base table during an index creation:
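for example (table and index names are placeholders):

create index myidx on mytable(text) indextype is ctxsys.context online;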
Alter index rebuild online for "replace" and "resume" is also supported.
These should improve usability for installations that cannot afford to delay
changes during a long-running index creation or full rebuild.
The DML that happens during indexing gets logged to the table
dr$online_pending. At the end of the index creation, these pending DMLs are
processed, removing deleted rowids from $R and $K, and moving new rowids to
dr$pending. These changes are therefore not actually added to the index until
the next sync.
Keep the following limitations in mind when using online:
The parallel create index in previous versions used dbms_jobs and could be
applied only when creating global indexes on partitioned tables. The parallel
support has been rewritten in this version, and now supports creating an index
on a non-partitioned table in parallel.
When you issue a create index like:
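say (again with placeholder names):

create index myidx on mytable(text) indextype is ctxsys.context parallel 4;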
under the covers, the base table scan is split up by a parallel query:
The database's parallel query facility splits the base table into partitions,
spawns slave processes, and assigns a different partition to each slave. Each
slave text indexes the rows in its partition.
The method of slicing the base table into partitions is determined by the
database's parallel query facility and is not under your (or Text's) direct
control. The same goes for the number of slave processes actually spawned, which
depends on machine capabilities, system load, init.ora settings, and other
factors. It may not match the degree of parallelism requested. See
Notes on Parallel Degree for more
information.
The parallel servers
are P servers -- on my Solaris workstation, they show up named
ora_p<slave#>_<SID> -- like ora_p001_TEXT.
If logging is on, you'll get a separate logfile for each slave process, with
the slave number appended to the logfile name. For instance, a parallel 2
create index where logging is to file "LOG" will create "LOG_0" and "LOG_1"
files. Be careful though -- each parallel operation opens and closes these
files. So, if you do a parallel index create, followed by a parallel sync
later on, the logfiles from the create can get overwritten, because each sync
slave will re-open its assigned logfile, thereby erasing the existing contents.
It's probably a good idea to back up the logfile or change the logfile name
(through
Also now supported are:
Keep the following limitations in mind when using parallel:
The new parallel operations -- parallel create index, alter index, sync index,
and optimize index -- use parallel query under the covers. For parallel
create index/alter index/sync, if you specify
parallel degree of N, and you get N servers started, only N/2 of
those servers are actually doing indexing. If you are logging, you will
only see N/2 logfiles. The other N/2 servers are doing the work of
partitioning the base table. So, if you want 4 servers actually indexing, you
should specify a parallel degree of 8.
Parallel optimize does not use the same method. You specify N, it starts N
slaves, and all N of those slaves are doing optimize. However, the workload
across those N may be unequal, and some parallel servers may not actually
have any work to do -- you may see fewer than N logfiles.
The settings of various database parameters also affect the actual degree of
parallelism. With improper settings, you could specify parallel 8 and only
get 4 slaves. The important ones are:
The UNUSABLE keyword is now supported for create index. This will create
the index meta-data and then immediately exit the indexing. With support for
rebuild partition in parallel, you can now use
For instance, to create a local index on ptest, doing 3 partitions at a time,
building each partition in parallel 2, you'd do:
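a sketch, assuming partitions named p1 through p9, a column named text, and three concurrent SQL*Plus sessions:

create index ptestx on ptest(text) indextype is ctxsys.context
  local unusable;

-- then, three partitions at a time, from three separate sessions:
alter index ptestx rebuild partition p1 parallel 2;   -- session 1
alter index ptestx rebuild partition p2 parallel 2;   -- session 2
alter index ptestx rebuild partition p3 parallel 2;   -- session 3
-- and so on for the remaining partitions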
alter table exchange partition quickly swaps data between a non-partitioned
table and a table partition -- it doesn't physically move the data, it simply
reassigns data segments. Using exchange partition, you can quickly coalesce
several independent tables into one large partitioned table. You can have
multiple site databases collect info, transportable tablespace them into a
master database, and use exchange partition to get a monolithic table for data
mining, for example. When exchange partition is used with the "including
indexes" clause, the same fast switching is supposed to happen with the
non-partitioned table's index and the partitioned table's local index partition.
In 9.0.1, an exchange partition including indexes for context indexes forced
a rebuild of both the index on the non-partitioned table and the index on the
table partition. This has been rewritten so that exchange partition now does
data switching, making it much faster.
As an example, let's examine what happens when you do:
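for example (table and partition names are made up):

alter table master_tab exchange partition p_2002q1 with table staging_tab
  including indexes without validation;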
First, Text will create a temporary table for each internal table. The
$I temporary table, for instance, will be DRMNPX$I. These temporary tables
are partitioned, with one partition. Then, it issues exchange partition
commands to:
for each internal table, except for $K. $K cannot be partitioned, so instead
we physically move rows between the three $K tables. This data moving will
probably take the majority of the time for most exchange operations. Finally,
the temporary tables are dropped.
In order to use exchange partition including indexes, the internal tables
cannot be partitioned. If, using the storage preference, you have partitioned
any internal table, you'll get an error message if you try to do exchange
partition including indexes. Similarly, if the two indexes do not use the
same indexing objects, you will get an error message during exchange. For
these cases, you will have to use exchange partition EXCLUDING INDEXES and
then manually rebuild the non-partitioned index and the partitioned index's
index partition.
Sync of context indexes can now be run in parallel. Specify the
desired number of slaves in the new parameter
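presumably the parallel_degree argument of sync_index -- for example:

exec ctx_ddl.sync_index('myidx', '50M', parallel_degree => 4);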
Unlike serial sync, a snapshot too old error during a parallel sync will
cause a failure, so if possible minimize the possibility of such errors by
running parallel syncs during periods where DML activity on the base table
is low.
Parallel sync is not supported for any other indextype.
9.0.1 sync locks each row in the base table during indexing. A long-running
sync operation can therefore inconveniently block users' access to base table
rows for long periods of time. The locking algorithm has been rewritten in
9.2.0 to lock these rows only at the end of indexing. This greatly reduces
the window in which user access is blocked. A side effect of this change is
that updates during sync processing may result in garbage in $I. The sync
process will index the old version of the document, notice the update at
lock time, and be forced to automatically invalidate the just-indexed document.
We do not believe that this side effect will be noticeable or even encountered
at most installations.
See Appendix C if you are interested in an in-depth
technical overview of the new sync locking mechanism.
Optimization now writes periodic progress updates to the CTX_OUTPUT log. If
greater granularity is desired, the new event
TOKEN mode optimization does a FULL optimize for a single token. In previous
versions this was limited to type 0 (normal text) tokens, so you could not
affect field section text tokens or zone section tokens, for instance. This
version adds a new
If you don't know the token type number, you can use the new
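For illustration, assuming the new arguments are the token_type parameter of optimize_index and the ctx_report.token_type helper function, a TOKEN optimize of a field section token might look like this (the field section name author is a placeholder):

begin
  ctx_ddl.optimize_index('myidx', 'TOKEN',
    token      => 'ORACLE',
    token_type => ctx_report.token_type('myidx', 'field author text'));
end;
/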
See
FULL optimize of context indexes has been rewritten, and has the following
two enhancements:
Will not rewrite already-optimized rows. 9.0.1 optimize deletes every row
in $I, removes garbage, defragments, then re-inserts them. This causes a lot
of excess rollback and redo activity when run on already-optimized indexes.
The new optimization method is designed to leave such rows alone.
Optimizes all $N rows. 9.0.1 optimize was limited to removing garbage
for maximum 16,000 docids, and would remove garbage only for those docids
that were marked in $N. So, if you did a time-limited full optimize, then
deleted 10,000 docids, the next time-limited full optimize would not see
those new deletes, and would not remove their information from $I. The new
optimization method reads the full contents of $N at the start of each
full optimize, and removes all information for those docids.
Marking and removal of rows from $N is still capped, so you will see $N count
decreasing 16,000 at a time -- under the covers, we're really optimizing
everything, so this should not be cause for concern.
FULL optimize of context indexes can now be run in parallel. Specify the
desired number of slaves in the new parameter
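presumably the parallel_degree argument of optimize_index -- for instance:

exec ctx_ddl.optimize_index('myidx', 'FULL', maxtime => 60, parallel_degree => 2);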
A parallel time-limited full optimize may result in certain $I rows being
optimized more than once. For instance, let's say you do a parallel 2 full
optimize, and, when time runs out, slave 1 is up to DOG, while slave 2 is
up to DOKUMENT. Because there is no guarantee that all rows between DOG and
DOKUMENT have been optimized, the saved state is the minimum, which is DOG.
The next full optimize will therefore re-optimize any rows between DOG and
DOKUMENT that slave 2 has already examined. However, with the
new FULL optimize method, this should have
a minimal impact if those rows are still optimal.
FAST optimize has not been parallelized, and parallel FULL optimize is not
supported for any other indextype.
While we have not been able to reproduce such a situation in-house, customers
have reported that occasionally a create index or alter index fails and leaves
the context index in an unusable state. The database believes that the index
is in "LOADING" or "INPROGRESS" state and blocks any attempt to recover through
alter index. The only recourse is to drop and recreate the index.
For this situation 9.2.0 adds
This is a last-resort failsafe and is not considered a fully public part of
our API. This procedure can be dangerous if run improperly. It should be run
only under direction from support or development, and only when no other
session is touching the index.
The algorithms for estimating cost and selectivity of context queries have
been tweaked. This should result in optimal plans being generated more often
when using the CBO.
Query templates are an alternative to the existing query languages. Instead
of passing a query string to contains or catsearch, you pass a structured
document which contains the query string in a tagged element. Other elements
in this document control aspects of query processing.
The simplest form of query template is:
You just use the document as the query string, so a complete query looks
something like:
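perhaps something like this (table and column names are placeholders):

select id from mytable
where contains(text, '<query><textquery>cat and dog</textquery></query>') > 0;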
though in real life the query document will most likely be generated by the
application into a pl/sql or bind variable and passed to the contains. You
don't need to do anything to be able to use query templates -- Oracle Text
recognizes that the query string starts with "<query>" and automatically
shifts to query template parsing mode. If it doesn't see that opening tag,
the default query parser is used, behaving like previous versions.
You can control two things using query templates in this version: grammar and
score datatype. An attribute of
The
We plan to add more features to query templates in future releases.
The query template language is XML-like, but not XML. Some differences:
you won't get an error -- but the query will be interpreted as a context
language query string, meaning "the phrase 'cat dog'" instead of "cat and dog".
You can even add invalid elements or attributes like "<foo>" without
affecting query processing.
Thanks to extensible indexing enhancements, you can now parallelize contains
queries on local partitioned context indexes. Parallel query is not supported
for non-partitioned context indexes, or non-context indexes (e.g. ctxcat).
Even for context indexes, the usage is a little tricky, and the performance
may not necessarily be faster for your application.
To examine parallel query, we'll use an "articles" table which is partitioned
by numeric "art_id" column, and which has a local partitioned context index
on "art_text", called "artx". A simple query like:
Uses the query plan:
This is a serial query -- it iterates over the index partitions, and does a
contains query on each one. The contains query on partition 2 is not started until
the contains query on partition 1 is complete.
In order to parallelize, you need to adjust the parallel degree on the index:
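for example, to ask for a degree of 4:

alter index artx parallel 4;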
This alter index statement only changes a bit of data in the SYS dictionary
tables, so it takes a second or so to run. The parallel degree can also be
specified in the create index statement, but, since the parallel keyword is
ignored for local indexes (see
Parallel Create Index), it is not seen
often.
Once the index parallel degree is changed, the query plan shows parallel
execution:
This is, for the most part, the only way to get this query parallelized.
The table parallel degree can also trigger parallelism, but only in limited
circumstances (See Appendix D).
There is no hint you can use to force parallel query -- the
To process the contains query, the database will start up some number of
parallel
slaves -- it will try to start 4 slaves (the degree of parallelism on our
index) but the actual number started will depend on the number of slave
processes already running for other sessions, machine capabilities, init.ora
parameters, etc. On Solaris, each slave is a database background process like
ora_p001_<SID>. The slaves are each assigned a few partitions. They iterate
over these partitions and do a contains query on each one while the parent
process merges the results.
Parallel queries will deliver the most benefit for queries involving several
partitions, that need all results (this includes score-sorted queries), and
that are run on multi-processor machines. Parallel execution may actually
slow down the query when run on a small machine, or when optimizing for
response time, or system throughput (See
Appendix D).
The natural order of contains queries (that is, the order of rows returned
from a query in the absence of an order by clause) is docid order. 9.2.0
queries reverse this, so that query results are now returned in reverse docid
order. This is enabled automatically for all contains queries in 9.2.0 -- you don't
need to do anything to turn it on.
In order to return hits in reverse docid order, each term node in the query
pre-decompresses the docid information from the $I row into an array, then
walks the array backward. In order to save memory, the array does not include
word position information, which is instead re-read from the $I data.
We have run numerous benchmark tests, and have not shown any noticeable
performance impact of this extra processing.
The intent is to provide rough reverse-date sorting. If documents are inserted
in chronological order, and sync'ed in chronological order, then reverse
docid order is reverse chronological order. In practice, however, docid order
can get out of sync with chronological order, so don't rely on reverse docid if
you need accurate order.
Some of the ways docid order can get out of sync with chronological order:
Theme indexing for CTXCAT is now supported. The indexed themes can be used
when ABOUT queries are issued through
This version includes a new indextype -- ctxxpath. Although
this indextype is created in Text install, and uses Text code, it
really doesn't have anything to do with Text searching. A ctxxpath
index, which can be built only on
There are three steps to setting up a basic document classification
application:
Version 9.0.1 introduced the ctxrule index type. This automated step 3 of
the process, but it was up to the user to manually write the routing rules.
The new
To use training, first gather a sample set of documents -- you can even use
the real document table, if you like. The document table MUST have a numeric
column with all positive, integer key values less than 4 billion. It doesn't
have to be the primary key, but it should be unique and allow fast access by
id value. We'll call this column the doc_id column, though you can name it
whatever you want.
Next, create a context index on the table. You can use whatever preferences
you want, but
Now create a table to hold the document groups. It's a many-to-many
normalization table, so it needs one column to hold a document id, and one
column to hold a category id. You can name the table and the columns whatever
you like, and have extra columns if you want. However, the doc_id and cat_id
columns both have to be numeric, limited to positive integer values less than
4 billion. It's assumed that you have a category table out there somewhere,
but it is not actually used during training, so technically you don't need one.
Training will perform best if you have a normal b-tree index on the category
id column (or a composite key index where category id is the first column).
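For example, the group table and supporting index might be (the names are up to you):

create table doc_cat (doc_id number, cat_id number);
create index doc_catx on doc_cat(cat_id);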
Final setup step: create a result table. Recall that the output of
Now we run
If logging is on, progress will be shown in the logfile. When it completes,
the rule table will be filled with rules. As a last step, you can create a
ctxrule index on the result table.
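for instance, assuming the result table is named rules and the generated rule text is in a column named rule:

create index rulesx on rules(rule) indextype is ctxsys.ctxrule;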
It may not be possible to generate rules in some situations; you won't always
get rows in the rules table. Also, this is a statistical process, so the rules
only approximate your grouping -- if you classified the sample documents using
the produced ruleset, you may not get exactly your doc_cat table back.
See Appendix E for a closer look at the
training process, including coverage of
The
You can see sample output for some ctx_report functions in
Appendix F, if you want to get an idea of
what's included in each report. Now let's take a look function by function:
It also needs to compile a lot of information, for which it uses a session
temporary table. Insertions to the temporary table are committed periodically,
so a single call to
for example:
The FIELD* types will fetch the index meta-data from the dr$ tables, so if
you're going to be calling these often, you might want to cache the values
to avoid repeated lookups.
If you are just interested in one of the static types, it does not need the
index meta-data, and you can pass NULL as the index name, if you like.
Actually, you're probably better off using one of the package constants:
This version re-introduces policies. A policy is a synthetic object like a
preference. You can think of it as a context index minus the index data
-- that is, it's just a collection of index objects and attribute settings.
You create a policy with
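A sketch using ctx_ddl.create_policy (the object names are placeholders; anything unspecified falls back to the defaults, as described below):

begin
  ctx_ddl.create_policy('my_policy',
    filter        => 'CTXSYS.INSO_FILTER',
    section_group => 'CTXSYS.HTML_SECTION_GROUP',
    lexer         => 'my_lexer');
end;
/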
Since there is no base table, there is no need for a
The defaulting system works for policy creation. For instance, if you specify
an INSO_FILTER for
Preference values are deep-copied, so you can modify or even drop the
preferences after
You can update an existing policy using
Policies use the same data dictionary tables as indexes, so you can view
existing policies using the views
So what are policies for? Not much in this version. They are used in
the XPath function
As with the earlier overviews, the purpose of this Technical Overview is as follows:
The $I table in Oracle Text is generally the largest index table. It contains all of the indexed words from the source, and for each word it has a set of "postings", consisting of the docid of the row containing the word, and the word positions within each docid. This information (held in the token_info column) is encoded into a binary string. In previous versions of Oracle database, each token_info entry was restricted to somewhat less than 4000 bytes in length, to ensure that it could be held "in line" in the same database block as the token_text to which it refers. This avoids an extra seek which would be necessary if the token_info was in a separate "out of line" LOB.
As postings lists get larger and larger, we need more and more rows in the $I table for each word. We also find that as disk technology improves, the performance of sustained large reads improves much faster than the performance of the seeks necessary to find another row in the table. Therefore in 12c we have provided an option to use much larger token_info entries, making use of Oracle's "secure files" technology.
The option is switched on using the BIG_IO attribute in the BASIC_STORAGE preference. Note that you must be using ASSM (Automatic Segment Space Management) to use this option. ASSM is the default for all tablespaces other than SYSTEM and TEMP, but if your tablespace does not use ASSM, the BIG_IO setting will be ignored, and Text will revert silently to normal BASICFILE lobs.
BIG_IO is switched on as follows:
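roughly as follows (the preference, table, and index names are placeholders; boolean storage attributes generally accept YES/NO or TRUE/FALSE):

exec ctx_ddl.create_preference('my_storage', 'BASIC_STORAGE');
exec ctx_ddl.set_attribute('my_storage', 'BIG_IO', 'YES');

create index my_index on my_table(text) indextype is ctxsys.context
  parameters ('storage my_storage');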
And here's a testcase which will confirm that BIG_IO is working. We insert the same word into 5000 rows and index it.
With BIG_IO on, we will see just a single big row in the $I table, whereas with BIG_IO off, we would see at least two rows:
In previous versions of Oracle Text, there was always a trade-off between latency and fragmentation of indexes.
Users generally want their indexes to be updated as fast as possible, by having the index synchronization done frequently, or even immediately "on commit". However, since the size of the $I "chunks" written to the index depends on the number of records synced at a time, this leads to excessive fragmentation of the $I table and the need to run index optimization more often, and for longer.
In 12c we introduce the concept of two-level indexes under the feature name "near real time indexes". This allows you to have a small, fragmented index containing all the recent updates to the table, without having to modify the large "standard" index. The aim is that the small index should be small enough to fit comfortably into SGA memory, so the fragmentation is not really an issue (since memory "seeks" are virtually free).
The feature is turned on with a storage preference STAGE_ITAB set to true:
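for example (again with placeholder names, and using sync on commit as in the test below):

exec ctx_ddl.create_preference('my_storage', 'BASIC_STORAGE');
exec ctx_ddl.set_attribute('my_storage', 'STAGE_ITAB', 'TRUE');

create index my_index on my_table(text) indextype is ctxsys.context
  parameters ('storage my_storage sync (on commit)');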
After doing this, we find we have a new "dollar" table - DR$MY_INDEX$G which has exactly the same layout as the $I table. Similarly, there will be an index on this table DR$MY_INDEX$H, which matches the normal $X index.
If we create the index empty (as above), then add a row to the table commit it (note that we used "sync (on commit)"), we will see that there are rows in the $G table but not the $I table. In fact, let's add two rows, committing/syncing after each:
We can see that there are no entries in the $I table, and four entries in the $G table. That's two for "world" (since the rows were synced separately) and one each for "hello" and "goodby".
We can optimize this using a new option - MERGE. This will take rows from the $G table and move them to the $I table, optimizing as it does so.
Now we can see that there are three entries in the $I - the two "world" records have been combined - and no entries in $G.
Note that while the entries in $I are optimal in terms of fragmentation, they can still contain garbage. If a record is deleted, the $I table may contain pointers to records which no longer exist, or even words that no longer exist in the index at all. In order to remove garbage, it is still necessary to run optimize in FULL mode occasionally.
Now we mentioned earlier that we wanted to keep the $G table in memory. How do we ensure that happens? We could of course just rely on normal SGA caching - if there are regular updates to the index then we can really expect that it will remain in memory. Alternatively, if you have a "keep" pool, which is sized to ensure nothing gets flushed to disk, you can make use of this using appropriate STORAGE attributes, such as:
In the previous discussion, we considered the use of the "MERGE" optimization mode, to move index data from the "in memory" $G table to the "on disk" $I table. At times, it can be beneficial to run this merge manually. But there's an alternative to simplify things - a new ctx_ddl procedure called add_auto_optimize. We can easily set our previous index to be auto-optimized with
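a call along these lines:

exec ctx_ddl.add_auto_optimize('my_index');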
When creating queries, it is often useful to have a shared component which is used in many or all queries. These components typically restrict the query to one part of the data set, or implement security criteria. Let's say I want to search for documents which are public or may be viewed by the Sales department. If I'm looking for "sales data" my query might be:
SELECT text FROM mydoc WHERE CONTAINS (text, '( sales data ) AND ( ( SDATA (public=true) OR ( sales WITHIN allowed_depts ) )*10*10 )' ) > 0
Note: In case you're not familiar with the "*10*10" part above, that forces the score of the right-hand-side of the "AND" to the maximum value, and since "AND" scores the minimum of the two sides, ensures that this security clause has no effect on the overall score of the query.
Instead of coding this right-hand-side string in each query, we can save it as a "stored query expression" - let's say we call it SQE1 - and then run the simpler query
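A sketch of both steps (storing the SQE, then using it):

begin
  ctx_query.store_sqe('SQE1',
    '( SDATA (public=true) OR ( sales WITHIN allowed_depts ) )*10*10');
end;
/

SELECT text FROM mydoc WHERE CONTAINS (text, '( sales data ) AND SQE(SQE1)') > 0;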
This simplifies development, and keeps the size of the query down, which is especially important if you are using the VARCHAR2 version of the CONTAINS function (you can use a CLOB instead, which makes this less of an issue, but CLOBs aren't always easy to handle in client interfaces like JDBC).
Oracle Text has supported SQEs for some time. But for a variety of reasons, developers often want to create temporary SQEs which do not persist in the database. Before, that meant assigning generated names to each SQE, and keeping track of them so they could be deleted at the end of the session so as not to clutter up the system. This isn't always easy to do - often you don't know when a session is finishing, so you can't clean up properly. So in 12c we've introduced session-duration SQEs:
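for example (assuming the duration argument added to store_sqe in 12c):

begin
  ctx_query.store_sqe('sqe1',
    '( SDATA (public=true) OR ( sales WITHIN allowed_depts ) )*10*10',
    duration => ctx_query.duration_session);
end;
/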
The other option for duration is CTX_QUERY.DURATION_PERSISTENT.
There are several advantages to session-duration SQEs:
SQEs themselves are "query text" which must be interpreted and run as part of each query they are used in. If you want to speed up the execution of SQE (or indeed any shared query components) you should take a look at the new Query Filter Cache feature as well.
Note the "namespace" of SQEs is shared between SESSION and PERSISTENT SQEs. So any number of session can use SESSION SQEs called "sqe1" so long as none of them stores a persistent SQE with the same name. As soon as an SQE is create with PERSISTENT duration, that name is unavailable for use as a SESSION SQE.
Although it has advantages, there are limitations on what can be fetched back through the Result Set Interface (RSI). In 11g, you were restricted to ROWID, score and SDATA values. Crucially, you could not use "snippets" in your query. That made it impossible to build a "Google-like" query application without doing separate processing for each row to call ctx_doc.snippet.
In 12c, we have added a snippet clause to the Result Set Descriptor. For example:
The token_info column of the $I table contains the locations of all indexed tokens (words) within the index. The location information consists of two parts: the DOCID which identifies the particular record (or document) that the word appears in, and the OFFSET of the word within that document. Where documents are large, the OFFSET information is often many times bigger in size than the DOCID information, as most words appear many times in each document.
When do we need the OFFSET information? Only for certain query types:
If we now DESCRIBE the dr$my_index$i table, we will see an additional column TOKEN_OFFSETS which contains the offset information:
Oracle Text has long supported the MULTI_LEXER. That allows you to define different lexers for different languages, such that you might, for example, use BASIC_LEXER for English text but CHINESE_LEXER for Chinese text. The MULTI_LEXER is controlled by the LANGUAGE column of the base table, as specified in the parameters clause when you create an index.
Sometimes customers need to be able to specify different lexer attributes (such as printjoins) according to the type of document being processed. This is a similar, but different, challenge to the language requirement. To support it, we have introduced the concept of a language-independent sublexer, which may be added to the MULTI_LEXER. This still uses a "language" column to specify which sublexer to use, but is no longer directly tied to an actual language. Additionally, users can
Note: An important restriction on the Document Lexer is that it must use the BASIC_SECTION_GROUP. The default when you create an index is the NULL_SECTION_GROUP, and this will not work with language-independent sublexers - and neither will AUTO_SECTION_GROUP or PATH_SECTION_GROUP.
Now there's a potential issue when searching documents which have been indexed with multiple lexers. How do we know how to lex the search string? Let's say the term "multi-functional" is indexed by two lexers, one of which has "-" as a PRINTJOINS character and one which doesn't. The expression will be in the index as two separate terms, MULTI and FUNCTIONAL, but also as the single term MULTI-FUNCTIONAL. If the user searches for MULTI-FUNCTIONAL (or rather MULTI\-FUNCTIONAL, since the special character has to be escaped), then we still need to find both of these. With normal language lexers, we know the session query language and can apply the appropriate options from that. With language-independent sublexers, we don't. So the answer is simple - we lex the query terms with ALL possible lexers, and use all the resulting terms in the query.
If we use ctx_query.explain we can easily see this:
We can see both "MULTI-FUNCTION" and "MULTI FUNCTION" in there - but we can also see something else: There's a special section which has been created called DR$ML (ML for MULTI_LEXER) and a pair of words which represent the different sub-lexers. This is the way Oracle Text identifies which documents have been indexed by which lexer, and avoids false matches caused by the document lexer not matching the query lexer. That's why we need to use BASIC_SECTION_GROUP for the index.
SDATA (Structured-DATA) sections were introduced in 11g. We have made some improvements in 12c.
SDATA sections can be added to an existing index using an "alter index" command such as:
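for example (myindex is a placeholder):

alter index myindex parameters ('add sdata section stocklevel tag stock datatype number');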
This adds a new SDATA section called "stocklevel" with datatype number. It will automatically be populated for all future documents where the tags <stock> and </stock> are found.
SDATA sections can be updated using the new ctx_ddl procedure update_sdata. For example:
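a sketch, assuming the new stock level is in the variable stocknum and we look up the rowid of the row to update (the base table and key are placeholders):

declare
  stocknum  number := 42;
  therowid  rowid;
begin
  select rowid into therowid from mytable where id = 1;
  ctx_ddl.update_sdata('myindex', 'stocklevel',
                       sys.anydata.convertnumber(stocknum), therowid);
end;
/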
Note the use of sys.anydata to convert a numeric value (stocknum) to the "anydata" datatype used by the sdata_value argument.
The update_sdata procedure is synchronous - it takes effect immediately without you needing to sync the index. It is therefore very useful for fast-changing values such as stock levels. Note however that the row itself must have been synced - if you try to update the sdata for a row which has been committed but not sync'ed, you will get:
Finally: ordering by sdata fields. This was previously partly-supported through the user-defined scoring mechanism, but the syntax was cumbersome. You used to have to use a score normalization expression, such as:
Then rely on score sorting. Now, instead, you can supply a set of "orderkey" tags, using either SCORE or SDATA expressions. For example a full query template might have:
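something like this (the <order> wrapper element is an assumption on my part; check the Reference for the exact template syntax):

<query>
  <textquery>widgets</textquery>
  <order>
    <orderkey> SDATA(stocklevel) DESC </orderkey>
    <orderkey> SCORE DESC </orderkey>
  </order>
</query>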
You can use any SDATA field here - not just ones that have been specified as ORDER BY columns when creating the text index.
Be aware that any
It's common to define a set of "stopwords" - words that shouldn't be indexed - for a text index. You can also define the stopclass NUMBERS to avoid indexing numbers. In 12c you can additionally create a set of regular expression patterns to define stopwords. Let's say my application indexes data which has a lot of coded values beginning with AZN. If I have 10 million documents, each of which has its own AZN code, then that's 10 million entries in my $I table that I may not want. If that's the case, then I can define a pattern stopclass as follows:
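a sketch, assuming add_stopclass takes the stoplist name, a class name of your choosing, and the pattern:

exec ctx_ddl.create_stoplist('my_stoplist', 'BASIC_STOPLIST');
exec ctx_ddl.add_stopclass('my_stoplist', 'azn_codes', 'AZN[[:alnum:]]*');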
Note the use of the posix character class [:alnum:] meaning any alpha-numeric character. I could also have written the pattern as 'AZN\w*'. It might have been tempting to write:
but we have to be careful here. The .* in this case will match whitespace as well, so the text following the AZN code would be removed too.
Remove all numeric strings longer than five digits (so keep years such as 2011, but remove 20112):
Remove any words with at least five digits somewhere within an alphanumeric string:
As mentioned in the SQE section above, there are often
situations where parts of queries are used repeatedly. SQEs enable you
to save the text of a query for convenience, but they have no effect
on performance. Any complex expressions in the SQE must be
re-evaluated each time the SQE is run. This typically involves
fetching blocks from the SGA (or even database files), decompressing
postings lists, and applying boolean merge operations between each
list.
The query filter cache, implemented by the ctxfiltercache operator, avoids this reprocessing by storing the final set of DOCIDs for a query expression in SGA memory. This means that when the same query expression is used subsequently, the list of DOCIDs can be used directly from the cache, avoiding the need to fetch the index blocks or do any decompression or merging.
In order to use ctxfiltercache, you must set a non-zero value for the index storage attribute QUERY_FILTER_CACHE_SIZE. This defines how
much memory will be made available for ALL ctxfiltercache queries used for this index.
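for example (assuming a BASIC_STORAGE preference named my_storage used by the index; the value here is an arbitrary size in bytes -- check the Reference for the exact units and limits):

exec ctx_ddl.set_attribute('my_storage', 'QUERY_FILTER_CACHE_SIZE', '104857600');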
The filter cache data is invalidated whenever the database is restarted, but less obviously it is also invalidated when
the index is synchronized. Once invalidated, the next query which runs will then re-cache the results. This technique is
therefore probably not useful in systems with very frequently updated indexes.
The syntax for ctxfiltercache is
Where save_score and topN are both boolean (TRUE or FALSE) values.
save_score, as the name suggests, tells the system whether to save the score of the expression in the cache. Frequently, the sort of clauses which you would want to cache are filter clauses, and should not contribute to the score of the query. For example a security clause decides whether or not the user should see a particular item, but is not relevant to the score of that item. In this case you would set save_score to FALSE, and the expression would return 100 - the maximum score value. Since the AND operator always returns the lower of its operands, this ensures that the expression cannot affect the score.
TopN can only be set to TRUE if save_score is TRUE. It specifies whether only the highest-scoring hits should be cached. This saves storage, but means that if the stored hits are exhausted (due to the query being sorted by criteria other than score, for example, or by the application fetching too many rows) then the query will have to revert back to standard operation, which is likely to be slower than not using the filter cache at all.
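Putting it together, the security clause from the SQE example earlier might be cached like this (a sketch):

SELECT text FROM mydoc WHERE CONTAINS (text,
  '( sales data ) AND ctxfiltercache((( SDATA (public=true) OR ( sales WITHIN allowed_depts ) )), FALSE, FALSE)') > 0;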
The snippet functionality introduced in 10g has proved popular with application developers. It allows you to show "keywords in
context", that is, a brief summary of the document with the search terms highlighted. But there is a slight problem... In
order to generate the snippet (or indeed any of the other document services such as gist, themes, markup, highlight, etc) the
whole document needs to be fetched and processed much as though it were being indexed. If you're trying to do this for a whole
page full of snippets, it can be quite slow, especially if
Additionally, some customers don't even keep the original document text on the search system. They index the documents, then
throw them away, and rely on the indexes returning a pointer to the original document location if necessary.
To solve this, we have introduced the concept of a "forward index". A conventional text index is word-based, and tells us which documents contain that word. A forward index is document-based, and tells us what words make up that document.
We do this with two new tables - the $O table and the $D table. $O (the "offsets" table) stores a compressed list of all the indexed words in the document, with their offsets. This is sufficient for calling ctx_doc.highlight, which only needs to return the offsets and lengths of the words to be highlighted. It is NOT sufficient for snippet, filter or markup, which need to return the actual document text, complete with stopwords, punctuation, etc. For these we need to turn on an additional storage attribute SAVE_COPY, which will store a compressed version of the document itself in the $D table.
SAVE_COPY comes in two flavors, PLAINTEXT and FILTERED. If you choose PLAINTEXT, the document will be stored without HTML tags. This is compact, and ideal for snippet generation, since snippets do not need HTML markup. However, if you want to use ctx_doc.markup or ctx_doc.filter (both of which need to return HTML) then you will need to choose the FILTERED option, which stores the document as it comes back from the filter.
There is an additional knob: SAVE_COPY_MAX_SIZE. This prevents excessive storage use by large documents. If you set it to 4000, then only the first 4000 characters of the filtered or plaintext document will be stored. This is usually enough to find snippets, but if not, the snippet call will return empty.
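for example (again assuming a storage preference named my_storage):

exec ctx_ddl.set_attribute('my_storage', 'SAVE_COPY', 'PLAINTEXT');
exec ctx_ddl.set_attribute('my_storage', 'SAVE_COPY_MAX_SIZE', '4000');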
MOVIE tag does not have a YEAR
attribute or YEAR child element. To
test for non-existence, use the
AND and OR Operators
You can use AND and OR to combine
existence or equality predicates in
a test. Say you have documents like:
<MOVIE>
<TITLE>Big Trouble in Little China</TITLE>
<ACTORS>
<ACTOR>Kurt Russell</ACTOR>
<ACTOR>Kim Cattrall</ACTOR>
</ACTORS>
<DVD>2 DISCS</DVD>
</MOVIE>
To find documents with china in the title, starring
Kurt Russell
and Kim Cattrall, that are available on DVD:
china INPATH(/MOVIE[DVD and
.//ACTOR = "Kurt Russell" and .//ACTOR = "Kim Cattrall"]/TITLE)
blue INPATH(/video[DVD and (discount or @rating = "4")])
AND and OR are case-insensitive in our path query
language. In XPath they
must be lowercase.
Nested INPATH
INPATH operators are allowed, but the two are independent -- the
default context of an INPATH is always top level. For instance:
(perro INPATH(A)) INPATH(B)
Both INPATHs are looking
for top-level
tags, and, except for invalid documents, a document cannot have two different
top-level tags.
The HASPATH Operator
HASPATH operator is not a path query language operator;
it's a ConText
query language operator like INPATH. INPATH is used
when you want to search
for a text query within a path. HASPATH is used when all you want
to do is
test for path existence; it takes a path as its only argument, and returns 100
for a document if the path exists, 0 otherwise.
select id from documents
where contains(doc, 'HASPATH(/movie/dvd)')>0;
This finds documents that have a movie
element which
has a dvd element as a direct child.
HASPATH can also do tag value equality tests:
HASPATH(//A = "dog")
Within HASPATH, the AND and
OR operators are not currently
supported. You can use the ConText query language AND and
OR, with multiple
HASPATHs, to achieve the same effect. Instead of:
HASPATH(A and B)
use:
HASPATH(A) and HASPATH(B)
HASPATH can return false hits when there are empty sections.
Path sections
are recorded with level information, but not true parentage. As a result, a
document like:
<A>
<B>
<C></C>
</B>
<D>
<E></E>
</D>
</A>
HASPATH(//B/E)
would match this document, even though E is not
the child of B. The index tells us only that E and
B surround the same
word offsets, and that E is a third-level tag and B
is a second-level tag.
Normally this indicates that E is a child of B.
In this boundary case it
does not. This limitation only applies to empty sections like this -- any
words in the document would ensure correct behavior.
New XML Features
Highlighting
XMLType Indexing
Highlighting
Highlighting of the INPATH and HASPATH operators
is not supported in this
version. You can still highlight and markup regular words, and
WITHIN queries,
but use of the path operators will result in an error message. We are working
on support for a future release.
XMLType Indexing
XMLType. This is
a core database feature, and you can find out about the type and its usage in
the XML features manual.
Creating a ConText index on an XMLType column requires the query rewrite privilege:
grant query rewrite to <user>
Without this grant, index creation fails with ORA-01031: insufficient privileges.
<user> should be the user creating the index. The
schema that owns the
index (if different) does not need the grant.
query_rewrite_enabled should be true, and
query_rewrite_integrity should
be trusted. You can add them to the init.ora:
query_rewrite_enabled=true
query_rewrite_integrity=trusted
or set them at the session level:
alter session set query_rewrite_enabled=true;
alter session set query_rewrite_integrity=trusted;
Without these settings, queries may fail with DRG-10599: column is not indexed. A ConText
index on an XMLType column is actually a function-based index on the
getclobval() method
of the type. These are the standard grants you need to use function-based
indexes, as covered in the general Oracle documentation. However, unlike
function-based b-tree indexes, you do not need to calculate statistics.
When an XMLType column is detected and no section group is specified in the
parameters string, the default system examines the new system parameter
DEFAULT_XML_SECTION, and uses the section group specified there.
At install
time this system parameter is set to CTXSYS.PATH_SECTION_GROUP,
which is the
default path sectioner. The default filter system parameter for
XMLType is
DEFAULT_FILTER_TEXT, which probably means that the INSO filter
is not engaged
by default.
Otherwise, indexes on XMLType columns work like any other
ConText index.
connect ctxsys/ctxsys
grant query rewrite to xtest;
connect xtest/xtest
create table xtest(doc sys.xmltype);
insert into xtest values (sys.xmltype.createxml('<A>simple</A>'));
create index xtestx on xtest(doc)
indextype is ctxsys.context;
alter session set query_rewrite_enabled = true;
alter session set query_rewrite_integrity = trusted;
select a.doc.getclobval() from xtest a
where contains(doc, 'simple INPATH(A)')>0;
The CTXRULE Indextype
select article_id from articles
where contains(article_text, 'oracle')>0
and article_date > :last_time_I_ran_this_query
create table queries (
username varchar2(10),
query_string varchar2(80)
);
insert into queries values ('JSMITH', 'Oracle');
create index queryx on queries(query_string)
indextype is ctxsys.ctxrule;
select username from queries
where matches(query_string, :article_text)>0
Creating a CTXRULE Index
create index queryx on queries(query_string)
indextype is ctxsys.ctxrule;
The ctxrule index can be created on columns of type VARCHAR2 and
CLOB. Since you
are indexing stored query strings, VARCHAR2 will probably be
sufficient.
create index queryx on queries(query_string)
indextype is ctxsys.ctxrule
parameters ('wordlist my_wordlist')
DEFAULT_CTXRULE_LEXER
DEFAULT_CTXRULE_STOPLIST
DEFAULT_CTXRULE_WORDLIST
DEFAULT_CTXRULE_STORAGE
Querying with Matches
select * from queries
where matches(query_string, :doc_text)>0
queries table.
create table queries (
query_id number,
query_string varchar2(80)
);
insert into queries values (1, 'oracle');
insert into queries values (2, 'larry or ellison');
insert into queries values (3, 'oracle and text');
insert into queries values (4, 'market share');
create index queryx on queries(query_string)
indextype is ctxsys.ctxrule;
select query_id from queries
where matches(query_string,
'Oracle announced that its market share in databases
increased over the last year.')>0
This returns query 1 (the word oracle appears in the
document)
and 4 (the phrase market share appears in the document), but not
2 (neither
the word larry nor the word ellison appears), and not
3 (the word text does not appear
in the document, so it does not match the query).
The document text passed to matches can be VARCHAR2 or
CLOB. It does
not accept BLOB input in this version, so you cannot match
documents that need filtering
directly. Instead, you must filter the binary content to CLOB
using the INSO
filter. Assuming the document data is in bind variable :doc_blob:
declare
doc_text clob;
begin
-- create a temporary CLOB to hold the document text
dbms_lob.createtemporary(doc_text, TRUE, DBMS_LOB.SESSION);
-- call ctx_doc.ifilter to filter the BLOB to CLOB data
ctx_doc.ifilter(:doc_blob, doc_text);
-- now do the matches query using the CLOB version
for c1 in (select * from queries where matches(query_string, doc_text)>0)
loop
-- do what you need to do here
end loop;
dbms_lob.freetemporary(doc_text);
end;
ifilter is discussed later in this
document. Similarly, to match files, you
need to get the text into a CLOB. If your file is text in the
database
character set, you can create a BFILE and load it to a
CLOB using the function
dbms_lob.loadfromfile, or you can use
UTL_FILE to read the file into a temp
CLOB locator. If your file needs INSO filtering, you can load
the file into
a BLOB instead, and call ctx_doc.ifilter as above.
The Rule Language
The rule language is a subset of the query language used in
contains. The rule language
supports the
following:
Operator    Example
--------    ----------------
AND         dog and cat
OR          dog or cat
PHRASE      dog sled
ABOUT       about(dogs)
NEAR        dog ; cat
STEM        $dog
WITHIN      dog within title
THESAURUS   SYN(dog)
For instance, given a thesaurus containing:

cat
  SYN kitty

an indexed rule of SYN(cat) is expanded when the rule is indexed -- the engine expands
SYN(cat) to {CAT}|{KITTY} -- so

matches(query_string, 'kitty')>0

matches the rule. If the
thesaurus then changes to:

cat
  SYN kitty
  SYN gato

the query

matches(query_string, 'gato')>0

will not match the rule, because the expansion was done at index time. To pick up the
new thesaurus entry, force the rule to be re-indexed, for example:

update queries set query_string = query_string where query_id = 123;
exec ctx_ddl.sync_index('queryx');
CTXRULE Mechanics
+-----------+ +----------+ +-----------+
| | query strings | | query string | query |
| datastore |--------------->| lexer |--------------->| parser |
| | | |<---------------| |
+-----------+ +----------+ parse tree +-----------+
^ |
| |
|column data |rules
| |
| V
+==========+ +----------+
| | | |
| database |<----------------| engine |
| | rules | |
+==========+ +----------+
The $I table of a ctxrule index has an extra TOKEN_EXTRA column:
TOKEN_TEXT VARCHAR2(64)
TOKEN_TYPE NUMBER
TOKEN_FIRST NUMBER
TOKEN_LAST NUMBER
TOKEN_COUNT NUMBER
TOKEN_INFO BLOB
TOKEN_EXTRA VARCHAR2(4000)
The TOKEN_EXTRA column is not part of the $X index column list.
Each rule is filed under a single term in TOKEN_TEXT, plus conjunctive criteria stored in
TOKEN_EXTRA.
A select of the rules where TOKEN_TEXT = X, for each token X in the document,
is a good first-pass filter -- you may get some that don't match the document,
but you will not miss any potentially-matching rules. Each rule returned by
this select is a partial match -- if the
TOKEN_EXTRA criteria can be fulfilled
by the other tokens in the document, then the rule is a full match.
QUERY_ID QUERY_STRING
-------- ------------
1 oracle
2 larry or ellison
3 text and oracle
4 market share
Query 1 is the single word oracle. In this case, matching TOKEN_TEXT
alone is sufficient, so
TOKEN_EXTRA is NULL:
QUERY_STRING TOKEN_TEXT TOKEN_EXTRA
---------------- ---------- -----------
oracle ORACLE (null)
Query 2 matches any document containing the word larry or the word ellison. This can be reduced to
two single-word queries,
each of which has TOKEN_EXTRA NULL:
QUERY_STRING TOKEN_TEXT TOKEN_EXTRA
---------------- ---------- -----------
larry or ellison LARRY (null)
ELLISON (null)
Query 3 requires both the word text and
the word oracle to be present for
a full match. The engine will choose one of these as the filing term, and
place the other in the TOKEN_EXTRA criteria:
QUERY_STRING TOKEN_TEXT TOKEN_EXTRA
---------------- ---------- -----------
text and oracle TEXT {ORACLE}
Any document containing the word text will pull this rule up as a partial
partial
match. The query engine will then examine the TOKEN_EXTRA
criteria, see that
it requires the presence of the word oracle, check if the
document contains
that word, and judge the rule a full match if so.
Query 4 is a phrase. It is filed under one of its words, with the full phrase criteria in TOKEN_EXTRA:
QUERY_STRING TOKEN_TEXT TOKEN_EXTRA
---------------- ---------- -----------
market share MARKET {MARKET} {SHARE}
Putting it all together, the $I table for our queries table looks like this:

TOKEN_TEXT TOKEN_EXTRA       TOKEN_INFO
---------- ----------------- --------------------------------------
ORACLE (null) DOC 1
LARRY (null) DOC 2
ELLISON (null) DOC 2
TEXT {ORACLE} DOC 3
MARKET {MARKET} {SHARE} DOC 4
+-----------+ +----------+ filtered +-----------+
| | document | | doc text | |
| SQL Query |---------->| filter |----------->| sectioner |---------+
| | | | | |--+ |
+-----------+<-------+ +----------+ +-----------+ | |
| | |
| section offsets | |
results | +---------------------------------+ |
| | |
| V |
+==========+ | +----------+ +-----------+ |
| | +--| | | | |
| index |----------->| query |<-----------| lexer |<--------+
| tables | index data | engine | tokens | | plain
+==========+ +----------+ +-----------+ text
Each token in the document is used to pull up rules matching on TOKEN_TEXT. The
TOKEN_EXTRA for each of those rules is evaluated. If satisfied,
then the query is added to a result buffer. The result buffer for all the tokens is
merged to eliminate duplication, and returned to the query session.
select query_id from queries
where matches(query_string,
'Oracle boss Larry Ellison reported increased market share
of text applications')>0
TOKEN_TEXT TOKEN_INFO
---------- ----------
ORACLE OFF 1
BOSS OFF 2
LARRY OFF 3
...
The first document token is ORACLE, so the query engine executes:
select token_extra, ...
from dr$queryx$i
where token_text = 'ORACLE' ...
TOKEN_TEXT TOKEN_EXTRA TOKEN_INFO
---------- ----------------- --------------------------------------
ORACLE (null) DOC 1
The TOKEN_EXTRA is NULL, so this is a full match, and we add DOC 1 to the
result buffer.
The second token is BOSS. This pulls up no rows in $I, so we
move immediately to the third token, LARRY, which results in:
TOKEN_TEXT TOKEN_EXTRA TOKEN_INFO
---------- ----------------- --------------------------------------
LARRY (null) DOC 2
The TOKEN_EXTRA is NULL, so this is a full match, and we add DOC 2 to the
result buffer. The buffer now contains DOC 1 and DOC 2.
For the fourth token, ELLISON, we match DOC 2 again. However, DOC 2 is
already in the result buffer, so we discard this hit as a duplicate.
When we reach the token MARKET, we have a non-null TOKEN_EXTRA:
TOKEN_TEXT TOKEN_EXTRA       TOKEN_INFO
---------- ----------------- --------------------------------------
MARKET     {MARKET} {SHARE}  DOC 4
To check whether the phrase market share exists, the query engine accesses the
occurrence lists for MARKET and SHARE, and finds a case where the word offset
of SHARE is one more than that of MARKET. Since the document token information
is all in memory, this is quick and requires no disk access. The TOKEN_EXTRA
is fulfilled, and DOC 4 is added to the result buffer.
The token TEXT pulls up:
TOKEN_TEXT TOKEN_EXTRA       TOKEN_INFO
---------- ----------------- --------------------------------------
TEXT       {ORACLE}          DOC 3
The query engine verifies that ORACLE exists in the in-memory token list, and
adds DOC 3 to the result buffer.
Performance
For a ctxrule query, each partial match pulled up from $I has TOKEN_EXTRA
criteria that must be evaluated, so the expected response time of a
ctxrule query is generally longer than that of a contains query. As the number
of stored rules grows, more partial matches are pulled up, each with its own
TOKEN_EXTRA criteria to check.
Furthermore, the number of rows returned by the select increases, so there's
an $R overhead and the time to do whatever it is your application does with the
results. However, the number of unique rules has less impact on query
performance than the size of the document:
Indexed                            Document Size
Queries      143    525   1034   2108   4023   8151  16636  32933  64337
-------    -----  -----  -----  -----  -----  -----  -----  -----  -----
    100     0.09   0.16   0.25   0.42   0.73   1.32   2.51   4.10   7.15
    500     0.09   0.17   0.26   0.46   0.80   1.46   2.84   4.73   7.57
   1000     0.10   0.18   0.28   0.51   0.84   1.55   2.84   4.58   7.87
   2000     0.11   0.20   0.30   0.56   0.99   1.72   3.27   4.80   8.51
   4000     0.16   0.22   0.34   0.71   1.25   2.15   4.28   6.20  10.20
   8000     0.13   0.28   0.44   0.87   1.68   2.83   5.78   8.04  13.96
  16000     0.15   0.33   0.62   1.17   2.30   3.77   8.34  10.40  19.16
CTXRULE Index Maintenance
ctxrule indexes are maintained with the same ctx_ddl.sync_index and
ctx_ddl.optimize_index calls, passing a ctxrule index name instead of a
ConText index name.
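For example, assuming a ctxrule index named queryx (a hypothetical name) with pending DML:
exec ctx_ddl.sync_index('queryx')
exec ctx_ddl.optimize_index('queryx', 'FULL')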
CTXRULE Query and Document Services
The document (CTX_DOC) and query (CTX_QUERY) services are not supported
against ctxrule indexes.
Miscellaneous New Features
Deprecated Features
Using alter index for SYNC and OPTIMIZE is no longer supported. We
have discouraged the use of alter index for SYNC and OPTIMIZE since the PL/SQL
interface for these was added in 8.1.6. Failures during sync or optimize can
invalidate the index when using the alter index interface, forcing a complete
index rebuild. The PL/SQL API does not carry this risk.
CTXCAT indexes can no longer be created on CLOB columns. Although existing
CTXCAT indexes on CLOB columns created in 8.1.7 are still valid and can be
queried, creating new ones is not allowed and will result in an error message.
CTXSRV has been deprecated since 8.1.6 -- avoid using this.
Instead, use
CTX_DDL.SYNC_INDEX called from a DBMS_JOB.
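A minimal sketch of that approach, assuming an index named myindex and a five-minute interval (both hypothetical):
declare
  jobno number;
begin
  dbms_job.submit(
    job       => jobno,
    what      => 'ctx_ddl.sync_index(''myindex'');',
    next_date => sysdate,
    interval  => 'SYSDATE + 5/1440'  -- run every 5 minutes
  );
  commit;
end;
/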
CTXLOAD should be used only for thesaurus import and export --
its loading
and updating features are deprecated and should not be used.
BLOB_LOC/CLOB_LOC OUTPUT_TYPE in USER_DATASTORE
With an OUTPUT_TYPE of CLOB or BLOB, the
datastore creates a temporary lob, and the user datastore procedure is expected
to copy the document content into it. The system of copying into a temporary
lob was designed for user datastores where the document is being assembled from
parts or its content is being processed before indexing.
For example, consider a schema where a media table points at content stored in
several other tables:
create table MOVIE (
movieid NUMBER,
description CLOB,
encoding VARCHAR2(30),
data BLOB
);
create table IMAGE (
imageid NUMBER,
annotation CLOB,
data BLOB
);
create table DOCUMENT (
docid NUMBER,
author VARCHAR2(80),
title VARCHAR2(150),
text CLOB
);
create table MEDIA (
mediatype VARCHAR2(10),
mediaid NUMBER,
searchcol CHAR(1)
);
create or replace procedure myds(rid in rowid, dataout in out nocopy clob)
is
mlob clob;
begin
for c1 in (select mediatype, mediaid from media where rowid = rid) loop
if (c1.mediatype = 'MOVIE') then
select description into mlob from movie where movieid = c1.mediaid;
elsif (c1.mediatype = 'IMAGE') then
select annotation into mlob from image where imageid = c1.mediaid;
    elsif (c1.mediatype = 'DOCUMENT') then
select text into mlob from document where docid = c1.mediaid;
end if;
DBMS_LOB.COPY(dataout, mlob, dbms_lob.getlength(mlob));
end loop;
end;
In a case like this, the DBMS_LOB.COPY is wasteful -- since we
already have a lob locator with the data, there's no reason to spend I/O
copying the exact same thing to the temp lob. It would be more efficient to
simply pass this lob locator back to the indexing engine.
Two new OUTPUT_TYPE values are available: CLOB_LOC and BLOB_LOC. These settings
tell the user datastore to bypass temporary lob creation because the user
datastore procedure will pass back a permanent lob locator instead.
To use CLOB_LOC, our example user datastore procedure needs to be modified to
get rid of the dbms_lob.copy and instead pass back the lob locator in dataout:
create or replace procedure myds(rid in rowid, dataout in out nocopy clob)
is
begin
for c1 in (select mediatype, mediaid from media where rowid = rid) loop
    if (c1.mediatype = 'MOVIE') then
      select description into dataout from movie where movieid = c1.mediaid;
    elsif (c1.mediatype = 'IMAGE') then
      select annotation into dataout from image where imageid = c1.mediaid;
    elsif (c1.mediatype = 'DOCUMENT') then
      select text into dataout from document where docid = c1.mediaid;
end if;
end loop;
end;
The preference is then created with OUTPUT_TYPE set to CLOB_LOC:
begin
ctx_ddl.create_preference('myds_pref','USER_DATASTORE');
ctx_ddl.set_attribute('myds_pref', 'PROCEDURE', 'myds');
ctx_ddl.set_attribute('myds_pref', 'OUTPUT_TYPE', 'CLOB_LOC');
end;
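For completeness, a sketch of creating the index against the media table from the example, using this preference:
create index mediax on media(searchcol)
  indextype is ctxsys.context
  parameters ('datastore myds_pref');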
Note that the output_type setting applies to all rows being indexed -- you
cannot have the datastore pass in a temp locator for some rows and not for
others. In our example above, we might want to append author and title
information for documents while still passing back permanent lobs for movies
and images, but that kind of mixing is not possible.
IGNORE Format Column Value
A new IGNORE value has been added to the format column. When the format column
is set to IGNORE, the document content is blocked in the datastore -- its
content is never accessed, and it is indexed as an empty document. It is
assigned a docid and its rowid is in $R, but it has zero impact on $I because
no tokens are generated, and near-zero impact on query time -- the only effect
is a slightly larger $R table.
Here is IGNORE in action:
create table myfiles (
format varchar2(10),
filen varchar2(80)
);
insert into myfiles values ('TEXT', '/doc/README');
insert into myfiles values ('BINARY','/doc/imt901.doc');
insert into myfiles values ('IGNORE','/doc/banner.jpg');
create index myfilesx on myfiles(filen)
indextype is ctxsys.context
parameters ('datastore ctxsys.file_datastore
filter ctxsys.inso_filter
format column format');
The first document is marked as TEXT, so it bypasses INSO filtering. The
second document is marked as BINARY, so it gets INSO filtered before lexing.
The third document is marked as IGNORE, so it is indexed as an empty document.
The file /doc/banner.jpg is not opened or accessed during the create index.
The IGNORE value is caught by the datastore interface, so all intervening
layers do minimal work. If you have a user datastore or procedure filter,
for instance, the procedure will not be executed at all for the IGNORE
documents.
The IGNORE value is also useful for working around problem documents. If
you find that your index cannot be created because of faulty documents, you
can mark those documents as IGNORE, which should allow the index to be created.
When the problem is resolved, you can change the value to TEXT or BINARY as
appropriate, and force re-indexing by updating the indexed column to itself.
In previous versions, the row had to be deleted, which is usually unacceptable
because you lose data and affect other parts of the application.
INSO Timeout
A TIMEOUT attribute has been added to the INSO filter:
begin
ctx_ddl.create_preference('myinso','inso_filter');
ctx_ddl.set_attribute('myinso','timeout','600');
end;
TIMEOUT is in seconds. The default is 120 if TIMEOUT is not specified.
(Note: bug 1819825 makes the default no timeout, but this will be addressed in
the first 9.0.1 patchset.) If filtering has not progressed within that time
period, ctxhx kills itself and logs an indexing error. Those documents (which
can be found by scanning ctx_user_index_errors after create index or sync) can
be tested further, perhaps by manually invoking ctxhx.
More precisely, ctxhx checks the file size every TIMEOUT seconds. If the file
size is growing, ctxhx continues; if it is not growing, ctxhx kills itself.
This means that documents that simply take a long time to filter can still be
filtered. It also means that infinite loops in INSO that continually write to
the output file cannot be caught by the timeout.
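To use the preference, name it as the filter in the parameters clause (table, column, and index names here are hypothetical):
create index mydocsx on mydocs(doc)
  indextype is ctxsys.context
  parameters ('filter myinso');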
CTX_DOC.IFILTER
A common request is to be able to filter binary document data inside a user
datastore procedure. Calling ctx_doc.filter from the user datastore procedure
is not workable. The ctx_doc calls are trusted callouts, as is indexing, and
our architecture does not currently allow recursive calls. Even if it were
supported, the ctx_doc functions require an index, so you would have to create
a dummy index on the BLOB column just to be able to call ctx_doc.filter. This
would just take up space.
For this, there is a new call: CTX_DOC.IFILTER. It takes a passed-in BLOB,
filters it to HTML using the INSO filter, and appends the result to the
passed-in CLOB:
PROCEDURE ifilter(
data IN blob,
text IN OUT nocopy clob
);
ifilter works on the passed-in BLOB directly, so no index is needed. It
appends, so you can lay out your structured columns first, then call ifilter to
append the BLOB's data -- or call it multiple times for multiple BLOBs. Here's
an example of ctx_doc.ifilter used in a user datastore procedure:
create procedure ifds(rid in rowid, doc in out nocopy clob)
is
buf varchar2(120) := null;
begin
for c1 in (select id, title, document
from pdfdocs
where rowid = rid)
loop
buf := buf || '<ID>'||c1.id||'</ID>';
buf := buf || '<TITLE>'||c1.title||'</TITLE>';
buf := buf || '<DOCTEXT>';
dbms_lob.writeappend(doc, length(buf), buf);
ctx_doc.ifilter(c1.document, doc);
buf := '</DOCTEXT>';
dbms_lob.writeappend(doc, length(buf), buf);
end loop;
end;
Because ctx_doc.ifilter always appends to the existing data in the text clob,
make sure you pass in an empty clob or trim the clob before the first call.
Unlike the in-memory doc services, ifilter does not automatically allocate a
temporary lob; passing in NULL or a non-writeable lob in the text parameter
will result in an error.
ifilter is implemented using a safe callout -- this means that
SQL*Net must
be configured for extproc and running in order to use this
feature. ifilter
uses the safe callout so that it can be called from a user datastore procedure,
which runs as a trusted callout.
CTXHX will be invoked using the default built-in timeout of 120
seconds.
There is no way to override this in this version.
ifilter explicitly calls the INSO filter. There is no way to
have ifilter
use the charset, procedure, or user filters.
UTF-16 Support
The UTF-16 character sets (AL16UTF16, AL16UTF16LE) are now supported for
document character set conversion.
If the character set is specified as UTF16AUTO, then the filter will examine
the first two bytes of the data and choose big-endian or little-endian UTF-16
conversion as appropriate. If the marker bytes are 0xFE, 0xFF, then these two
bytes are stripped out and big-endian conversion is used. If the marker bytes
are 0xFF, 0xFE, then these two bytes are stripped out and little-endian
conversion is used. Anything else assumes big-endian conversion and leaves
the first two bytes intact.
create table conv (
format varchar2(10),
charset varchar2(30),
text varchar2(80)
);
insert into conv values ('TEXT', 'UTF16AUTO', '/doc/msunicode.txt');
insert into conv values ('TEXT', 'UTF16AUTO', '/doc/unixunicode.txt');
create index convx on conv(text)
indextype is ctxsys.context
parameters ('datastore ctxsys.file_datastore
filter ctxsys.inso_filter
format column format
charset column charset');
Japanese Lexer
Written Japanese does not delimit words with whitespace, which makes it
difficult for a lexer to extract words. The existing JAPANESE_VGRAM_LEXER
solves this problem by not indexing words at all. Instead, it splits up a
string of characters into overlapping bits. A sequence of ABCD (where A, B,
C, and D are kana/kanji) might be indexed as three "tokens": AB, BC, and CD.
New in this release is the JAPANESE_LEXER, a hybrid lexer based on the
CHINESE_LEXER that uses a lexicon to segment written Japanese into coherent
words, falling back on vgrams only for unrecognized sequences. The
segmentation should make Japanese text query faster and more precise, while
the vgramming preserves recall. Using the JAPANESE_LEXER is pretty easy,
because it has no attributes to fiddle with in this version:
exec ctx_ddl.create_preference('myjlexer','japanese_lexer');
create index myidx on mytable(text)
indextype is ctxsys.context
parameters ('lexer myjlexer');
New Chinese Lexicon
Oracle Text already has a CHINESE_VGRAM_LEXER for vgram indexing and a
CHINESE_LEXER for segmentation. This release includes an update to the lexicon
that the CHINESE_LEXER uses, which should improve its accuracy.
Korean Morphological Lexer
The new KOREAN_MORPH_LEXER uses a newly-licensed technology, 'HAM'.
It analyzes text a sentence at a time, then uses morphology and grammar
lexicons to identify word boundaries and normalize word form. Its word
boundary detection is more accurate than the existing KOREAN_LEXER. The word
form normalization improves recall without having to expand query terms to all
possible forms.
The KOREAN_MORPH_LEXER has a lot of attributes to play with. They control
what to index and what to throw away -- for instance, NUMBER can be set to
TRUE to index numbers or FALSE to throw them away. Generally the defaults
are fine, so the following will work:
exec ctx_ddl.create_preference('myklexer','korean_morph_lexer');
create index myidx on mytable(text)
indextype is ctxsys.context
parameters ('lexer myklexer');
The KOREAN_MORPH_LEXER uses four lexicons, all in ?/ctx/data/kolx:
drk2sdic.dat - system dictionary
drk2gram.dat - grammar dictionary
drk2udic.dat - user dictionary
drk2xdic.dat - stopword dictionary
ALL Language for Multi-Stoplist
Previously, to add a stopword to a multi-language stoplist for every
language -- the word THE, for instance -- you had to add it once per language:
ctx_ddl.add_stopword('globallist','the','French');
ctx_ddl.add_stopword('globallist','the','German');
ctx_ddl.add_stopword('globallist','the','Swedish');
...
Now the ALL language covers every language in a single call:
ctx_ddl.add_stopword('globallist','the','ALL');
CTXCAT Wildcard Searching
The ctxcat query language now supports trailing wildcards:
catsearch(desc, 'monopol*', '')>0
which is equivalent to the context query:
contains(desc, 'monopol%')>0
Only trailing wildcards are supported, so queries like the following are not
allowed:
catsearch(desc, '*monopol*', '')>0
catsearch(desc, 'mon*pol*', '')>0
Transportable Tablespace
Text indexes can now be moved between databases with transportable tablespaces.
Suppose we have a table in tablespace datat, and a text index on it with
internal objects in tablespace indext, controlled by the storage preference:
create table transport(text varchar2(80))
tablespace datat;
begin
ctx_ddl.create_preference('mystore','basic_storage');
ctx_ddl.set_attribute('mystore','i_table_clause','tablespace indext');
ctx_ddl.set_attribute('mystore','k_table_clause','tablespace indext');
ctx_ddl.set_attribute('mystore','n_table_clause','tablespace indext');
ctx_ddl.set_attribute('mystore','r_table_clause','tablespace indext');
ctx_ddl.set_attribute('mystore','i_index_clause','tablespace indext');
end;
/
create index textidx on transport(text)
indextype is ctxsys.context
parameters ('storage mystore');
alter tablespace datat read only;
alter tablespace indext read only;
exp "'/ as sysdba'" transport_tablespace=y tablespaces=datat,indext
file=transport.dmp
imp "'/ as sysdba'" transport_tablespace=y
datafiles=/oracle/home/dbs/datat.dbf,/oracle/home/dbs/indext.dbf
tablespace=datat,indext file=t.dmp
If there are rows in dr$pending for the text index when you do the
transportable tablespace export, those ROWIDs will materialize in dr$pending
during the import, so you won't miss anything. At worst you'll have to do a
sync on the target system.
Note that you cannot drop a tablespace that contains a domain index or its
internal objects -- that is, you cannot do:
drop tablespace datat including contents;
for datat or indext from our examples -- you'll get an ORA-29857 error message.
You must first drop the domain indexes manually:
drop index textidx;
Rowid Logging
As discussed above, a failed create index can often be worked around by
IGNORE-ing the faulty documents. However, in previous versions it was very
difficult to find out which documents were faulty. A new logging event prints
each rowid as it is indexed:
create table logtest(a varchar2(80));
insert into logtest values ('the quick brown fox jumps over the lazy dog');
insert into logtest values ('Mr. Jock, TV quiz PhD, bags few lynx');
exec ctx_output.start_log('mylog');
exec ctx_output.add_event(CTX_OUTPUT.EVENT_INDEX_PRINT_ROWID);
create index logtestx on logtest(a)
indextype is ctxsys.context;
16:29:00 05/14/01 populate index: CTXSYS.LOGTESTX
16:29:00 05/14/01 Begin document indexing
16:29:00 05/14/01 INDEXING ROWID AAABklAACAAACynAAA
16:29:00 05/14/01 INDEXING ROWID AAABklAACAAACynAAB
...
VER_CODE Column in CTX_VERSION
CTX_VERSION has been changed so that it now has two
columns:
VER_DICT, which shows the version of the CTXSYS data
dictionary, and VER_CODE, which is generated by a trusted callout that queries
the linked-in code for the text version. This should solve the problem of
determining which Text patchset
is installed, although this has already been mostly solved with unified RDBMS
and Text patchsets in 8.1.7.
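A quick way to see both values:
select ver_dict, ver_code from ctx_version;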
Oracle Text 9.2.0
Technical Overview
Table of Contents
Upgrading 9.0.1 to 9.2.0
Run ?/ctx/admin/s0902000.sql as SYS; this grants new, needed database
privileges to ctxsys. Then run ?/ctx/admin/u0902000.sql as CTXSYS; this
upgrades the ctxsys schema to 9.2.0. The u0902000.sql script calls other
scripts to upgrade the indextypes (t0902000.sql), recompile all packages
(dr0pkh.sql and dr0plb.sql), and recreate the shared library (dr0ulib.sql).
Oracle Text is now registered in the DBMS_REGISTRY facility, using the
component ID CONTEXT; after a successful upgrade the registry status should
show as VALID. However, since we do not currently have a validation procedure,
a return status of "VALID" from DBMS_REGISTRY.STATUS does not guarantee
installation validity.
The version for DBMS_REGISTRY.VERSION is generated from
VER_DICT of CTX_VERSION. You might notice that this
says 9.2.0.1.0 -- in the past that would have been the version for the first
patchset, but this is the official and expected version for the production
release.
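A sketch of checking the registry entry using the functions mentioned above:
select dbms_registry.status('CONTEXT')  as status,
       dbms_registry.version('CONTEXT') as version
  from dual;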
Index Objects
URITYPE Indexes
Text indexes can now be created directly on columns of type URIType and its
subtypes, such as sys.httpuritype:
create table tutype(url sys.httpuritype);
insert into tutype values (
sys.httpuritype.createuri('http://www.oracle.com')
);
create index tutypex on tutype(url) indextype is ctxsys.context;
select * from tutype where contains(url, 'oracle')>0;
select dbms_lob.getlength(a.<uritypecolumn>.getblob())
from <basetable> a;
INSO_FILTER Attribute TIMEOUT_TYPE
BASIC_LEXER Attribute INDEX_STEMS
The BASIC_LEXER INDEX_STEMS attribute tells the lexer to derive and index the
stem form of each word as a separate token (token type 9), so that stem ($)
queries can be answered from the index:
create table stest(text varchar2(80));
insert into stest values ('RAN');
insert into stest values ('RUNNING');
commit;
exec ctx_ddl.create_preference('stemming_lexer','basic_lexer');
exec ctx_ddl.set_attribute('stemming_lexer','index_stems','derivational');
exec ctx_ddl.create_preference('deriv_wordlist','basic_wordlist');
exec ctx_ddl.set_attribute('deriv_wordlist','stemmer','derivational');
create index stestx on stest(text) indextype is ctxsys.context
parameters ('lexer stemming_lexer wordlist deriv_wordlist');
select token_text, token_type from dr$stestx$i;
RAN 0
RUN 9
RUNNING 0
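-- for illustration: remove everything except the stem tokens (type 9),
-- so the stem query below can only be answered from them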
delete from dr$stestx$i where token_type != 9;
select text from stest where contains(text, '$run')>0;
RAN
RUNNING
USER_LEXER
The new USER_LEXER lets you supply your own lexer as a pair of stored
procedures -- one used at indexing time and one at query time:
create or replace procedure ul_idx_proc(
text in varchar2,
tokens in out nocopy varchar2,
locneeded in boolean
) is
...
The index procedure returns the tokens as XML:
<tokens>
  <word>IMUA</word>
  <word>IOLANI</word>
</tokens>
or, when location information is requested (locneeded), with offsets and
lengths included:
<tokens>
  <word off="0" len="4">IMUA</word>
  <word off="5" len="7">IOLANI</word>
</tokens>
create or replace procedure ul_qry_proc(
text in varchar2,
wilds in ctx_ulexer.wildcard_tab,
tokens in out nocopy varchar2
) is
...
ctx_ddl.create_preference('hawaiian_lexer','user_lexer');
ctx_ddl.set_attribute('hawaiian_lexer','index_procedure','ul_idx_proc');
ctx_ddl.set_attribute('hawaiian_lexer','input_type','varchar2');
ctx_ddl.set_attribute('hawaiian_lexer','query_procedure','ul_qry_proc');
create table htest(text varchar2(80));
insert into htest values ('imua i''olani');
create index htestx on htest(text) indextype is ctxsys.context
parameters ('lexer hawaiian_lexer');
select token_text from dr$htestx$i;
IMUA
IOLANI
Index Maintenance
Create Index Online
create index myindex on mytable(mycolumn)
indextype is ctxsys.context
parameters ('datastore mydatastore')
online;
Parallel Create Index
create index parx on parbase(text)
  indextype is ctxsys.context
  parallel 4;
Under the covers, this populates the index using a parallel query on the base
table, something like:
select /*+ PARALLEL(base 4) */ rowid, text
  from parbase base;
If you are using ctx_output logging, restart it (end_log then start_log)
between parallel operations. For parallel creation of local partitioned
indexes, see DBMS_PCLXUTIL Support below.
Notes on Parallel Degree
See the Database Performance Guide, Concepts Manual, or Data Warehousing Guide
for general information on how to set up your database for parallel query.
DBMS_PCLXUTIL Support
You can now use DBMS_PCLXUTIL to get parallel local index creation. Create the
local index unusable, then build the partitions in parallel:
create index ptestx on ptest(text)
indextype is ctxsys.context
local unusable;
exec dbms_pclxutil.build_part_index(3, 2, 'PTEST', 'PTESTX', TRUE);
Exchange Partition
create table npt(id number, text varchar2(80));
insert into npt values(1, 'cat');
create index npx on npt(text) indextype is ctxsys.context;
create table pt(id number, text varchar2(80))
partition by range(id)
(partition p1 values less than (10));
insert into pt values (1, 'dog');
create index px on pt(text) indextype is ctxsys.context local;
alter table pt exchange partition p1 with table npt
including indexes;
Parallel Sync
Sync can now be run in parallel by specifying a parallel_degree in
ctx_ddl.sync_index. Like create index in parallel, this uses the parallel
query mechanism under the covers -- the driving parallelized query is a join
of dr$pending and the base table. See Notes on Parallel Degree for more
information.
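A sketch of the call (hypothetical index name; parallel_degree is the parameter described above):
exec ctx_ddl.sync_index(idx_name => 'myindex', parallel_degree => 4)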
New Sync Locking Mechanism
Optimize Logging
The new event CTX_OUTPUT.EVENT_OPT_PRINT_TOKEN will print each token as it is
being optimized.
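For example, mirroring the rowid-logging example earlier (hypothetical index and logfile names):
exec ctx_output.start_log('optlog')
exec ctx_output.add_event(ctx_output.event_opt_print_token)
exec ctx_ddl.optimize_index('myindex', 'FULL')
exec ctx_output.end_log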
Token Optimize Token Type
There is a new token_type argument to ctx_ddl.optimize_index which allows
specification of the token type to optimize. For instance, to optimize the
zone section HEADLINE:
ctx_ddl.optimize_index('myindex', 'TOKEN', token=>'HEADLINE', token_type=>2);
You can use the CTX_REPORT.TOKEN_TYPE function to translate a mnemonic name
to a token type. For instance:
ctx_ddl.optimize_index('myindex', 'TOKEN', token=>'HEADLINE',
token_type=>ctx_report.token_type('myindex','ZONE SEC'));
or
ctx_ddl.optimize_index('myindex', 'TOKEN', token=>'HEADLINE',
token_type=>ctx_report.TOKEN_TYPE_ZONE_SEC);
See CTX_REPORT below for more information on valid inputs to
CTX_REPORT.TOKEN_TYPE.
New Full Optimize Method
Parallel Full Optimize
Full optimize can now be run in parallel by specifying a parallel_degree in
ctx_ddl.optimize_index. Like create index in parallel, this uses the parallel
query mechanism under the covers, so the actual degree of parallelism achieved
will depend on machine capabilities, system load, init.ora parameters, etc.
(see Notes on Parallel Degree). The partitioning of the load is done by hash
value on token, which means that a token's info will never get split across
slaves, and that some slaves may be slower than others (if they have large
tokens).
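A sketch of the call (hypothetical index name):
exec ctx_ddl.optimize_index('myindex', 'FULL', parallel_degree => 4)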
MARK_FAILED
If an operation fails and leaves the index stuck in the LOADING state, there
is now CTX_ADM.MARK_FAILED, a call to force the index from LOADING to FAILED,
thus unblocking alter index.
Query
Optimizer Statistics Tuning
Query Template
Instead of a plain query string, contains and catsearch can now accept a query
template -- an XML document that wraps the query string together with
additional query settings:
<query>
<textquery>cat or dog</textquery>
</query>
The template is passed in place of the ordinary query string:
select id from mydocuments
where contains(text,
'<query>
<textquery>cat or dog</textquery>
</query>')>0;
In the textquery element, the grammar attribute can be set to either "context"
or "ctxcat", and specifies the query language of the textquery. This allows
you to use the ctxcat language for queries on context indexes, or the context
language for queries on ctxcat indexes:
select id from mydocuments
where contains(text,
'<query>
<textquery grammar="ctxcat">cat | dog</textquery>
</query>')>0;
The datatype attribute of the empty score element controls whether the score()
function returns an integer or a floating point number:
select score(1), id from mydocuments
where contains(text,
'<query>
<textquery>cat or dog</textquery>
<score datatype="float"/>
</query>', 1)>0;
SCORE(1) ID
-------- ---------
4.403 103
7.217 101
The template parser is not a strict XML parser -- for instance, special
characters in the query string do not have to be escaped, so you can write:
<query>
<textquery> foo & bar </textquery>
</query>
instead of
<query>
<textquery> foo &amp; bar </textquery>
</query>
<query>
<textquery grammar = "ctxcat"> cat frog </textquery>
</query>
Parallel Query on Local Indexes
A query on a partitioned table with a local text index, such as:
select art_id from articles
where contains(art_text, 'myquery')>0
normally scans the partitions serially:
PARTITION RANGE ALL
  TABLE ACCESS BY LOCAL INDEX ROWID ARTICLES
    DOMAIN INDEX ARTX
If you change the index's parallel degree:
alter index artx parallel 4;
then queries spanning multiple partitions scan the partitions in parallel:
PARTITION RANGE ALL PARALLEL_COMBINED_WITH_PARENT
  TABLE ACCESS BY LOCAL INDEX ROWID ARTICLES PARALLEL_TO_SERIAL
    DOMAIN INDEX ARTX
The PARALLEL_INDEX hint does not work for domain index scans.
Similarly, the NOPARALLEL_INDEX hint does not work -- once you've
changed the index's parallel degree, ALL multi-partition queries run in
parallel. To get serial execution, you have to alter the index again,
with "parallel 1".
Reverse Docid Sorting
Miscellaneous Other Features
CTXCAT Theme Indexing
CTXCAT indexes can now index themes, and you can run ABOUT queries through
catsearch, using the new query template facility. Since ctxcat documents
are not usually very long, you should set PROVE_THEMES to FALSE for the lexer
preference.
CTXXPATH Indextype
The new CTXXPATH indextype, created on sys.xmltype columns, is used to speed
up certain queries that use the existsnode method (see the Application
Developer's Guide - XDB).
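A minimal sketch (table, column, and XPath here are hypothetical):
create table xdocs (id number, doc sys.xmltype);
create index xdocs_px on xdocs(doc) indextype is ctxsys.ctxxpath;
select id from xdocs x
 where x.doc.existsnode('/purchaseOrder/item') = 1;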
Classification Training
CTX_CLS.TRAIN automates rule writing for classification: it statistically
analyzes groups of sample documents and automatically generates
ctxrule-compatible rules.
The sample documents must be indexed with a context index. TRAIN will generate
rules based on the tokens produced, so for best results the lexer preference
should match the lexer preference you intend to use for the ctxrule index:
create table docs (doc_id number primary key, text clob);
<<populate the table>>
create index docsx on docs(text) indextype is ctxsys.context;
TRAIN will
consider themes for its generated rules if they are present in the index, but
theme indexing is not required. TRAIN will not consider any
token other than type 0 and 1. So it cannot discover classification based on
subject codes in a non-visible field section, for instance. This may be
added in future releases.
create table cat_doc(
cd_cat_id number,
cd_doc_id number,
primary key (cd_cat_id, cd_doc_id)
) organization index;
<<organize your sample documents into groups>>
<<populate cat_doc table>>
The output of TRAIN is a set of ctxrule-compatible rules, so the result table
must have three columns: a numeric column to hold the cat_id, a varchar2(4000)
column to hold the rule text, and a numeric column to hold the confidence level
-- a score telling you how well the produced rule approximates the grouping:
create table rules(
rule_cat_id number,
rule_text varchar2(4000),
rule_confidence number
);
Now we call TRAIN, passing it the names of the tables and columns we've set up:
begin
ctx_cls.train(
index_name => 'docsx',
docid => 'doc_id',
cattab => 'cat_doc',
catdocid => 'cd_doc_id',
catid => 'cd_cat_id',
restab => 'rules',
rescatid => 'rule_cat_id',
resquery => 'rule_text',
resconfid => 'rule_confidence'
);
end;
/
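The generated rules can then be indexed with a ctxrule index and used to classify new documents, as in the ctxrule discussion earlier (a sketch):
create index rulesx on rules(rule_text)
  indextype is ctxsys.ctxrule;
select rule_cat_id from rules
 where matches(rule_text, :new_document_text) > 0;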
The behavior of TRAIN can be tuned through RULE_CLASSIFIER preferences.
CTX_REPORT Package
CTX_REPORT functions provide information about text
indexes for
diagnosis and maintenance. The procedures analyze the index, generate a
report, then write the report to a CLOB. For convenience, most have a
function form which allocates a call-duration temp clob and returns it,
so you can do something like:
set long 64000
set pages 0
set heading off
spool myindex_report
select ctx_report.describe_index('myindex') from dual;
spool off;
DESCRIBE_INDEX produces a human-readable description of index metadata,
including the name of the base table, the objects used for each class,
and the values of their attributes. It's useful when you don't know, don't
remember, or wish to verify an index's settings.
(example output here)
DESCRIBE_POLICY is like DESCRIBE_INDEX, but for
policies.
CREATE_INDEX_SCRIPT generates a sql script which includes
preference creation PL/SQL blocks and a create index statement to re-create
a given index. It's great when you need a testcase, but be aware that it
won't include external requirements like a user datastore procedure.
(example output here)
CREATE_POLICY_SCRIPT is like CREATE_INDEX_SCRIPT,
but for policies.
INDEX_SIZE lists the internal index tables and, using queries on
sys data dictionary tables and DBMS_SPACE, shows their allocated
and used sizes.
(example output here)
INDEX_STATS analyzes the $I table and calculates statistics like
the number of unique tokens, the 10 most frequent tokens, average size of
token_info per token, etc. It can also calculate statistics like estimated
row fragmentation, garbage size, and most fragmented tokens.
(example output here)
INDEX_STATS is heavy-duty. In order to calculate the stats,
it needs to scan the entire $I table, which can take a long time for large
indexes. You can track its progress in the ctx_output logfile,
if logging is on. Because it's expected to run for long periods of time,
it's not suitable for calling in a select list, and there is no function
variant.
INDEX_STATS ends the current transaction
implicitly. This also means that you cannot pass a lob locator, because
lob locators cannot span transactions. Instead, pass in NULL and let
ctx_report allocate a session-duration temp clob. Something like:
declare
x clob := null;
begin
ctx_report.index_stats('tdrbprx21',x);
insert into output values (x);
commit;
dbms_lob.freetemporary(x);
end;
/
select * from output;
TOKEN_INFO decodes a token's info in $I. The report shows which
documents contain the token, the word offsets where the token occurs in each
document, and statistical information like average number of occurrences per
document. It's good for verifying the index contents during diagnosis of
problems.
(example output here)
TOKEN_TYPE translates a mnemonic label to a numeric token type.
It has two arguments: an index name, and the mnemonic label. The index name
must be provided in order to lookup meta data for field section types.
Valid input for the mnemonic are:
Input                  Meaning                                Type Returned
---------------------  -------------------------------------  -------------
THEME                  theme token                            1
ZONE SEC               zone section                           2
ATTR TEXT              text that occurs in an attribute       4
ATTR SEC               attribute section                      5
PREFIX                 prefix token                           6
PATH SEC               path section                           7
PATH ATTR              path attribute section                 8
STEM                   stem form token                        9
FIELD <name> TEXT      text token in field section <name>     16-79
FIELD <name> PREFIX    prefix token in field section <name>   616-679
FIELD <name> STEM      stem token in field section <name>     916-979
typenum := ctx_report.token_type('myindex', 'field author text');
For the fixed token types, you can also use the constants defined in
CTX_REPORT:
TOKEN_TYPE_TEXT constant number := 0;
TOKEN_TYPE_THEME constant number := 1;
TOKEN_TYPE_ZONE_SEC constant number := 2;
TOKEN_TYPE_ATTR_TEXT constant number := 4;
TOKEN_TYPE_ATTR_SEC constant number := 5;
TOKEN_TYPE_PREFIX constant number := 6;
TOKEN_TYPE_PATH_SEC constant number := 7;
TOKEN_TYPE_PATH_ATTR constant number := 8;
TOKEN_TYPE_STEM constant number := 9;
Policies
A policy is created with ctx_ddl.create_policy, specifying preferences for
each class, just like the parameters clause of create index:
begin
ctx_ddl.create_policy('german_filtered',
filter => 'ctxsys.inso_filter',
section_group => 'ctxsys.html_section_group',
lexer => 'my_german_lexer',
stoplist => 'd_stoplist',
wordlist => 'd_wordlist'
);
end;
Because a policy has no base table, there is no datastore preference. Other
than this, it works like create index. Defaulting works as well -- if you
specify the INSO filter and NULL for section_group, then the HTML section
group will kick in automatically. Preferences can be changed or dropped after
create_policy without affecting the policy. You can update a policy using
ctx_ddl.update_policy and drop a policy using ctx_ddl.drop_policy.
Policies show up in the catalog views CTX_USER_INDEXES,
CTX_USER_INDEX_OBJECTS, and CTX_USER_INDEX_VALUES.
Or, preferably, try out the new function CTX_REPORT.DESCRIBE_POLICY. Either
way, you'll see that under the covers, policies are indexes created on the
table dr$policy_tab. On a side note, this table has a language column so that
you can use multi-lexer preferences.
Policies are currently used by ora:contains, which can be used in the
sys.xmltype methods (see the Application Developer's Guide - XDB).
In the future, we intend to use policies for things like direct document
services and true functional invocation. Watch this space.
Oracle Text 12c Technical Overview
Technical Overviews for previous versions of Oracle Text are available here.
Table of Contents
BIG_IO large TOKEN_INFO option
With BIG_IO, TOKEN_INFO is stored in a SECUREFILE LOB, so a token's postings can be held in a single large row rather than split across many smaller rows. To turn it on, set the BIG_IO storage attribute to true:
exec ctx_ddl.create_preference( 'my_storage', 'BASIC_STORAGE' )
exec ctx_ddl.set_attribute ( 'my_storage', 'BIG_IO', 'true' )
create index ... parameters( 'storage my_storage' )
/
If you want to check whether the option is on, you can use dbms_metadata.get_ddl to make sure the $I table is using SECUREFILE for its LOB. For example if you have an index called MY_INDEX, you could run this in SQL*Plus:
set long 50000
set pagesize 0
select dbms_metadata.get_ddl ( 'TABLE', 'DR$MY_INDEX$I' ) from dual
/
CREATE TABLE "DEMO"."DR$MY_INDEX$I"
( "TOKEN_TEXT" VARCHAR2(64) NOT NULL ENABLE,
"TOKEN_TYPE" NUMBER(10,0) NOT NULL ENABLE,
"TOKEN_FIRST" NUMBER(10,0) NOT NULL ENABLE,
"TOKEN_LAST" NUMBER(10,0) NOT NULL ENABLE,
"TOKEN_COUNT" NUMBER(10,0) NOT NULL ENABLE,
"TOKEN_INFO" BLOB
) SEGMENT CREATION IMMEDIATE
PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
NOCOMPRESS LOGGING
STORAGE(INITIAL 8388608 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "TS_YS_ASSM"
LOB ("TOKEN_INFO") STORE AS SECUREFILE (
TABLESPACE "TS_YS_ASSM" ENABLE STORAGE IN ROW CHUNK 8192
NOCACHE LOGGING NOCOMPRESS KEEP_DUPLICATES
STORAGE(INITIAL 8388608 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT))
MONITORING
create table my_table (text varchar2(80));
begin
for i in 1 .. 5000 loop
insert into my_table values ('hello');
end loop;
end;
/
exec ctx_ddl.drop_preference ( 'my_storage' )
exec ctx_ddl.create_preference( 'my_storage', 'BASIC_STORAGE' )
exec ctx_ddl.set_attribute ( 'my_storage', 'BIG_IO', 'true' )
create index my_index on my_table (text) indextype is ctxsys.context
parameters ('storage my_storage')
/
column token_text format a15
select token_text, length(token_info) from dr$my_index$i;
TOKEN_TEXT LENGTH(TOKEN_INFO)
--------------- ------------------
HELLO 15023
Whereas if we had BIG_IO set to false, we would see:
TOKEN_TEXT LENGTH(TOKEN_INFO)
--------------- ------------------
HELLO 996
HELLO 3501
HELLO 3501
HELLO 3501
HELLO 3501
Near Real Time Indexes
The new STAGE_ITAB storage option adds a staging table ($G) that holds the index data for newly synced documents; the main $I table is only updated when the staging data is merged in with optimize. This keeps the frequently changing part of the index small:
create table my_table( id number primary key, text varchar2(2000) );
exec ctx_ddl.drop_preference ( 'my_storage' )
exec ctx_ddl.create_preference( 'my_storage', 'BASIC_STORAGE' )
exec ctx_ddl.set_attribute ( 'my_storage', 'STAGE_ITAB', 'true' )
create index my_index on my_table( text ) indextype is ctxsys.context
parameters( 'storage my_storage sync (on commit)' );
insert into my_table values( 1, 'hello world' );
commit;
insert into my_table values( 2, 'goodbye world' );
commit;
select count(*) from dr$my_index$i;
COUNT(*)
----------
0
select count(*) from dr$my_index$g;
COUNT(*)
----------
4
exec ctx_ddl.optimize_index('my_index', 'MERGE')
select count(*) from dr$my_index$i;
COUNT(*)
----------
3
select count(*) from dr$my_index$g;
COUNT(*)
----------
0
The $G table and its index can be directed to the KEEP buffer pool through the new G_TABLE_CLAUSE and G_INDEX_CLAUSE storage attributes:
exec ctx_ddl.set_attribute ( 'my_storage', 'G_TABLE_CLAUSE', 'storage (buffer_pool keep)' )
exec ctx_ddl.set_attribute ( 'my_storage', 'G_INDEX_CLAUSE', 'storage (buffer_pool keep)' )
Auto Optimize of Near Real Time Indexes
To have the staging table merged in automatically in the background, register the index (or partition) for auto optimization:
exec ctx_ddl.add_auto_optimize( 'my_index' )
This registers the index in a text data dictionary table, which can be seen through the view ctx_user_auto_optimize_indexes:
select aoi_index_name, aoi_partition_name from ctx_user_auto_optimize_indexes;
AOI_INDEX_NAME AOI_PARTITION_NAME
------------------------------ ------------------------------
MY_INDEX
Optimize merge, when configured in this way, is actually performed by a dbms_scheduler job called DR$BGOptJob, which we can see if we're logged in as a DBA user:
select owner, job_name, program_name from dba_scheduler_jobs where owner='CTXSYS';
OWNER JOB_NAME PROGRAM_NAME
-------------------- -------------------- --------------------
CTXSYS DR$BGOPTJOB DR$BGOPTPRG
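To stop the background optimization, there is a corresponding call (a sketch; the argument is the same index name as above):
exec ctx_ddl.remove_auto_optimize( 'my_index' )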
Session Duration Stored Query Expressions (SQEs)
Stored query expressions can now be created with session duration: the SQE exists only for the creating session and disappears when the session ends. A session-duration SQE is created by passing CTX_QUERY.DURATION_SESSION to ctx_query.store_sqe:
ctx_query.store_sqe(
   query_name => 'tempsqe1',
   text_query => '( ( SDATA (public=true) OR ( sales WITHIN allowed_depts ) )*10*10 )',
   duration   => CTX_QUERY.DURATION_SESSION
)
The SQE is then referenced by name in the query, for example:
SELECT text FROM mydoc WHERE CONTAINS (text, '( sales data ) AND tempsqe1' ) > 0
Snippet Support in Result Set Interface
Oracle Text introduced the Result Set Interface (RSI) in 11g. RSI is an efficient way of running Text queries since it avoids the overheads of using the SQL layer, and is also very useful for queries which produce summary information that can't easily be coded in a standard SQL "SELECT".
<ctx_result_set_descriptor>
<hitlist start_hit_num="1" end_hit_num="10" order="SCORE DESC">
<rowid />
<score />
<sdata name="title" />
<sdata name="author" />
<snippet radius="20" max_length="160" starttag="&lt;b&gt;" endtag="&lt;/b&gt;" />
</hitlist>
<count />
</ctx_result_set_descriptor>
This query will fetch back the rowid, the score, two SDATA fields called "title" and "author", and the snippet. The "radius" and "max_length" arguments are described elsewhere in the #TODO# snippet enhancements section.
In the Result Set you may see multiple snippet segments, such as:
<ctx_result_set>
<hitlist>
<hit>
<rowid>AAAUz8AAAAAADZBAAG</rowid>
...
<snippet>
<segment>running the <b>Oracle</b> Database</segment>
<segment>can be seen with <b>Oracle</b> Text</segment>
<segment>from <b>Oracle</b> Corporation based in California</segment>
</snippet>
It is up to the application to assemble the various snippet segments into something to display - typically they would be concatenated with an ellipsis ("...") between each.
Separate Offsets
When do we need OFFSET (word position) information? Roughly, for:
  phrase and NEAR queries
  WITHIN and section searches
  highlighting, markup, and snippet generation
And when do we not need it? For:
  single-word queries
  simple boolean (AND, OR, NOT) combinations of single words
If your searches are mainly in the second list, then you are wasting I/O by fetching all the unnecessary OFFSET information. In that case, you should use the new storage option SEPARATE_OFFSETS.
exec ctx_ddl.drop_preference ( 'my_storage' )
exec ctx_ddl.create_preference( 'my_storage', 'BASIC_STORAGE' )
exec ctx_ddl.set_attribute ( 'my_storage', 'SEPARATE_OFFSETS', 'true' )
create index my_index on my_table( text ) indextype is ctxsys.context
parameters( 'storage my_storage' )
/
With SEPARATE_OFFSETS set to true, describing the $I table shows an extra TOKEN_OFFSETS column:
Name                                      Null?    Type
----------------------------------------- -------- ----------------------------
TOKEN_TEXT NOT NULL VARCHAR2(64)
TOKEN_TYPE NOT NULL NUMBER(10)
TOKEN_FIRST NOT NULL NUMBER(10)
TOKEN_LAST NOT NULL NUMBER(10)
TOKEN_COUNT NOT NULL NUMBER(10)
TOKEN_INFO BLOB
TOKEN_OFFSETS BLOB
So when should you use the new separate_offsets option? It has to be a judgment call based on the types of queries your users or application mostly run (see the lists above), and the size of your typical documents. If the documents are very large, there will be much bigger savings than when the documents are small.
Document Lexer
exec ctx_query.explain( 'my_index', 'multi\-function', 'explain_tab' )
select lpad(' ',2*(level-1))||level||'.'||position||' '||
operation||' '||
decode(options, null, null, options || ' ') ||
object_name plan
from explain_tab
start with id = 1
connect by prior id = parent_id
/
PLAN
------------------------------------------------------------------------------------------------------------------------
1.1 OR
2.1 AND
3.1 WORD MULTI-FUNCTION
3.2 WITHIN
4.1 WORD DR$ML$HYP_DOCS
4.2 SECTION DR$ML
2.2 AND
3.1 PHRASE
4.1 WORD MULTI
4.2 WORD FUNCTION
3.2 WITHIN
4.1 WORD DR$ML$DEFAULT_DOCS
4.2 SECTION DR$ML
SDATA Improvements
Of these, the last is the easiest to cover. You can now have up to 99 SDATA sections. Note that the combined number of FILTER BY and ORDER BY columns (which are implemented under the covers as SDATA) is still limited to 32.
SDATA sections can now be added to an existing index with alter index:
alter index my_index parameters( 'add sdata section stocklevel tag stock datatype number' )
SDATA values can also be updated in place, without re-indexing the document, using ctx_ddl.update_sdata:
ctx_ddl.update_sdata(
  idx_name     => 'prod_ind',
  section_name => 'stocklevel2',
  sdata_value  => sys.anydata.convertnumber(stocknum),
  sdata_rowid  => rid
);
If you attempt to update SDATA for a rowid that is not in the index, you will get an error:
DRG-50857: oracle error in ctx_ddl.update_sdata
ORA-20000: Oracle Text error:
DRG-11317: rowid AAAW+HAAEAAAUi0AAC is not in the index
Pattern Stopclass
Stopclasses can now be defined by regular-expression pattern, removing every token that matches the pattern. For example:
exec ctx_ddl.create_stoplist( 'stop', 'BASIC_STOPLIST' )
exec ctx_ddl.add_stopclass ( 'stop', 'azn values', 'AZN[[:alnum:]]*' )
or, using a looser pattern:
exec ctx_ddl.add_stopclass ( 'stop', 'azn values', 'AZN.*' )
Some other useful patterns:
Remove all numbers of five or more characters (allowing embedded commas and periods):
exec ctx_ddl.add_stopclass ( 'stop', 'fivedignums', '[[:digit:]\,\.]{5,}' )
Remove all words which contain five or more digits anywhere in the word:
exec ctx_ddl.add_stopclass ( 'stop', 'wordswith5digitssomewhere', '([[:alpha:]]*[[:digit:]]){5,}[[:alpha:]]*' )
Remove all words of any type which are longer than 20 characters:
exec ctx_ddl.add_stopclass ( 'stop', 'longwords', '[[:alnum:]]{20,}' )
One thing to watch: in order to add a stopclass, you must define your own stoplist. This does NOT inherit the standard list of stopwords from the default system stoplist for your language. If defining your own stoplist you will need to explicitly add all the stopwords for your language. These are defined in an appendix to the Text Reference manual, or you can use ctx_report.create_index_script to dump the definition of a standard index, which will list all the stopwords for you.
Query Filter Cache
The new ctxfiltercache operator caches the results of a commonly used query expression so that later queries can reuse them instead of re-evaluating the expression. The syntax is:
contains(column, 'ctxfiltercache((query expression)[, save_score][, topN])')>0;
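A sketch of how it might be used -- caching a security clause that is ANDed onto every query (table, column, and section names are hypothetical):
select id from docs
 where contains(text,
   'ctxfiltercache((sales WITHIN allowed_depts), false) AND oracle') > 0;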
Forward Index
BIGRAM mode for the Japanese VGRAM Lexer
The Japanese VGRAM lexer is very efficient in terms of storage space. However, certain queries cannot be satisfied directly from the index, and we need to perform wildcard searches within the index, which can be quite slow. Using BIGRAM mode means all possible 2-gram tokens are stored, avoiding the need for wildcard searches at the cost of extra storage in the index.