Unless explicitly identified as such, the sample code here is not certified or supported by Oracle; it is intended for educational or testing purposes only.
|This sample code lets users import history into Flashback Data Archive tables, move history data from other sources into Flashback Data Archive, and move history from one database to another.||Flashback Data Archive: History Import Scripts|
The TaBle FileSystem (TBFS) exposes a simple relational table (primary key and BLOB column) as a DBFS filesystem that, in conjunction with the dbfs_client, can be accessed by filesystem tools. The TBFS sample code also provides a useful starting template for building new custom filesystems that plug into the DBFS framework. How it works:
TBFS plugs into the DBFS by implementing the DBFS Store Provider Interface (SPI). The SPI methods in TBFS generate a simple, flat namespace that maps the primary key of the table into DBFS pathnames, and provide read/write access to the BLOB column via DBFS access methods.
The techniques used in the TBFS implementation can be easily adapted to build other custom filesystems.
Follow the instructions in the README.txt supplied with the TBFS. The top-level script tbfs.sql, when run as SYSDBA, creates and sets up the TBFS.
|DBFS Sample Store|
|This archive contains code to create several PL/SQL programs and to use the PL/SQL hierarchical performance profiler to record the execution of each. It also contains the profiler output for each run, prepared as a set of mutually cross-referencing HTML files. The slides for the presentation that these examples illustrate are included.||PL/SQL Hierarchical Performance Profiler Demo|
|This code implements the "split one column into two" Oracle Database 11g Release 2 Edition-based Redefinition (EBR) exercise that is described by many different authors. First it installs the application ordinarily. Then it makes the application EBR-ready. The online property is proved by running uninterrupted, concurrent DML from other sessions while the application upgrade is done.||Self-contained Edition-based Redefinition Exercise|
|This sample uses the regular expression APIs in Oracle Database 10g to parse the raw HTTP stream received from the SGD database, and extract only the DNA sequence. This sequence is stored in a table locally and further analyzed to identify certain enzyme patterns using regular expression functions.||DNA Sample|
|Using a typical user registration scenario, this sample application uses the regular expression functions that are implemented in Oracle Database 10g to validate user inputs.||User Validation Sample|
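As a minimal sketch of the validation technique this sample describes (the pattern and the column values here are illustrative, not taken from the sample itself), REGEXP_LIKE can gate a user input in plain PL/SQL:

```sql
-- Reject a registration whose e-mail address fails a basic pattern check
DECLARE
  l_email VARCHAR2(100) := 'user@example.com';
BEGIN
  IF REGEXP_LIKE(l_email,
                 '^[[:alnum:]._%-]+@[[:alnum:].-]+\.[[:alpha:]]{2,}$') THEN
    DBMS_OUTPUT.PUT_LINE('valid');
  ELSE
    DBMS_OUTPUT.PUT_LINE('invalid');
  END IF;
END;
/
```

POSIX character classes such as [[:alnum:]] are used because they are supported by the Oracle Database 10g regular expression engine.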
|This sample demonstrates the use of Oracle Database 10g regular expression APIs to validate, search, and extract information from the database using TRIGGERs, PROCEDUREs, CURSORs, etc. Using the scenario of a user information system, this sample uses a variety of patterns to search and extract user information, such as hobbies and interests and location information, stored in a database table.||User Search Sample|
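A sketch of the extraction side of this technique, assuming a hypothetical user_info table with a free-text profile_text column (the table, column, and pattern are illustrative):

```sql
-- Pull the hobbies fragment out of a free-text column
SELECT user_name,
       REGEXP_SUBSTR(profile_text, 'hobbies: [^;]*') AS hobbies
  FROM user_info
 WHERE REGEXP_LIKE(profile_text, 'hobbies:');
```

REGEXP_LIKE filters to rows that contain a match, and REGEXP_SUBSTR returns the matched fragment itself.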
|Oracle Database 10g introduces support for the FORALL syntax with non-consecutive indexes in collections. The INDICES OF clause allows the FORALL syntax to be used with sparse collections, while the VALUES OF clause drives the FORALL from a collection whose element values are the indexes into another collection. This sample shows how to use these features in a PL/SQL application.||FORALL|
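A minimal sketch of the INDICES OF clause, assuming a hypothetical orders table with an order_id column:

```sql
DECLARE
  TYPE num_tab IS TABLE OF NUMBER INDEX BY PLS_INTEGER;
  l_ids num_tab;
BEGIN
  -- A sparse collection: only indexes 1, 5 and 100 exist
  l_ids(1)   := 10;
  l_ids(5)   := 20;
  l_ids(100) := 30;

  -- INDICES OF visits only the indexes that actually exist,
  -- so the sparse collection needs no compacting first
  FORALL i IN INDICES OF l_ids
    DELETE FROM orders WHERE order_id = l_ids(i);
END;
/
```

Before Oracle Database 10g, FORALL required a dense range (FORALL i IN 1 .. n), so sparse collections had to be copied into dense ones first.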
|The SQL Developer reports primarily cover the OWB runtime. There are some simple reports for design and data quality, but the reports are mostly centered on the runtime: analysis of map/flow timings, audit details, etc.||OWB SQL Developer Reports|
|There are two schema-level collection types: VARRAY and (nested) TABLE. Both define one-dimensional ordered arrays of elements of a specified type, and can be leveraged in the creation of user-defined schema-level types.||Multilevel Collections|
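A sketch of how the two collection types combine into a multilevel collection (the type names here are illustrative): a VARRAY is used as an attribute of an object type, which in turn is the element type of a nested TABLE.

```sql
-- Inner level: a bounded, ordered list of phone numbers
CREATE TYPE phone_list AS VARRAY(3) OF VARCHAR2(20);
/

-- An object type carrying the VARRAY as an attribute
CREATE TYPE contact_t AS OBJECT (
  name   VARCHAR2(40),
  phones phone_list
);
/

-- Outer level: a nested TABLE of those objects -- a two-level collection
CREATE TYPE contact_tab AS TABLE OF contact_t;
/
```

VARRAY suits small, bounded lists stored inline; nested TABLE suits unbounded sets that may be stored out of line and queried with the TABLE operator.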
|Cursor expressions (sometimes known as cursor subqueries) are an element of the SQL language; before Oracle9i they were supported in SQL and by certain programming environments, but not by PL/SQL. Oracle9i introduced PL/SQL support for cursor expressions. For example, a cursor expression can be used in the SELECT statement used to open a PL/SQL cursor, and manipulated appropriately thereafter. It can also be used as an actual parameter to a PL/SQL procedure or function, which has great significance in connection with table functions.||Table Functions and Cursor Expressions|
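A sketch of the actual-parameter usage described above, assuming a hypothetical employees table with a last_name column: a function takes a SYS_REFCURSOR formal parameter, and a CURSOR() expression supplies the actual parameter at the call site.

```sql
CREATE OR REPLACE FUNCTION row_count(p_cur SYS_REFCURSOR)
  RETURN NUMBER
IS
  l_name VARCHAR2(100);
  l_n    NUMBER := 0;
BEGIN
  -- Consume the cursor handed in by the caller
  LOOP
    FETCH p_cur INTO l_name;
    EXIT WHEN p_cur%NOTFOUND;
    l_n := l_n + 1;
  END LOOP;
  CLOSE p_cur;
  RETURN l_n;
END;
/

-- The cursor expression is the actual parameter
SELECT row_count(CURSOR(SELECT last_name FROM employees)) FROM dual;
```

The same mechanism lets a table function consume rows streamed from an enclosing query, which is the basis of chained (pipelined) table functions.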
|The assigning of values to PL/SQL variables in SQL statements is called binding. The binding of an entire collection at once is called bulk binding. Bulk binds improve performance by minimizing the number of context switches between the PL/SQL and SQL engines.||Bulk Binding|
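A minimal sketch of bulk binding in both directions, assuming a hypothetical employees table with employee_id and salary columns:

```sql
DECLARE
  TYPE id_tab IS TABLE OF NUMBER;
  l_ids id_tab;
BEGIN
  -- BULK COLLECT: one context switch fetches every row into the collection
  SELECT employee_id BULK COLLECT INTO l_ids FROM employees;

  -- FORALL: one context switch executes the UPDATE for all elements
  FORALL i IN 1 .. l_ids.COUNT
    UPDATE employees
       SET salary = salary * 1.1
     WHERE employee_id = l_ids(i);
END;
/
```

Without bulk binding, each fetch and each UPDATE would cross the PL/SQL-to-SQL boundary separately, once per row.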
|While CASE constructs don't offer any fundamentally new semantics, they do allow a more compact notation and some elimination of repetition with respect to what otherwise would be expressed with an IF construct.||Using CASE Statements in PL/SQL programs|
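A small sketch of the notational saving: a simple CASE statement names the selector once, where the equivalent IF construct would repeat the comparison on every branch.

```sql
DECLARE
  l_grade CHAR(1) := 'B';
  l_msg   VARCHAR2(30);
BEGIN
  -- The selector l_grade is written once, not once per branch
  CASE l_grade
    WHEN 'A' THEN l_msg := 'Excellent';
    WHEN 'B' THEN l_msg := 'Good';
    WHEN 'C' THEN l_msg := 'Fair';
    ELSE          l_msg := 'Ungraded';
  END CASE;
  DBMS_OUTPUT.PUT_LINE(l_msg);
END;
/
```

The searched form (CASE WHEN condition THEN ...) covers branches that test different expressions, still without nesting IFs.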
|In order that a SQL statement can be executed, it has to be parsed (checked for syntactic and semantic correctness) and the execution plan has to be calculated. All this costs computational resources. To save on these costs, the Oracle instance maintains a system-wide LRU cache (aka the shared cursor cache) - exposed via v$sqlarea - of previously encountered SQL statements and appropriate derived information, so that when the next SQL statement is submitted for parsing it is checked for a match against the cached ones. (The definition of the criteria for a match is beyond the scope of this tutorial. Roughly speaking, the current statement must be both textually identical to its match candidate, famously to the extent of whitespace and upper/lower case identity, and the types of the bind variables must match.) When the current statement is matched, the stored derived information (parse tree, execution plan, etc.) is reused and computational cost is saved.||Cursor reuse in PL/SQL static SQL|
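A sketch of why PL/SQL static SQL reuses cursors so well, assuming a hypothetical employees table: PL/SQL automatically turns the referenced PL/SQL variable into a bind variable, so the cached statement text is identical on every call regardless of the actual value.

```sql
CREATE OR REPLACE PROCEDURE show_salary(p_id IN NUMBER)
IS
  l_sal NUMBER;
BEGIN
  -- p_id becomes a bind variable; every call of this procedure,
  -- whatever the id value, matches the same entry in v$sqlarea
  SELECT salary INTO l_sal
    FROM employees
   WHERE employee_id = p_id;

  DBMS_OUTPUT.PUT_LINE(l_sal);
END;
/
```

Concatenating the value into the statement text instead (as is easy to do with dynamic SQL) would create one shared-cursor-cache entry per distinct value.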
|Before 9.2.0, the way to denote the directory (or directories) for files was via the UTL_FILE_DIR initialization parameter. This suffered from the disadvantages that the instance had to be bounced to make changes to the list of directories and that there was no security scheme (all users could access files in all directories). Version 9.2.0 allows the same mechanism (the DIRECTORY schema object) to be used with Utl_File as is used for BFILEs. You should consider the UTL_FILE_DIR initialization parameter as slated for deprecation.||Overview of Utl_File enhancements introduced in Oracle9i Database Version 9.2.0|
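A sketch of the DIRECTORY-object mechanism (the directory path, object name, and grantee here are illustrative): the object is created and granted once, with no instance bounce, and Utl_File then refers to it by name.

```sql
-- As a suitably privileged user: create the directory object and grant access
CREATE DIRECTORY log_dir AS '/u01/app/logs';
GRANT READ, WRITE ON DIRECTORY log_dir TO scott;

-- In the application schema: name the directory object, not the OS path
DECLARE
  l_file UTL_FILE.FILE_TYPE;
BEGIN
  l_file := UTL_FILE.FOPEN('LOG_DIR', 'audit.log', 'w');
  UTL_FILE.PUT_LINE(l_file, 'Opened via a DIRECTORY object');
  UTL_FILE.FCLOSE(l_file);
END;
/
```

Because access is controlled by grants on the DIRECTORY object, each user sees only the directories they have been granted, unlike the all-or-nothing UTL_FILE_DIR parameter.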
|A PL/SQL RECORD is the datatype that corresponds to a row in a schema-level table. It is the natural construct to use when manipulating table rows programmatically, especially when a row is read (via SELECT, UPDATE...RETURNING, or DELETE...RETURNING), manipulated programmatically, and then recorded (via INSERT or UPDATE) into another table with the same shape.||Using RECORD binds for SQL in PL/SQL programs|
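A sketch of the read-manipulate-record round trip, assuming hypothetical employees and identically shaped employees_archive tables:

```sql
DECLARE
  l_emp employees%ROWTYPE;
BEGIN
  -- Read a whole row into the record
  SELECT * INTO l_emp
    FROM employees
   WHERE employee_id = 100;

  -- Manipulate it programmatically
  l_emp.salary := l_emp.salary * 1.05;

  -- Record the whole record into a table with the same shape:
  -- VALUES takes the record directly, with no column-by-column list
  INSERT INTO employees_archive VALUES l_emp;
END;
/
```

The record bind in the INSERT keeps the code correct even if columns are later added, since the %ROWTYPE record tracks the table's shape.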
|See start.htm||Utl_File: Overview of enhancements introduced in Oracle9i Database Version 9.2.0|
|See start.htm||Using RECORD binds for SQL in PL/SQL programs - Oracle 9.2|
|Various, see zip||Associative arrays (index-by-varchar2 tables) - introduced in Oracle Database 9.2.0|
|The Web depends on automatic communication between business sites across the public internet. The HTTP transport mechanism is used to send the request and to receive the reply. Though partners in a particular B2B relationship could define standards for their protocols from scratch, the emerging de facto standard is to use XML for both request and reply. Of course we can expect increasing standardization in future, extending to cover the specifics of the XML encoding. Oracle9i Database Release 1 has technology to allow both the sender and the receiver to implement their services straightforwardly, backed by an Oracle database and using only PL/SQL on top of productized APIs.||Utl_Http Package in Oracle9i Database Release 1|
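A sketch of the sender side using the Utl_Http request/response API (the URL is illustrative; a real deployment would also need network ACLs or their era-appropriate equivalent configured):

```sql
DECLARE
  l_req  UTL_HTTP.REQ;
  l_resp UTL_HTTP.RESP;
  l_line VARCHAR2(32767);
BEGIN
  -- Issue the HTTP request and obtain the response
  l_req  := UTL_HTTP.BEGIN_REQUEST('http://partner.example.com/service');
  l_resp := UTL_HTTP.GET_RESPONSE(l_req);

  -- Read the reply body line by line until it is exhausted
  BEGIN
    LOOP
      UTL_HTTP.READ_LINE(l_resp, l_line, TRUE);
      DBMS_OUTPUT.PUT_LINE(l_line);
    END LOOP;
  EXCEPTION
    WHEN UTL_HTTP.END_OF_BODY THEN
      UTL_HTTP.END_RESPONSE(l_resp);
  END;
END;
/
```

The reply body read here would typically be an XML document, parsed with the database's XML facilities before further processing.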