1. Benchmark Structure
The Oracle R12 E-Business Standard Benchmark follows the E-Business 11i (11.5.10) benchmark model by combining online transaction execution by simulated users with concurrent batch processing to model a "Day-in-the-Life" scenario for a global enterprise. Initially, benchmark kits are to be offered in 'Extra-Small,' 'Small,' and 'Medium' sizes, with a 'Large' kit to follow.
2. Benchmark Execution
Partners executing this kit can obtain instructions, scripts, and a copy of an expanded Oracle database with sample data. They provide their own hardware, diagnostic and analytic tools, and load-driving system. Typically, partners tune the online transaction execution and the two batch jobs separately before attempting an auditable run. Note that Oracle may bundle the OATS (Oracle Application Testing Suite) load tool with the Toolkit in the future.
3. Submitting Results
Partners are required to submit the recorded OLTP response times and the batch execution times, along with detailed supporting information, for auditing by Oracle and by an independent third-party auditor. Logs, the output of audit scripts, and similar artifacts are collected to ensure transparency and reproducibility. Disclosure of tuning actions can also directly benefit Oracle's customer base.
4. R12 vs. 11.5.10
For continuity and trending, differences between the R12 benchmark and its 11.5.10 predecessor have been kept to a minimum. However, partner input and Oracle directives have led to a few noteworthy updates.
5. Posting Results
Once a partner's results have passed audit, they will be summarized in a report to be posted on Oracle's website. The report highlights the specific results and provides supporting information about the workload, hardware implementation, system utilization, software versions, tuning steps and so forth. Proper disclosure of how the results were achieved is essential to credibility in the marketplace.
6. Performance Claims
Oracle recognizes that partners invest substantial resources in undertaking the R12 E-Business Standard Benchmark and will wish to share their accomplishments with the marketplace. Nevertheless, Oracle policy is to maintain a level playing field and to insist on civil discourse and technical transparency in performance claims. Performance claims may be made under the following rules:
Primary Metrics for Comparison
Minimum Data for Disclosure
Price Performance Claims
Allowable Fence Claim Examples
Disallowed Fence Claim Examples
7. Appeal Process
This section defines an appeal process for challenging errors in benchmark execution, documentation, benchmark test results, or published benchmark claims. An appeal should be carried out in the following steps:
1. The appealing member's representative sends a note explaining the issue to the Oracle chair prior to the next Workgroup meeting. Oracle encourages members to work out a resolution through their benchmark member representatives.
2. If no resolution is reached, the next meeting includes a status update and a formal request for corrective action.
3. Oracle has the option of deciding the issue itself or submitting it to the Benchmark Partner Workgroup for a vote.
4. Issues should be corrected within 14 days, or within a period agreed upon in advance by Oracle and the members involved.
5. If an issue remains unresolved, corrective action may be necessary, which could include a formal request for corrective action by Oracle, temporary delisting of the benchmark from the Oracle website, or, at most, temporary loss of voting status.