1. Benchmark Structure
The Oracle R12 E-Business Standard Benchmark follows the E-Business 11i (11.5.10) benchmark model by combining online transaction execution by simulated users with concurrent batch processing to model a "Day-in-the-Life" scenario for a global enterprise. Initially, benchmark kits will be offered in 'Extra-Small,' 'Small,' and 'Medium' sizes, with a 'Large' kit to follow.
Partners can elect to execute the standard benchmark, which includes both the OLTP and batch workloads, or a partial benchmark consisting of OLTP execution only (potentially for WAN testing) or batch execution only (which avoids the need for load-driving systems).
2. Benchmark Execution
Partners executing this kit can obtain instructions, scripts, and a copy of an expanded Oracle database with sample data. They provide their own hardware, diagnostic and analytic tools, and load-driving system. Typically, partners tune the online transaction execution and the two batch jobs separately before attempting an auditable run. Note that Oracle may be able to bundle the OATS (Oracle Application Testing Suite) load tool with the Toolkit in the future. Also note that benchmarks can only be executed on supported versions and combinations of Oracle and partner software and hardware products.
The run starts with the ramp-up of the simulated online transaction users. When the desired number of users is up and running in a stable fashion, the Order-to-Cash batch job is started, which initiates data sampling. The Payroll batch job may be started any time after thirty minutes have elapsed from the start of Order-to-Cash. Sixty minutes (one hour) after the Order-to-Cash job starts, all batch execution must be complete and the data sampling period ends. The online users are then ramped down. This timeline is subject to change as concurrency issues and checkpoints are evaluated; changes would be communicated through the benchmark workgroup.
This timeline is modified appropriately for the execution of a partial benchmark.
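For illustration only, the sketch below shows how a partner's load-driving harness might sequence the full timeline. The function names (start_order_to_cash, start_payroll, ramp_down_users) and the hard-coded offsets are assumptions made for this sketch; they are not part of the official kit, whose own scripts and load tool govern an audited run.

    import time

    PAYROLL_OFFSET_MIN = 30    # Payroll may start any time after 30 minutes
    SAMPLING_WINDOW_MIN = 60   # all batch work must finish within 60 minutes

    def run_timeline(start_order_to_cash, start_payroll, ramp_down_users):
        # Assumes the simulated online users are already ramped up and stable.
        t0 = time.monotonic()
        start_order_to_cash()                  # t = 0: data sampling period begins
        time.sleep(PAYROLL_OFFSET_MIN * 60)
        start_payroll()                        # t >= 30 min: Payroll batch begins
        remaining = SAMPLING_WINDOW_MIN * 60 - (time.monotonic() - t0)
        time.sleep(max(0.0, remaining))        # t = 60 min: sampling period ends
        ramp_down_users()                      # all batch jobs must already be complete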
3. Submitting Results
Partners are required to submit the recorded OLTP response times and the batch execution times, along with detailed supporting information, for auditing by Oracle. Logs, the output of audit scripts, and other artifacts are collected to ensure transparency and reproducibility. Disclosure of tuning actions can also directly benefit Oracle's customer base.
4. R12 vs. 11.5.10
For continuity and trending, differences between the R12 benchmark and its 11.5.10 predecessor have been kept to a minimum. However, partner input and Oracle directives have led to a few noteworthy updates.
Four of the ~30 OLTP transactions have been updated from 'Forms-based' to 'Web-based.' Three of the batch processes that had been 'single-threaded only' are now multi-thread capable.
11i (11.5.10) runs were set up for 'maximum user load' (100%), '90% user load,' and '70% user load.' For R12, these are replaced with 'maximum user load' (100%), '50% user load,' and a 'single-user' reference time. The single-user reference time is optional in benchmark submissions until OATS can collect this data.
5. Posting Results
Once a partner's results have passed audit, they will be summarized in a report to be posted on Oracle's website. The report highlights the specific results and provides supporting information about the workload, hardware implementation, system utilization, software versions, tuning steps and so forth. Proper disclosure of how the results were achieved is essential to credibility in the marketplace.
6. Performance Claims
Oracle recognizes that partners invest substantial resources in undertaking the R12 E-Business Standard Benchmark and will wish to share their accomplishments with the marketplace. Nevertheless, Oracle policy is to maintain a level playing field and to insist on civil discourse and technical transparency in performance claims. Performance claims may be made according to the following rules.
Primary Metrics for Comparison
The primary metrics for EBS R12 benchmarks are listed below. They are the only metrics that can be published or used in fence claims; no other benchmark performance metrics are allowed for claims or comparisons.
Minimum Data for Disclosure
The publication of a performance claim in marketing materials, or any other external publication, must include the corresponding minimum data. Anyone can reference any other published benchmark as long as the minimum data is included in the reference and the claim meets the criteria set out in the Fence Claims section.
Fence Claims
The intent is to document a process for making comparisons or leadership claims, with the goal of reducing incomplete or flawed comparisons. Claims must reference only the primary metrics. Fence claims should be segmented into the following categories:
Price Performance Claims
Price/performance claims are not allowed because pricing information is not part of the EBS benchmark submission; such claims would therefore lead to inaccurate or non-standardized price/performance results.
Allowable Fence Claim Examples
Not Allowed Fence Claim Examples
7. Oracle EBS Benchmark Workgroup Meetings
Workgroup conference calls will be arranged and hosted by Oracle at least quarterly. These meetings will be the focal point for Oracle to share news about benchmark and process changes with the members, and for members to submit changes to the overall benchmark process and to this document. Oracle will maintain a process for the members to propose changes. Oracle may elect to implement changes or to submit a proposed change to a vote by the members. Each company that is an active member of the workgroup will have one vote.

Revision History:
* Fence Claims Rules last updated in January 2012.
An appeal process is available for challenging errors in benchmark execution, documentation, or test results, or errors in published benchmark claims. An appeal should be carried out in the following steps:
1. The appealing member's representative should send a note explaining the issue to the Oracle chair prior to the next Workgroup Meeting. Oracle encourages the members to work out a resolution through their benchmark member representatives.
2. If this does not work, a status update is given at the next meeting and formal corrective actions are requested.
3. Oracle has the option of making a decision on the problem or submitting it to the Benchmark Partner Workgroup for a vote.
4. Issues should be corrected within 14 days, or as agreed upon in advance by Oracle and the members involved.
5. If issues are not resolved, corrective action may be necessary, which could include a formal request for corrective action by Oracle, temporary delisting of the benchmark from the Oracle website, or even temporary loss of voting status.