Oracle E-Business Suite Standard Benchmarks

Oracle Applications Release 12 (12.2.9) Benchmark Results

Single Instance R12 Batch

Company | System | # of Workers | Units per Hour | Benchmark | Version | Date Submitted | Disclosure Report
Order-to-Cash Large
2-Tier  
Oracle (Oracle Cloud Infrastructure)
Database Tier: Oracle® (1-Node Compute Node) shape VM.Standard2.16 (16-cores, 240 GB)
Applications Tier: Oracle® Compute Node shape VM.Standard2.16 (16-cores, 240 GB)
# of Workers: 32
Units per Hour: Read complete report to apply performance to sizing. Order-to-Cash 154,295 Lines/Hr (Note that workload is substantially increased from 12.0.4 & 11.5.10 processes). This computed throughput is for selected processes only.
Version: R12 (12.2.9) Large Model
Date Submitted: 7-15-20

Detailed Report (PDF)

AWR Report


Oracle Applications Release 12 (12.2.7) Benchmark Results

Single Instance R12 OLTP

Company | System | Work Load | Average Response Time | Benchmark | Version | Date Submitted | Disclosure Report
OLTP Medium
3-Tier  
Oracle (Oracle Cloud Infrastructure)
Database Tier: Oracle® (1-Node VM DB System) shape VM.Standard2.16 (16-cores, 240 GB)
Applications Tier: Oracle® VM shape VM.Standard2.16 (16-cores, 240 GB)
Work Load: Online: 3,000 Users
Average Response Time: Sub-sec (OATS)
Benchmark: HR Self-Service Flow
Version: R12 (12.2.7) Large Model
Date Submitted: 4-24-20

Detailed Report (PDF)

AWR Report


Oracle (Oracle Cloud Infrastructure)
Database Tier: (1-Node VM DB System) shape VM.Standard2.16 (16-cores, 240 GB)
Applications Tier: Oracle® VM shape VM.Standard2.16 (16-cores, 240 GB)
Work Load: Online: 2,000 Users
Average Response Time: 1.66 sec (OATS)
Benchmark: Order-to-Cash Flow
Version: R12 (12.2.7) Large Model
Date Submitted: 10-29-19

Detailed Report (PDF)

AWR Report


Single Instance R12 Batch

Company | System | # of Workers | Units per Hour | Benchmark | Version | Date Submitted | Disclosure Report
Order-to-Cash Large
2-Tier  
Oracle (Oracle Cloud Infrastructure)
Database Tier: Oracle® (1-Node VM DB System) shape VM.Standard2.16 (16-cores, 240 GB)
Applications Tier: Oracle® VM shape VM.Standard2.16 (16-cores, 240 GB)
# of Workers: 32
Units per Hour: Read complete report to apply performance to sizing. Order-to-Cash 90,744 Order Lines/Hr (for metered workers - closest match to reported 11.5.10 processes). This computed throughput is for selected processes only.
Version: R12 (12.2.7) Large Model
Date Submitted: 7-29-19

Detailed Report (PDF)

AWR Report


Company | System | # of Workers | Units per Hour | Benchmark | Version | Date Submitted | Disclosure Report
Payroll Large/Extra-Large
2-Tier  
Oracle (Oracle Cloud Infrastructure)
Database Tier: Oracle® (1-Node VM DB System) shape VM.Standard2.16 (16-cores, 240 GB)
Applications Tier: Oracle® VM shape VM.Standard2.16 (16-cores, 240 GB)
# of Workers: 30
Units per Hour: Read complete report to apply performance to sizing. Payroll 319,943 Employees/Hr (for metered workers - closest match to reported 11.5.10 processes). This computed throughput is for selected processes only.
Version: R12 (12.2.7) Large Model
Date Submitted: 4-9-19

Detailed Report (PDF)

AWR Report


Oracle Applications Release 12 (12.2.5) Benchmark Results

Single Instance R12 OLTP

Company | System | Work Load | Average Response Time | Benchmark | Version | Date Submitted | Disclosure Report
OLTP Medium
3-Tier  
Oracle (Cloud)
Database Tier: Oracle® Public Cloud 16.2.2 OC4M (8-cores, 120 GB)
Applications Tier: Oracle® Public Cloud 16.2.2 OC4M (8-cores, 120 GB)
Work Load: Online: 2,400 Users
Average Response Time: Sub-sec (OATS)
Benchmark: HR Self-Service Flow
Version: R12 (12.2.5) Extra-Large Model
Date Submitted: 9-21-17

Detailed Report (PDF)

AWR Report

Oracle (Cloud)
Database Tier: Oracle® Public Cloud 16.2.2 OC4M (8-cores, 120 GB)
Applications Tier: Oracle® Public Cloud 16.2.2 OC4M (8-cores, 120 GB)
Work Load: Online: 1,000 Users
Average Response Time: 1.43 sec (OATS)
Benchmark: Order-to-Cash Flow
Version: R12 (12.2.5) Extra-Large Model
Date Submitted: 9-21-17

Detailed Report (PDF)

AWR Report

Single Instance R12 Batch
Company | System | # of Workers | Units per Hour | Benchmark | Version | Date Submitted | Disclosure Report
Order-to-Cash Large
2-Tier  
Oracle (Cloud)
Database Tier: Oracle® Public Cloud 16.2.2 OC5 (4-cores, 30 GB)
Applications Tier: Oracle® Public Cloud 16.2.2 OC3M (4-cores, 60 GB)
# of Workers: 1, 8, 16
Units per Hour: Read complete report to apply performance to sizing. Order-to-Cash 85,763 Lines/Hr (Note that workload is substantially increased from 12.0.4 & 11.5.10 processes). This computed throughput is for selected processes only.
Version: R12 (12.2.5) Large Model
Date Submitted: 09-13-16

Detailed Report (PDF)

AWR Report

Oracle Applications Release 12 (12.1.3) Benchmark Results

Single Instance R12 OLTP

Company | System | Work Load | Average Response Time | Benchmark | Version | Date Submitted | Disclosure Report
OLTP X-Large
3-Tier  
Oracle (Sun)
Database Tier:
Oracle® SPARC M7-8 server equipped with 4 × 4.13 GHz Thirty-Two Core SPARC M7 processors (128-cores)
Applications Tier:
All tiers hosted on one server (above) as four discrete environments (4 DB servers & 4 App Tier servers)
Work Load: Online: 20,000 Users
Average Response Time / Benchmark: Sub-sec (OATS) Order to Cash Flow; Sub-sec (OATS) iProcurement Flow; Sub-sec (OATS) Customer Service Flow; Sub-sec (OATS) HR Self-Service Flow; Sub-sec (OATS) Financials Flow
Version: R12 (12.1.3) Extra-Large Model
Date Submitted: 10-19-15

Detailed Report (PDF)

AWR Report 1

AWR Report 2

AWR Report 3

AWR Report 4

Fujitsu
Database Tier:
Fujitsu® M10-4S server equipped with 4 × 3.7 GHz Sixteen-Core SPARC64 X+ processors (only 16-cores active)
Applications Tier:
Fujitsu® M10-4S server equipped with 4 × 3.7 GHz Sixteen-Core SPARC64 X+ processors (only 16-cores active)
Work Load: Online: 4,000 Users
Average Response Time: Sub-sec (OATS)
Benchmark: HR Self-Service Flow
Version: R12 (12.1.3) Extra-Large Model
Date Submitted: 08-27-14

Detailed Report (PDF)

AWR Report

Oracle (Sun)
Database Tier:
Oracle® SPARC M6-32 server equipped with 8 x 3.6 GHz Twelve-Core SPARC M6 processors (96-cores)

Applications Tier:
All tiers hosted on one server (above) as four discrete environments
Work Load: Online: 18,500 Users
Average Response Time / Benchmark: 1.1 sec (OATS) Order to Cash Flow; 1.1 sec (OATS) iProcurement Flow; Sub-sec (OATS) Customer Service Flow; Sub-sec (OATS) HR Self-Service Flow; Sub-sec (OATS) Financials Flow
Version: R12 (12.1.3) Extra-Large Model
Date Submitted: 04-08-14

Detailed Report (PDF)

AWR Report 1

AWR Report 2

AWR Report 3

AWR Report 4

OLTP Large
3-Tier  
Fujitsu
Database Tier:
Fujitsu® M10-4S server equipped with 4 × 3.7 GHz Sixteen-Core SPARC64 X+ processors (only 16-cores active)
Applications Tier:
Fujitsu® M10-4S server equipped with 4 × 3.7 GHz Sixteen-Core SPARC64 X+ processors (only 16-cores active)
Work Load: Online: 600 Users
Average Response Time: Sub-sec (OATS)
Benchmark: iProcurement Flow
Version: R12 (12.1.3) Extra-Large Model
Date Submitted: 08-27-14

Detailed Report (PDF)

AWR Report

Fujitsu
Database Tier:
Fujitsu® M10-4S server equipped with 4 × 3.7 GHz Sixteen-Core SPARC64 X+ processors (only 16-cores active)
Applications Tier:
Fujitsu® M10-4S server equipped with 4 × 3.7 GHz Sixteen-Core SPARC64 X+ processors (only 16-cores active)
Work Load: Online: 1,120 Users
Average Response Time: Sub-sec (OATS)
Benchmark: Order to Cash Flow
Version: R12 (12.1.3) Extra-Large Model
Date Submitted: 08-27-14

Detailed Report (PDF)

AWR Report

Single Instance R12 Batch

Company | System | # of Workers | Units per Hour | Benchmark | Version | Date Submitted | Disclosure Report
Order-to-Cash Large
2-Tier  
Oracle (Sun)
Database Tier:
Oracle® Public Cloud 16.2.2 OC4M (8-cores, 120 GB)

Applications Tier:
Oracle® Public Cloud 16.2.2 OC3M (4-cores, 60 GB)
# of Workers: 1, 8, 11, 16
Units per Hour: Read complete report to apply performance to sizing. Order-to-Cash 212,389 Lines/Hr (Note that workload is substantially increased from 12.0.4 & 11.5.10 processes). This computed throughput is for selected processes only.
Version: R12 (12.1.3) Large Model
Date Submitted: 09-15-16
Detailed Report (PDF)
AWR Report
Oracle (Sun)
Database Tier:
Oracle® SPARC T7-1 server equipped with 1 × 4.13 GHz Thirty-Two Core SPARC M7 processor (32-cores)

Applications Tier:
All tiers hosted on one server (above)
# of Workers: 1, 10, 64
Units per Hour: Read complete report to apply performance to sizing. Order-to-Cash 280,243 Lines/Hr (Note that workload is substantially increased from 12.0.4 & 11.5.10 processes). This computed throughput is for selected processes only.
Version: R12 (12.1.3) Large Model
Date Submitted: 09-17-15
Detailed Report (PDF)
AWR Report
Cisco
Database Tier:
Cisco® UCS™ B200 M4 server equipped with 2 × 2.60 GHz Intel® Xeon™ Fourteen-Core E5-2697 v3 processors (28-cores)

Applications Tier:
All tiers hosted on one server (above)
# of Workers: 1, 8, 56
Units per Hour: Read complete report to apply performance to sizing. Order-to-Cash 243,803 Lines/Hr (Note that workload is substantially increased from 12.0.4 & 11.5.10 processes). This computed throughput is for selected processes only.
Version: R12 (12.1.3) Large Model
Date Submitted: 09-04-14
Detailed Report (PDF)
AWR Report
Cisco
Database Tier:
Cisco® UCS™ B200 M3 server equipped with 2 x 2.90 GHz Intel® Xeon™ Eight-Core E5-2690 processors (16-cores)

Applications Tier:
All tiers hosted on one server (above)
# of Workers: 1, 10, 32
Units per Hour: Read complete report to apply performance to sizing. Order-to-Cash 232,739 Lines/Hr (Note that workload is substantially increased from 12.0.4 & 11.5.10 processes). This computed throughput is for selected processes only.
Version: R12 (12.1.3) Large Model
Date Submitted: 09-14-12
Detailed Report (PDF)
AWR Report
Payroll Extra-Large
2-Tier  
Oracle (Cloud)
Database Tier:
Oracle® Public Cloud 16.2.2 OC3M (4-cores, 60 GB)

Applications Tier:
Oracle® Public Cloud 16.2.2 OC3M (4-cores, 60 GB)
# of Workers: 24, 4, 2
Units per Hour: Read complete report to apply performance to sizing. Payroll 355,114 Employees/Hr (for metered workers - closest match to reported 11.5.10 processes). This computed throughput is for selected processes only.
Version: R12 (12.1.3) Extra-Large Model
Date Submitted: 8-15-16
Detailed Report (PDF)
AWR Report
Oracle (Sun)
Database Tier:
Oracle® SPARC T7-1 server equipped with 1 × 4.13 GHz Thirty-Two Core SPARC M7 processor (32-cores)

Applications Tier:
All tiers hosted on one server (above)
# of Workers: 224, 112, 48, 32
Units per Hour: Read complete report to apply performance to sizing. Payroll 1,527,494 Employees/Hr (for metered workers - closest match to reported 11.5.10 processes). This computed throughput is for selected processes only.
Version: R12 (12.1.3) Extra-Large Model
Date Submitted: 10-19-15
Detailed Report (PDF)
AWR Report
Cisco
Database Tier:
Cisco® UCS™ B200 M4 server equipped with 2 × 2.60 GHz Intel® Xeon™ Fourteen-Core E5-2697 v3 processors (28-cores)

Applications Tier:
All tiers hosted on one server (above)
# of Workers: 56, 28
Units per Hour: Read complete report to apply performance to sizing. Payroll 1,125,281 Employees/Hr (for metered workers - closest match to reported 11.5.10 processes). This computed throughput is for selected processes only.
Version: R12 (12.1.3) Extra-Large Model
Date Submitted: 09-04-14
Detailed Report (PDF)
AWR Report
IBM
Database Tier:
IBM® Power System S824 server equipped with 2 × 3.525 GHz POWER8™ Twelve-Core processors (Only 12-cores Active)

Applications Tier:
All tiers hosted on one server (above)
# of Workers: 48, 42, 32, 24, 20
Units per Hour: Read complete report to apply performance to sizing. Payroll 1,090,909 Employees/Hr (for metered workers - closest match to reported 11.5.10 processes). This computed throughput is for selected processes only.
Version: R12 (12.1.3) Extra-Large Model
Date Submitted: 03-27-14
Detailed Report (PDF)
AWR Report
2-Tier  
Cisco
Database Tier:
Cisco® UCS™ B200 M3 server equipped with 2 x 2.70 GHz Intel® Xeon™ Twelve-Core E5-2697 v2 processors (24-cores)

Applications Tier:
All tiers hosted on one server (above)
# of Workers: 48, 24
Units per Hour: Read complete report to apply performance to sizing. Payroll 1,017,639 Employees/Hr (for metered workers - closest match to reported 11.5.10 processes). This computed throughput is for selected processes only.
Version: R12 (12.1.3) Extra-Large Model
Date Submitted: 08-13-13
Detailed Report (PDF)
AWR Report
Cisco
Database Tier:
Cisco® UCS™ B200 M3 server equipped with 2 x 2.90 GHz Intel® Xeon™ Eight-Core E5-2690 processors (16-cores)

Applications Tier:
All tiers hosted on one server (above)
# of Workers: 32, 20, 16
Units per Hour: Read complete report to apply performance to sizing. Payroll 839,865 Employees/Hr (for metered workers - closest match to reported 11.5.10 processes). This computed throughput is for selected processes only.
Version: R12 (12.1.3) Extra-Large Model
Date Submitted: 09-14-12
Detailed Report (PDF)
AWR Report
Oracle (Sun)
Database Tier:
Oracle® Sun Server™ X3-2L server equipped with 2 x 2.90 GHz Intel® Xeon™ Eight-Core E5-2690 processors (16-cores)

Applications Tier:
All tiers hosted on one server (above)
# of Workers: 16, 20, 32
Units per Hour: Read complete report to apply performance to sizing. Payroll 789,515 Employees/Hr (for metered workers - closest match to reported 11.5.10 processes). This computed throughput is for selected processes only.
Version: R12 (12.1.3) Extra-Large Model
Date Submitted: 04-09-12
Detailed Report (PDF)
AWR Report

Note: The Oracle E-Business Suite Benchmarking Kit is no longer being provided to partners. The information on this page is provided only for historical reference for previously published benchmarks.

1. Benchmark Structure

The Oracle R12 E-Business Standard Benchmark follows the E-Business 11i (11.5.10) benchmark model by combining Online transaction execution by simulated users with concurrent Batch processing to model a "Day-in-the-Life" scenario for a global enterprise. Initially, benchmark kits are to be offered in 'Extra-Small,' 'Small,' and 'Medium' sizes, with a 'Large' kit to follow.

Partners can elect to execute the standard benchmark, which includes both the OLTP and Batch workloads, or a partial benchmark consisting of just OLTP execution (potentially for WAN testing) or just Batch execution (which avoids the need for load-driving systems).

2. Benchmark Execution

Partners executing this kit can obtain instructions, scripts, and a copy of an expanded Oracle database with sample data. They provide their own hardware, diagnostic and analytic tools, and load-driving system. Typically, partners tune the online transaction execution and the two batch jobs separately before attempting an auditable run. Note that Oracle may be able to bundle the OATS (Oracle Application Testing Suite) load tool with the Toolkit in the future. Also note that benchmarks can only be executed on supported versions and combinations of Oracle and partner software and hardware products.

The run starts with the ramp-up of the simulated online transaction users. When the desired number of users is 'up and running' in a stable fashion, the Order-to-Cash batch job is started, which initiates data sampling. The Payroll batch job may be started at any time once thirty minutes have elapsed from the start of Order-to-Cash. Sixty minutes (one hour) after the Order-to-Cash job starts, all batch execution must be complete and the data sampling period ends. The online users are then ramped down. This timeline is subject to change as concurrency issues and checkpoints are evaluated; changes would be communicated through the benchmark workgroup.

This timeline is modified appropriately for the execution of a partial benchmark.
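For illustration only, the sketch below encodes the timeline above as a simple check and shows how a Lines/Hr figure could be derived from work completed during the one-hour sampling window. The function name, the timestamps, the completed-line count, and the units-per-hour formula are assumptions for this example, not part of the benchmark kit; actual figures come from the audited run logs.

```python
from datetime import datetime, timedelta

# Hypothetical timestamps for one auditable run (illustrative values only).
otc_start = datetime(2020, 1, 15, 9, 0)        # Order-to-Cash start = sampling start
payroll_start = datetime(2020, 1, 15, 9, 35)   # must be at least 30 min after OTC start
batch_complete = datetime(2020, 1, 15, 9, 58)  # all batch work finished
sampling_end = otc_start + timedelta(hours=1)  # sampling window closes 60 min after OTC start

def timeline_ok(otc_start, payroll_start, batch_complete):
    """Check a run against the timeline described above."""
    payroll_gap_ok = payroll_start >= otc_start + timedelta(minutes=30)
    batch_done_ok = batch_complete <= otc_start + timedelta(hours=1)
    return payroll_gap_ok and batch_done_ok

# Assumed Units-per-Hour calculation: completed units divided by elapsed hours.
order_lines_completed = 150_000  # illustrative count of order lines processed in the window
elapsed_hours = (sampling_end - otc_start).total_seconds() / 3600
lines_per_hour = order_lines_completed / elapsed_hours

print(timeline_ok(otc_start, payroll_start, batch_complete))  # True
print(f"{lines_per_hour:,.0f} Lines/Hr")                      # 150,000 Lines/Hr
```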

3. Submitting Results

Partners are required to submit the recorded OLTP response times and the Batch execution times, along with detailed supporting information, for auditing by Oracle. Logs, the output of audit scripts, and similar artifacts are collected to ensure transparency and reproducibility. Disclosure of tuning actions can also directly benefit Oracle's customer base.

4. R12 vs. 11.5.10

For continuity and trending, differences between R12 and its predecessor benchmark, 11.5.10, have been kept to a minimum. However, partner input and Oracle directives have led to a few noteworthy updates.

Four of the ~30 OLTP transactions have been updated from 'Forms-based' to 'Web-based.' Three of the batch processes that had been 'single-threaded only' are now multi-thread capable.

11i (11.5.10) runs were set up for 'maximum user load' (100%), '90% user load,' and '70% user load.' For R12 these are replaced with 'maximum user load' (100%), '50% user load,' and 'single-user' reference times, as illustrated below. The single-user reference time is optional in benchmark submissions until OATS can collect this data.
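A minimal sketch of the R12 load levels follows; the maximum user count is a made-up value for illustration, since the actual count is whatever the audited run achieves.

```python
# Hypothetical example: derive the R12 load levels from a measured maximum user count.
max_users = 3_000  # assumed 'maximum user load' (100%) result, for illustration only

load_levels = {
    "100% user load": max_users,
    "50% user load": max_users // 2,       # 1,500 users
    "single-user reference": 1,            # optional until OATS can collect this data
}

for label, users in load_levels.items():
    print(f"{label}: {users} users")
```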

5. Posting Results

Once a partner's results have passed audit, they will be summarized in a report to be posted on Oracle's website. The report highlights the specific results and provides supporting information about the workload, hardware implementation, system utilization, software versions, tuning steps and so forth. Proper disclosure of how the results were achieved is essential to credibility in the marketplace.

6. Performance Claims

Oracle recognizes that partners invest substantial resources in undertaking the R12 E-Business Standard Benchmark and will wish to share their accomplishments with the marketplace. Nevertheless, Oracle policy is to maintain a level playing field and to insist on civil discourse and technical transparency in performance claims. Performance claims can be made using the following rules.

Primary Metrics for Comparison
The primary metrics for EBS R12 benchmarks are listed below. The primary metrics are the only metrics that can be published or used in fence claims. No other benchmark performance metrics are allowed for claims or comparison.

  • Number of Online users
  • Number of Order Lines per Hour
  • Number of Checks per Hour

Minimum Data for Disclosure
The publication of a performance claim in marketing materials, or in any other external publication, must include the corresponding minimum data (a checklist sketch follows the list below). Anyone can reference any other published benchmark as long as the minimum data is included in the reference and the claim meets the criteria set in the fence claim section.

  • EBS version, database version, and OS version
  • Server configuration: processor chips, cores, threads, frequency, cache, and memory
  • Primary benchmark metrics
  • Model size (comparisons of results across different size models are not allowed)
  • Date that the claim is validated, as an "As of" date. Note that this is not the date the benchmark was published, but the date the claim was validated.
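A minimal sketch of such a completeness check is shown below. The Claim dictionary, its field names, and the sample values are hypothetical; they are not part of the benchmark kit or its audit tooling.

```python
# Hypothetical completeness check for the minimum disclosure data listed above.
REQUIRED_FIELDS = [
    "ebs_version", "database_version", "os_version",
    "processor_chips", "cores", "threads", "frequency", "cache", "memory",
    "primary_metrics",   # online users, order lines/hr, checks/hr (as applicable)
    "model_size",
    "as_of_date",        # date the claim was validated
]

def missing_disclosure_fields(claim: dict) -> list[str]:
    """Return the minimum-data fields that are absent or empty in a claim."""
    return [f for f in REQUIRED_FIELDS if not claim.get(f)]

# Illustrative, deliberately incomplete claim: the server configuration is missing.
claim = {
    "ebs_version": "R12 (12.1.3)",
    "database_version": "12c",
    "os_version": "Linux",
    "primary_metrics": {"online_users": 4000},
    "model_size": "Extra-Large",
    "as_of_date": "2014-08-27",
}
print(missing_disclosure_fields(claim))
# ['processor_chips', 'cores', 'threads', 'frequency', 'cache', 'memory']
```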

Fence Claims
The intent is to document a process for making comparisons or leadership claims, with the goal of reducing incomplete or flawed comparisons. Claims must reference only the primary metrics. Fence claims should be segmented into the following categories (a validation sketch follows the examples below):

  • Number of processor chips, cores, or threads
  • Operating Systems
    • Windows
    • Linux
    • Unix

Price Performance Claims
Price performance claims are not allowed because pricing information is not part of the eBS benchmark submission; such claims would therefore lead to inaccurate or non-standardized price performance results.

Allowable Fence Claim Examples

  • A new record was set by achieving the highest Oracle eBS R12 large model size result
    • All three primary metrics must show the highest result for a vendor to make this claim
  • Best 4-core performance on Oracle eBS R12 medium kit.
    • Model size and benchmark version are required
  • Highest Oracle eBS R12 medium size result on Linux
    • This claim is valid as operating system is a fence claim
    • No other Linux result of equal user count can be published for this claim to be used. If an equal result is published, then the vendor must use an additional fence claim, e.g., processor chips, cores, or threads
  • Per core leadership on Oracle eBS R12 Payroll Batch medium model size
    • This claim is valid as cores are a fence claim. It must show the checks per hour in the minimum data disclosed. The number of workers/threads should be documented for batch comparisons if it differs between the results being compared.
  • Best Linux per processor/chip result on Oracle eBS R12 small model size
    • This claim is valid as operating system and processor chip are fence claims
  • Highest per core Unix performance on Oracle RAC eBS R12 large model size
    • This claim is valid as RAC vs. single-instance database is a fence claim

Not Allowed Fence Claim Examples

  • Best 8-core Oracle eBS R12 medium model size based on overall response time
    • Response time is not a primary metric and is therefore not allowable for publication.
  • Price performance leadership on Oracle eBS R12 small kit
    • Pricing data is not a metric in the OASB benchmark, and therefore is not allowable for publication
  • Best overall Oracle eBS result, or achieves new Oracle world record
    • Too broad a statement; it would require mention of kit size and version
  • Performance claims that extrapolate higher user counts than audited
    • Claims can only be made on published results; no extrapolations or estimates can be published. For example, a system that ran 3,000 users at 45% utilization could not claim that it would run 6,000 users at 90%.
  • eBS R12 benchmark results should not be compared to 11i or previous benchmark results.
  • RAC results should not be compared to single instance results.
  • Comparisons of performance results of different size benchmark kits are not allowed.
    • Highest users per processor chip at 500 users on Oracle eBS R12 small model size achieves leadership against a competitor's 300 users per processor chip on Oracle eBS R12 medium model size. This would not be a valid or allowed claim due to workload differences.
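To make the rules above concrete, here is a small, hypothetical sketch of how a claim could be screened against them. The rule set, field names, and logic are illustrative assumptions, not an official validation tool; the examples above also treat RAC vs. single instance as a fence.

```python
# Hypothetical screening of a fence claim against the rules described above.
PRIMARY_METRICS = {"online_users", "order_lines_per_hour", "checks_per_hour"}
ALLOWED_FENCES = {"processor_chips", "cores", "threads", "operating_system"}

def claim_problems(claim: dict) -> list[str]:
    """Return reasons a claim would not be allowed (an empty list means no obvious problem)."""
    problems = []
    if not set(claim.get("metrics", [])) <= PRIMARY_METRICS:
        problems.append("uses a metric other than the primary metrics")
    if claim.get("mentions_price"):
        problems.append("price performance claims are not allowed")
    if claim.get("compared_model_sizes", 1) > 1:
        problems.append("comparisons across different model sizes are not allowed")
    if claim.get("extrapolated"):
        problems.append("claims cannot extrapolate beyond audited results")
    if claim.get("fence") and claim["fence"] not in ALLOWED_FENCES:
        problems.append("unknown fence category")
    return problems

# Example: "best 8-core result based on overall response time" fails because
# response time is not a primary metric.
print(claim_problems({"metrics": ["response_time"], "fence": "cores", "extrapolated": False}))
```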

7. Oracle EBS Benchmark Workgroup Meetings:

Workgroup conference calls will be arranged and hosted by Oracle at least quarterly. These meetings are the focal point for Oracle to share news about benchmark and process changes with the members, and for members to submit changes to the overall benchmark process and to this document. Oracle will maintain a process for the members to propose changes. Oracle may elect to implement changes or to submit the proposed change to a vote by the members. Each company that is an active member of the workgroup will have one vote.

Revision History:
* Fence Claims Rules last updated in January 2012.

APPEAL PROCESS

This is an appeal process for challenging errors in benchmark execution, documentation, benchmark test results, or errors in publishing benchmark claims. An appeal should be carried out as follows:

The appealing member representative should send a note explaining the issue to the Oracle chair prior to the next Workgroup Meeting. Oracle encourages the members to work out a resolution through their benchmark member reps. If this does not work, then at the next meeting there will be a status update and formal corrective actions will be requested. Oracle has the option of making a decision on the problem or submitting it to the Benchmark Partner Workgroup for a vote. Issues should be corrected within 14 days, or as agreed upon in advance by Oracle and the members involved. If issues are not resolved, corrective action may be necessary, which could include a formal request for corrective action by Oracle, temporary delisting of the benchmark from the Oracle website, or, ultimately, temporary loss of voting status.