Tuning Messages in an Oracle E-Business Suite – ESB Integration Environment

By Ron Batra [Mar-08]

Introduction and Scope

In a large-scale A2A (application-to-application) integration of Oracle E-Business Suite into an Enterprise Application Framework using Oracle SOA Suite, let us consider the sub-case of a publication service (mainly Oracle Project Accounting, with some other data elements). This service publishes data elements to the Oracle Enterprise Service Bus (ESB), from where five other applications subscribe to them. The focus of this article is to show an iterative process for tuning the messages generated in an event-driven architecture framework; it is on message sizing, not so much on service invocation and related topics such as adapter selection. Figure 1 shows the conceptual architecture.

Event Driven Architecture and Messaging:

In an event-driven architecture, changes to data elements can be transferred in real time or near real time through an ESB framework. These data changes are converted into a message that is published to the ESB. In older file-transfer-based integration methods, delta transactions were simply collected and written to a file, and this file was transferred to the target system(s). In most cases, the file size was never an issue. In an event-driven architecture and an ESB framework, message sizing and the volume of messages are among the key design decisions to be made. In most cases, business events cannot be changed, so the focus is on tuning message size and throughput. To do this well, data cardinality and affinity need to be closely examined and reflected in the final design.

Considerations on Integration Design Patterns:

In this scenario of integrating N+ enterprise applications, the canonical model design pattern works well: it allows domain value mapping as well as the dynamic addition of new subscribers. In this case, while designing the publication service, we started off with five subscription services; however, that number can change dynamically without any major code rework. It is usually worthwhile to invest time upfront in designing the canonical model and to rigorously validate the business requirements that drive the design of the service.

Oracle E-Business Suite Integration Repository:

The Oracle Integration Repository, maintained at http://irep.oracle.com, lists the integration methods available for each product family in the E-Business Suite, the most common choices being Concurrent Programs, EDI, Interface Views, Java, Open Interfaces, PL/SQL, Web Services and XML Gateway Maps. In our case of an outbound interface in an ESB framework (publishing to the ESB), our business requirements drove us to extract data from different product families and combine them into one large message. We chose to develop a system of views and triggers, as explained later in this article.

Data Elements to be published:

The design of the publication service is driven largely by the requirements of the subscribers. However, reading the tea leaves and trying to forecast future data needs can often pay off by avoiding a redesign of the publication service later. Data elements from Human Resources, General Ledger, Purchasing and Project Accounting had to be combined into one large message to be published to the ESB.

 

Module               Key Data Elements
HR                   Organization Units, Organization ID, People, Assignments
GL                   GL Code Combinations, GL Periods
PO                   Vendors
Project Accounting   Project ID, Project Name, Project Cost, Project Cost Distribution Lines, Periods, Tasks, Expenditures (Items, Groups, Types etc.)

Table 1: Data Elements Listing

Design Approach 1: Complex SQL View and Staging Tables

In the first iteration of the design, we used a combination of views, PL/SQL program units and a parent-child relationship structure created across four staging tables. A fifth table had to retrieve a data set based on a Project ID field, and the query had to navigate the parent-child relationships among the four tables for every result set. We traced the SQL calls made to the database using traditional Oracle database tracing methods; the trace of the adapter code revealed a large number of fetches, even with bind variables, so there was clearly room for improved efficiency here.
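
The tracing itself uses standard database facilities. A minimal sketch follows, assuming we have identified the adapter session's SID and serial number from v$session; DBMS_MONITOR is the supported tracing interface in Oracle Database 10g and later.

BEGIN
   -- Enable extended SQL trace for the adapter's session, capturing
   -- wait events and bind variable values.
   DBMS_MONITOR.session_trace_enable(
      session_id => :sid,         -- from v$session.sid
      serial_num => :serial_num,  -- from v$session.serial#
      waits      => TRUE,
      binds      => TRUE);
END;
/
-- The resulting trace file (in user_dump_dest) can then be summarized, e.g.:
--    tkprof <trace_file>.trc adapter_trace.txt sys=no sort=fchela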

The key point to note here is the number of joins the adapter code had to perform to put together a result set that could be transformed into an XML message. This led to a significant difference between SQL elapsed time and actual execution time. The gap can grow even further if the Oracle Application Server and database server are in different locations, where network latency comes into play, and it applies to every message published. The average message size was 54KB, and we averaged about 10,000 messages per day.
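
To illustrate the point, the adapter's query was shaped roughly like the hypothetical sketch below; the table and column names are invented for illustration and do not reflect the actual schema.

SELECT h.project_id,
       h.project_name,
       t.task_id,
       e.expenditure_type,
       d.amount
FROM   xx_proj_hdr_stg  h   -- staging table 1: project headers
JOIN   xx_proj_task_stg t   -- staging table 2: tasks
  ON   t.project_id = h.project_id
JOIN   xx_exp_item_stg  e   -- staging table 3: expenditure items
  ON   e.task_id = t.task_id
JOIN   xx_cost_dist_stg d   -- staging table 4: cost distribution lines
  ON   d.expenditure_item_id = e.expenditure_item_id
WHERE  h.project_id = :project_id;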

Design Approach 2: Adding GROUP BY Clause

In the second iteration, we changed the SQL of the Delta Transactions View and added a GROUP BY clause. This was done to reduce the number of transactions populated in the cost transactions table. The message size dropped to about 40KB, and we averaged about 15,000 messages per day.
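
As a hypothetical sketch (again with invented names), the change amounted to summarizing individual cost distribution lines into one row per project, task and period:

CREATE OR REPLACE VIEW xx_delta_txn_v AS
SELECT d.project_id,
       d.task_id,
       d.gl_period,
       SUM(d.amount)            AS period_amount,   -- lines collapsed per group
       COUNT(*)                 AS line_count,
       MAX(d.last_update_date)  AS last_update_date
FROM   xx_cost_dist_stg d
GROUP BY d.project_id, d.task_id, d.gl_period;

Each published message then carries one summarized row per grouping instead of every individual distribution line.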

Design Approach 3: Flattened Table Structure with Simplified Logic

To reduce the recursive SQL and the joins coming from the parent-child relationships, we decided to create a flattened (essentially de-normalized) table in which data elements are referenced by project ID and date. The temp table receives only records that do not have an ID in table 1. Once a message is published on the ESB, we do not need to keep the same data elements in the temp table; the ESB queues and their persistence take care of managing data consumption by the subscribers.
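
A minimal sketch of this approach, with hypothetical object names: a de-normalized table keyed by project ID and date, populated only with rows that have not already been published.

CREATE TABLE xx_pub_flat (
   project_id        NUMBER       NOT NULL,
   txn_date          DATE         NOT NULL,
   project_name      VARCHAR2(60),
   task_id           NUMBER,
   expenditure_type  VARCHAR2(30),
   amount            NUMBER
);

-- Populate only the rows whose ID is not already in "table 1"
-- (xx_published_hdr here stands in for table 1 of the original design).
INSERT INTO xx_pub_flat
   (project_id, txn_date, project_name, task_id, expenditure_type, amount)
SELECT s.project_id, s.txn_date, s.project_name, s.task_id,
       s.expenditure_type, s.amount
FROM   xx_delta_source_v s   -- hypothetical view over the source modules
WHERE  NOT EXISTS (SELECT 1
                   FROM   xx_published_hdr p
                   WHERE  p.project_id = s.project_id);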

We found our messages reduced to 14KB, and while the number of messages went up, we were able to process 50,000 messages in 2 hours. This was well within the acceptable processing window.

Determining message size:

We combined two measurements to determine message size. In an ESB environment, a message is a combination of data elements and an XML structure (tags). The data element sizing is calculated from the tables where the data originates, e.g. VARCHAR2(60) + DATE + NUMBER; one can take the maximum possible and assume that as the upper limit for the data elements. To the bytes obtained from this, we added the size of the XML tags relevant to the data mapping. Since XML is text, this is relatively uncomplicated. However, this is an estimate and not the actual number of bytes that will pass through the ESB, where transport methods and protocols can add their own overhead to a message.
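
As a hypothetical worked example, the estimate for one repeating row of VARCHAR2(60) + DATE + NUMBER can be computed in SQL. An element <tag>value</tag> adds 2*LENGTH(tag) + 5 bytes of markup ('<' + name + '>' plus '</' + name + '>'), and the data sizes below are illustrative upper bounds for text-rendered values:

SELECT SUM(2 * LENGTH(tag_name) + 5)                   AS tag_bytes,
       SUM(max_data_bytes)                             AS data_bytes,
       SUM(2 * LENGTH(tag_name) + 5 + max_data_bytes)  AS est_bytes_per_row
FROM  (SELECT 'project_name' AS tag_name, 60 AS max_data_bytes FROM dual
       UNION ALL
       SELECT 'txn_date', 10 FROM dual    -- DATE rendered as 'YYYY-MM-DD' text
       UNION ALL
       SELECT 'amount',   15 FROM dual);  -- NUMBER rendered as text

Multiplying the per-row figure by the expected number of repeating rows, and adding the fixed envelope and header tags, gives the estimated message size.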

Conclusion:

Tuning messages can start at the data extraction layer. Depending on the complexity of the data sets and their affinity, a publication service can use traditional SQL tuning methods to enable adequate throughput of messages through the adapter. Depending on the integration environment and the number of end-points, a high volume of small messages may be processed faster than a smaller number of larger messages. Message sizing is a key element in an event-driven architecture.


Developing an Error Handling Framework using Oracle BPEL and BAM

By Ron Batra and Saumitra Chattopadhyay [Sep-08]

Overview

An efficient error handling framework is essential for the success of any enterprise-level Service Oriented Architecture (SOA) integration project. This article shows the development of a framework that integrates a real-time dashboard, error handling and alerts using Oracle Business Process Execution Language (BPEL) and Business Activity Monitoring (BAM) tools.

This common error handling framework can be used for auditing and error reporting for services and interfaces developed using Oracle BPEL/Enterprise Service Bus (ESB), as well as for extensions to the Oracle E-Business Suite developed using other tools. The auditing component allows staff to quickly view the run history of interfaces and other services. The error management component allows users to view errors reported by the BPEL/ESB framework and other services.

Approach

The exception framework consists of several Oracle BAM objects that are utilized by BPEL/ESB services and other non-BPEL services. This allows each BPEL service to report auditing information, errors, and basic reporting data in a consistent manner while leveraging code re-use. For each BAM object defined, there is also a corresponding BPEL/ESB service exposed as a web service that allows non-BPEL/ESB processes to report audit and error information to the framework.
This framework also sends notifications to the appropriate roles in case of any error. The BAM reports and dashboard incorporate security features so that only users with the appropriate authentication and authorization permissions have access to the functionality.

Design Assumptions

This solution is architected using the following assumptions:
· All BPEL services will submit error or audit information to the framework using the BAM objects defined as part of this specification.
· Applications other than BPEL processes that wish to submit errors to the framework have two options:
a. Calling a standard SOAP web service with an XML payload to submit their data.
b. Writing the audit and error information to a central table, which in turn will be synchronized with the BAM dashboard using Enterprise Link.
· Email notifications will be sent out when errors are encountered. Notification contact information will be stored in the BAM schema. One or many contacts will be possible for each type of error.
· A detailed error table will be created in a separate schema. This will store the detailed errors generated by the interfaces (a sketch of such a table follows this list).
· Interfaces and extensions developed using non-BPEL tools will populate the central exception table. This is important for a project with a large Oracle E-Business Suite development component.
· Errors in BPEL processes will be captured in BAM by sensors.
· Errors in non-BPEL processes will be captured in a central error table.
· Data from the error table will be sent to BAM using Enterprise Link.
· Error reports will be published through the BAM dashboard on a real-time basis.
· Error email notifications will be sent through BAM alerts.
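
A minimal sketch of the central error table described in these assumptions; the column names are illustrative, not the actual specification:

CREATE TABLE xx_common_error_log (
   error_id       NUMBER         NOT NULL,
   instance_id    VARCHAR2(100),           -- BPEL instance ID, when applicable
   service_name   VARCHAR2(200)  NOT NULL, -- interface/extension reporting the error
   error_type     VARCHAR2(30),            -- e.g. ADAPTER, TECHNICAL, BUSINESS
   error_code     VARCHAR2(30),
   error_message  VARCHAR2(4000),
   business_key   VARCHAR2(100),           -- e.g. order number
   status         VARCHAR2(15)   DEFAULT 'NEW',
   creation_date  DATE           DEFAULT SYSDATE,
   CONSTRAINT xx_common_error_log_pk PRIMARY KEY (error_id)
);
-- Enterprise Link reads from this table and pushes rows into the BAM
-- error data object that drives the dashboard and alerts.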

Conceptual Architecture


Figure 1: Conceptual architecture showing integration of BPEL and BAM in an error handling framework

BPEL Process Error Scenario

In all situations, BPEL services or processes will report errors to the framework using the supplied BAM objects. In a BPEL process environment, errors fall into the following three broad categories:

1. Errors in Adapters: These are generated by the adapter framework itself, and their handling differs for inbound and outbound interactions. In either case, when these errors occur, BPEL sends the error message to BAM using sensors.

2. Errors in BPEL Processes: Two types of errors can happen within the BPEL process.

Technical errors: Any activity can raise an error while executing. These can be caught directly through the built-in catch (fault handler) mechanism, and the error can be reported using the standard BAM sensor technique.
Business process errors: Any partner link may incur a business process error (e.g. a credit check could not be performed for a person). These errors can be captured in the BPEL process and reported to BAM using a sensor.

The calling BPEL process must assemble all of the relevant error information prior to passing the data to the BAM sensor.

3. Errors in non-BPEL processes

In any integration project, we need to develop a significant number of services in the end application, which in turn can be invoked as web services from BPEL. These services are developed using the core technology of that application. For example, if we are integrating E-Business Suite with other applications, we need to develop some services using PL/SQL and the scheduler (Concurrent Manager). Concurrent processes cannot manipulate the BAM objects directly, so it is expected that one of two approaches will be used with any concurrent process (PL/SQL) code developed:
· Calling a standard SOAP web service with an XML payload to submit the data.
· Writing the audit and error information to a central table, which will be synchronized with BAM using Enterprise Link, as sketched below.
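
As a sketch of the second option (hypothetical names, matching the error table sketched earlier; xx_error_id_s is an assumed sequence), concurrent program code could log errors through a small autonomous-transaction procedure:

CREATE OR REPLACE PROCEDURE xx_log_error (
   p_service_name IN VARCHAR2,
   p_error_code   IN VARCHAR2,
   p_error_msg    IN VARCHAR2,
   p_business_key IN VARCHAR2 DEFAULT NULL)
IS
   PRAGMA AUTONOMOUS_TRANSACTION;  -- keep the log row even if the caller rolls back
BEGIN
   INSERT INTO xx_common_error_log
      (error_id, service_name, error_type, error_code,
       error_message, business_key, status, creation_date)
   VALUES
      (xx_error_id_s.NEXTVAL, p_service_name, 'BUSINESS', p_error_code,
       SUBSTR(p_error_msg, 1, 4000), p_business_key, 'NEW', SYSDATE);
   COMMIT;
END xx_log_error;
/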

BPEL Process Audit Scenario

Capturing audit information about the interfaces is very important for providing performance/load/downtime statistics, as well as for satisfying regulatory requirements in some environments.
Users are typically interested in knowing (see the sketch after this list):
a. Min, max & average transactions passing through the interface on a daily/weekly/monthly basis.
b. Peak times for any interface.
c. Transactions failed by day/month.
d. Min, max & average time taken by an interface.
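
Assuming the audit information also lands in a central table (hypothetical names below), these statistics reduce to straightforward aggregate queries, for example per interface and day:

SELECT interface_name,
       TRUNC(start_time)                                  AS run_day,
       COUNT(*)                                           AS txn_count,
       SUM(CASE WHEN status = 'ERROR' THEN 1 ELSE 0 END)  AS failed_count,
       MIN((end_time - start_time) * 86400)               AS min_elapsed_secs,
       MAX((end_time - start_time) * 86400)               AS max_elapsed_secs,
       AVG((end_time - start_time) * 86400)               AS avg_elapsed_secs
FROM   xx_interface_audit
GROUP BY interface_name, TRUNC(start_time);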

Audit information can be captured in the BPEL process using sensors. At least two sensors, one at the beginning of the flow and another at the end, are required. In addition, audit information can be captured from other important activities within a BPEL process.

BPEL Process Report Scenario

In addition to error and audit information, users are interested in seeing reports based on the business data passing through the interfaces. For example, in a purchasing interface, users may want to know the maximum PO amount approved in a day, or the total PO amount approved/rejected.
To get these kinds of reports, we have to capture business data in a report object from a BPEL activity using sensors.

Error Correction and Resubmit

After receiving the error alert/notification, in some cases users will want to correct the data in the interface table and resubmit the process from the middleware, instead of resending the message from the end applications. For example, say an EDI 850 (an inbound sales order) sent by a customer fails to create an order in your ERP system due to a data issue in your system. In this case it may not be feasible to ask the customer to resend the same document. In our scenario, the users need a correction form from which they can query the error records, analyze the errors, correct them (if necessary) in the interface staging table(s) and finally resubmit the process.

Most ERP applications store inbound data in interface tables before importing it into the base tables. If that is not the case, a staging/interface table can be created in the middleware database server. In either case, error transactions are flagged as ‘Error’ in the interface table. The BPEL instance ID is also stored in the interface table; this helps link records in the error table with records in the interface table, as in the hypothetical query below.
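
For example (hypothetical table and column names, reusing the error table sketched earlier), the correction form's query can join the two tables on the BPEL instance ID:

SELECT i.order_number,
       i.status,
       e.error_code,
       e.error_message
FROM   xx_order_iface i             -- interface/staging table
JOIN   xx_common_error_log e        -- central error table
  ON   e.instance_id = i.bpel_instance_id
WHERE  i.status = 'ERROR'
AND   (i.order_number = :doc_number OR i.bpel_instance_id = :instance_id);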

A correction form can be created with the following features:
· Users will be able to query records using a unique business document number (e.g. order number) or the BPEL instance ID.
· Records will be displayed in the form, along with a detailed description of the error.
· Users can correct the fields that are allowed to be updated.
· Finally, they can resubmit the transaction: either the BPEL resubmit API can be called from the form to resubmit the old instance, or a new BPEL instance can be started.

Conclusion

Oracle BAM Active Studio is a robust web-based reporting tool for creating and delivering reports. From BAM Active Studio, power users can share reports with other users and create alerts for report delivery using email. Reports are either real-time reports, with live data updating on screen, or point-in-time reports. The BAM dashboard can also be linked to any enterprise portal.

BPEL is integrated with BAM on a real-time basis using sensors. By combining the capabilities of BPEL and BAM, an efficient and user-friendly integration framework can be created that can serve multiple purposes.