|SOA Best Practices: The BPEL Cookbook|
BPEL with Reliable Processing
Learn how to build a reusable, highly reliable business process using BPEL.
Published March 2006
As Web services and BPEL processes proliferate within an organization, quality of service becomes a distinguishing factor in the adoption of a particular service. How do you ensure that the service completes the requested work despite obstacles such as network failures or unavailable applications? Can the service be leveraged across different business processes? The answers to these questions determine the reusability of a business process: the more robust the process is, the more readily it can be reused across multiple applications.
In this installment of The BPEL Cookbook, we describe a business scenario comprising multiple applications. The scenario demonstrates the need for a BPEL process that can complete its work with assurance, and shows how such a process can be leveraged repeatedly in different business contexts. The article then walks you through building, step by step, a BPEL process that offers this high quality of service through intelligent retry logic. You also learn how the process can be enhanced with superior exception management, e-mail notification, and error logging.
Reusability of a service is the cornerstone of any service-oriented architecture (SOA) strategy. Organizations can derive the true value of any SOA implementation only if they can create a set of reusable services. These services will be then used by different departments or applications in different business contexts. In addition to the actual business value provided, reusability of a specific service is driven by the success history of the service. What is its failure rate? Does it have the ability to overcome network interruptions? Is the service resilient enough to recover from errors and exceptions? The higher the assurance the service can provide about its ability to complete the requested job, the better its chances are of being leveraged in different business contexts.
Consider the scenario shown in Figure 1: An enterprise needs to provision technical documentation of its products to its various partners. The level of access to the documentation depends on the partner type and the product documentation being requested. This information is stored in an Oracle database. As partners join and leave the network, provisioning information is modified (access is added/updated/deleted) through appropriate approvals and updates in multiple enterprise applications.
Figure 1 Entitlement provisioning environment
As entitlements are activated, deactivated, and modified in the entitlements database, notifications must be sent to Documentum. The order in which the entitlement changes are sent must match the order in which they are created. It is critical that no messages are lost, and a complete audit log must be maintained and logged to a central application database log.
BPEL can play a vital role in orchestrating the entitlement activation and deactivation. This BPEL process will work closely with the TIBCO messaging bus to deliver the messages reliably to Documentum. It will also be responsible for error logging and notification. The process has to perform the task efficiently and reliably enough that network interruptions or the unavailability of a Documentum application doesn't break it down. It should be capable of trying again and again to perform its task to completion. How do you develop such a BPEL process with reliable processing?
The rest of this article details a strategy for improving the quality of service of processing with BPEL. A key element of this retry strategy is the database, which records each instance's status and retry schedule. Keep in mind that this strategy is just one piece of the puzzle in improving the reliability and quality of service of the processes running in BPEL.
Let's take a look at the logic for designing such a BPEL process.
Figure 2 BPEL process logic
This BPEL process, which reads a record from the database and processes it, is kicked off by Oracle BPEL's database polling adapter. One of the last steps of the BPEL process is to report success or failure back to the database. A database procedure then determines, based on both the status and the number of retry attempts already made, whether the process needs to be retried; if so, it reschedules the record to be picked up in the future. The final step in the process is to call the log service. In addition to creating a log entry in the database, this service applies a set of rules to determine whether a message with a given status from a given process should trigger a notification. If so, it also identifies the e-mail template and the distribution list to use, and the appropriate information from the log is added to the e-mail.
This improves reliability both in cases where the problem may be self-correcting and in cases where human interaction is needed to fix it. In contrast to a partner-link retry, this approach retries the entire execution of the BPEL process, and it is more feature-rich than a simple retry loop within BPEL. This processing model is also easy to monitor and interact with externally: if you manage the create date and last-modified date of the process in the database, you can run queries against the database to see whether each record has succeeded, faulted, or is still awaiting processing. Additionally, it is easy to initiate a retry of an aborted record and to expedite a scheduled retry. Three elements are central to implementing this design: the status field, the polling view, and the status procedure, each described below.
In the next section, you learn how to build such a process.
Building the Sample
Let's build the process described above. First, you create the database tables to support the process, and then you use the BPEL PM Designer to model the process.
The status field should be the same one that is used by BPEL's database polling adapter to identify unprocessed records. It is helpful to create a convention to make it easy to identify records that are still being processed, those that completed successfully, and those that completed with an error. A couple ways to do this are by using number ranges or a prefix convention. Check CREATE_TB_DB_POLL_SOURCE.sql in the sample code download.
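As a concrete illustration of the number-range convention, here is a minimal Python sketch. The specific status values and helper names are assumptions for illustration only; they are not taken from the sample code download:

```python
# Illustrative status convention using number ranges (hypothetical values):
#   0-99 = unprocessed or in progress, 100-199 = success, 200+ = error.
STATUS_NEW = 0          # record just created; eligible for polling
STATUS_PROCESSING = 10  # picked up and being processed
STATUS_SENDING = 20     # message is being delivered
STATUS_SUCCESS = 100    # completed successfully
STATUS_FAULTED = 200    # completed with an error

def is_unprocessed(status: int) -> bool:
    """Records below the success range are still in flight."""
    return status < STATUS_SUCCESS

def is_error(status: int) -> bool:
    """Records in the error range completed with a fault."""
    return status >= STATUS_FAULTED
```

With a convention like this, the polling view and monitoring queries can classify records with simple range comparisons instead of enumerating individual status codes.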
The view should then expose only records whose Process Not Before column is null or earlier than the current SYSDATE. This is also an easy place to limit the number of records that should be processed. The view can also expose only the primary key of the record to be processed, making the view more lightweight. Check CREATE_VW_DB_POLL_SOURCE_VW.sql in the attached sample.
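The view's filtering logic can be sketched in Python as follows. The function name, field names, and default limit are hypothetical stand-ins for the SQL view, shown only to make the selection rule explicit:

```python
from datetime import datetime

def eligible_keys(records, now=None, limit=10):
    """Mimic the polling view: return only the primary keys of records
    whose process_not_before is null (None) or already in the past,
    capped at `limit` rows per poll."""
    now = now or datetime.now()
    keys = [r["id"] for r in records
            if r["process_not_before"] is None
            or r["process_not_before"] <= now]
    return keys[:limit]
```

Exposing only primary keys keeps the view cheap to query; the process reads the full record later through a separate partner link.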
The status procedure should have a flag to indicate an error, or a separate procedure can be used to identify an error state. For the error state, the procedure should identify whether the process instance should be retried and, if so, when. It then should update the record accordingly. The typical strategy is to retry a few times with a short interval between tries and then go to a larger number of retries with a much longer interval. Check SET_DB_POLL_SOURCE_FAULTED.sql in the sample code download.
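The retry cadence described above, a few quick retries followed by longer intervals, might look like this sketch. The function name, interval values, and retry counts are illustrative assumptions, not values from the sample procedure:

```python
def next_retry(attempt, short_interval=60, long_interval=3600,
               short_retries=3, max_retries=10):
    """Return the number of seconds until the next retry, or None to abort.
    The first few retries use a short interval; later ones back off to a
    much longer interval, as described in the text."""
    if attempt >= max_retries:
        return None              # give up; leave the record in an error state
    if attempt < short_retries:
        return short_interval    # quick retries for self-correcting problems
    return long_interval         # slower retries while humans investigate
```

The status procedure would add this delay to SYSDATE when it writes the record's Process Not Before column, so the polling view naturally hides the record until the retry time arrives.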
Next, you create the actual BPEL process to handle the database record in a reliable manner. Creating the DB polling process involves several steps:
Let's consider the individual elements of this BPEL process. The key pieces are process initialization, the main processing scope, final status reporting, and logging.
This is the first set of tasks inside the processing scope. As the name suggests, it is responsible for process initialization and for setting the global, error, and log variables. The activities shown in Figure 11 below are included in the sample code download.
Figure 11 Flow steps to initialize the process
Now we're at the heart of the processing. Take a look at the BPEL process flow below.
Figure 12 BPEL process flow
Process flow. After initializing the variables, the process begins reading the database. It updates the current status to "processing" and then reads the database record. After verifying the correctness of the data, it transforms the message for destination delivery. Before sending the message to the destination, it updates the current status in the database to "sending." Finally, it sends the message to the destination (the JMS bus, in this case). Updating the status in the database as the process traverses the key points in the flow is especially useful if the process is long-running or has some key risk areas. Note that the read from the database (ReadDB partner link) is separated from the view that initiates the process (DBPolling partner link). This keeps the view simple and works around the limitation that joins cannot be used against the views in BPEL.
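The flow just described can be sketched as straight-line pseudologic in Python. This is a hedged illustration only: the `db`, `transform`, and `destination` collaborators and the status strings are assumptions standing in for the partner links, not the actual sample code:

```python
def process_record(db, transform, destination, record_id):
    """Sketch of the main processing scope. The status is written to the
    database at each key point so that an outside query can see exactly
    where a long-running instance currently is."""
    db.set_status(record_id, "processing")   # entering the risky section
    record = db.read(record_id)              # ReadDB partner link
    message = transform(record)              # validate and transform the data
    db.set_status(record_id, "sending")      # about to deliver
    destination.send(message)                # JMS bus in this scenario
    return "SUCCESS"
```

If the instance dies between "processing" and "sending," a monitoring query immediately shows which step it reached, which is the whole point of interleaving status updates with the work.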
Exception handling. Each scope contained in a larger scope catches all the exceptions and throws an internally defined fault. For example, when there is an error during reading of the database record, the scope will catch this error, set the error status as "Error while trying to read in data to be processed," and throw this error to the parent scope. This means that every fault is localized to a finer level.
Whenever an error occurs inside the main processing, a fault of internal type is thrown. An outer catch block then catches this fault. One thing to be careful about when using this strategy is not to catch a custom fault in a catchall block, which causes all the custom fault information to be lost. If a custom fault is thrown within a scope, a catch block catching that specific fault should be used in addition to a catchall block that will catch any other errors.
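The same catch ordering can be illustrated outside BPEL. In this sketch, `InternalFault` and `run_scope` are hypothetical names; the point is why the specific custom fault must be caught before the catch-all so its detail is not lost:

```python
class InternalFault(Exception):
    """Internally defined fault carrying the error status text."""
    def __init__(self, status):
        super().__init__(status)
        self.status = status

def run_scope(step):
    """Run one scope's work. The specific custom fault is caught first so
    its information is preserved; a catch-all handles everything else."""
    try:
        return step()
    except InternalFault:
        raise                        # rethrow with the fault detail intact
    except Exception as exc:         # catch-all for unexpected errors
        raise InternalFault(f"Unexpected error: {exc}") from exc
```

If the catch-all alone were used, a custom fault raised inside the scope would be wrapped like any other error and its status text would be discarded, which is exactly the mistake the text warns against.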
Reusability. It's important to note that the processing logic is driven by business needs. In this scenario, the process delivers a message to the JMS destination in a reliable manner. In real life, however, the processing can vary from updating a bank account to creating a new customer to synchronizing order data. Because the reliability framework is independent of the processing logic, the same process skeleton can be reused for any of these tasks.
Reply/Report Final Status
Figure 13 Flow to depict final status update
The processing status is updated in the database to either SUCCESS or the fault encountered during the process, as shown above. Although this update is optional, it is recommended because it enables an outside application to monitor the progress of every process instance from start to end.
In this example, the reporting of the final status is done with a database procedure (SetFaulted partner link). Although the reporting can be done inside BPEL, deferring the update to a database procedure simplifies the BPEL process.
Reporting an unsuccessful final status triggers a retry of the process. If the process has not yet been retried the maximum number of times, it is retried after a certain interval.
The logging process gathers processing information and sends it to the centralized logger. The most important part of that information is the severity of the error and the message code. Logging provides the following benefits:
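The rule-driven notification lookup might be sketched like this. The rule table, process name, template name, and e-mail address are illustrative assumptions, not part of the sample:

```python
NOTIFY_RULES = {
    # (process, status) -> (e-mail template, distribution list); hypothetical
    ("EntitlementSync", "FAULTED"): ("ops_alert", ["ops@example.com"]),
}

def log_and_notify(log, mailer, process, status, severity, message_code, detail):
    """Create a central log entry, then check the rules to decide whether
    this (process, status) combination should send a notification."""
    entry = {"process": process, "status": status,
             "severity": severity, "code": message_code, "detail": detail}
    log.append(entry)                        # central audit log
    rule = NOTIFY_RULES.get((process, status))
    if rule:
        template, recipients = rule
        mailer(template, recipients, entry)  # e-mail carries the log detail
    return entry
```

Keeping the rules in data rather than in the process means new notifications can be added without touching the BPEL flow itself.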
Figure 14 Fault is thrown to identify erroneous processes
As shown in Figure 14, the final rethrow simplifies the identification of problematic process instances. These process instances are relatively easy to locate in the BPEL console. When a fault is rethrown at the end, the process instances get flagged in the console and you can also filter to show just the processes that ended with a fault (Canceled).
This completes the development of the BPEL process. This process should be combined with other practices to provide the best reliability; examples include database monitoring, synthetic transactions, and log monitoring. The process enables you to easily identify the records that have succeeded, ended with a fault, or not completed their processing in a timely manner. All this information is very helpful in a real-life business environment in which each record being processed can be worth thousands of dollars and SLA violations can result in unhappy customers.
This article has demonstrated how to build a reusable business process that performs its task with high reliability. The process you built sends a message to the JMS destination in a reliable manner, and because of its high degree of reusability, the same pattern can be used to deliver other business functionality reliably.
Not all business exceptions can be caught in a BPEL process. Offering high quality of service does not end at the process level. It has to be combined with efficient monitoring of audit logs, notification of the appropriate stakeholders, troubleshooting of exceptions at the data and process levels, and transparency at every stage of processing. Any reliable process should address all these requirements.
Michael Cardella is a Staff Engineer at Qualcomm CDMA Technologies (QCT). Michael works in the custom applications development team, primarily on Web service- and business process-related applications. Previously he served as principal architect for a leading Web services security and management product.
Jeremy Bolie is a Senior IT Manager at QCT, managing the custom applications and Documentum development team. Jeremy has over 10 years of experience with Java and Oracle technologies, and has been involved with Web services and Service-Oriented Architectures since the late 1990s.