
Oracle Identity Manager 11gR2 Reconciliation Events Processing

by Firdaus Fraz

Available options and the associated limitations for reconciliation and the sequencing of reconciliation events in Oracle Identity Manager

April 2013

Downloads
Oracle Identity Manager

This article discusses reconciliation and the sequencing of reconciliation events; the functional flow of reconciliation data from an Identity Connector Framework (ICF)-based Oracle Identity Manager (OIM) connector to the OIM repository; the pros and cons of reconciliation sequencing; and the available options and their associated limitations.

OIM is used for complete identity lifecycle management. In any enterprise environment, identities may be scattered across various applications. When the OIM provisioning solution is in place, it interacts with all these applications to manage and administer the identities. OIM communicates with the enterprise applications via OIM connectors and maintains a copy of all the identity data in an internal datastore.

To get all the identity data, OIM uses its reconciliation engine to reconcile data from managed target applications (enterprise applications).

When OIM is first deployed, an initial full reconciliation task is executed against all authoritative sources and other target systems. Thereafter, ongoing reconciliation runs retrieve data incrementally from all target systems. OIM uses different entities to represent the authoritative versus other target system data in the OIM datastore.

An authoritative source is a reliable, credible source of record for a particular type of identity (e.g., user, role, organization), and it drives the creation of those identities in OIM. The enterprise applications to which users have access are managed through OIM (and are referred to as target systems or application instances); target system data is stored in the OIM datastore for each user.

The ongoing incremental reconciliation updates the identity data (users, roles, organizations) in OIM as well as the data corresponding to the user's enterprise application accounts.

Now that we have a fair idea of the basic types of reconciliation in OIM, let's start with the concept of reconciliation sequencing and work towards an understanding of how reconciliation event processing works in OIM, the pros/cons of reconciliation sequencing, available options, and the associated limitations.

As its name suggests, reconciliation sequencing reconciles the data from a target application in a certain order and processes the reconciled data in OIM in that same order. For example, consider a case of authoritative reconciliation. Suppose we are reconciling users from a Lightweight Directory Access Protocol (LDAP) directory (or any other application). Since this is authoritative reconciliation, each user reconciled from the LDAP directory will also be created in OIM. Now, consider that each user entity also has an associated manager ID as one of the user attributes. If the reconciliation event for a user's creation is processed before the user entity for that user's manager has been created in OIM, the OIM user creation will fail. Here is the reason: at the time of OIM user creation, the "manager ID" field will refer to the user's manager; OIM will try to find the manager entity and fail because that entity does not yet exist.

We would face no failures, however, if we could create/process such events in sequence, so that the user's manager entity is created in OIM before the reportee's user entity. This is just one example. There are various other scenarios for which reconciliation sequencing makes sense.

The case discussed above could be handled through automatic retry of events or by a custom schedule task; I'll address such solutions in a future article.

OIM 10g had an out-of-the-box product feature for enabling reconciliation sequencing: a flag on the resource object form in the Design Console enabled recon sequencing for a particular resource defined in OIM. A resource object is a representation of an OIM-managed target system. (With the R2 release of OIM, target systems are represented by an OIM entity called "Application Instance.")

The feature was discontinued because the extra overhead involved with sequencing reconciliation events led to performance implications.

OIM release 11g brought many architectural changes that improved the product's functionality and performance, including the introduction of asynchronous processing. Consider user creation in OIM: activities essential to user creation (e.g., validation of the data used to create the user) are executed synchronously, while tasks such as creating audit entries are executed asynchronously after the user is created.

Next, we will discuss reconciliation processing in OIM 11gR2 and some possible options for sequencing events. Note, however, that one cannot sequence asynchronous processing in OIM. Further, sequencing comes with its own limitations, which we will discuss briefly below.

Reconciliation Event Processing in OIM

The data reconciled from the target system is read by the OIM ICF connector. The data is then passed to the ICF Integration layer, which is part of the OIM server. The ICF Integration layer invokes the OIM APIs for creating the reconciliation events.

Please refer to OIM documentation for more information on the OIM Identity Connector Framework:

http://docs.oracle.com/cd/E21764_01/doc.1111/e14309/icf.htm

The reconciliation event processing flow, from the OIM ICF connector to the OIM ICF Integration layer to the OIM reconciliation engine, is depicted below.

Figure 1: Reconciliation event processing flow

ICF Connector Layer

ICF provides the org.identityconnectors.framework.spi.operations.SearchOp interface for performing reconciliation operations.

All reconciliation-related operations are performed using the search functionality provided by ICF. Connector bundles that need to support reconciliation operations should implement this interface, which in turn requires the connector developer to provide implementations for two methods (a minimal sketch follows the ResultsHandler reference below):

  1. FilterTranslator<T> createFilterTranslator(ObjectClass oclass, OperationOptions options)
  2. void executeQuery(ObjectClass oclass, T query, ResultsHandler handler, OperationOptions options)
  • The FilterTranslator defines the search filter.
  • The executeQuery method is invoked by OIM (via a scheduled job) through the ICF Integration layer.
  • The executeQuery implementation should search the target system and pass the obtained results to the ResultsHandler, which is part of the ICF Integration layer.

The results are delivered through the ResultsHandler interface (org.identityconnectors.framework.common.objects.ResultsHandler), which exposes a single callback:

boolean handle(ConnectorObject obj)
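
To make this concrete, the following is a minimal sketch of the SearchOp portion of a custom ICF connector. The class name, the in-memory list standing in for the target system, and the attribute names (id, login, firstName, managerId) are hypothetical, and the rest of the connector plumbing (the Connector and Configuration implementations and the @ConnectorClass annotation) is omitted.

import java.util.List;
import java.util.Map;

import org.identityconnectors.framework.common.objects.ConnectorObject;
import org.identityconnectors.framework.common.objects.ConnectorObjectBuilder;
import org.identityconnectors.framework.common.objects.ObjectClass;
import org.identityconnectors.framework.common.objects.OperationOptions;
import org.identityconnectors.framework.common.objects.ResultsHandler;
import org.identityconnectors.framework.common.objects.filter.AbstractFilterTranslator;
import org.identityconnectors.framework.common.objects.filter.FilterTranslator;
import org.identityconnectors.framework.spi.operations.SearchOp;

/**
 * Hypothetical connector fragment showing only the SearchOp methods
 * that the ICF Integration layer invokes during reconciliation.
 */
public class DemoSearchOp implements SearchOp<String> {

    // Placeholder for the real target system; each map is one account record.
    private final List<Map<String, Object>> targetRecords;

    public DemoSearchOp(List<Map<String, Object>> targetRecords) {
        this.targetRecords = targetRecords;
    }

    @Override
    public FilterTranslator<String> createFilterTranslator(ObjectClass oclass,
                                                           OperationOptions options) {
        // Simplest possible translator: no native filtering, so every record
        // is returned; AbstractFilterTranslator's defaults handle the rest.
        return new AbstractFilterTranslator<String>() { };
    }

    @Override
    public void executeQuery(ObjectClass oclass, String query,
                             ResultsHandler handler, OperationOptions options) {
        // Search the target and hand each result to the ICF Integration layer
        // through ResultsHandler; stop early if the handler returns false.
        for (Map<String, Object> record : targetRecords) {
            ConnectorObjectBuilder builder = new ConnectorObjectBuilder();
            builder.setObjectClass(oclass);
            builder.setUid(String.valueOf(record.get("id")));     // hypothetical key attribute
            builder.setName(String.valueOf(record.get("login"))); // hypothetical name attribute
            builder.addAttribute("firstName", record.get("firstName"));
            builder.addAttribute("managerId", record.get("managerId"));
            ConnectorObject obj = builder.build();
            if (!handler.handle(obj)) {
                break;
            }
        }
    }
}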

ICF Integration Layer

The ICF Integration layer processes the data that was passed into ResultsHandler, applies the defined transformations, and creates a map of attribute name/value pairs based on the corresponding attribute mapping lookup definition. The attribute names are either the OIM User Form attribute names or the Application Instance process form attribute names, depending on whether the map is being created for provisioning or for target/trusted reconciliation.

The reconciliation data is now ready to be passed to the OIM reconciliation APIs. The ICF Integration layer allows a batch size to be defined; according to this batch size, the reconciliation data is passed to the OIM bulk reconciliation API createReconciliationEvents. The order of the data is the same as the order in which it was passed into ResultsHandler.

createReconciliationEvents(InputData[] input, BatchAttributes batchAttribs)
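
As an illustration of this batching step, here is a simplified, hypothetical handler that does what the Integration layer conceptually does: it converts each ConnectorObject into a name/value map using the attribute mapping lookup, buffers the maps, and submits them in order whenever the batch size is reached. The class name, the Consumer standing in for the bulk createReconciliationEvents call, and the flush method are illustrative; the real Integration layer is internal to OIM.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

import org.identityconnectors.framework.common.objects.Attribute;
import org.identityconnectors.framework.common.objects.AttributeUtil;
import org.identityconnectors.framework.common.objects.ConnectorObject;
import org.identityconnectors.framework.common.objects.ResultsHandler;

/**
 * Illustrative stand-in for the ICF Integration layer's handler: maps each
 * ConnectorObject to a name/value map (per the attribute mapping lookup)
 * and submits the maps in batches, preserving the order of arrival.
 */
public class BatchingReconHandler implements ResultsHandler {

    private final Map<String, String> attributeMappingLookup;         // target attribute -> form field
    private final int batchSize;                                      // configured batch size
    private final Consumer<List<Map<String, Object>>> batchSubmitter; // stands in for the bulk API call
    private final List<Map<String, Object>> buffer = new ArrayList<>();

    public BatchingReconHandler(Map<String, String> attributeMappingLookup,
                                int batchSize,
                                Consumer<List<Map<String, Object>>> batchSubmitter) {
        this.attributeMappingLookup = attributeMappingLookup;
        this.batchSize = batchSize;
        this.batchSubmitter = batchSubmitter;
    }

    @Override
    public boolean handle(ConnectorObject obj) {
        // Build the name/value map for this record using the mapping lookup.
        Map<String, Object> reconData = new HashMap<>();
        for (Attribute attr : obj.getAttributes()) {
            String formField = attributeMappingLookup.get(attr.getName());
            if (formField != null) {
                reconData.put(formField, AttributeUtil.getSingleValue(attr));
            }
        }
        buffer.add(reconData);

        // Flush in the same order the connector delivered the records; this is
        // the point where the real layer would invoke
        // createReconciliationEvents(InputData[], BatchAttributes).
        if (buffer.size() >= batchSize) {
            batchSubmitter.accept(new ArrayList<>(buffer));
            buffer.clear();
        }
        return true; // keep receiving records
    }

    // Submit any remaining records once the search completes.
    public void flush() {
        if (!buffer.isEmpty()) {
            batchSubmitter.accept(new ArrayList<>(buffer));
            buffer.clear();
        }
    }
}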

Multi-threaded Reconciliation:

In OIM 11gR2 PS1, the ICF Integration layer was enhanced so that multiple threads can submit reconciliation event creation batches. The default number of threads is 1; it can be increased via the ICF configuration lookup.

Note: Reconciliation events submitted for creation across threads are not sequenced.

OIM Reconciliation Engine Layer

The OIM reconciliation engine processes the bulk input data and creates the reconciliation events in the same order. Creating a reconciliation event simply means creating a record in the OIM reconciliation database tables.

OIM also defines a batch size for reconciliation. The OIM reconciliation engine keeps adding reconciliation events to the batch as they are created and, as soon as the batch size is reached, submits the batch number (which is unique to the batch) to a JMS queue.

OIM reconciliation Message Driven Beans (MDBs) read the JMS queue and receive the batch number from the JMS message. The MDBs then invoke a database stored procedure, passing it the batch number.

The stored procedure uses the batch number to read all the reconciliation events for that batch from the recon_events database table. While doing so, it orders the data by the reconciliation event key. So, in a standalone database installation, the reconciliation events are processed in the order in which they were created (assuming the reconciliation event key reflects that order).

The reconciliation event key is generated using a database sequence. In a database cluster, therefore, there is no guarantee that the reconciliation event keys are assigned in the order in which the records were inserted into the database, because different cluster nodes may be drawing from different cached ranges of the sequence.

Can Reconciliation Sequencing be Implemented?

Out of the box, reconciliation sequencing is not a product feature in OIM from the 11g release onwards. There are workarounds for implementing a custom reconciliation sequencing solution, but they come with limitations.

Though I do not recommend the approach described below, it is one possible option. Under a low reconciliation load the approach might work well, but it is definitely a strict NO for heavy traffic.

Note that this approach will work only for a custom ICF-based connector.

Sequencing can be achieved with the following code flow in a custom ICF connector: instead of passing the reconciliation data to the ICF Service Provider Interface (SPI) ResultsHandler, invoke the OIM reconciliation APIs directly for each reconciliation record, in the desired sequence:

createReconciliationEvent(java.lang.String psObjectName, java.util.Map poData, boolean pbFinishEvent, java.lang.String psDateFormat)

processReconciliationEvent(long rceKey)

Define the OIM reconciliation batch size as "0" (zero) for this connector's reconciliation profile. A sketch of this code flow is shown below.
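
The following is a minimal sketch of that per-record flow, under a few assumptions not spelled out in this article: that the connector runs inside the OIM server JVM and can obtain the legacy Thor.API.Operations.tcReconciliationOperationsIntf service via oracle.iam.platform.Platform.getService, and that createReconciliationEvent returns the key of the event it creates. The resource object name, date format, and attribute maps are hypothetical, and error handling is omitted.

import java.util.List;
import java.util.Map;

import oracle.iam.platform.Platform;                        // assumes in-server execution
import Thor.API.Operations.tcReconciliationOperationsIntf;  // legacy reconciliation API

/**
 * Illustrative sequencing sketch: instead of handing records to
 * ResultsHandler, create and process one reconciliation event per record,
 * in the exact order the records were read from the target system.
 */
public class SequencedReconSubmitter {

    private static final String RESOURCE_OBJECT = "Demo Resource";   // hypothetical resource object name
    private static final String DATE_FORMAT = "yyyy/MM/dd HH:mm:ss"; // hypothetical date format

    public void submitInOrder(List<Map<String, String>> orderedRecords) throws Exception {
        // Assumed service lookup; only valid when running inside the OIM server.
        tcReconciliationOperationsIntf reconService =
                Platform.getService(tcReconciliationOperationsIntf.class);

        for (Map<String, String> record : orderedRecords) {
            // Create the event; pbFinishEvent = true indicates that no child
            // (multi-valued) data will be added to the event later.
            long eventKey = reconService.createReconciliationEvent(
                    RESOURCE_OBJECT, record, true, DATE_FORMAT);

            // Process it immediately, so event N completes before event N+1
            // is created. This bypasses OIM's reconciliation batching.
            reconService.processReconciliationEvent(eventKey);
        }
    }
}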

Limitations of This Approach:

  • We are not submitting the reconciliation events in batches using the bulk OIM reconciliation API createReconciliationEvents; we are submitting them for creation one at a time. We therefore lose the faster processing that the bulk API provides.
  • We are invoking processReconciliationEvent directly, thereby bypassing the OIM reconciliation batching process; this has a huge impact on performance.
  • By invoking the OIM APIs directly and not passing data to the ResultsHandler ICF SPI, we are not leveraging the ICF SPI framework. The data processing, transformation, and construction of the data map (according to the attribute mapping lookup) must therefore be done explicitly by the connector developer; these would otherwise be handled automatically by the ICF Integration layer.

About the Author

Firdaus Fraz is a Principal Solutions Architect with the Oracle Fusion Middleware Identity Management A-Team. In this role she works with IDM customers and partners worldwide to provide guidance on implementation best practices, architecture, use-case design, and troubleshooting.