This content is submitted by a BigAdmin user. It has not been reviewed for technical accuracy by Sun Microsystems, though it may have been lightly edited to improve readability. If you find an error or would like to comment on the article, please contact the submitter or use the comment field at the bottom of the article. Community submissions may not follow Sun trademark guidelines.

Managing Identities Efficiently Using SPMLv2

Manish Verma, September 2007


Service Provisioning Markup Language (SPML) deals with user, resource, and service provisioning. It is an extension to the identity management solution space. When identities (mostly users) are created, they need access to the digital and physical assets of the organization in order to become productive. In addition, as soon as identities become invalid, they need to be stripped of their access to the resources and services.

SPML promises to address these mundane tasks effectively, efficiently, and in a standard, structured way.

My earlier article on this topic, Manage Identities More Effectively with SPML, was based on SPMLv1. In April 2006, the Organization for the Advancement of Structured Information Standards (OASIS) released SPMLv2, which includes many changes. This vastly improved specification makes it worthwhile for user organizations to take a serious look at adopting SPMLv2 for their provisioning tasks.

In this article I will take you through the SPMLv2 specification and explain the features it offers. Finally, I'll provide a small application to demonstrate how some mundane provisioning tasks can be delegated to machines.

Note: The code mentioned in this article is based on the openspml_v2-1.0.tgz toolkit. I tested this code on Apache Tomcat 4.1 and Java 2 Platform, Standard Edition (J2SE) version 1.5.0_07.



What Ailed SPMLv1?

There were two main drawbacks to SPMLv1:

  • It was bare bones. It had a very limited feature set that did not generate enough interest in the user community.
  • SPMLv1 did not fit well with the Web Services family of specifications known as WS-*, and it did not get backing from some of the key players.

SPMLv2 has a rich feature list. It is also better aligned with the WS-* specifications and has the backing of all major players. The adoption rate of this specification is expected to be better than that of its predecessor.

Anatomy of SPMLv2

Figure 1 is an overview of how different components of SPMLv2 interact with each other.


Figure 1: SPMLv2 Architecture

Requesting Authority (RA)

The Requesting Authority (RA) or requestor is the client that issues well-formed SPML requests to the Provisioning Service Provider (PSP). Nothing has changed between SPMLv1 and SPMLv2 in this respect, except that the client can now create SPML requests in one of two supported formats: XML Schema Definition (XSD) or the SPMLv1 schema format. XSD support is new in SPMLv2; in SPMLv1, only the SPMLv1 schema format was supported.

Provisioning Service Provider (PSP)

A Provisioning Service Provider (PSP) or provider listens to requests from the requestors. It processes the requests and sends back well-formed SPML responses to the requestors. The provider is the entity that makes Provisioning Service Targets (PSTs) or targets available to requestors. I'll discuss targets more in the next section; at this point it is enough to understand that requestors get all their provisioning tasks done through providers.

Provisioning Service Target (PST) and Provisioning Service Object (PSO)

SPMLv2 gives a new meaning to PSTs (targets):

  • Each target is a container for objects that a provider manages. For example, a target could be an LDAP system or any directory service that stores various user accounts (objects).

  • Provisioning Service Objects (PSOs) or objects are the items that ultimately get manipulated by a requestor. A requestor, for example, is not interested in manipulating the directory service (target); it is only interested in manipulating the user accounts (objects) that the targets contain.

  • A target is something that a requestor can discover from the provider. A provider may expose more than one target.

  • A target cannot be manipulated. Requestors cannot add, modify, delete, or otherwise act upon a target. A requestor can only act on the objects that the target contains. For example, a requestor can't do anything to a directory service; however, it can manipulate the user accounts (objects) contained in the directory service.

  • A target may contain a schema that defines the XML structure of the objects. This schema is referred to as the schema entity. Ultimately, it is this schema that the requestor looks up and uses to create a valid SPML document that it sends with its request to the provider for manipulating the object.

  • Even though a target may contain the XML structure of more than one object, it may not support all the objects for manipulation by the requestor. For example, a directory services target may contain two objects: one for the account entity and another for the person entity. It is possible that the target supports manipulation of the account schema entity only and not the person schema entity.

  • SPMLv2 allows schemas to be specified in two formats: XSD and the SPMLv1 schema format. Remember, in SPMLv1, the only format available was the SPMLv1 schema format.

    Each target schema includes a schema namespace. The schema namespace indicates (to any requestor that recognizes the schema namespace) how to interpret the schema. It is up to the requestor to accept and manipulate XML that is valid according to the schema of the target.

  • A target may also expose something called capabilities. Capabilities are new in SPMLv2. Don't spend too much time thinking about capabilities yet. Just remember that there are capabilities and that targets expose capabilities to requestors. I'll explain capabilities in detail later in this article.

A Quick Recap

All that you have learned so far gives you an idea about the lay of the land. The client (RA) sends a request to a provider (PSP). The provider executes the request on an object (PSO) that is contained within a target (PST), and the provider sends the response back to the client. The SPML messages flowing back and forth as requests and responses are in either of two formats: XSD or SPMLv1 schema.
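To make this flow concrete, here is a rough sketch of a single exchange. The element names follow the OASIS SPMLv2 core schema (namespace urn:oasis:names:tc:SPML:2:0), but the identifiers and exact attribute sets shown here are illustrative only; consult the specification for the normative structure.

```xml
<!-- Request from the RA: look up one object on a target -->
<lookupRequest xmlns="urn:oasis:names:tc:SPML:2:0" requestID="req-001">
    <psoID ID="user-1001" targetID="contacts"/>
</lookupRequest>

<!-- Response from the PSP -->
<lookupResponse xmlns="urn:oasis:names:tc:SPML:2:0"
                requestID="req-001" status="success">
    <pso>
        <psoID ID="user-1001" targetID="contacts"/>
        <data>
            <!-- XML that is valid per the target's schema entity -->
        </data>
    </pso>
</lookupResponse>
```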

What Exactly Can an RA (Client) Do?

What an RA (client) can do depends on the capabilities that a provisioning service provider exposes. Let me explain what capabilities mean in the SPML world before we go any further.

Simply put, capabilities are operations that a provider exposes on behalf of a target to a requestor. Adding a little complexity to this definition, capabilities do not always result in additional operations. Some capabilities may just cause changes in the way the operations of other capabilities can be called by the requestor.

Capabilities are segregated into three groups: Group 1, Group 2, and Group 3.

Group 1: Core Capabilities and Operations

Core capabilities or core operations are mandatory for providers to implement. One of the operations executes on the provider itself; the rest execute on each target that a provider exposes.

SPMLv2 core operations include the following:

  • The spml:listTargets operation, which allows RAs to query for available targets with a provider. This operation is on the provider.
  • The add, lookup, modify, and delete operations are part of the core capabilities and execute on objects in a target.

All the information required by a requestor to execute the core operations on an object is defined as part of the schema entity, which the target exposes to the requestor.
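As a sketch, a listTargets exchange might look like the following. The target and capability shown are hypothetical; the element names are taken from the SPMLv2 core schema, and the exact content of the schema element depends on the profile (XSD or SPMLv1 schema) that the target uses.

```xml
<listTargetsRequest xmlns="urn:oasis:names:tc:SPML:2:0" requestID="req-002"/>

<listTargetsResponse xmlns="urn:oasis:names:tc:SPML:2:0"
                     requestID="req-002" status="success">
    <target targetID="contacts">
        <schema>
            <!-- the schema entity (XSD or SPMLv1 schema) for the objects -->
        </schema>
        <capabilities>
            <!-- standard and custom capabilities this target supports -->
            <capability namespaceURI="urn:oasis:names:tc:SPML:2:0:search"/>
        </capabilities>
    </target>
</listTargetsResponse>
```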

Group 2: Standard Capabilities and Operations

Standard capabilities are all optional for the providers to implement on the targets that they expose.

SPMLv2 has the following standard capabilities:

  • Async capability
  • Batch capability
  • Bulk capability
  • Password capability
  • Reference capability
  • Search capability
  • Suspend capability
  • Updates capability

One thing to note about standard capabilities: Capability-specific data is associated with the objects of a target that supports that capability and is not part of the schema entity of the object. It is the responsibility of the capability to define the structure of the capability-specific data. Of all the standard capabilities listed previously, only reference capability requires capability-specific data to be stored with an object.

If a target declares support for a capability, then the target must implement all the operations that the capability defines.

Group 3: Custom Capabilities and Operations

The SPMLv2 capability mechanism is extensible. This open capability mechanism allows providers to define additional custom capabilities on targets.

An individual provider or a third party can define a custom capability that integrates with SPMLv2. A provider declares its support for a custom capability in exactly the same way that it declares support for standard capabilities.

In SPMLv1, the extension capability was provided through an "extended operations" mechanism.

Digging Deeper Into Capabilities and Operations

Core (Mandatory) Capabilities and Operations

As mentioned previously, core capabilities are mandatory for the provider and targets to implement. Here is a description of the core capabilities:

  • listTargets -- This operation is for a requestor to determine the set of targets that a provider makes available for provisioning. In addition to the target information, the requestor also gets information about the standard and custom capabilities of each target.

    The listTargets operation cannot be called in an asynchronous manner or batched, since the requestor cannot yet know whether the provider supports async capability or batch capability.

  • add -- This operation enables a requestor to create a new object on a target. Using the add operation, a requestor can also create a hierarchy of objects, which essentially means that new objects can be created under existing objects. Also, when a new object is added, the requestor attaches the standard capability-specific data, if applicable.

  • lookup -- The lookup operation enables a requestor to obtain the XML that represents an object on a target. The lookup operation also gets any capability-specific data that is associated with the object for the requestor. Once the XML representation of the object is available to the requestor, the requestor can manipulate it as required and pass it to other operations, such as modify, as input data.

  • modify -- The modify operation allows addition, replacement, and deletion of content on an existing object on a target. The modify operation can manipulate both the content that is part of the schema entity and any capability-specific content. Caution must be taken because modify can change the object identifier itself. Hence, it is better for a provider to expose an immutable identifier as the PSO-ID of each object.

  • delete -- The delete operation enables a requestor to remove an object from a target. Any capability-specific data that might be associated with the object is also deleted. The delete operation fails if the object being deleted has another object under it, unless the requestor explicitly specifies that all nested objects must also be deleted.
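As a sketch, the add and delete operations described above might look like this on the wire. Element names follow the SPMLv2 core schema; all identifiers are illustrative, and the contents of the data element would have to be valid per the target's schema entity.

```xml
<!-- Create a new object on the target -->
<addRequest xmlns="urn:oasis:names:tc:SPML:2:0"
            requestID="req-003" targetID="contacts">
    <data>
        <!-- XML that is valid per the target's schema entity -->
    </data>
</addRequest>

<!-- Remove an existing object and, via recursive="true",
     any objects nested under it -->
<deleteRequest xmlns="urn:oasis:names:tc:SPML:2:0"
               requestID="req-004" recursive="true">
    <psoID ID="user-1001" targetID="contacts"/>
</deleteRequest>
```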

Standard (Optional) Capabilities and Operations

Here is a description of standard capabilities:

  • Async capability -- Any target that supports async capability essentially allows requestors to call the available operations asynchronously on the supported targets. Note that it may not be possible to call all available operations asynchronously. Async does not have any operation of its own for requesting asynchronous operations.

    Async capability provides two operations that requestors can use to manage asynchronously running operations:

    • status -- This operation allows the requestor to check the status and also the result of an asynchronously running operation.
    • cancel -- This operation allows the requestor to request that the provider stop an asynchronously running operation.

    The spmlasync:StatusRequest and the spmlasync:CancelRequest operations are executed synchronously, because both operations act on other, asynchronously running operations. (It would make little sense to ask for the status of a status operation, or to cancel a cancel operation.)

    Asynchronous operations are resource consuming. There is a risk that they will hog resources, thus impacting the responsiveness of the provider. A provider decides the limit on the size of the result that it can store and also the length of the time for which it keeps the results.

    Finally, the following operations can never be executed asynchronously:

    • spml:listTargets
    • spmlasync:StatusRequest
    • spmlasync:CancelRequest
  • Batch capability -- Batch allows the grouping of multiple operations into a single request. Batch capability defines one operation: batch.

    Grouping operations together in a batch does not mean they become part of a transaction. (See Transactions for more information on transactions.) The batch operation itself cannot be batched. That is, there is no nested batching.

    A requestor can specify to a provider whether it is OK to process all the requests in a batch in parallel, or whether the requests must be processed sequentially in the order in which they are listed. A requestor can also specify what to do if an error is encountered in one of the requests in a batch. The options are to exit on error or to resume on error.

    The response of a request in a batch has positional correspondence, which means that the first response in the batch response corresponds to the first request in the batch request, and so on.

    The following operations cannot be batched:

    • spml:listTargets -- This operation cannot be batched because the requestor cannot know whether the provider supports batch capability until the requestor examines the results of the listTargets operation.
    • spmlbatch:batchRequest -- Nesting of batch operations is prohibited. Batching two async capability operations leads to timing problems.
    • spmlasync:StatusRequest
    • spmlasync:CancelRequest

    In addition, the following operations, with which you are not yet familiar, cannot be batched:

    • spmlsearch:SearchRequest
    • spmlsearch:IterateRequest
    • spmlsearch:CloseIteratorRequest
    • spmlupdates:UpdatesRequest
    • spmlupdates:IterateRequest
    • spmlupdates:CloseIteratorRequest

    None of the search and updates capability operations listed above can be batched, because doing so severely limits the scalability of the provider to serve more requestors. Batch operations are typically asynchronous, so storing the results of asynchronous batches imposes on providers a severe resource burden. Allowing a requestor to nest a search request or an iterate request within a batch would increase the resource crunch manifold.

    I'll discuss the search and updates capability operations later in this article.

  • Bulk capability -- Bulk capability provides a way to manipulate multiple objects together that meet common criteria. There are two operations defined under bulk capability:

    • bulkModify -- The bulkModify operation applies a specified modification to every object that matches the specified query.
    • bulkDelete -- The bulkDelete operation deletes every object that matches the specified query. This operation fails for objects that have another object under them, unless the requestor explicitly specifies that all nested objects must also be deleted.
  • Password capability -- This is a very specific capability that allows manipulation of a password on an object. This capability defines four operations:

    • setPassword -- Allows a requestor to specify a new password for an object.
    • expirePassword -- Expires the password on the object, which marks the current password as invalid on the object. The requestor can, however, specify the allowed number of successful login attempts before the password expires.
    • resetPassword -- Changes the password to a random value. The random value generated for the password is returned to the requestor.
    • validatePassword -- Tests whether the specified password meets the password policy for a system or application.
  • Reference capability -- Reference capability enables the creation of links between different objects that may be part of different targets. This capability gives a true picture of how objects are related and connected.

    Reference capability defines no operation. A provider declares each type of reference that is permissible between one schema entity and some other schema entity. For example, the reference definition may declare that an Account schema entity is "owned" by a Person schema entity.

    It is possible that for the same reference type (for example, "owned") there may be different entity pairs. For example, while the Account schema entity is "owned" by a Person schema entity, an organizationalUnit schema entity may also be "owned" by the Person schema entity.

    There is always a direction to the reference relation from one schema entity to another schema entity. The inverse reference relation is not defined. If an Account schema entity is "owned" by a Person schema entity, the reference is stored with the Account schema entity. It does not mean that Person refers to Account.

    A reference definition puts no constraints on the number of objects to which an object may refer. For example, an Account schema entity can be "owned" by many Person schema entities. It is essentially a many-to-many relationship.

    Most references are simple in nature. One object's reference to another object carries no additional information. However, it is possible to have a complex reference in which additional information is stored for some type of references. For example, when a user is assigned a specific role, it is possible to attach start date information and end date information for that reference assignment.

  • Search capability -- As the name suggests, this capability enables searching for objects on a target based on a query.

    The search capability defines three operations:

    • search -- The search operation returns in its response a first set of matching objects. Search is not batchable. Search also returns any capability-specific data that may be associated with the objects that match the search criteria.
    • iterate -- Each subsequent iterate operation returns more matching objects from the result set that the provider selected for a search operation. This operation is not batchable nor can it be called asynchronously.
    • closeIterator -- This operation allows a requestor to tell a provider that it does not intend to finish iterating through the search results. This operation is not batchable, nor can it be called asynchronously.

    Search operations, similar to async operations, put a lot of demand on provider resources. There is a risk that these operations may hog the resources, thus impacting the responsiveness of the provider. A provider decides the limit on the size of the result that it can store and also the length of the time for which it keeps the results.

  • Suspend capability -- This capability essentially disables an object persistently. For example, the suspend operation can be used to suspend the privileges of an account while the person is on vacation.

    Suspend capability has three operations:

    • suspend -- The suspend operation allows a requestor to disable an object. It is possible for a requestor to suspend an object from a specific date and time.
    • resume -- The resume operation allows a requestor to re-enable an object that has been suspended. Even if the object is already enabled, the operation will return success.
    • active -- The active operation determines whether the object is suspended.
  • Updates capability -- This capability gets all recorded updates on an object since a specific date and time. It is useful for auditing objects: for example, determining what was changed, by whom, and when.

    This capability defines three operations:

    • updates -- This operation gets the changes done to objects. A requestor can choose to select a record based on change-related criteria. The requestor can also ask for all changes since a specific date and time. This operation cannot be made part of a batch.
    • iterate -- The iterate operation selects the next set of objects from the result set that the provider selected for an updates operation. This operation is not batchable and cannot be executed asynchronously.
    • closeIterator -- This operation tells the provider that the requestor does not have any need for the results any more; hence, all the resources blocked for this request may be released.

    Updates capability, similar to async capability, is also resource consuming at the provider end. The provider defines how large a result set it can keep on behalf of the requestor and for how long.
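As one example of a standard capability operation, a suspend request might be sketched as follows. The suspend capability has its own namespace (urn:oasis:names:tc:SPML:2:0:suspend), while the psoID element comes from the core namespace; the effective date and identifiers shown are illustrative only.

```xml
<!-- Disable an account, effective from a future date -->
<suspendRequest xmlns="urn:oasis:names:tc:SPML:2:0:suspend"
                xmlns:spml="urn:oasis:names:tc:SPML:2:0"
                requestID="req-005"
                effectiveDate="2007-12-01T00:00:00Z">
    <spml:psoID ID="user-1001" targetID="contacts"/>
</suspendRequest>
```

A matching resume request would re-enable the object, and per the description above it succeeds even if the object is already enabled.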


Transactions

A traditional transaction happens in tightly coupled systems and has the following characteristics:

  • The transaction has ACID properties:
    • Atomic -- All participants must confirm or cancel.
    • Consistent -- A consistent result is obtained. At no point in time of the transaction does the system go into any undefined state.
    • Isolated -- Effects are not visible until all participants confirm or cancel. The intermittent status of various participants is not visible to the external world.
    • Durable -- Effects of the transaction are stored.
  • Transactions are short lived.
  • Resources are locked for the duration of the transaction.
  • Participants have a high degree of trust in each other and are willing to cooperate with the transaction manager of other participants.

A transaction in a Web Services context has the following characteristics:

  • A transaction may be of long duration, sometimes lasting hours, days, or longer.
  • Participants might not allow their resources to be locked for long durations.
  • The communication infrastructure between participants might not be reliable.
  • Some of the ACID properties of traditional transactions are relaxed and no longer mandatory.
  • A transaction might succeed even if only some of the participants choose to confirm and others cancel out.
  • All participants may choose to have their own coordinator (transaction manager), because of lack of trust.
  • All activities are logged.
  • Transactions that must be undone rely on the concept of compensation rather than rollback.

For Hard-Core Geeks: Some Sample Working Code

I'll use the same example that I used in my last article on SPMLv1 in which I created an email account for a new employee. In the example, I demonstrated how different systems talk the SPML language to take care of such a mundane task with minimal human intervention.

The sequence of steps for accomplishing the same thing in SPMLv2 is as follows.

Note: I will use the open source SPML 2.0 toolkit.

Step 1: Set Up the Infrastructure

Download the open source SPML 2.0 toolkit.

Deploy the sample provider application that comes with the toolkit. You will need a servlet engine. Apache Tomcat will work just fine. The sample provider essentially has three components:

  • Client
  • Servlet to handle the client requests
  • The sample SPML executor, which is essentially the provider (I'll explain more about the provider in the next step)

To ensure that the sample provider is deployed successfully, run the client application and see the output on the console. The output should show the SPML messages for the requests and the response for all the core operations.

The next two steps (server side) take you through the mechanics of setting up and readying the provider.

Step 2: Set Up the Provider

For our simple sample, we will use the sample provider that comes with the SPML 2.0 toolkit. If you so wish, you can change the functioning of the provider by integrating it with an actual target. The sample provider that comes with the toolkit persists the information to a file in CSV format.

However, we will create our own target to be exposed through the provider. The target will expose the schema for the requestor to create a new email account.

Step 3: Set Up the Target

Create the following XML code and store it in the servlet engine context where you have deployed the sample provider application. The target will use this file as the entity schema when the requestor queries the provider for the targets.

Listing of accountTarget.xml:

<?xml version="1.0" encoding="ISO-8859-1"?>
  storeName='contacts' nameOfTypeNVP="objtype" nameOfUidNVP="objid">
    <osd:objectdef classname='accountInfo' nameOfIdNVP="contactId">
        <osd:nvpdef name='contactId' type='T' required='true'/>
        <osd:nvpdef name='fullName' type='T'/>
        <osd:nvpdef name='email' type='T'/>
        <osd:nvpdef name='description' type='T'/>
        <osd:nvpdef name='project' type='T'/>
    </osd:objectdef>


In the web.xml file of the sample provider application, change the value of the SpmlViaSoap.spmlExecutors.nvpose.schemaFileURL parameter to http://localhost:8080/nvp/accountTarget.xml. (Change the host name and port according to your local installation.)

This parameter essentially points the provider to the entity schema of the target.
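For reference, the entry in web.xml might look roughly like the following. Whether it is declared as a context-param, as shown here, or as a servlet init-param depends on how the sample application's web.xml is structured, so adjust accordingly; the parameter name and URL come from the article, everything else is a sketch.

```xml
<context-param>
    <param-name>SpmlViaSoap.spmlExecutors.nvpose.schemaFileURL</param-name>
    <param-value>http://localhost:8080/nvp/accountTarget.xml</param-value>
</context-param>
```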

The next two steps (client side) allow you to query the provider for the available targets and execute one sample operation.

Step 4: List the Available Targets

The client interrogates the provider for the targets that it supports along with the schema entity of those targets. Use the listTargets operation to get the supported targets and their schema entities.

Step 5: Execute One Sample Core Operation

Use the targetID to add the email information for the new account, per the schema shown in the listing of accountTarget.xml. Use the add operation for adding the email account.
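A sketch of what such an add request could look like follows. The exact shape of the data element depends on the profile the sample provider uses for its NVP schema; here the nvpdef fields from accountTarget.xml are shown as attributes of a hypothetical accountInfo element, and all values are illustrative.

```xml
<addRequest xmlns="urn:oasis:names:tc:SPML:2:0"
            requestID="req-006" targetID="contacts">
    <data>
        <!-- one name-value pair per nvpdef in the target schema -->
        <accountInfo contactId="jdoe"
                     fullName="John Doe"
                     email="jdoe@example.com"
                     description="Email account for new employee"
                     project="provisioning"/>
    </data>
</addRequest>
```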

I have modified the sample client application that comes with the toolkit to create a further trimmed-down version of the client application.

Both the accountTarget.xml file and the client application are provided with this article.

I encourage you to play with these files after you have successfully run the sample out-of-box application that comes with the SPML2.0 toolkit.

The Road Ahead

Identity management continues to hold the attention of the IT fraternity. It is one of the big issues that concern CIOs when they roll out new applications across geographically dispersed locations or when they have to merge the IT infrastructure of newly acquired business entities with their organizations. If this integration task is handled as a patchwork of one-off, customized solutions, many people can remain unproductive for too long for want of a proper setup of their identities.

SPML makes the task of managing identities efficient and predictable. I recommend that organizations seriously consider making provisioning part of their identity management rollout plan. The first step in that direction is to understand what SPML offers. I hope this article has helped in that regard.

About the Author

Manish Verma is VP Delivery at Fidelity National's software development center in Chandigarh, India. Manish has 14 years of experience in all aspects of the software development lifecycle, and has designed integration strategies for client organizations running disparate systems. Manish's integration expertise is founded on his understanding of a host of technologies, including various legacy systems, .NET, Java technology, and the latest middleware. Prior to Fidelity National, Manish worked as a software architect and technical lead at Quark Inc., Hewlett-Packard, Endura Software, and The Williams Company. You can contact Manish at mverma [at]


