Racing to Release: Automated Application Promotion

I Didn't Change a Thing, Honest!

Now that the role of controlled environments has been defined, let's look at what it takes to actually establish such an environment. Automated provisioning is one such technique.

Automated provisioning

If you have always used the configuration wizard to build environments, you may be wondering how such a process can be automated because the wizard requires so much manual interaction. If this applies to you, you need to learn about WebLogic Scripting Tool (WLST), which enables all the features of the configuration wizard to be controlled by a scripting language. WLST can import the templates used by the wizard and perform through a script the same customizations that the wizard provides by its graphical interface. I'll discuss scripting in WLST in more detail below. Right now, you just need to know—or believe—that environment builds can be scripted.

Table 1 highlighted potential differences between test environments. Although some or all of these differences may apply in your project, the core of the provisioning will be carried forward from one environment to the next. After all, the same application suite is being deployed to the same platform in each environment. By identifying the differences between environment requirements, you can factor the common features of a successful deployment into the scripts that steer your automation process. Thus, the provisioning for one environment forms a prototype for all the other environments that will be used to test your suite of applications. When writing a provisioning automation script, anticipate the provisioning requirements of all subsequent test environments. To do that, you need to be clear about the differences expected between the immediate test environment and future ones; drafting a table like Table 1, derived from your project's test plans, is a good way to capture them.
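The idea of factoring out common settings can be sketched directly in Jython/Python. In this illustrative fragment (all names and values are hypothetical, not from any real project), a base profile holds values shared by every environment, and each environment supplies only its deltas:

```python
# Base profile: settings common to every test environment (illustrative values).
BASE_PROFILE = {
    'managed.server.listen.port': 7001,
    'cluster.multicast.port': 11001,
}

# Per-environment deltas, derived from a table like Table 1 (hypothetical hosts).
ENVIRONMENT_DELTAS = {
    'functional':  {'managed.server.hosts': ['ft_host_1']},
    'system':      {'managed.server.hosts': ['st_host_1', 'st_host_2']},
    'performance': {'managed.server.hosts': ['pf_host_1', 'pf_host_2',
                                             'pf_host_3', 'pf_host_4']},
}

def get_profile(environment):
    """Merge the common base profile with one environment's overrides."""
    profile = dict(BASE_PROFILE)
    profile.update(ENVIRONMENT_DELTAS[environment])
    return profile

profile = get_profile('system')
```

The point of this shape is that a new test environment costs only a new entry in the deltas table; the provisioning logic itself is untouched.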

Using scripts to control automation of environment build helps to promote consistency. It also permits the provisioning to be put under configuration change management. This is an important benefit. Poorly tested changes to environment configuration can have a negative effect on application performance, and can even cause application outages. By maintaining scripts under change control, scripts can be reverted to previous versions, and proposed changes to scripts can be subjected to procedures that ensure they are validated before being rolled out. Configuration management systems also provide a means of documenting the reasons for change and, as such, they provide a historical view of how the current settings were arrived at; frequently, they provide reminders of some things that won't work—they've been tried before and failed!

An often underestimated advantage of scripted provisioning is that the scripts can capture patterns for provisioning and deployment. Often, the complexity of provisioning a multitiered environment with many servers across multiple machines takes far longer and reveals many more challenges than expected. If care is taken in the authoring of its provisioning scripts, for example comments are inserted to explain the less-than-obvious details, then those scripts can be referenced by future projects to recount how provisioning challenges were addressed in that project, and they may provide a model for equivalent solutions in later projects. At Orbism, we maintain a library of reference solutions for both provisioning and deployment tasks based on and updated by our consultants' field experiences, and these provide an accessible set of model solutions for common challenges in these areas.

Automating deployment

Skeptics may say that deployment is such a trivial operation that there is little to be gained from its automation. However, unless your application is itself quite trivial, scripting its deployment will almost always pay back the effort you put into developing the scripts.

Scripting deployment helps ensure that the test conditions in successive tests are consistent. If manual deployment is used, it is harder to verify that the same steps were taken, and taken in the same sequence. If the process is automated, then the steps taken are manifest in the script content. Additionally, deployment scripts themselves can be subjected to configuration management and change management just as provisioning scripts can.

Deployment may be more involved than simply submitting a new set of deployment files to a deployer tool. There are explicit settings in deployment files that define resource requirements for the application components. The correct values for these will be environment-specific, requiring modifications to the deployment files before they can be correctly deployed. When deploying applications to WebLogic Platform 9.x environments, deployment plans are the recommended method for applying such values. The WebLogic Platform 9.2 online documentation discusses using deployment plans when deploying the same application to different target environments. For WebLogic Platform 8.1, the deployment customization may be complex, involving unpacking archive files, parsing and editing the exploded content, and then repacking the archive. The complexity is a cogent case for automation. The PO Sample contains sample code and scripts that illustrate parsing and modification of deployment descriptors for deployment.
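At its simplest, applying environment-specific values to a descriptor reduces to token substitution. The sketch below is plain Python with a hypothetical token name and element; it is not the mechanism used by deployment plans, just an illustration of the customization step:

```python
def apply_profile(descriptor_text, profile):
    """Replace @key@ tokens in a descriptor fragment with environment-specific values."""
    for key, value in profile.items():
        descriptor_text = descriptor_text.replace('@' + key + '@', str(value))
    return descriptor_text

# Hypothetical descriptor fragment with a resource-sizing token.
descriptor = '<max-beans-in-free-pool>@ejb.pool.size@</max-beans-in-free-pool>'
profile = {'ejb.pool.size': 20}
customized = apply_profile(descriptor, profile)
```

In the 8.1 case described above, this substitution step would sit between the unpack and repack stages of the archive processing.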

Often deployment, and redeployment, into WebLogic domains can produce unexpected exceptions. The BEA Support Patterns site has a section on troubleshooting deployment failure that discusses many common exceptions that can occur when deploying and redeploying applications. In many cases, the recommended remedy involves additional steps to be taken before deployment or redeployment, and these additional steps must be included in the application deployment scripts.

The impact of access controls

A key distinguishing feature of a controlled environment is that access to the environment is governed by a policy. Normally, the closer an environment gets to production, the tighter its policy becomes. To illustrate this point, Table 2 revisits the environments of Table 1 and lists some possible access constraints for each:

Functional Test
  • Developers run provision scripts.
  • Developers tear down the environment.
  • Both operations and developers may execute runtime administration and monitoring tools.
  • Developers modify provisioning and deployment scripts.
  • Developers run deployment scripts to deploy new application versions.
  • Developers start and/or abort tests.

System Test
  • Deployment team runs provision scripts.
  • Deployment team tears down the environment.
  • Operations may execute runtime administration tools.
  • Operations and developers may execute runtime monitoring tools.
  • Deployment team may modify provisioning and deployment scripts.
  • Deployment team runs scripts to deploy new application versions.
  • Test team starts and/or aborts tests.

Performance
  • Deployment team runs provision scripts.
  • Deployment team tears down the environment.
  • Operations may execute runtime administration tools.
  • Operations and developers may execute runtime monitoring tools.
  • Deployment team may modify provisioning and deployment scripts.
  • Deployment team runs scripts to deploy new application versions.
  • Test team starts and/or aborts tests.

User Acceptance
  • Deployment team runs provision scripts.
  • Deployment team tears down the environment.
  • Operations may execute runtime administration and monitoring tools.
  • Deployment team modifies provisioning and deployment scripts.
  • Deployment team runs scripts to deploy new application versions.
  • User representatives start tests.

Table 2: Illustrative Test Environment Policies

Of course, the table represents a huge simplification. A successful project will enable the various teams to collaborate in many ways. For example, administration tools may be used in a performance test environment to tune the system. Although operations staff may be assigned responsibility, it would make practical sense for this to be carried out in conjunction with system architects, application designers, and platform specialists. Even with a collaborative approach where changes are discussed between teams and agreed upon, the actual performance of any specific task should be enforced according to the policy.
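Enforcement of such a policy can be reduced to a simple lookup. The fragment below is a hypothetical sketch, loosely following the System Test row of Table 2 (the role and task names are illustrative, not from any real tool):

```python
# Hypothetical policy table: environment -> task -> roles permitted to perform it.
POLICY = {
    'system': {
        'run_provision_scripts': ['deployment'],
        'run_monitoring_tools':  ['operations', 'developers'],
        'start_tests':           ['test'],
    },
}

def is_permitted(environment, task, role):
    """Check whether a role may perform a task in a given environment."""
    return role in POLICY.get(environment, {}).get(task, [])
```

A check like this could gate the entry point of each provisioning or deployment script, so the policy is applied mechanically rather than by convention.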

Scripting to Automate Promotion

We discussed using scripts instead of wizards to provision environments and how scripts facilitate automation. Here I'll show the main players in the game of automated provisioning for BEA WebLogic Platform. I'll also provide some script samples for those who have not looked at scripting environment builds before.

Scripting languages and tools for BEA WebLogic Platform

Automation, as discussed here, uses scripting tools for configuring the environment. Another approach would be to use virtualization for provisioning, but it is not such a universally available solution. Virtualization, when available, provides a high-speed provisioning alternative to scripting and will be the subject of a future article.

A bird's-eye view of provisioning would need to cover not only WebLogic domain configuration but also the configuration of many other system components, for example:

  • network infrastructure
  • firewalls
  • operating system patches
  • hardware load balancers
  • content switches
  • database administration
  • content management systems
  • security services integration
  • back-end system connectivity

I don't have the space here to discuss end-to-end installation and configuration of all of the above, so I'm deliberately limiting the scope to WebLogic Platform, with the odd extension into Web servers and database servers. The extensions are there to illustrate how other related system components can be provisioned using similar techniques and sharing the same provisioning data, where appropriate. Sharing provisioning data between related components can eliminate many possible provisioning errors.

Many scripting tools are available to assist with automation, and this article is not the place to look for a comparison. As I am considering WebLogic Platform provisioning, I will assume the use of WLST as the primary scripting tool. WLST is a command-line tool that uses the Jython script language, and Jython is itself a Java binding of the Python script language.

In WebLogic Platform 8.1, WLST comes in two versions. The offline version (called, simply, WLST) is used to construct WebLogic domain content, whereas the separate online version (called WLSTOnline) is used to monitor or change the configuration of running domains. In WebLogic Platform 9, a single WLST tool (called WLST) runs in offline or online mode. For clarity, I will talk about using WLST in offline or online mode; if you are using WebLogic Platform 8.1, you should interpret this as meaning WLST (offline mode) or WLSTOnline (online mode).

In offline mode, WLST provides a scripted equivalent of the domain configuration wizard. Like the wizard, it builds a domain based on a domain template and then uses scripted commands to customize the domain incrementally. BEA provides a number of default generic templates that can be used by WLST. BEA also provides a template build tool that enables custom templates to be abstracted from a configured domain. Custom templates can be used by WLST to build new domains similar to the one from which the template was built. In offline mode, WLST constructs a domain directory structure and writes it to a filestore. When provisioning controlled environments, it may not be possible to write the domain directory directly to the machines that will host the domain servers due to firewall or other access controls. I will return to the topic of accommodating access controls later in this article.

WLST in online mode approximates to a scripted equivalent of the WebLogic admin console Web application. It enables the configuration of an active domain to be inspected and modified in addition to monitoring its runtime state. When running online, WLST does not rely on a template since it always operates on an existing, active domain. WLST online uses JMX to communicate with the administration server (and also the managed servers, but not normally during provisioning). This is very different from offline mode, where WLST is just writing a directory hierarchy. Access controls on JMX may be in place that limit how the administration server JMX interface can be accessed, and provisioning scripts must be able to accommodate such constraints.

Writing adaptable scripts

I identified a key benefit of automated provisioning: You can use provisioning scripts for one environment as a prototype for others. To exploit this, provisioning scripts should be written so that the script behavior adapts to specific environment requirements. To implement truly adaptable scripts, you need a sophisticated scripting language. Fortunately, Jython and Python fit this requirement very well.

If you use WLST, your provisioning scripts are written in the Jython language. When you launch WLST and execute its script, you are actually running the Jython interpreter embedded within the WLST tool. You can also launch the Jython interpreter and load WLST into it to execute WLST commands within your own Jython scripts. The full power of the Jython language is available using either approach.

Here is a snippet from an environment provisioning script, a WLST offline script:

weblogic_listen_port = int(profile[ 'managed.server.listen.port' ])

hostnames = profile[ 'managed.server.hosts' ]
managed_server_count = len( hostnames )

for idx in range( managed_server_count ) :
    managedServerName = profile[ 'managedServerNameRoot' ] + str( idx + 1 )
    listen_address = hostnames[idx]
    interface_address = listen_address
    managedServer = create( managedServerName, 'Server' )
    managedServer.setInterfaceAddress( interface_address )
    managedServer.setListenAddress( interface_address )
    managedServer.setListenPort( weblogic_listen_port )

I haven't shown how it is created, but the object profile is a Python dictionary containing key-value pairs that define the variables for each particular environment to be provisioned. These values may have been loaded from one or more external property files, or they could be set by another Jython script that is imported first. Here is an equally simple example of what a Jython import file might look like:

def getProfile () :
    env = {}
    env['managed.server.listen.port'] = 7001
    env['managedServerNameRoot'] = 'managed_'
    env['managed.server.hosts'] = \
        [ "hostname_1", "hostname_2", "hostname_3" ]
    return env

If this were in a Jython file named environment.py, then the following two lines could be used to initialize the profile object:

import environment
profile = environment.getProfile()

Note how using features of Jython can reduce the number of entries required. It is not necessary to explicitly define how many managed servers are in the environment: This is implicit from the content of the managed.server.hosts element.

I have presented a very simple example, but it illustrates some important adaptable techniques:

  • Variable aspects of the provisioning are imported into the provisioning script. In this case, they were set by a separate Jython script, but they could equally be loaded from one or more external property files.
  • The script is scalable; it will work equally for a configuration of one or very many managed servers without change.
  • The script exploits a required pattern that is common to all environments; in this case, the pattern is that all managed servers use an interface address that is the same as the listen interface, and they all use the same listen port. Additionally, each managed server's unique server name takes the form <basename><index> where <basename> is common to all servers. In the sample, <basename> is obtained from the dictionary element with key value managedServerNameRoot, and index takes on values from 1. Such patterns for configuration are commonly found in large environments.
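If you prefer property files to a Jython import, a small loader can build the same profile dictionary. This is an illustrative sketch, assuming a simple key=value file format in which comma-separated values become lists; note that values arrive as strings, which is why the provisioning snippet applies int(...) to the listen port:

```python
def load_profile(lines):
    """Parse simple key=value lines into a profile dictionary.

    A value containing commas, such as managed.server.hosts=hostname_1,hostname_2,
    is split into a Python list; all other values are kept as strings.
    """
    profile = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith('#'):
            continue  # skip blank lines and comments
        key, value = line.split('=', 1)
        value = value.strip()
        if ',' in value:
            profile[key.strip()] = [v.strip() for v in value.split(',')]
        else:
            profile[key.strip()] = value
    return profile

lines = [
    '# environment profile',
    'managed.server.listen.port=7001',
    'managed.server.hosts=hostname_1,hostname_2,hostname_3',
]
profile = load_profile(lines)
```

Either mechanism yields the same dictionary shape, so the provisioning script itself does not care which one you choose.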

Now, consider that the above managed servers are always part of a cluster. In configuring the cluster, you need to set the cluster address. This can be set either to a DNS name or to a comma-separated list of server IP addresses and port numbers. In common experience, only late-stage test environments have DNS names configured for the cluster. Let's make some minor additions to the snippet (and add in the Jython import as well):

import environment
profile = environment.getProfile()

weblogic_listen_port = int(profile[ 'managed.server.listen.port' ])
hostnames = profile[ 'managed.server.hosts' ]
managed_server_count = len( hostnames )
cluster_list = ''

for idx in range( managed_server_count ) :
    managedServerName = profile[ 'managedServerNameRoot' ] + str( idx + 1 )
    listen_address = hostnames[idx]
    interface_address = listen_address
    managedServer = create( managedServerName, 'Server' )
    managedServer.setInterfaceAddress( interface_address )
    managedServer.setListenAddress( interface_address )
    managedServer.setListenPort( weblogic_listen_port )
    if cluster_list != '' :
        cluster_list = cluster_list + ','
    cluster_list = \
        cluster_list + listen_address + ":" + str(weblogic_listen_port)

# 'cluster' refers to a Cluster MBean created earlier in the script (not shown)
if 'cluster.dns.name' in profile :
    cluster.setClusterAddress( profile['cluster.dns.name'] )
else :
    cluster.setClusterAddress( cluster_list )

In this modified snippet, you accumulate a comma-separated list of server listen addresses and ports. If a DNS name is provided in the profile (here it was not), you use that for the cluster address; otherwise, you fall back to the accumulated address-and-port list.
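The cluster-address decision can also be factored into a small, independently testable function. This is a plain-Python sketch of the same logic, detached from the WLST MBean calls:

```python
def cluster_address(profile):
    """Return the cluster address: the DNS name if configured, else a host:port list."""
    if 'cluster.dns.name' in profile:
        return profile['cluster.dns.name']
    port = str(profile['managed.server.listen.port'])
    # Build the comma-separated host:port form used when no DNS name exists.
    return ','.join([host + ':' + port for host in profile['managed.server.hosts']])

profile = {'managed.server.listen.port': 7001,
           'managed.server.hosts': ['hostname_1', 'hostname_2']}
# cluster_address(profile) -> 'hostname_1:7001,hostname_2:7001'
```

Factoring decisions like this out of the main script keeps the provisioning flow readable and lets you exercise the environment-dependent logic without building a domain.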

End-to-end provisioning: Sharing configuration data

In most test environments there will be more to provision than just the WebLogic domains. For example, the Apache Web server may be used to serve static HTTP content and also to load-balance HTTP requests to the managed servers in a cluster. The configuration of such other environment components may also be implemented in the provisioning scripts. Here's a slightly extended version of environment.py that includes one configuration value for the Apache Web server:

def getProfile () :
    env = {}
    env['managed.server.listen.port'] = 7001
    env['managedServerNameRoot'] = 'managed_'
    env['managed.server.hosts'] = \
        [ "hostname_1", "hostname_2", "hostname_3" ]
    env['apache.listen.port'] = 8080
    return env

To exploit this, you add some more lines to the provisioning script (the new lines are at the end of the snippet):

import environment
profile = environment.getProfile()

weblogic_listen_port = int(profile[ 'managed.server.listen.port' ])
hostnames = profile[ 'managed.server.hosts' ]
managed_server_count = len( hostnames )
cluster_list = ''

for idx in range( managed_server_count ) :
    managedServerName = profile[ 'managedServerNameRoot' ] + str( idx + 1 )
    listen_address = hostnames[idx]
    interface_address = listen_address
    managedServer = create( managedServerName, 'Server' )
    managedServer.setInterfaceAddress( interface_address )
    managedServer.setListenAddress( interface_address )
    managedServer.setListenPort( weblogic_listen_port )
    if cluster_list != '' :
        cluster_list = cluster_list + ','
    cluster_list = \
        cluster_list + listen_address + ":" + str(weblogic_listen_port)

# 'cluster' refers to a Cluster MBean created earlier in the script (not shown)
if 'cluster.dns.name' in profile :
    cluster.setClusterAddress( profile['cluster.dns.name'] )
else :
    cluster.setClusterAddress( cluster_list )

f = open('weblogic.conf', 'w')
f.write('<IfModule mod_weblogic.c>\n')
f.write('  WebLogicCluster ')
f.write(cluster_list)
f.write('\n')
f.write('  MatchExpression *.jsp\n')
f.write('</IfModule>\n')
f.close()

f = open('httpd.conf', 'w')
f.write('Listen ')
f.write( str( profile[ 'apache.listen.port' ] ) )
f.write('\n')
f.write('LoadModule weblogic_module modules/mod_wl_20.so\n')
f.write('<IfModule mod_weblogic.c>\n')
f.write('  Include conf/weblogic.conf\n')
f.write('</IfModule>\n')
f.close()

The extra lines at the end create two files, weblogic.conf and httpd.conf, which configure the WebLogic plug-in and the Apache Web server, respectively. Here is what is output as httpd.conf:

Listen 8080
LoadModule weblogic_module modules/mod_wl_20.so
<IfModule mod_weblogic.c>
  Include conf/weblogic.conf
</IfModule>

and here is the weblogic.conf that is generated from environment.py:

<IfModule mod_weblogic.c>
  WebLogicCluster hostname_1:7001,hostname_2:7001,hostname_3:7001
  MatchExpression *.jsp
</IfModule>

The configuration of httpd.conf uses the key value apache.listen.port, which is taken from our environment.py file. But notice that configuration of the plug-in uses data accumulated as a by-product of configuring the WebLogic cluster itself. The WebLogic domain configuration data has been used to configure another component to which it is related. Such sharing of configuration data can prevent many inconsistencies between interdependent component configurations.

Note that the technique used here, writing the configuration file directly from Jython, is not really appropriate unless the configuration files are very small. In practice, many configuration files, including those for the Apache Web server, will be too large for this simple treatment. In such cases, it is better to use Jython to output a property file with the configuration values, and then use Ant to import the property file and perform token substitution against a configuration file template. Using Ant from within WLST scripts is easy: BEA includes a Jython script to run Ant with WebLogic Platform 8.1 and 9.x installations.
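The first half of that division of labor, emitting the profile as a property file for Ant to import, might look like the following sketch. It writes any file-like object, flattening host lists back into the comma-separated form used earlier (the key names are the illustrative ones from environment.py):

```python
import io  # io.StringIO stands in here for a real file opened for writing

def write_property_file(out, profile):
    """Write profile values as name=value lines for Ant's <property file=.../> task."""
    for key in sorted(profile.keys()):
        value = profile[key]
        if isinstance(value, list):
            value = ','.join(value)  # flatten host lists to comma-separated form
        out.write(key + '=' + str(value) + '\n')

buf = io.StringIO()
write_property_file(buf, {'apache.listen.port': 8080,
                          'managed.server.hosts': ['hostname_1', 'hostname_2']})
```

Ant can then read the generated file and substitute the values into a full-sized configuration template, so the bulky file content never has to live inside the Jython script.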

Don't Over Parameterize

I've been pushing the value of writing adaptable scripts that can be used for more than one environment. However, you should guard against excessive parameterization. Don't create parameters for attributes and values that are not going to change between environments—even if your script does set a default value. Unnecessary parameters create complexity in scripts that makes them more difficult to understand. Remember, you're trying to create reusable scripts that may be used by others long after you have moved on to another project. Every unnecessary parameter you define raises this potential issue for your successor: "Why is this setting made variable when all environments adopt the same value? Is something missing?" Just keep in mind that your successor may know your new email address or phone number and chase you for an explanation that you will almost certainly have long forgotten!

Conclusion

BEA WebLogic Platform installations bring with them three powerful scripting tools in the form of WLST, Jython, and Ant. These scripting tools can be used to build powerful, adaptable, and maintainable provisioning and deployment scripts that can be applied to all the environments your project will need to ensure your applications comply with their requirements. Scripting not only improves consistency between environment builds, but it also reduces effort—so you win twice over.

Paul Nixon is Technical Director of Orbism Ltd., a consultancy that specialises in automation of deployment and system management for BEA Platform solutions. Paul has more than 30 years' experience in the IT industry.