by Paul Nixon
BEA WebLogic Platform applications are usually deployed as part of a complex production system. Formal testing of WebLogic Platform-hosted applications requires properly controlled test conditions, and providing those conditions can be a complex task in itself. Without properly prepared environments and successful application deployment, you cannot perform formal testing, and delays will occur at a time when the project has little slack left in its delivery schedule.
Automation of the processes that build test environments can help to prevent these delays. In this article I will show how the effort in automating deployment into one test environment can be reused for other test environments.
"Please release me, let me go
My bugs aren't major anymore...."
So, your development team has built a suite of applications, deployed it successfully into a WebLogic domain, and tested the applications thoroughly with an automated test script. Perhaps the developers have read and implemented the suggestions in Michael Meiner's article Developing Web Applications in a Clustered Environment Using WLST and BEA Workshop (Dev2Dev, 2006), have tested the suite in a cluster with a Web server as its front end, and have demonstrated that the suite runs in the cluster and that failover works.
Great fantasy! The reality is that the suite has only just begun its journey to production. It has many hurdles to leap before it gets there, and the developers will make at least one more delivery before then. Tests will be run, by testers or by test tools, against the suite and on machines to which the developers have no access, and those tests will find faults; at least some of those faults will need to be fixed, and the whole roundtrip will be repeated several more times.
Tests are typically run in a set of well-specified environments. Let's look at what these controlled environments are, and at what it takes to promote an application from one test environment to the next.
I use the term "controlled environment" to mean one in which access is constrained according to some policy. The production environment is a controlled environment, and so is a staging environment. The access control policies on these two environments would be similar, but the staging environment would probably have a less restrictive access policy. A "controlled test environment" is, simply, an environment used to test an application (suite) and is controlled—that is, it has access constrained by a policy.
The aim of testing in controlled environments is to achieve repeatable test results: Given the same test on the same software in the same conditions, you should get the same results. Providing the "same conditions" requires that preparation of the test environment be consistent and repeatable. Automation of environment builds should ensure environments start in a repeatable condition, but more is required to ensure that tests are run under repeatable conditions. Access controls should be implemented to constrain who can change the environment after it is built, and what kinds of changes are permitted. If runtime changes are applied prior to a test, it is important to capture those changes in some kind of test log, so that the exact test conditions can be recreated by repeating the environment build and reapplying the runtime changes.
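As a sketch of that test-log idea (the log format and function names here are illustrative, not part of any WebLogic tooling), runtime changes can be captured as structured records and replayed after an environment rebuild:

```python
import json
import time

def log_runtime_change(log_path, component, setting, old_value, new_value):
    """Append one runtime change to a JSON-lines test log, so the exact
    test conditions can be recreated later. The format is illustrative."""
    entry = {
        'timestamp': time.strftime('%Y-%m-%dT%H:%M:%S'),
        'component': component,
        'setting': setting,
        'old_value': old_value,
        'new_value': new_value,
    }
    with open(log_path, 'a') as log:
        log.write(json.dumps(entry) + '\n')

def replay_changes(log_path, apply_change):
    """Re-apply logged changes, in order, after an environment rebuild.
    apply_change is whatever hook actually modifies the environment
    (for example, a wrapper around a WLST session)."""
    with open(log_path) as log:
        for line in log:
            entry = json.loads(line)
            apply_change(entry['component'], entry['setting'],
                         entry['new_value'])
```

Because the log records old and new values with timestamps, it also doubles as an audit trail of who changed what before a given test run.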
Access controls may be applied formally or informally, or a mixture of both. If your project uses formal access controls, they will usually be implemented through software or hardware devices that prevent the policy rules from being transgressed. If your project has a less formal culture, then some or all access controls may be applied informally, that is, by consent and adherence to accepted practice. Obviously, informal controls are easier to subvert, even unintentionally, and such projects may install audit software that checks environment configurations against expected settings and raises alerts if unacceptable variations are present.
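A minimal sketch of such an audit check, assuming the environment's configuration has already been captured into simple key/value form (the setting names used below are hypothetical):

```python
def audit_config(expected, actual):
    """Compare a captured environment configuration against the expected
    baseline. Returns a list of human-readable alerts; an empty list
    means the environment matches its built state."""
    alerts = []
    for key, want in expected.items():
        got = actual.get(key)
        if got is None:
            alerts.append('MISSING %s (expected %r)' % (key, want))
        elif got != want:
            alerts.append('CHANGED %s: expected %r, found %r'
                          % (key, want, got))
    # Settings present in the environment but absent from the baseline
    # are suspicious too: someone may have added them after the build.
    for key in actual:
        if key not in expected:
            alerts.append('UNEXPECTED setting %s = %r' % (key, actual[key]))
    return alerts
```

Run before each test cycle, a non-empty alert list signals that the environment has drifted and the test conditions are no longer the ones the build produced.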
Performance test, stress test, and user acceptance test environments are all examples of controlled test environments. Developers usually are allowed to see test results and investigate problems or errors in controlled test environments but usually will not be permitted to administer or modify the environment configuration or be involved in test execution.
A controlled environment involves more than just the hardware: It is a combination of appropriate hardware, infrastructure, and software. For example, the same hardware could be used for a performance test environment and for a stress test environment. However, the software configuration may be different. For example, a performance test environment may use instrumented JVMs to enable diagnostic tools to analyze where performance bottlenecks occur, but this would not be a requirement for stress testing.
In a well-managed project, you will migrate the application suite through a number of controlled test environments, subjecting the application to testing in each. Figure 1 provides an example of this.
Figure 1: Migration of an application through several controlled test environments
The various test environments should be identified and their use planned in a project test plan. The process of migrating the suite through the various test environments and, ultimately, to live production is called application promotion.
As part of the project test plan, your project may define entry criteria that must be achieved before an application suite is promoted from one environment to another. Deploying and testing in controlled environments can involve significant costs, as I will cover later in this article, and applying entry criteria avoids those costs being incurred if the applications are not of sufficient quality to make the testing meaningful.
In projects that use iterative or agile development methods, several versions of the application suite may be submitted for testing in one or more of the controlled environments. The test success criteria normally will be increasingly stringent or extensive for each successive version tested. Without suitable measures being taken, the cost of preparing test environments and deploying the suite can multiply, sometimes to such an extent that the value of these methods may be called into question. Fortunately, tools and techniques are available that can make promotion efficient and reliable, and I review some of these in this article.
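One such technique is externalizing environment-specific values into per-environment property files, so a single script drives deployment into every environment. The sketch below builds (but does not execute) a weblogic.Deployer command line; the property keys and file layout are assumptions for illustration, though the Deployer flags themselves are standard:

```python
def load_props(path):
    """Parse simple key=value lines from a per-environment properties
    file (file layout is an assumption of this sketch)."""
    props = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith('#'):
                key, _, value = line.partition('=')
                props[key.strip()] = value.strip()
    return props

def deployer_command(props, ear_path):
    """Assemble (but do not run) a weblogic.Deployer invocation for one
    environment, keeping all environment-specific values in props."""
    return [
        'java', 'weblogic.Deployer',
        '-adminurl', props['admin.url'],
        '-username', props['admin.user'],
        '-targets', props['deploy.targets'],
        '-deploy', ear_path,
    ]
```

A real invocation would also supply credentials (for example, via -password or a stored user-configuration file) and would execute the command, for instance with subprocess; promoting the suite to the next environment then means supplying a different properties file, not a different script.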
We can usefully divide deployment-related promotion tasks into two main stages: provisioning and deployment.
The term provisioning can broadly be defined as "providing the resources required by an application for it to perform according to specification." Resources can be anything from the BEA WebLogic Server container itself, through database connections, to Web services and back-end system connectivity. Provisioning issues are discussed in detail in Andy Lin's excellent article on automation, Automating WebLogic Platform Application Provisioning: A Case Study (Dev2Dev, 2005).
To thoroughly test your applications, you should expect to deploy and test each application many times in each one of your test environments. You can choose from two broad approaches: rebuild the environment from scratch for each test run, or provision the environment once and redeploy only the application for each run.
The former approach—rebuilding the environment for each test—produces the greatest overhead, but it has the advantage that you don't need separate scripts for provisioning and deployment, as they become two parts of the same task. On the other hand, if your test plans require complex test environments involving many servers and/or many domains, the provisioning overhead may be too much to accept for each test run, and you may opt for the latter approach.
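The trade-off between the two approaches can be sketched as a small driver that accepts the provisioning and deployment steps as callables (all names here are illustrative, not actual WebLogic tooling):

```python
def run_test_cycle(rebuild, provision, deploy, run_tests, undeploy):
    """Run one test cycle under either promotion approach.

    rebuild=True  -> rebuild approach: provision the environment from
                     scratch, so provisioning and deployment become two
                     parts of the same task.
    rebuild=False -> reuse approach: keep the standing environment and
                     only remove the previous application version before
                     redeploying.
    """
    if rebuild:
        provision()   # full environment build for this run
    else:
        undeploy()    # clean out the previous application version
    deploy()
    return run_tests()
```

With many servers or domains, the provision() step dominates the cost of each run, which is exactly when the reuse approach becomes attractive despite requiring separate provisioning and deployment scripts.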
Promoting an application suite is much more than just building a bigger version of the existing environment and deploying the suite into it. The various controlled test environments will have specific test objectives, and these objectives will manifest themselves as differences in provisioning requirements.
Table 1 below illustrates this point using an entirely fictional but not unrealistic set of outline test environment specifications for testing one suite of applications. These are just some of the potential increments between environments.
Functional Test:
- "Debug" versions of libraries and drivers
- Diagnostic components installed (for example, unit testing frameworks such as Cactus)
- Database is low spec (for example, Oracle but no RAC), and schema may not be optimized for the DBMS
- Platform and applications share database tablespace and use a "generic" schema
- Minimal number of servers in cluster
- Many back-end services dummied

System Test:
- Production versions of drivers and libraries
- Database is full-spec enterprise (for example, Oracle with RAC), but schema may not be fully optimized for the DBMS
- Separate tablespaces for WebLogic Platform tables vs. application tables
- Small number of servers installed
- Back-end services may be test implementations

Performance:
- Production versions of drivers and libraries
- JVM is instrumented
- Tuned parameter values set on the JVM and other configuration items (for example, connection pool sizes and custom execute queues)
- Performance data collection agents installed
- Database is full-spec enterprise (for example, Oracle with RAC)
- Separate tablespaces for Platform tables vs. application tables, and schemas optimized for the DBMS
- Large number of servers, possibly enough to cope with the maximum specified load
- Load generator servers configured
- Back-end services may be test implementations

User Acceptance:
- Production versions of drivers and libraries
- Database is enterprise but not necessarily full spec (for example, Oracle but no RAC)
- Separate tablespaces for Platform tables vs. application tables
- Minimal number of servers installed
- Back-end services fully configured
Table 1: Illustrative Test Environment Specifications
These differences can greatly increase the burden of provisioning. Errors in provisioning usually manifest themselves as deployment errors, so they do not show up until the first deployment of the application suite into the newly built environment.
Deployment problem resolution is often painful and protracted when controlled environments are first used. Neither the developers nor the provisioning team will have experience of analyzing deployment exceptions in complex environments. Developers often expect that successful deployment into their own test machines or integration environments means that problems of deployment into other environments have been solved. Conversely, testers and operations staff expect that developers will have solved deployment problems as part of their software delivery and are often frustrated when developers explain that they have not yet run the application in such complex and controlled environments.
The bottom line is that underestimating the effort required to promote an application can hit hard when least expected.