The Background of User Experience
 
Creating Standards for Usability Test Results

Oracle director helps find common ground among software companies in a move to aid enterprise customers

Author: Anna M. Wichansky, PhD CPE, Senior Director – Oracle Applications User Experience
Revised: March 18, 2009
First Published: Jan. 30, 2008

Anna Wichansky is senior director of the Advanced UI group at Oracle Corporation, where she founded the company’s usability labs and initiated the usability engineering process. She is an experimental psychologist and certified professional ergonomist with experience simplifying technology for users in a wide array of applications, including telecommunications, transportation, computing, media, entertainment, medicine, and space travel. Anna has worked for the U.S. Department of Transportation, Bell Labs, Hewlett-Packard, and Silicon Graphics, and has consulted for federal government agencies and non-profit organizations on usability-related issues.

In the late 1990s, I had the good fortune to be part of a significant joint effort between industry and government to create standards for usability testing. This was called the Industry Usability Reporting Project, or IUsR, and it was run by the National Institute of Standards and Technology, or NIST.

My interest in the project stemmed from a request from aerospace giant Boeing, a major enterprise software customer of Oracle. Boeing had productivity goals in place for use of its software; the company wanted users to be able to come up to speed on commercial off-the-shelf software quickly, without excessive learning curves or help-desk support. Before it would buy the software, Boeing wanted usability test results from software vendors, and, of course, it wanted results that would be comparable in terms of methodology and reporting, making it easier to compare vendors.

In conversations with other major software vendors, I found that the idea of reporting such results to customer companies, under the right non-disclosure agreements, was received favorably.

A New Project

This idea led to the formation of a steering committee, including NIST members and industry representatives, and the organization of the first IUsR workshop in March 1998. The meeting was held at NIST in Gaithersburg, Md., and there were 25 attendees representing a number of large software vendors, customer organizations, consultants, and academics in the usability engineering discipline.

Photo courtesy of Anna M. Wichansky, Oracle Applications User Experience

Members of a steering committee organized by the National Institute of Standards and Technology, or NIST, meet in the late 1990s to create standards for usability testing. These members of the Industry Usability Reporting Project, or IUsR, are Keith Butler of Boeing, left; Anna Wichansky of Oracle, center and standing; Emile Morse of NIST, sitting at left; Jean Scholtz of DARPA, or the Defense Advanced Research Projects Agency, standing at right; and Sharon Laskowski of NIST, far right.

Several of us were invited to make presentations concerning what usability test data we actually collected on products and what we could propose as common denominators among our methods that would allow a common industry reporting format to be developed.

As a result of this meeting, groups were formed to deal with general management issues, methodology, results and product descriptions, and pilot-test planning. As a member of the methodology working group, my main focus was to identify reliable and valid ways of conducting and reporting on usability testing on which we could all agree.

Finding a Consensus

Although this initially sounded like a tall order, it was amazing how much consensus there was in that initial meeting about how the large vendors conducted testing and the types of data we would be willing to provide to customers. Some of the items people felt strongly about were:

  • We should not be prescriptive about testing methods, but rather concentrate on the reporting format, emphasizing the types of information customers want in order to make procurement decisions.
  • There should be empirical data collected with users. Checklists and other analytical techniques conducted by vendors were of lesser value to customers than data collected from actual users.
  • There should be some quantitative, human performance data and some qualitative, subjective data collected. Customers were interested not only in how well people performed with the software, but also in how well they liked it (see the sketch after this list).
  • There should be a minimum number of users tested (based on the literature, we recommended eight per user type).
  • There should be a template for reporting purposes that was accessible to procurement and executive audiences as well as usability professionals in the customer organizations.
  • We should recruit pairs of representatives from vendor and customer organizations to perform trials of the new reporting format.
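The third and fourth items above translate directly into the numbers a summative report contains. As a rough illustration only, here is a minimal Python sketch of how a vendor might reduce one task’s raw observations into effectiveness (task completion), efficiency (time on task), and satisfaction summaries of the kind the CIF calls for; the data structure, field names, and sample values are assumptions invented for this example, not something specified by the standard.

    # A hypothetical reduction of raw summative-test observations into
    # CIF-style effectiveness, efficiency, and satisfaction summaries.
    # All names and figures here are illustrative assumptions.
    from dataclasses import dataclass
    from statistics import mean, stdev


    @dataclass
    class TaskResult:
        participant: str      # one of the (at least eight) users per user type
        completed: bool       # effectiveness: unassisted task completion
        time_seconds: float   # efficiency: time on task
        satisfaction: int     # subjective rating, e.g. 1 (low) to 7 (high)


    def summarize(results: list[TaskResult]) -> dict:
        """Collapse one task's observations into summary metrics."""
        times = [r.time_seconds for r in results if r.completed]
        return {
            "participants": len(results),
            "completion_rate": sum(r.completed for r in results) / len(results),
            "mean_time_on_task_s": mean(times) if times else float("nan"),
            "time_on_task_sd_s": stdev(times) if len(times) > 1 else 0.0,
            "mean_satisfaction": mean(r.satisfaction for r in results),
        }


    if __name__ == "__main__":
        # Fabricated observations for eight participants on a single task.
        data = [
            TaskResult("P1", True, 212.0, 6), TaskResult("P2", True, 305.5, 5),
            TaskResult("P3", False, 600.0, 3), TaskResult("P4", True, 187.2, 6),
            TaskResult("P5", True, 254.9, 4), TaskResult("P6", True, 330.1, 5),
            TaskResult("P7", True, 276.4, 6), TaskResult("P8", False, 540.0, 2),
        ]
        for metric, value in summarize(data).items():
            print(f"{metric}: {value:.2f}" if isinstance(value, float) else f"{metric}: {value}")

In an actual CIF report, such figures would be broken out per task and user type and accompanied by descriptions of the product, users, and test conditions, which is what makes results comparable across vendors.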

NIST promptly set up a Web site where we could all communicate about the progress of our working groups. It also helped organize conference calls and an e-mail distribution list for updates and discussions.

In the first year, an informational white paper and basic guidelines for the common industry reporting format were written. A document template was also produced, as well as an example of a test report based on a fictitious product. Participant companies were also recruited for pilot tests.

Pilot Testing Plans

A second workshop was held in September 1998, where the draft Common Industry Format (CIF) for usability reporting was proposed. A third meeting was hosted at Oracle in Redwood Shores, Calif., in September 1999, and attended by additional customer and vendor organizations interested in hearing about pilot testing plans between customers and vendors, which included Oracle, Boeing, and Microsoft.

A fourth meeting was held in 2000 in Gaithersburg to discuss the results of these trials. A fifth meeting took place in 2002 to form new working groups for hardware testing and user requirements CIFs. In 2004, a workshop was hosted by Fidelity Investments in Boston to discuss CIF standards for formative user testing. And in 2005, a workshop was held at the Usability Professionals’ Association to follow up on formative test reporting.

One of the missions of NIST is to facilitate industry efforts to set their own voluntary standards. After sufficient drafts, reviews, and feedback, NIST submitted the CIF to the International Committee for Information Technology Standards (INCITS), an organization that facilitated acceptance of the document by the American National Standards Institute (ANSI). The CIF became ANSI/INCITS 354-2001. Over the next five years, NIST also facilitated its acceptance by the International Organization for Standardization (ISO), as ISO/IEC 25062:2006.

An International Standard

Creating an international standard is a long and arduous process. A document has to be sponsored and proposed by a nation’s standards organization, for example, DIN for Germany or ANSI for the United States. Then it is reviewed by representatives of other nations participating in various ISO technical committees, in this case ISO/IEC JTC1/SC7, on software and systems engineering. There is travel to international meetings to discuss and vote on new standards, so representatives have to be funded and dedicated to this purpose.
Some members of our industry working groups sat on standards technical committees, but in general, we were all grateful for the willingness, expertise, and time of the NIST project managers in shepherding these standards through the various hurdles.

Standards-making requires a long-term perspective, keeping in mind that the standards of today may not reflect the technologies of tomorrow. The best candidates for standardization are well-researched topics that are unlikely to change between product generations, such as human factors principles, methodologies, and reporting parameters.

Standards also tend to reflect well-established and generally accepted knowledge about a domain. Sometimes the latest research is not reflected in a standard because it does not yet have sufficient verification or a track record of application to be included. Still, it is reassuring that experienced managers in the usability community seem to have a great deal to agree upon that can figure in standards such as the CIF.

More than 300 participants have attended workshops and registered on the IUsR Web site, representing more than 100 organizations in more than 30 countries worldwide. A wide variety of vendor and customer organizations have used these standards in many ways.

Companies such as Oracle have revised their usability testing report templates and procedures to provide data in the CIF format for summative usability tests. This enables companies to provide such reports quickly and easily when customers request them in the sales cycle.

Customer organizations such as the Italian government have based requirements for new systems on the CIF document, and this has spawned such efforts as the Common Industry Format Specification for Usability Requirements (CISU-R) working group. NIST and the Usability Professionals’ Association have held workshops on the Formative CIF, which would enable rapid usability test reporting in a standardized document format.

Appreciating Customer Needs

Overall, I consider my experience on the IUsR project to be one of the highlights of my career in human factors in general, and software usability engineering in particular.

It definitely broadened my efforts to appreciate customer needs and provide what customers wanted. It enabled me to learn about and draw on the best practices of other industry representatives in a cooperative, rather than a competitive, way.

It was a pleasure to work with such experienced and qualified people as the other members of the steering committee and workshop subcommittees. It exemplifies what can happen when managers collaborate at the highest levels to improve the lot of users.

© ACM 2007. This is the author’s version of the work. It is posted here by permission of ACM for your personal use, and not for redistribution. The definitive version was published in interactions, 14(3), May-June 2007, 38-39.

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers or to redistribute to lists requires prior specific permission and/or a fee.

For more information on CIF

Scholtz, J., Morse, E., Laskowski, S., Wichansky, A., Butler, K., and Sullivan, K. The Common Industry Format: A Way for Vendors and Customers to Talk About Software Usability. Proceedings of HCI International 2003, Vol. 1 (Human-Computer Interaction), Jacko, J. and Stephanidis, C. (eds.), Mahwah: Lawrence Erlbaum Associates, 2003, 554-558.

Scholtz, J., Laskowski, S., Morse, E., Wichansky, A.M., and Butler, K. Quantifying Usability: The Industry Usability Reporting Project. Proceedings of the Human Factors and Ergonomics Society 46th Annual Meeting, 2002, 1930-1934.
