Chapter 8. Requirements and test management – Agile ALM: Lightweight tools and Agile strategies

Chapter 8. Requirements and test management

 

This chapter covers

  • Data-driven tests, acceptance tests, and BDD
  • Approaches to integrating different languages and tools for barrier-free development
  • Examples based on Ant, Maven, Selenium, TestNG, FEST, Fit/FitNesse, GivWenZen, XStream, and Excel

 

In this chapter, we’ll discuss how to implement collaborative and barrier-free development. We’ve already discussed tools that support release management, connecting the roles and artifacts in a task-based way. We also looked at how you can integrate the software delivery step into this process chain by integrating Mylyn with build engines. But build engines such as Jenkins, Bamboo, and TeamCity are only the infrastructure—they call your scripts, compile and test your application, and then package and deploy it. These steps don’t say anything about the quality of the software in terms of how (and if) it implements customers’ requirements.

In this chapter, we’ll focus on requirements and test management, and on integrating them with the coding phase.[1] Solid requirements management is essential for project success. “Studies of factors on challenged projects revealed that 37% of factors related to problems with requirements,” such as poor user inputs.[2] As you’ve already learned, the development phases are highly integrated. Requirements management, development, and delivery are all part of the development lifecycle.

1 Many people think that coding and testing aren’t two distinct phases; rather, they belong to one project phase called “development.”

2 See Craig Larman, Applying UML and Patterns (Prentice Hall, 2002), pg. 42.

The integrated Agile ALM approach focuses on the customer’s needs. In chapter 1 (section 1.3.4) we discussed what outside-in development comprises and that it’s an essential part of Agile ALM to achieve exactly that: a focus on satisfying the needs of the customer.[3] In this chapter, we’ll discuss outside-in development in more detail, and I’ll explain how to host it using integrated toolchains. We’ll look at some use cases in the context of acceptance tests and behavior-driven development, and we’ll focus on satisfying the needs of stakeholders. We’ll start with a data-driven approach, continue with acceptance testing, and finish with behavior-driven development.

3 For more information on outside-in, I recommend Carl Kessler and John Sweitzer, Outside-in Software Development (IBM Press, 2007).

Integrated toolchains are recommended, and we’ll introduce them with the help of some example use cases. The seamless characteristic of integrated toolchains and programming languages is what I call barrier-free because you don’t need to be concerned that there are different programming languages, different project roles, or different test types. (In chapter 9 we’ll discuss polyglot programming—the aspect of the barrier-free approach that focuses on programming languages—in more detail.) All stakeholders use the same infrastructure and frameworks for the entire development process. As discussed in chapter 1, the Agile ALM approach provides a single view of the truth—a single view of the project, its processes, data, and status—as opposed to multiple and confusing versions, such as when you have organizational or technical silos across your project. The project infrastructure is collaborative because different project roles work together while writing and managing tests. Running collaborative tests frequently, as part of your continuous integration ecosystem, leads to early feedback and living software documentation.

Because I don’t like to read overly academic stuff myself, I won’t inflict it on you. This chapter focuses on specific and applied use cases. You may develop SWT applications rather than Swing applications, or you may prefer a specific tool in the discussed toolchains to another, but the strategies are the same. In this section, you’ll gain further insight into what Agile ALM is and what it means to develop in an outside-in and barrier-free way.[4]

4 For more details on Agile requirements and test management, I recommend Lisa Crispin and Janet Gregory, Agile Testing (Addison-Wesley, 2009), and Dean Leffingwell, Agile Software Requirements (Addison-Wesley, 2011).

8.1. Collaborative tests

Software should be developed in a collaborative way. All roles, particularly developers, testers, and domain experts, should work closely to create the best software possible. But the barrier-free approach can also be supported by collaborative processes and tools.

Essential aspects of writing tests collaboratively include writing good acceptance tests, using the language of the domain expert, keeping the tests in an executable form, and considering behavior-driven development (BDD). Acceptance tests define the expectations of the customer (the person with the money) or the user (the person who is affected by or affects the product), or both. Writing acceptance tests in a ubiquitous language and in an executable way fosters outside-in development, further improves collaboration, and leads to better and more meaningful feedback loops. BDD is another way to apply outside-in development.

In 2003, Brian Marick defined an Agile testing matrix that was further refined by Lisa Crispin.[5] The matrix distinguishes between business-facing tests and technology-facing tests (see figure 8.1). A business-facing test is one that’s understandable by a domain expert, whereas a technology-facing test is one written by and for developers only. Additionally, the matrix groups tests that support the team and tests that critique the product.

5 Brian Marick’s original blog post, from August 21, 2003, can be found here: http://www.exampler.com/old-blog/2003/08/21/. Lisa Crispin’s refinements can be found in her book, Agile Testing (Addison Wesley, 2009), pg. 98.

Figure 8.1. A test matrix (a skeleton based on Lisa Crispin’s version of Brian Marick’s diagram) that arranges acceptance tests and BDD in quadrants. Tests can be divided into business-facing and technology-facing as well as those that support the team and those that critique the product.

Supportive tests directly help during the process of developing the software, whereas tests that critique the product are after-the-fact tests that validate the completed product (or a reasonable increment of it) in order to find defects. The matrix in figure 8.1 consists of four quadrants, Q1–Q4.

The lower-left quadrant (Q1) represents technical tests (often provided with the help of tools from the xUnit family). This quadrant commonly includes unit tests and component tests, where a component is more coarse-grained than a unit is and often spans different artifacts or architectural layers. These tests help to improve the design of the code and improve the internal quality of the software. These white box tests address how a specific task is solved.

Another aspect that relates to Q1 is test-driven development (TDD).[6] TDD is a widely accepted concept that includes writing tests first and refactoring the code continuously. The goal is to focus on the specific task, while eliminating waste, improving the design of the code, and setting up and maintaining adequate test coverage. In his book Clean Code: A Handbook of Agile Software Craftsmanship, Robert C. Martin lists “Three Laws of Test-Driven Development”:

6 See Kent Beck, Test-Driven Development (Addison-Wesley, 2002) and Lasse Koskela, Test Driven (Manning, 2008).

  • You may not write production code until you have written a failing unit test.
  • You may not write more of a unit test than is sufficient to fail, and not compiling is failing.
  • You may not write more production code than is sufficient to pass the currently failing test.

BDD often touches Q1 too. Within the context of Q1, BDD is similar to TDD, but with more focus on specifications that lead to low-level specifications for the code.

The tests in Q2 are often highly automated as well, but they drive the development at a higher, functional level and target the external quality of the software. There, tests define functional requirements and run as black box tests. They address the question of what the specific task is. “Design itself is the process of converting a black box to a white (or transparent) box—one in which we can clearly see all the details of how.”[7] This quadrant includes acceptance tests and BDD.

7 Donald C. Gause and Gerald M. Weinberg, Exploring Requirements, Quality before Design (Dorset House, 1989), pg. 249.

Acceptance tests are also used in Q3 after the software is developed (development milestones can be provided frequently). These manual tests address aspects that are hard to automate, such as usability.

Finally, Q4 includes tests that critique the product on a technical level. This area often addresses nonfunctional requirements.

Acceptance tests are business-facing tests. They represent functional requirements for software that’s under construction (coordinating what’s developed) or already completed, ensuring that changes don’t break existing functionality. Tests that validate that changes don’t break existing functionality are often called regression tests.

Acceptance tests should be executed automatically in order to reduce the cycle time and to deliver objective results. The technique of BDD, which we’ll look at a bit later, also supports the team while developing the software. Acceptance tests and BDD foster outside-in, barrier-free, and collaborative development. Implemented and integrated with the right tools, these strategies are powerful vehicles for requirements and test management. By integrating different quadrants of the matrix with each other, all the different test categories can be run in conjunction with one step, and the results can be aggregated.

Let’s start by looking at the basics of writing good data-driven tests. Data-driven tests are the prerequisite for any further advanced strategy.

8.1.1. Data-driven tests

An important aspect of testing is how you generate and manage the physical test data. Data-driven testing means testing with test data that’s decoupled from the test scripts. You write data-driven tests before (or while) developing the application. Using data-driven testing only for after-the-fact functional testing is often considered an antipattern.

There are many advantages to separating the data from the tests, including the following:[8]

8 See Thomas Hammell, Test-Driven Development (Apress, 2005), pg. 169.

  • It makes test data easy to edit.
  • It makes adding new test cases easier.
  • It helps reduce failures caused by invalid data.

The data-driven testing approach focuses on the separation of concerns instead of the hardcoded data, and it allows you to change data easily. You can distinguish between input data and output data; in an approach that’s fully data-driven, both types of data should be excluded from the code. Therefore, both types of data can be managed without touching the test classes. Where you need to reference the data in test classes, solutions must support variables inside these test classes and generate verifications dynamically. Examples of mediums for input and output data are flat files, HTML files, and Excel documents.

For user interface (UI) testing, you can use a capture and replay (CR) tool to collect input data (and user interactions) for test input. CR tools for UI testing can add value, but you shouldn’t rely on them solely. Captured and saved interactions result in scripts that are like source code: You must maintain these test scripts, refactor them, and optimize them. This optimization includes isolating tests after recording them, and making them robust for future changes. For example, you should identify UI controls relatively, not by an absolute position that directly depends on other controls. Software changes, and input scripts must be flexible enough to evolve along with the software.

Depending on the context, having a small set of automatic UI tests can be a good start. You could use a small set of automatic UI tests as a sanity check that runs after every build or as a first quality gate in a staged build environment. Many projects use several UI tests and find them helpful; some even automate all their acceptance tests as UI tests. But testing via the UI is often slow and brittle. Not everything should be tested via the UI; only a subset of all existing tests of different test types should be. You should always take care to slice your tests adequately. For example, if you must start a UI test in order to test a database operation or business logic, there is certainly room for improvement.

Collecting and using data for tests has been a common practice for quite some time now. But what differentiates good tests from simple data-driven tests? The answer is that good tests should be acceptance tests, which is what we’ll discuss next.

8.1.2. Acceptance tests

Acceptance tests determine whether a system satisfies its specified acceptance criteria. This helps the customer to decide whether to accept the software: “Acceptance tests allow the customer to know when the system works and tell the programmers what needs to be done.”[9] This means that acceptance tests also tell the programmers what the customer doesn’t want them to do.

9 See Ron Jeffries, Ann Anderson, and Chet Hendrickson, Extreme Programming Installed (Addison-Wesley, 2001), pg. 31.

Using acceptance tests to determine what has to be delivered to the customer is sometimes called acceptance test-driven development (ATDD). The name suggests an analogy to traditional TDD, but the latter is more focused on improving the design of the software and building the thing correctly, whereas ATDD has the goal of ensuring that the right thing is delivered and that it’s delivered when it’s supposed to be delivered. The goals and concepts of the two approaches are similar, though.

Compared to a subjective “look and approve” approach, acceptance criteria are measurable and objective. In the worst-case scenario, the acceptance criteria aren’t known, or the approval is done by the client in a capricious way.

Setting up acceptance criteria the Agile way means that the specification is neither calculated precisely (in a mathematical sense) nor complete. Rather, the criteria consist of example interactions. That’s why this approach is often referred to as specification by example.[10] The general process is compatible with traditional requirements management, where you write use cases (or user stories) that also contain scenarios.

10 Gojko Adzic, Specification by Example (Manning, 2011).

Acceptance tests can be used on different specification levels, from coarse-grained to fine-grained, starting with tests for features and stories or scenarios up to tests for tasks. Because each acceptance test examines functionality, they’re functional tests.

Acceptance tests are another example of focusing on meeting the stakeholder’s requirements as advocated by the outside-in approach. You can introduce critical requirements into the test, distinguishing between must-have features and other less important ones. Acceptance tests assess whether the application is doing the right thing; they approach the project from the macro level. Unit tests (or component tests, depending on how you slice them) technically assess the classes and modules; they focus on the micro level. Unit tests validate whether the right thing (according to the acceptance criteria) is done correctly. Acceptance and unit tests should be used in conjunction with each other.

Although it’s strongly recommended that you write acceptance tests before (or while) developing the application, acceptance tests can also be created after the application is written in special cases—for instance, in a migration scenario accessing legacy code. Acceptance criteria should be specified before starting the acceptance routine that’s approving the software.

Although you use an incremental and iterative development process, you must know the goal that you’re trying to achieve. If your work isn’t defined by concrete requirements, you won’t be effective. Whether you do it the Agile way or not, clear requirements are essential for achieving measurable and objective success. In the best case, the requirements can be executed and validated automatically, for instance, through triggering by the CI system (like those shown in this chapter). They’re part of the release and are put into the VCS along with the other coding artifacts.

Let’s now discuss the importance of ubiquitous language in testing.

8.1.3. Ubiquitous language

Who can describe the customer’s expectations better than the customers can? The domain expert has the deepest domain knowledge. This is the approach taken in the book Domain-Driven Design, where Eric Evans defines ubiquitous language as being “structured around the domain model and used by all team members to connect all the activities of the team with software.”[11] This ubiquitous language breaks down barriers between different roles and organizational units. Where other approaches use different vehicles for stakeholder communication (such as the software architecture[12]), writing acceptance tests in a ubiquitous language is a common Agile practice for stakeholder communication and asking the right questions while moving from the problem domain to the solution domain.

11 Eric Evans, Domain-Driven Design: Tackling Complexity in the Heart of Software (Addison-Wesley, 2003), pg. 514.

12 See Len Bass, Paul Clements, and Rick Kazman, Software Architecture in Practice, 2nd ed. (Addison-Wesley, 2003), pg. 27.

A ubiquitous language keeps you aligned with business values and goals. Information technology is a means to the end of achieving those business goals, so you don’t align the communication with the technical language of developers, but rather with the language of the subject area you’re working in. Consequently, it’s important for the entire team to communicate in the domain language, without having silos.

A common challenge is that customers, domain experts, and the technical crew don’t interact often enough and don’t speak the same language. Agile tries to solve that, particularly by writing tests iteratively and focusing them on functionality and requirements, which helps overcome barriers and clarify communication. This is a major difference compared to unit tests, which are written by and for developers.

The barrier-free approach of using a ubiquitous language must respect the fact that customers often don’t have a detailed technical understanding (nor do they need to). The process should also be efficient, which means the requirements should be documented in a form that the customer can work with. Teamwork is also essential: You must help and support the customer if they have problems, such as difficulty writing tests, or if they ask for implementation details.

It can also be good to have a technical person (or business analyst) stand in as a proxy for the customer, to document the requirements in this executable form. This can be helpful if the real customer isn’t available, or if the customer doesn’t want to write tests in an executable form. The proxy should be installed before a release starts; all project roles (and their responsibilities) should be fixed and assigned to people before the release starts. Clearly communicated project roles and responsibilities are essential for open and direct team communication.

Using a common and relatively unambiguous language is essential. Acceptance tests that are written in the language of the domain expert can be validated continuously if the tests are executable.

8.1.4. Executable specifications

Executable specifications allow you to use tools that read the specifications automatically (either after manually starting the process, or continuously as part of a CI process), process them against the system under test, and output the results in an objectively measurable, efficient, and readable way. The domain expert specifies the tests in simple formats, and the program writes the results after running the tests against the system under test.

Traditional Word documents aren’t executable and they’re problematic in Agile teams where you want to run tests iteratively and often. Someone must read the specs in Word documents, apply them to the system undergoing the test, verify the results, and document them manually, in isolated manual and error-prone steps. Describing requirements in an executable way fosters the barrier-free approach. Executable requirements help to minimize the number of different artifact types that express the same information about the software. Merging different mediums for documentation (single-sourcing product information) reduces the amount of traditional documentation, because the specification is the system’s functional documentation and therefore can be efficiently validated against the current software state. Executable specifications lead to living (always up-to-date) software documentation, more efficient code changes, higher product quality and less rework, and a better alignment of activities of different roles on a project (see Adzic, Specification by Example, pg. 6).

By combining different test strategies, you can profit from the best results of each. Customer-centric acceptance tests are based on data-driven tests and are written in the language of the domain expert. These specifications test the application “by example” by applying test scenarios to the system under test. Acceptance tests (in the Agile sense) are executable, just as BDD fosters executable specs, as discussed next.

8.1.5. Behavior-driven development

Behavior-driven development (BDD) promotes a special approach to writing and applying acceptance tests that’s different from traditional TDD, although BDD also promotes writing tests first. BDD was first defined in Dan North’s article, “Introducing BDD” (http://dannorth.net/introducing-bdd), in which the tests are like (functional) stories in a given/when/then format. This specification-oriented technique also uses a natural language to ensure cross-functional communication and to understand business concepts. BDD provides a ubiquitous language for analysis and emphasizes application behavior over testing. BDD fosters writing tests from a domain perspective rather than a technical perspective.

In BDD, user stories are input to test scenarios that specify what the system does. The programmer codes the test scenarios directly in the test tool.[13] BDD is a new Agile software development technique that helps software developers collaborate with businesspeople. But this isn’t its only benefit. It also simplifies and clarifies the test code.

13 James O. Coplien and Gertrud Bjornvig, Lean Architecture (Wiley, 2010), pg. 175.

Let’s discuss the benefits and principles of using BDD in your projects. First, and most importantly, BDD is a specification-oriented technique. This implies that you, as a BDD developer, will be focused on specifications as the main concept. But you’re also going to leverage BDD for verifying and writing your code. BDD is a different approach than traditional TDD, which focuses more on code verification than on the functionality the code should provide.

Why does BDD matter? Because well-defined specifications help developers write tests that cover all major aspects of system functionality. They also provide a good overview of how everything should work. BDD uses a natural language for specifying interactions and functionality, which is the easiest way to ensure good communication and understanding of business concepts by all members of the project, whether they’re developers, project managers, or domain experts.

 

Given/When/Then and Other Structures

Don’t equate BDD with given/when/then. You can use other ways to structure specifications, like “As a” (which expresses the type of user), “In order to” (which expresses the goal), “I can” (which expresses the task), and “And then” (which expresses the result). Even though BDD has been popularized with the given/when/then structure, the Fit tool (which we’ll look at later in this chapter) has been around for much longer than BDD, and it was used by people doing BDD before the BDD name came into use.

 

I’m sure you have been in situations where you start writing tests and only after some time passes does it become apparent exactly what needs to be tested and how the testing needs to be done. BDD solves this issue by defining in the first step the name of the test method that will describe your business case. Only after that do you start developing your test.

BDD requires test method names that describe the functionality each test is supposed to check. The name of the method clarifies what should be tested and how the code should work. This is extremely useful for other developers, because they only need to take one look at your test to get an idea of what it tests and how it works. You might even say that the behavior specification defines your test methods, which in turn define your application code.

Consider the following example. You’re writing code to calculate the sum of two values. You might start by creating the CalculatorTest class, which will contain the testAdd() method. Then you create the Calculator class with an add() method that will take two parameters and return their sum. This approach works fine, but you could do that in a better way by defining the behavior, creating tests with appropriate test methods (with well-defined names), and finally creating the Calculator class with the add() method. The test class might look like this:

class CalculatorTest {
  public void addsTwoAndThreeAndReturnsFive() {
     ...
  }
  public void addsMinusOneAndThreeAndReturnsTwo() {
     ...
  }
}

BDD can easily be combined with traditional approaches like TDD. All you need to care about is defining your specifications (which should represent the business behavior of your system) before developing the test and then implementing tests directly associated with appropriate specifications.

To write a BDD specification, focus on the three most important BDD phrases:

  • Given— Defines the initial state of the scenario
  • When— Defines an event (something that should happen)
  • Then— Defines the final state of the scenario

The initial state of the scenario is the beginning of your business case. It also serves as the input for the event represented by the term when. The final state should represent the end of your business case. It describes what you want to reach as a consequence of the preceding event.

Suppose a child wants to buy one can of Coca-Cola from a vending machine. We’ll consider three main scenarios:

  • The machine has enough cans of Coca-Cola; the child pays for one and receives it.
  • The machine has enough cans of Coca-Cola; the child pays an insufficient amount of money for one can and receives all the money back.
  • The machine is out of cans; the child pays for one can but the machine returns all of the money.

Here’s what the BDD approach would look like for the first scenario:

  • Given that there are enough cans of Coca-Cola.
  • When the child pays enough money.
  • Then ensure the child receives one can of Coca-Cola.
  • And ensure the child receives change.

The first line reads, “There are enough cans of Coca-Cola.” This describes the initial state of the scenario (that is, it sets an initial context) and ensures that a sufficient number of cans are available in the machine. Then the “child pays enough money” event occurs. This event represents our business logic. As a result, we should receive two outcomes, which define the final state of the scenario. These outcomes are “receives one can of Coca-Cola” and “receives change.”

The second scenario is defined here:

  • Given that there are enough cans of Coca-Cola.
  • When the child pays an insufficient amount of money.
  • Then ensure the child will not receive a can.
  • And ensure the child receives all money back.

And here is the third scenario:

  • Given that there aren’t enough cans of Coca-Cola.
  • When the child pays enough money.
  • Then ensure the child receives all money back.

These scenarios are described in natural language that can be understood by all project members. They directly represent the business flow and business expectations by describing the initial state, the events, and the final state that should be reached.

BDD is effective, as is the use of test tools such as TestNG, Selenium, and Excel.

8.2. Acceptance testing with TestNG, Selenium, XStream, and Excel

This section contributed by Simon Tiffert

In this section, we’ll test a rich internet application (RIA) with Selenium as the web driver and TestNG as our backbone for data-driven tests. The test data is serialized by XStream. Maven supports all these tools.

Before we start integrating the tools in our example use case, let’s briefly discuss the underlying technologies.

8.2.1. TestNG and the data-driven approach

TestNG (http://testng.org) is a testing framework inspired by JUnit and NUnit. It’s flexible and well suited to normal unit tests as well as more complex test types, such as integration tests and acceptance tests. It’s supported by all major Java IDEs and build systems. Features like test groups, support for data-driven testing by enabling parameterized tests with complex objects (@DataProvider), and test dependencies make TestNG a unique tool.[14]

14 See Cédric Beust and Hani Suleiman, Next Generation Java Testing: TestNG and Advanced Concepts (Addison-Wesley, 2008).

Test groups can be used to group tests for different setups, and they can be used to prioritize your tests. The first group would be the most important tests; you must run them to ensure that the main system is working. The second group is smoke tests; these should run quickly enough for you to be able to trigger them several times a day. With a smoke test, various areas of the system are analyzed but not in full detail. The last test group includes every test and tests in more detail; it should be triggered only once a day, such as in a nightly build.

A test can belong to zero, one, or multiple test groups. This is defined in an annotation at the method or class level. A group definition on the test class is inherited for every test method inside the class. A simple test assigned to the test group smoke-test could look like this:

@Test(groups = {"smoke-test"})
public void testToString() {
    User user = new User("Albert", "Einstein");
    Assert.assertEquals(user.toString(), "User: Albert Einstein");
}

Running the tests from a Maven script is pretty easy. The Surefire plug-in is included in Maven’s default configuration and searches for test cases in the src/test/java folder. It runs in the Maven test phase and looks for class names that follow these patterns: **/Test*.java, **/*Test.java, and **/*TestCase.java. You can run both unit tests and integration tests with the Surefire plug-in, but this configuration gets more complicated, and there’s no way to skip the integration tests.

The Failsafe plug-in is designed to run integration tests. It’s a fork of the Surefire plug-in, and it ensures that the postintegration phase runs even if there’s an error in the integration tests. To differentiate the tests, you can store the integration tests with any of the following patterns: **/IT*.java, **/*IT.java, and **/*ITCase.java.

Both plug-ins can run JUnit and TestNG tests. All you need to do is include the dependency of the framework. In our example, it’s this:

<dependency>
    <groupId>org.testng</groupId>
    <artifactId>testng</artifactId>
    <version>5.10</version>
    <scope>test</scope>
    <classifier>jdk15</classifier>
</dependency>

If you want to run all the tests, there’s nothing else to configure. With the help of different testng.xml files, you can set up fine-grained test configurations. You can define parameters for each test suite and include or exclude Java packages and files or the previously defined test groups. An example TestNG test suite is illustrated in the following listing.

Listing 8.1. TestNG test suite
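A minimal sketch of such a testng.xml file (the suite, parameter, package, and group names are illustrative) could look like this:

<!DOCTYPE suite SYSTEM "http://testng.org/testng-1.0.dtd">
<suite name="example-suite" verbose="1">
  <!-- Parameters can be injected into test methods via @Parameters -->
  <parameter name="serverHost" value="localhost"/>
  <test name="smoke">
    <groups>
      <run>
        <!-- Run only the tests annotated with the smoke-test group -->
        <include name="smoke-test"/>
      </run>
    </groups>
    <packages>
      <package name="org.agile.alm.*"/>
    </packages>
  </test>
</suite>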

One important feature of TestNG is the separation of test logic and test data. To catch as many error situations and corner cases as possible, you should define them in the most compact form. In the following listing, TestNG’s DataProvider is used to inject an array of arrays into the test method.

Listing 8.2. Using data-driven tests with TestNG and its DataProvider
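A sketch of such a test class could look like the following; the user data is illustrative, and a User bean with a two-argument constructor and a matching toString() method is assumed:

import org.testng.Assert;
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class UserTest {

    // The DataProvider returns an array of arrays: one inner array per test run,
    // and each entry of an inner array maps to one parameter of the test method.
    @DataProvider(name = "users")
    public Object[][] users() {
        return new Object[][] {
            { "Albert", "Einstein", "User: Albert Einstein" },
            { "Paul", "Breitner", "User: Paul Breitner" },
        };
    }

    @Test(dataProvider = "users", groups = {"smoke-test"})
    public void testToString(String firstName, String lastName, String expected) {
        User user = new User(firstName, lastName);
        Assert.assertEquals(user.toString(), expected);
    }
}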

As you can see, we’ve extracted the data out of the test logic. TestNG expects an array of objects for each test run, and each entry of that array is matched to a parameter of the test method.

Often, you will need the same data for different tests. To reuse the data, you reference the DataProvider in the @Test annotation—that’s good. The data is hardcoded inside the test class—that’s bad. The next step is to define the test data outside of the class; then other nondevelopers can manage the data, too. You shouldn’t do this for unit tests, but in functional tests, you can predefine the test data and program against this data.

The data needs to be defined in a more user-friendly format, such as XML, Excel, or whatever format best suits your needs. Let’s first look at an approach based on object trees and XML. Using test tools with a data-driven approach is important. XStream helps make this easier by serializing and deserializing the test data as XML.

8.2.2. Data-driven testing with XStream

XStream is an XML serializer and deserializer. You’ll use it when you need an object from an XML structure or vice versa. It’s easy to use and the XML is clean.

If you’re defining test data of flat objects in Java, you need to write some glue code. If you need to define object trees, you’ll soon realize that Java is the wrong language—it’s time-consuming to initialize every child object, set the values, and assign them to the parent object. This would be fine if you needed it at a few points in your application, but if there are predefined object hierarchies, you should extract them. An XML structure is a common way to describe object hierarchies; there are a lot of tools available, and you don’t need to worry about using the wrong character sets.

Let’s start with a simple example. First, we’ll add the XStream dependency to the pom.xml file:

<dependency>
  <groupId>com.thoughtworks.xstream</groupId>
  <artifactId>xstream</artifactId>
  <version>1.3.1</version>
</dependency>

Now let’s write a simple Java bean:

public class User {
    private String firstName;
    private String lastName;
    public User(String firstName, String lastName) {
        this.lastName = lastName;
        this.firstName = firstName;
    }
    ...
}

To serialize User with XStream, we can use XStream’s API as follows:

XStream xstream = new XStream();
User user = new User("Paul", "Breitner");
String xml = xstream.toXML(user);

Depending on the package name you use in your Java class, the result should look similar to this:

<org.agile.alm.entities.User>
  <firstName>Paul</firstName>
  <lastName>Breitner</lastName>
</org.agile.alm.entities.User>

As you can see, there are some package definitions in the XML file.

If you want a cleaner XML structure, you can define aliases in XStream, either as an annotation or as an explicit alias definition in code. If you control the objects yourself, annotations are a handy way to define the aliases once and forget about them, even when you use that object in multiple places in the code. Serializing with a defined alias looks like this:

XStream xstream = new XStream();
xstream.alias("user",User.class);
User user = new User("Paul", "Breitner");
String xml = xstream.toXML(user);

Working with XStream annotations looks like this:

import com.thoughtworks.xstream.annotations.XStreamAlias;

@XStreamAlias("user")
public class User {
    private String firstName;
    private String lastName;
    ...
}

In the Java code, serializing based on annotations looks like this:

XStream xstream = new XStream();
xstream.processAnnotations(User.class);
User user = new User("Paul", "Breitner");
String xml = xstream.toXML(user);

The results are the following XML:

<user>
  <firstName>Paul</firstName>
  <lastName>Breitner</lastName>
</user>

To deserialize objects back from XML, you go the other way:

String xml = "<user><firstName>Paul</firstName>"
    + "<lastName>Breitner</lastName></user>";
XStream xstream = new XStream();
xstream.processAnnotations(User.class);
User user = (User) xstream.fromXML(xml);

XStream is even more helpful if you have larger object trees for deserialization. Normally, the objects of your model are already defined, and with XStream, you can easily reuse complex objects to drive your DataProvider in TestNG.

Now let’s go back to TestNG. To use your list of objects as a DataProvider there, you can use a helper function that transforms it into an array of arrays, as shown in the following listing.

Listing 8.3. TestNG test class reading data from XML via XStream
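A sketch of such a test class, assuming the serialized users live in a users.xml file on the test classpath, could look like this:

import com.thoughtworks.xstream.XStream;
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

import java.io.InputStream;
import java.util.List;

public class UserXmlTest {

    @SuppressWarnings("unchecked")
    @DataProvider(name = "usersFromXml")
    public Object[][] usersFromXml() {
        XStream xstream = new XStream();
        xstream.processAnnotations(User.class);
        // users.xml contains a serialized list of user objects
        InputStream in = getClass().getResourceAsStream("/users.xml");
        List<User> users = (List<User>) xstream.fromXML(in);
        return toArrayOfArrays(users);
    }

    // Helper that transforms the list into TestNG's array-of-arrays format:
    // one inner array (holding the user) per test run.
    private Object[][] toArrayOfArrays(List<User> users) {
        Object[][] data = new Object[users.size()][];
        for (int i = 0; i < users.size(); i++) {
            data[i] = new Object[] { users.get(i) };
        }
        return data;
    }

    @Test(dataProvider = "usersFromXml")
    public void testUser(User user) {
        // ... assertions against the system under test
    }
}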

Transforming the list into an array of arrays allows it to fit into TestNG’s DataProvider format.

Reusing your model can be handy in data-driven functional tests. If there are ten or more parameters in your user interface that you want to check, the parameter list becomes unwieldy and results in a lot of work. If your user interface is built around your model, driving the tests from that model with XStream is the way to go. XStream makes it easy to fill in the important parts of your model.

To further explain this use case, let’s talk about using Selenium to test web apps.

8.2.3. Testing the web UI with Selenium, TestNG, and XStream

Selenium is a web test framework that drives the user interfaces with JavaScript. It has direct access to the full web page and its DOM. Different types of locators are available that tell Selenium which HTML element a command refers to on the page.

If you’re testing in the Java world, you’ll normally be faced with Selenium Remote Control (Selenium RC). Selenium RC acts as a small server that gets execution commands. It can start and stop browsers on different platforms; after the browser is started, it communicates with the injected Selenium Core, which is based on JavaScript. Different client libraries are available for different platforms to drive Selenium.

Selenium fits perfectly in a Maven and TestNG setup. The Selenium Maven plug-in starts Selenium RC on the local machine in the preintegration test phase (see the following listing).

Listing 8.4. Starting and stopping the Selenium server via Maven
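A sketch of the plug-in configuration, assuming the Codehaus selenium-maven-plugin (the version shown is illustrative), binds the server start and stop to the integration-test phases:

<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>selenium-maven-plugin</artifactId>
  <version>1.0.1</version>
  <executions>
    <execution>
      <id>start-selenium</id>
      <phase>pre-integration-test</phase>
      <goals>
        <goal>start-server</goal>
      </goals>
      <configuration>
        <!-- Run the server in the background so the build can continue -->
        <background>true</background>
      </configuration>
    </execution>
    <execution>
      <id>stop-selenium</id>
      <phase>post-integration-test</phase>
      <goals>
        <goal>stop-server</goal>
      </goals>
    </execution>
  </executions>
</plugin>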

The TestNG test suites are defined in XML, and you can tell Maven which test suite to take. Instead of using the Surefire plug-in directly (which is responsible for testing with Maven), you can use the Failsafe plug-in, which is a fork of the Surefire plug-in for running integration tests, as was discussed earlier in this chapter. The Surefire plug-in stops the build when a test failure occurs, with the result that the test environment isn’t released correctly. The Failsafe plug-in won’t fail the build during the integration-test phase, enabling the postintegration-test phase to execute.

The following listing shows the part of the POM where you define that the testng-firefox-minimal.xml test suite is taken for test definition.

Listing 8.5. Defining the tests in the POM
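A sketch of that POM section, assuming the Codehaus failsafe-maven-plugin and an illustrative path to the suite file, could look like this:

<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>failsafe-maven-plugin</artifactId>
  <version>2.4.3-alpha-1</version>
  <configuration>
    <suiteXmlFiles>
      <!-- The TestNG suite that defines which tests to run -->
      <suiteXmlFile>src/test/resources/testng-firefox-minimal.xml</suiteXmlFile>
    </suiteXmlFiles>
  </configuration>
  <executions>
    <execution>
      <goals>
        <goal>integration-test</goal>
        <goal>verify</goal>
      </goals>
    </execution>
  </executions>
</plugin>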

By using the failsafe-maven-plugin, the build won’t break even if tests fail. The TestNG test suite is a collection of include and exclude patterns and packages. Generally, for setting up Selenium tests, you need the following:

  • A system under test, accessible on the server where Selenium RC runs
  • A configuration to access Selenium RC
  • Selenium tests to verify that the web user interface is working as expected

Locally, you can start the application with the Maven Cargo plug-in. The application is running on a port you need for test configuration. You additionally need the port of Selenium RC, which is port 4444 by default. The host is localhost in this example (as you will see in listing 8.6).

You can prepare test servers with different operating systems and browsers installed. The servers can run Selenium RC as a service, and you only need to point your tests to run against these servers. The application that you want to test could be complex and so large that it’s deployed only once a day to an external location. In this case, you could run the tests on different operating systems and browsers against this external location. Be aware that parallel tests get more complicated in such a setup.

Let’s go back to the Selenium test setup. The easiest way to get started with Selenium is to use the Selenium IDE until you’re familiar with the most recently used commands. Selenium IDE is a Firefox plug-in that can record tests while you use the application under test. This allows you to replay the tests inside the Selenium IDE and export tests in various languages like Java. Once you understand the common commands, you can dig deeper into your DOM, your application, and your tests.

Recorded tests are fragile. The recording may suggest that nothing will ever change, but things like the layout or template do change, and your tests shouldn’t break when they do. Tests should rely on the stable base of your application, such as HTML elements that are marked with a unique ID. This is why we rely on tests written in a higher programming language like Java with the TestNG framework—we can include those test sequences in continuous integration easily. Furthermore, hosting the tests in TestNG enables barrier-free testing: We can aggregate functional tests with other test categories.

To add the compile and classpath dependency to the Selenium Java library, add the following entry to your Maven POM:

<dependency>
    <groupId>org.seleniumhq.selenium.client-drivers</groupId>
    <artifactId>selenium-java-client-driver</artifactId>
    <version>1.0.1</version>
</dependency>

The following listing shows a simple TestNG test class. It uses the Selenium API provided by selenium-java-client-driver.

Listing 8.6. A simple TestNG test class, including Selenium
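A sketch of such a test class could look like the following; the application URL, the browser string, and the expected text are assumptions:

import com.thoughtworks.selenium.DefaultSelenium;
import com.thoughtworks.selenium.Selenium;
import org.testng.Assert;
import org.testng.annotations.AfterClass;
import org.testng.annotations.BeforeClass;
import org.testng.annotations.Test;

public class WebSmokeTest {

    private Selenium selenium;

    @BeforeClass
    public void startBrowser() {
        // serverHost, serverPort, browser, browserURL
        selenium = new DefaultSelenium("localhost", 4444, "*firefox",
                "http://localhost:8080/");
        selenium.start();
    }

    @Test(groups = {"smoke-test"})
    public void testWelcomePage() {
        selenium.open("/");
        selenium.waitForPageToLoad("30000");   // timeout in milliseconds
        Assert.assertTrue(selenium.isTextPresent("Welcome"));
    }

    @AfterClass
    public void stopBrowser() {
        selenium.stop();
    }
}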

The browser is started with TestNG’s @BeforeClass annotation. You could also use @BeforeSuite or @BeforeTest. Keep in mind that the browser start process can take a pretty long time, so you should avoid restarts if possible.

With the initialization of the DefaultSelenium class, you need four parameters:

  • serverHost—The host name of the Selenium RC server
  • serverPort—The port of the Selenium RC server
  • browser—A Selenium-specific browser string (such as *iexplore for Internet Explorer)
  • browserURL—The URL of your application

The test in listing 8.6 uses the @Test annotation. It first opens the website, waits for the page to load (for a length of time specified in milliseconds), and then verifies whether the text appears.

Let’s now complete the data-driven web testing scenario by integrating TestNG, Selenium, and XStream. The following listing is the missing piece.

Listing 8.7. Integrating TestNG, Selenium, and XStream
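A sketch of the missing piece could look like this; the page URL, the element locators, and the getter names on the User bean are assumptions, and the method is assumed to live in a test class that combines the Selenium setup from listing 8.6 with the XStream DataProvider from listing 8.3:

// Part of a TestNG test class that holds the selenium field and the
// XStream-backed DataProvider sketched earlier.
@Test(dataProvider = "usersFromXml", groups = {"gui"})
public void testDataTable(User user) {
    selenium.open("/users/search");
    selenium.waitForPageToLoad("30000");
    // Fill the search form with the data coming from the XML file
    selenium.type("id=firstName", user.getFirstName());
    selenium.type("id=lastName", user.getLastName());
    selenium.click("id=search");
    selenium.waitForPageToLoad("30000");
    Assert.assertTrue(selenium.isTextPresent(user.getLastName()));
}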

This listing shows the remaining details of the TestNG test class that runs a data-driven test. The testDataTable method gets its test data from the XStream DataProvider that we discussed earlier in this chapter, and then it runs the test.

It can be more convenient to use Maven profiles to manage which kinds of tests you want to run. The following listing shows such an example.

Listing 8.8. Using profiles to decide which tests to run
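A sketch of two such profiles could look like this; the plug-in coordinates and suite file names are assumptions, the daily profile is active by default, and the nightly profile is activated by the nightly system property:

<profiles>
  <profile>
    <!-- Default profile for the normal (daily) test run -->
    <id>daily</id>
    <activation>
      <activeByDefault>true</activeByDefault>
    </activation>
    <build>
      <plugins>
        <plugin>
          <groupId>org.codehaus.mojo</groupId>
          <artifactId>failsafe-maven-plugin</artifactId>
          <configuration>
            <suiteXmlFiles>
              <suiteXmlFile>src/test/resources/testng-firefox-minimal.xml</suiteXmlFile>
            </suiteXmlFiles>
          </configuration>
        </plugin>
      </plugins>
    </build>
  </profile>
  <profile>
    <!-- Activated with: mvn clean install -Dnightly -->
    <id>nightly</id>
    <activation>
      <property>
        <name>nightly</name>
      </property>
    </activation>
    <build>
      <plugins>
        <plugin>
          <groupId>org.codehaus.mojo</groupId>
          <artifactId>failsafe-maven-plugin</artifactId>
          <configuration>
            <suiteXmlFiles>
              <suiteXmlFile>src/test/resources/testng-firefox-nightly.xml</suiteXmlFile>
            </suiteXmlFiles>
          </configuration>
        </plugin>
      </plugins>
    </build>
  </profile>
</profiles>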

The daily profile runs the integration test with the Maven Failsafe plug-in. The second profile has the ID nightly. It can look completely different and will run if nightly is passed as the parameter. The idea is to define the plug-in in two different profiles and then script the Failsafe plug-in with different TestNG XML configurations. You can use this pattern for various situations.

To activate the profile, you use system properties. This is handy if you need to run different profiles on the same computer. If you need different profiles—for example, for the development computer versus the build server—it’s better to activate the profiles in the settings.xml file.

To run the normal tests, type the following in your command shell console:

mvn clean install

To run the nightly tests, you can use your newly created profile:

mvn clean install -Dnightly

Selenium, TestNG, and XStream are popular testing tools and they integrate with build tools such as Maven and Ant. But many test engineers are successful with running data-driven tests using Excel, too.

8.2.4. Data-driven testing with Excel

XStream-based tests inject one object for each test run. If you’re dealing with flat structures or using Excel for other project tasks, XML can look like an overly complex, technical, developer-centric solution. There are some Excel libraries for Java available, so it’s easy to define test data within Excel sheets. This section shows you how to do that. This solution extends the infrastructure we set up for the XStream processing.

Order the Excel columns to match the parameters in your test method, with each row representing a test run. Table 8.1 shows an example Excel sheet.

Table 8.1. Example Excel sheet containing two rows, two columns, and a header

First name    Last name
Paul          Breiter
Bernd         Müller

As in the XStream solution, you need a TestNG host to host the tests and read the test data. The TestNG test method gets data from a TestNG data provider. The logic that converts the Excel sheet into an array of arrays is simple (it’s shown shortly in listing 8.9). You also need to choose an Excel library to read the sheets: the Apache POI library (http://poi.apache.org/) and the Java Excel API (http://jexcelapi.sourceforge.net/) are both good choices.

The following example uses the Apache POI library, which is added to the pom.xml file:

<dependency>
    <groupId>org.apache.poi</groupId>
    <artifactId>poi</artifactId>
    <version>3.6</version>
</dependency>

A helper method reads a file that’s located in the resources folder of your Maven project. It reads the first sheet and uses Apache POI to find the filled data range. The first row should be excluded because it contains the column headings. The following listing shows this basic example; you can extend this to more specific formats as necessary.

Listing 8.9. Reading from Excel with POI
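A sketch of such a helper, assuming a users.xls file in the test resources with the two columns of table 8.1, could look like this:

import org.apache.poi.hssf.usermodel.HSSFRow;
import org.apache.poi.hssf.usermodel.HSSFSheet;
import org.apache.poi.hssf.usermodel.HSSFWorkbook;

import java.io.InputStream;

public class ExcelDataReader {

    // Reads the first sheet and returns an Object[][] suitable for a TestNG DataProvider.
    public static Object[][] readSheet(String resource) throws Exception {
        InputStream in = ExcelDataReader.class.getResourceAsStream(resource);
        HSSFWorkbook workbook = new HSSFWorkbook(in);
        HSSFSheet sheet = workbook.getSheetAt(0);
        int lastRow = sheet.getLastRowNum();        // row 0 is the header
        Object[][] data = new Object[lastRow][];
        for (int i = 1; i <= lastRow; i++) {        // skip the header row
            HSSFRow row = sheet.getRow(i);
            int cells = row.getLastCellNum();
            Object[] values = new Object[cells];
            for (int j = 0; j < cells; j++) {
                values[j] = row.getCell(j).getStringCellValue();
            }
            data[i - 1] = values;
        }
        return data;
    }
}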

As you have seen, it’s easy to use Excel as the source for your tests. If you want to use a different flat data format, you can use CSV with a StringTokenizer or fixed-length formats. It’s up to you to choose your weapons.

Because the user (domain expert) can specify the application and can help with maintaining tests (test data), data-driven tests with Excel are considered to be acceptance tests, in a narrow sense.

 

Using Maven, TestNG, and Selenium 2 for web interface testing

Web interfaces are no longer simple forms where you enter data and click Next to reach the next page. We’re confronted with more interaction and dynamic interfaces implemented through JavaScript and Ajax. Different browsers have different features, so it’s important to test the real usage as closely as possible. On the other hand, you may have limited time for testing. Tests in real browsers are slow. You need to start the browser environment, and the (Selenium) locators are sometimes slow because you need to use the native methods.

HtmlUnit is a browser emulator written in Java that’s ideally suited for fast and reliable tests. With the Selenium 2 release, we now have a merge of Selenium 1 and WebDriver, so you can now run the same tests in real browser environments or in HtmlUnit. Which environment you choose depends on your needs.

If you’re working with a lot of JavaScript, you should test in real browser environments. But because real browser tests could easily exceed an hour, they aren’t suited to run with every check-in on the CI server. Instead, you can run them in the HtmlUnit environment and run the real browser tests nightly. In this way, you can get fast feedback for normal errors and then also detect browser differences once a day. The combination of Maven, TestNG, and Selenium 2 is a perfect setup for this solution.

 

Excel and TestNG make an effective test toolchain. Fit, TestNG, and FEST (Fixtures for Easy Software Testing) also help with creating effective acceptance tests.

8.3. Acceptance testing with Fit, TestNG, and FEST

The example of outside-in development and barrier-free testing with a chain of integrated tools that we’ll look at in this section is based on specifying, developing, and testing a Java Swing application. We’ll functionally test the application with our UI testing framework, FEST, and automatically validate the acceptance criteria with the test framework, Fit. The framework that hosts the tests is TestNG. We’ll also embed the tests in both Ant and Maven. We’ll integrate both the tests (functional tests and unit tests) and the build results.

We’ll discuss the individual tools and their integration as we come to them. Let’s start with the application under test.

8.3.1. The application

In this section, we’ll test a small Swing application. It contains an editable table of two columns, an input text field, and a button. A document listener is added to the input field. When text is entered or removed, the table is updated accordingly: only those table rows are displayed whose first column value starts with the pattern entered in the input field. For example, entering M will display the four result sets that start with M, filtering out others (assuming there are other rows in the table). Under the hood, the table is associated with a TableRowSorter (see figure 8.2).

Figure 8.2. Swing application with a table and a corresponding TableRowSorter

Let’s specify and test this functionality (and develop the code). As you can see, we need a functional test to verify this user interface behavior. Since Java 1.6, the sorter functionality has been available as part of the standard distribution, so you don’t have to implement it yourself. Generally, it’s an antipattern to test a standard API, but it offers a simple isolated use case for explaining the integration of Fit, TestNG, and FEST.

You can also extend your testing capabilities with Fit.

8.3.2. The specification

Fit (http://fit.c2.com), the Framework for Integrated Test, is a popular, free tool.[15] This tool perfectly meets our need to specify user interactions on the application and to specify expected behavior. Interactions and expected behavior are defined in HTML syntax (see figure 8.3). This is an easy-to-use interface, because customers can edit HTML using any number of tools, including Word.

15 See Rick Mugridge and Ward Cunningham, Fit (Prentice Hall, 2005).

Figure 8.3. The HTML specification in the Fit format, viewed in a browser

Fit can process the specification automatically, and it generates another HTML file containing the result of the approval. This new file consists of the same structure and content as the specification and adds the check results to it in the form of colors. We’ll talk about the result page in the next section.

 

Fitnesse

FitNesse (http://www.fitnesse.org/) is a free, standalone wiki-based tool that integrates with Fit (and with another test system called SLIM). FitNesse allows you to write your tests in a wiki syntax instead of plain HTML tables. More about FitNesse in section 8.4.

 

By writing fit.ActionFixture in the first table row, you specify that you’ll use Fit’s ActionFixture. An action fixture interprets rows as a sequence of commands to be performed in order. The ActionFixture knows four Fit commands, and they must be written in the first column:

  • start—Subsequent commands are directed to an instance of the class that’s written in that row of the table (in this case, com.huettermann.fit.FitTestActionFixture).
  • enter—Invokes the method of the class with an argument. This command takes values from the Fit specification and enters them in fields on the UI.
  • press—Invokes a method of the class with no arguments. This clicks (or presses) a button on the UI.
  • check—Invokes a method of the class with no arguments, and compares the returned value of that method with the given value in the table cell. This reads and validates values from the UI.

 

Refactoring Fit Tests

The FITpro project (http://www.luxoft.com/fit/) provides functionality to integrate Fit into Eclipse, and it offers reporting and refactoring features. This has the benefit of working on Fit tests directly inside Eclipse. You can include Fit into a build script and use this script in your IDE too. Refactoring tests is appealing because you don’t need to worry about changes not being applied to all corresponding artifact types (HTML, Java fixtures, and so on).

 

The Fit specification is associated with the system under test by a Java class (named a fixture). You have to develop this Java class by extending Fit’s ActionFixture base class (see the following listing).

Listing 8.10. The Java fixture associated with the Fit HTML table (extract)
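A sketch of such a fixture could look like the following; the FEST component names, the fixture method names, and the way the running frame is looked up are assumptions chosen to match the Fit commands described for the action fixture:

import fit.ActionFixture;
import org.fest.swing.core.BasicRobot;
import org.fest.swing.finder.WindowFinder;
import org.fest.swing.fixture.FrameFixture;

public class FitTestActionFixture extends ActionFixture {

    private FrameFixture window;

    public FitTestActionFixture() {
        // Attach FEST to the already running Swing frame (the frame name is an assumption)
        window = WindowFinder.findFrame("mainFrame")
                .using(BasicRobot.robotWithCurrentAwtHierarchy());
    }

    // Called by the Fit "enter" command: types the filter pattern into the text field
    public void filter(String pattern) {
        window.textBox("filterField").enterText(pattern);
    }

    // Called by the Fit "press" command: clicks the button
    public void apply() {
        window.button("applyButton").click();
    }

    // Called by the Fit "check" command: returns the number of visible table rows
    public String visibleRows() {
        return String.valueOf(window.table("resultTable").rowCount());
    }
}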

Via Java reflection, methods are linked (and later called) according to the names in the table (see figure 8.3) and their signatures.

To find the visual controls on the UI, drive them, and retrieve content, you can use FEST (Fixtures for Easy Software Testing; http://easytesting.org). FEST is free and is technically based on the AWT robot. As you can see in listing 8.10, you can use FEST’s fluent interface notation to navigate through the object hierarchy.

 

Note

A fluent interface is an object-oriented API leading to more readable code. It’s normally implemented by method chaining to relay the instruction context of a subsequent call. The approach was popularized by Eric Evans and Martin Fowler.

 

Before you can call the Fit test to process an HTML document, you need to write a small adapter, like the one shown in the following listing.

Listing 8.11. Calling the Fit application passing parameters (extract)
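A sketch of such an adapter, built around Fit’s FileRunner, could look like this; note that calling FileRunner.run() directly would end with System.exit(), so the sketch drives args() and process() itself (the exact fields and methods may differ between Fit versions):

import fit.FileRunner;

public class FitRunner {

    // Processes the HTML spec and writes the Fit result document.
    public void run(String inputHtml, String outputHtml) throws Exception {
        FileRunner runner = new FileRunner();
        runner.args(new String[] { inputHtml, outputHtml });
        runner.process();
        runner.output.close();
        // Signal a failure to the calling test framework if Fit found problems
        if (runner.fixture.counts.wrong > 0 || runner.fixture.counts.exceptions > 0) {
            throw new AssertionError("Fit spec failed: " + runner.fixture.counts);
        }
    }
}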

This adapter receives the HTML Fit spec and calls Fit with it. You can now execute the HTML specification and compare the defined specification with the current application functionality.

In order to process the spec, you need to glue the tests, which we’ll discuss next.

8.3.3. Gluing the tests and processing the document

To run the test, you can use the free TestNG, which you saw earlier in section 8.2.1. TestNG and its tests are the entry point for executing the Fit tests. Because we’re using unit tests in this example too (for example, with TestNG), we can integrate those tests, profit from aggregated reporting, and minimize barriers and overhead.

 

Additional glue code?

In some situations, dedicated testers use feature-rich tools to write acceptance tests in specific scripts. The problem with these types of tools is that they generally use some kind of scripting language that’s different from what the team is using for production code. Sometimes they use something like JavaScript, and other times a proprietary scripting language. Programmers on the team don’t want to switch gears and have to use a different language for writing test scripts.

An Agile team should take a whole-team approach, where everyone, regardless of their main role, is responsible for quality and for making sure all testing activities are completed for each user story and release. Accordingly, the whole team needs to choose test tools by consensus so that everyone can use them.

An advantage of the Fit/FitNesse model for tools is that if you have testers writing test cases and programmers writing the fixtures that automate them, these two groups are forced to collaborate, which is a big advantage![a]

a Special thanks to Lisa Crispin for discussing this with me and providing her opinion.

 

TestNG can not only run tests (as you can see in the example), but it can also host different types (groups) of tests in parallel, enabling you to call groups of tests or all tests. A group can be all unit tests or an integration test. The following listing shows how you can set up a TestNG class that hosts different groups.

Listing 8.12. Integrating the Fit test into TestNG
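A sketch of such a TestNG class could look like this; the class name and the folder locations are illustrative:

import org.testng.annotations.Test;

public class TableSorterTest_Integration {

    // Input folder with the HTML spec and output folder for the Fit result file
    private static final String INPUT_DIR  = "src/test/fit";
    private static final String OUTPUT_DIR = "target/fit";

    @Test(groups = {"gui"})
    public void runFitSpecification() throws Exception {
        new FitRunner().run(INPUT_DIR + "/specification.html",
                OUTPUT_DIR + "/specification-result.html");
    }

    @Test(groups = {"backend"})
    public void backendDummy() {
        // Placeholder that simulates a test belonging to another group
    }
}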

First, you append _Integration to the name of your test class to indicate that it contains integration tests. If you want, you could access classes with this suffix by reflection. Next, you specify the input folder where the HTML spec is located and the output folder where Fit puts the result file. You use FitRunner to encapsulate access to the Fit framework and set up the test method for the first group, gui. You then run the Fit runner with parameters to process the HTML document. The second test method, for the backend group, is a dummy in this example that simulates another group.
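A sketch of such a glue class might look like the following; the folder locations are assumptions, and the FitRunner adapter is the one from listing 8.11:

package com.agilealm.tests;

import org.testng.annotations.Test;

public class AcceptanceTest_Integration {

   private static final String INPUT_DIR = "src/test/fit";
   private static final String OUTPUT_DIR = "target/fit";

   // Functional GUI tests: processing the Fit spec starts and drives the Swing UI
   @Test(groups = "gui")
   public void runFitSpec() throws Exception {
      new FitRunner().run(INPUT_DIR + "/spec.html", OUTPUT_DIR + "/result.html");
   }

   // Dummy backend test, simulating a second group
   @Test(groups = "backend")
   public void backendPlaceholder() {
      // nothing to do; only demonstrates a second TestNG group
   }
}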

In this example, one group contains all the GUI tests; the other contains the backend tests. Running only the tests in the gui group runs the functional tests. More on running the tests a bit later.

Running the TestNG test will process the Fit table. FEST will start and drive the Swing application. Executing Fit will create a result table (see figure 8.4).

Figure 8.4. The Fit result document shows successful checks with a green background.

The content of the result table is exactly the same as the spec file, with one difference: The cell of a check row containing the expected result has a background color. The color is green when the expected result is identical to the actual one (as in the four cases in figure 8.4) and red if it isn’t.

8.3.4. Running tests with Ant

You won’t want to always trigger the test cycle via an IDE or even manually on the console. You can also use Ant to call the tests automatically. To do that, call TestNG inside the Ant script. The following listing shows an example of how to do that.

Listing 8.13. Running TestNG with Ant (excerpt)

In the Ant script, you use the TestNG Ant task and its nested xmlfileset element to define which tests to process. Using this TestNG feature, you can configure the tests you want to run without hardcoding the test classes in the script.
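The relevant Ant fragment might look roughly like this; the paths, target name, and classpath reference are assumptions:

<!-- Make the TestNG Ant task available (jar location is an assumption) -->
<taskdef resource="testngtasks" classpath="lib/testng.jar"/>

<target name="run-tests" depends="compile">
  <testng classpathref="test.classpath" outputdir="target/test-reports">
    <!-- The suite file decides which tests and groups to run;
         no test classes are hardcoded in the build script -->
    <xmlfileset dir="src/test/resources" includes="mastersuite.xml"/>
  </testng>
</target>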

The referenced mastersuite.xml file is illustrated in the following listing.

Listing 8.14. The TestNG mastersuite.xml defines which tests to run

This suite file aggregates several other suite files, so you can group the tests further. In this example, there are some priority tests, some mocked tests, and a common test suite.
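Such a master suite might look like the following sketch; the names of the priority and mock suite files are assumptions:

<!DOCTYPE suite SYSTEM "http://testng.org/testng-1.0.dtd">
<!-- Master suite aggregating the individual suite files -->
<suite name="master-suite">
  <suite-files>
    <suite-file path="priority-testsuite.xml"/>
    <suite-file path="mock-testsuite.xml"/>
    <suite-file path="common-testsuite.xml"/>
  </suite-files>
</suite>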

If you investigate the common-testsuite.xml file, you’ll see the following listing.

Listing 8.15. TestNG test suite defining which tests to run

This listing is an XML document that follows the TestNG schema. You can see further hierarchies and collections of tests, and you include and exclude the groups to run. The groups in the XML reference the TestNG groups annotated on the test methods of the TestNG test class. The example also demonstrates the flexibility you have in addressing tests in your Java classes: besides include and exclude patterns, you can also reference the artifacts by their package names or class names.
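A sketch of such a suite file, running only the gui group, might look like this; the suite, test, and package names are placeholders:

<!DOCTYPE suite SYSTEM "http://testng.org/testng-1.0.dtd">
<suite name="common-suite">
  <test name="functional-tests">
    <groups>
      <run>
        <include name="gui"/>
        <exclude name="backend"/>
      </run>
    </groups>
    <!-- Tests can be referenced by package (as here) or by class name -->
    <packages>
      <package name="com.agilealm.tests"/>
    </packages>
  </test>
</suite>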

Now it’s time to run the script. By calling the Ant script, you compile and package your system and test. Then Fit runs the acceptance tests by starting the application and driving the UI. Afterward, the output document is written and the system undergoing the test is stopped.

Effective reporting is essential. To validate the success of the tests, you don't need to inspect each result individually. Because we embedded the acceptance tests into the TestNG suite, we have a single entry point to the test results: one reporting document that shows the results of all the test types. Figure 8.5 shows the resulting document, which looks a bit different from the standard TestNG report.

Figure 8.5. ReportNG aggregating TestNG tests, including the functional tests in the common test suite

As you can see in figure 8.5, we integrated ReportNG (freely available at the official project website, http://reportng.uncommons.org) to improve the reporting. ReportNG is a simple reporting plug-in for TestNG that provides a nicely colored view of the test results. It also produces JUnit-format XML output for further integration into CI engines.

You can also integrate these features into the Maven run.

8.3.5. Running tests with Maven and adding to a Maven site

Depending on your overall strategy and project conditions, it can be helpful to integrate the Fit tests into Maven, but setting this up can be a bit tricky. This section shows how you can integrate Fit tests into Maven.

Fit tests are stored as files in the directory system. You need to prepare the Fit specs so they're available in the Maven build lifecycle when the tests execute, which means copying them into the target folder before the tests run. You also need to prepare a target folder where the Fit results will be placed. Ant is efficient at copying files and creating folders, but Maven, like Java, lacks an easy way to handle files, so you can use the Maven AntRun plug-in to embed an Ant script in your Maven build file.

The use of this Maven–Ant bridge should be kept to a minimum. It's possible to insert a complete Ant script in the Maven file, but doing so would probably rob you of the Maven features that made you choose Maven in the first place. (That said, fully embedding bigger Ant scripts in Maven can be a valuable first step when migrating from Ant to Maven.) In this example, we'll focus on inserting Ant tasks to copy the Fit files.

The code that copies the Fit spec and creates the result folder can look like the following listing.

Listing 8.16. Providing Fit tests

First, you add the Maven–Ant bridge to integrate Ant scripts into the Maven build lifecycle. Next, binding the Ant processing to the generate-sources phase executes the Ant tasks before any sources are processed. You then copy the Fit specs to the target folder and, finally, prepare a result folder for the Fit results.
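A sketch of the corresponding AntRun configuration might look like this; the folder paths are assumptions:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-antrun-plugin</artifactId>
  <executions>
    <execution>
      <!-- Run the Ant tasks before any sources are processed -->
      <phase>generate-sources</phase>
      <goals>
        <goal>run</goal>
      </goals>
      <configuration>
        <tasks>
          <!-- Copy the Fit specs into the target folder -->
          <copy todir="${project.build.directory}/fit">
            <fileset dir="src/test/fit" includes="**/*.html"/>
          </copy>
          <!-- Prepare the folder for the Fit result documents -->
          <mkdir dir="${project.build.directory}/fit-results"/>
        </tasks>
      </configuration>
    </execution>
  </executions>
</plugin>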

Now we have to configure Maven to find and execute the TestNG tests. The following listing configures Maven’s Surefire plug-in for this.

Listing 8.17. Configuring Maven to include the Fit tests

In this Maven plug-in's configuration, you tell the Surefire plug-in to use your test suite and your report listeners. Running the tests with Maven leads to the same test reports you already know (see figure 8.5). But by using Maven and providing the Fit files in their target folders, you also gain Maven's reporting facility, particularly the Maven website.
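A sketch of such a Surefire configuration might look like the following; the suite file path is an assumption, and the listeners registered here are the ReportNG reporters mentioned earlier:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <!-- Use the TestNG master suite instead of the default test discovery -->
    <suiteXmlFiles>
      <suiteXmlFile>src/test/resources/mastersuite.xml</suiteXmlFile>
    </suiteXmlFiles>
    <!-- Register the ReportNG listeners for the HTML and JUnit-format reports -->
    <properties>
      <property>
        <name>listener</name>
        <value>org.uncommons.reportng.HTMLReporter,org.uncommons.reportng.JUnitXMLReporter</value>
      </property>
    </properties>
  </configuration>
</plugin>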

The Maven command mvn clean install site, entered at the shell prompt, cleans Maven's working area, compiles the application under test, runs the tests, and generates a Maven website for your project. The generated site contains all the information about your build and is highly configurable to fit your individual needs. Figure 8.6 shows the website, configured to provide links to the Fit spec and results page.

Figure 8.6. The Maven site configured to include the Fit specifications and results page as links in the left sidebar

The Maven Surefire plug-in makes the TestNG test results accessible in the Project Reports area of the site. This is a good example of the barrier-free, integrated approach I've been describing: the TestNG test tool and the Surefire plug-in help implement it effectively.

Using BDD in FitNesse with GivWenZen is also an effective approach.

8.4. BDD in FitNesse with GivWenZen

This section contributed by Wes Williams

FitNesse started as a tool to make Fit more accessible by adding a wiki frontend to it. It’s implemented as a simple web server and wiki with a page editor and wiki syntax that’s easy and quick to learn. It also provides a runtime environment for the tests.

New wiki pages are created as they are in many wikis—by typing the name of the new page in WikiWord (camel case) style in the URL. To tell FitNesse that this page is a test, click the Properties button in the left menu of the page and select the Test page type.

 

Installing and running FitNesse with GivWenZen

To get started with FitNesse and GivWenZen, download the latest zip file from the official web site (http://code.google.com/p/givwenzen/downloads/list), unzip the file into a folder, and run the command java -jar ./lib/fitnesse.jar.

Once the FitNesse server is running, you can start viewing, creating, and editing wiki pages and creating tests via a simple browser interface. Point your browser to http://localhost/ and you’ll see links to the GivWenZen documentation on the Google code site, example test pages, and the tests for GivWenZen.

For more information about using GivWenZen, consult the website at http://code.google.com/p/givwenzen/.

 

All of the wiki content is saved in the FitNesseRoot folder, located in the same folder from which you started the FitNesse server. Inside it, you'll see a hierarchy that mirrors your URL path: every ParentPage.ChildPage has a ParentPage folder that contains a ChildPage folder. In each directory, you'll find a content.txt file that contains the wiki markup for the page and a properties.xml file that holds the properties, such as the page type. It's a good practice to put the FitNesseRoot directory under version control, along with the code it describes and tests.

 

Automated acceptance testing with FitNesse

If you’re creating automated acceptance tests, you should be including them in your automated build. This is also true for tests based on FitNesse.

Here are a few options for including FitNesse in an automated build:

  • Use the set of Ant tasks that come with FitNesse to integrate FitNesse into your build system.
  • Include FitNesse in your Maven-based build with the Maven FitNesse Plug-in.
  • Use the Hudson/Jenkins plug-in for FitNesse to integrate FitNesse in Hudson/Jenkins directly.
  • Use JUnit to run the FitNesse tests by using the JUnitHelper class that ships with FitNesse. This way you can create JUnit XML result files that can be reported by any build server.

 

FitNesse will need to know where to find the code that will run the tests and the application code that the tests will verify. This is done with a special wiki syntax: !path ./myclasses. Multiple paths can be created on separate lines, as shown here:

!path ./target/classes/main
!path ./target/classes/examples
!path ./lib/commons-logging.jar
!path ./lib/fitnesse.jar
!path ./lib/log4j-1.2.9.jar
!path ./lib/slf4j-simple-1.5.6.jar
!path ./lib/slf4j-api-1.5.6.jar
!path ./lib/javassist.jar
!path ./lib/google-collect-1.0-rc4.jar
!path ./lib/dom4j-1.6.1.jar
!path ./lib/commons-vfs-1.0.jar

The path is relative to the working directory in which the FitNesse server was started. Child pages inherit the path of parent pages and can add to the path.

FitNesse originally sat on top of the Fit test system, but Fit has been stagnant since reaching a mature feature set, and it isn't always easy to port to other languages. That isn't an issue if you don't need such ports, but the FitNesse team decided to implement a new test system named SLIM. FitNesse can be used with either the Fit or the SLIM test system. If you're just starting with FitNesse, consider choosing SLIM. SLIM is what we'll use in all the following examples; the previous sections of this chapter already demonstrated the use of Fit.

To use SLIM, you must tell FitNesse that you wish to use that test system. You can do this with more special wiki syntax:

!define TEST_SYSTEM (slim)

This should go in the top-level wiki page of your suite of tests, and then all child pages will inherit this property.

 

FitLibrary

FitLibrary is another option in addition to SLIM. Most people who use the Fit test system use FitLibrary, which is still maintained and has a lot of useful features. Unfortunately, the FitNesse team doesn't verify that FitLibrary still works with each release, and occasionally FitLibrary stops working with the latest version of FitNesse.

 

8.4.1. Testing with GivWenZen

Every SLIM test needs a fixture. A fixture is code—Java in this case—that executes a test. SLIM has several built-in table or fixture types: script tables, decision tables, query tables, and so on. We’ll be using a simple script table because with GivWenZen the majority of the code required to execute the tests goes into Java classes, which are referred to as step classes.

GivWenZen comes with a simple fixture, which we'll start with: org.givwenzen.GivWenZenForSlim. To tell a test page where to find the fixture, you use a special import table:

|import|
|org.givwenzen|

Now you can tell SLIM to use the fixture with a script table start command:

-|script|
|start|giv wen zen for slim|

Notice the - before the |script| cell. This hides the first row of a table in SLIM. The |script| row is purely technical and adds no value to understanding the test: it's needed when you write a test, but not when you read it, so it's better to hide it (so it's not shown in the reporting—more on that a bit later).

Our first page could look like this:

Listing 8.18. Completed test page
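As a sketch, the page might contain something like the following; the step texts beyond "A flight departing at 0800" and the delay values (taken from the scenario rows in listing 8.22, with no taxi time) are assumptions:

|import|
|org.givwenzen|
-|script|
|start|giv wen zen for slim|
|Given|A flight departing at 0800|
|And|the flight arrives at 1000|
|When|the flight departure is delayed by 20 minutes|
|Then|the flight should depart at 0820|
|And|the flight should arrive at 1020|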

Run this test by clicking the Test button in the left-hand menu. You should see results similar to those shown in figure 8.7.

Figure 8.7. Our first, simple test setup, including a fixture

The test will fail, but the fixture should start successfully. We’re working in a TDD style at the story level, so its failure is expected. As mentioned earlier, BDD is a nice extension of TDD, when done correctly.

What has happened here is that FitNesse has found the fixture and called the methods given, when, then, and and, passing in the step text as the first parameter. Because we’ve not defined these steps anywhere, GivWenZen is throwing an exception. Let’s take a quick look at the fixture we’re using:

Listing 8.19. GivWenZenForSlim script fixture
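In outline, the fixture looks roughly like the following sketch; the actual class in the GivWenZen distribution may differ in detail:

package org.givwenzen;

// Sketch of the fixture: it delegates each step text to the
// GivWenZenExecutor, which looks up and invokes the matching step method.
public class GivWenZenForSlim {

   private final GivWenZenExecutor executor;

   public GivWenZenForSlim() {
      this(GivWenZenExecutorCreator.instance().create());
   }

   public GivWenZenForSlim(GivWenZenExecutor executor) {
      this.executor = executor;
   }

   // Lowercase variants, matched by |given|...| rows
   public Object given(String step) throws Exception { return executor.given(step); }
   public Object when(String step) throws Exception { return executor.when(step); }
   public Object then(String step) throws Exception { return executor.then(step); }
   public Object and(String step) throws Exception { return executor.and(step); }

   // Uppercase variants, matched by |Given|...| rows (SLIM matching is case sensitive)
   public Object Given(String step) throws Exception { return given(step); }
   public Object When(String step) throws Exception { return when(step); }
   public Object Then(String step) throws Exception { return then(step); }
   public Object And(String step) throws Exception { return and(step); }
}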

The name of the fixture class maps to the |start| command on the wiki page. Notice that the given, when, then, and and methods all take a single string parameter. The SLIM script fixture turns rows into method calls. The method name is determined by taking the value in the first column in a row and every other column after that and concatenating them. The other columns are expected to be parameters to the method.

A test specification written as |Given|...| calls the method public Object Given(String methodString). A test specification written as |given|...| calls the method public Object given(String methodString). This is because SLIM uses case-sensitive matching for methods. To demonstrate this, our example fixture has two variants of each method: one beginning with a lowercase letter and one beginning with an uppercase letter. Writing both versions can be convenient because it leaves you free to choose either initial letter when writing the test specification.

Our fixture is simple, so the first column in the test is given, when, then, or and. We have no additional method columns, and we have one parameter column that gets passed in to the given, when, then, and and methods. It’s a fairly easy concept, and that’s all you need to understand about fixtures to use GivWenZen. Now let’s implement the steps of the test.

At present, our tests are failing with an error because the steps aren’t implemented. See the following sample error output:

__EXCEPTION__:org.givwenzen.DomainStepNotFoundException:
You need a step class with an annotated method matching this pattern: 
     'A flight departing at 0800'
The step class should be placed in the package or sub-package of bdd.steps or 
     your custom package if defined.
Example:
  @DomainSteps
  public class StepClass {
    @DomainStep("A flight departing at 0800")
    public void domainStep() {
      // TODO implement step
    }  }

The exceptions are listed at the top of the wiki page. SLIM doesn’t put them in order, but it’s easy to figure out which one belongs to which row in most cases. The error for the first row in the table states we need a step class that has a method with an annotation that matches our step text. The first thing we need to do is create a step class.

By default, GivWenZen looks for step classes in the bdd.steps package, and the class should be annotated with @DomainSteps:

package bdd.steps;
import org.givwenzen.annotations.DomainSteps;
@DomainSteps
public class FlightSteps {
}

Now that we have the step class, we can start by copying the example method from the error message and adding it to the class. Then we should fix the name of the method, because domainStep isn’t descriptive. Let’s call it createFlight and make the method return a Boolean and return false for now.

When we run the test again, the former error is gone and the first row has a red background in the first column, indicating that the row executed but failed. Returning false for a row causes SLIM to display it as a failure.

I recommend starting each BDD cycle by getting all your tests to a failure state, with no exceptions. Once there are no exceptions, make the tests pass. Go ahead and create all the methods you need; it’s easiest to start with the example methods in the exceptions GivWenZen throws. After you’re done, your step class should look similar to the following listing.

Listing 8.20. FlightSteps with default failing steps
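A sketch of that step class might look like this; the step texts for the delay step and the two then steps are assumptions that mirror the test page sketched earlier:

package bdd.steps;

import org.givwenzen.annotations.DomainStep;
import org.givwenzen.annotations.DomainSteps;

@DomainSteps
public class FlightSteps {

   @DomainStep("A flight departing at 0800")
   public boolean createFlight() { return false; }

   @DomainStep("the flight arrives at 1000")
   public boolean flightArrivesAt() { return false; }

   @DomainStep("the flight departure is delayed by 20 minutes")
   public boolean delayFlight() { return false; }

   @DomainStep("the flight should depart at 0820")
   public boolean flightShouldDepartAt() { return false; }

   @DomainStep("the flight should arrive at 1020")
   public boolean flightShouldArriveAt() { return false; }
}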

Your test should look like the one in figure 8.8.

Figure 8.8. Steps implemented (with default method bodies), no exceptions; all failed tests

Finally, you can make your test pass by implementing the methods. Step method parameters are captured with regular expression groups in parentheses, such as (.*). We'll change 0800 in the annotation for the createFlight method to (.*). Next, we'll add a string parameter to the method signature called departureTime. We'll also create a calendar object; a calendar isn't a good object for scheduling, but it will work for our simple examples. We'll set the hour of the day and the minutes based on the string passed in. Finally, we'll set the time on the flight object and change the return value from false to true.

This should result in a method similar to the following:

@DomainStep("A flight departing at (.*)")
public boolean createFlight(String departureTime) {
   Calendar departureCal = Calendar.getInstance();
   departureCal.set(Calendar.HOUR_OF_DAY,
      Integer.valueOf(departureTime.substring(0,2)));
   departureCal.set(Calendar.MINUTE,
      Integer.valueOf(departureTime.substring(2)));
   flight = new Flight();
   flight.departsAt(departureCal);
   return true;
}

You can run the test now, and the first row should turn green to indicate that the test is passing. I normally have only the then steps of my tests turn green because these are the real confirmations of completeness.

When you start on the next step, you'll see that you need exactly the same conversion to a calendar for the arrival time. We'll use GivWenZen and Java property editors to manage these types.

8.4.2. GivWenZen and Java PropertyEditors

Like SLIM, GivWenZen can use Java property editors to convert to a specific type. Let’s do that now and move the conversion code into the property editor.

Let’s create a class named CalendarEditor that extends PropertyEditorSupport and put it in the bdd.parse package. By default, Java’s java.beans.PropertyEditor functionality looks for a property editor in the same package as the class it creates. Because this is the calendar object, we probably don’t want to put it in that package. GivWenZen has another package that it looks for in PropertyEditor and that is, you guessed it, the bdd.parse package.

We need to override one method in the CalendarEditor, and that's setAsText. This is where we'll move the conversion from string to calendar. We should end up with a class that looks like this:

package bdd.parse;

import java.beans.PropertyEditorSupport;
import java.util.Calendar;

public class CalendarEditor extends PropertyEditorSupport {
   @Override
   public void setAsText(String departureTime) {
      // Convert an HHmm string such as "0800" into a Calendar instance
      Calendar departureCal = Calendar.getInstance();
      departureCal.set(Calendar.HOUR_OF_DAY,
         Integer.valueOf(departureTime.substring(0,2)));
      departureCal.set(Calendar.MINUTE,
         Integer.valueOf(departureTime.substring(2)));
      setValue(departureCal);
   }
}

Now we can change the parameter in FlightSteps#createFlight to a calendar, greatly simplifying the method. The new method signature is public boolean createFlight(Calendar departureTime).
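The slimmed-down step might then look like this sketch:

@DomainStep("A flight departing at (.*)")
public boolean createFlight(Calendar departureTime) {
   // The CalendarEditor now converts the captured string for us
   flight = new Flight();
   flight.departsAt(departureTime);
   return true;
}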

The implementation of our next method, flightArrivesAt, should be simple now. Change the 1000 value in the annotation to (.*) and add a calendar parameter called arrivalTime. Implement a new method on the flight that accepts the arrival time, and change the return statement to true. Now the first and second steps should pass.

You don’t need a property editor for converting to native types such as int, double, and so on, or for converting to a string. Everything we’ve done in code, including the fixture, should be driven with TDD. The step classes and methods and the editors we create should all follow good coding practices.

The test currently looks a bit like a unit test, and this isn’t the sweet spot for FitNesse or GivWenZen. GivWenZen is best used for functional and acceptance testing of a story. In the real world, we would already have some type of domain created, and it would need to be instantiated and interacted with. In between making each step pass, we would be writing unit tests and creating classes to integrate them into real functionality. Moving ahead and finishing the implementation of the remaining steps leads us to the following listing.

Listing 8.21. Steps and flight
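A sketch of the remaining pieces might look like the following; the Flight methods beyond departsAt, and the step texts, are assumptions:

// Flight domain object (sketch)
public class Flight {
   private Calendar departure;
   private Calendar arrival;

   public void departsAt(Calendar departure) { this.departure = departure; }
   public void arrivesAt(Calendar arrival) { this.arrival = arrival; }

   public void delayBy(int minutes) {
      departure.add(Calendar.MINUTE, minutes);
      arrival.add(Calendar.MINUTE, minutes);
   }

   public Calendar getDeparture() { return departure; }
   public Calendar getArrival() { return arrival; }
}

// Remaining FlightSteps methods (sketch)
@DomainStep("the flight departure is delayed by (.*) minutes")
public boolean delayFlight(int minutes) {
   flight.delayBy(minutes);
   return true;
}

@DomainStep("the flight should depart at (.*)")
public boolean flightShouldDepartAt(Calendar expected) {
   return sameTime(expected, flight.getDeparture());
}

@DomainStep("the flight should arrive at (.*)")
public boolean flightShouldArriveAt(Calendar expected) {
   return sameTime(expected, flight.getArrival());
}

private boolean sameTime(Calendar expected, Calendar actual) {
   return expected.get(Calendar.HOUR_OF_DAY) == actual.get(Calendar.HOUR_OF_DAY)
       && expected.get(Calendar.MINUTE) == actual.get(Calendar.MINUTE);
}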

We now have a first passing test. Quite often your tests will have steps that touch multiple parts of your domain. Because we put the steps related to each domain aggregate or service into different step classes in this example, we'll probably need to share state between them. GivWenZen allows this: you tell the GivWenZenExecutor, which is created in our fixture, what the shared state is. In the next section, we'll look at adding some additional scenarios.

8.4.3. Adding further scenarios

In the flight-scheduling program, what we might have is some additional functionality related to airports and their behavior. We might also have some scenarios related to choosing to delay a flight depending on which airport we are departing from:

As a flight scheduler
In order to see the effect that taxi time has on the departure time at an airport
When I delay a flight the departure time should be adjusted by the delay time plus the taxi time of the airport

Our next test, which we can call DelayFlightWithAirportTaxiTimeTest, could look like this:

|Given|airport XXX|
|And|airport XXX has a taxi time of 15 minutes|
|And|A flight departing at 0800|
|And|the flight departs from airport XXX|
|And|the flight arrives at 1000|
|When|the flight departure is delayed by 15 minutes|
|Then|the flight should depart at 0830|
|And|the flight should arrive at 1030|

You can give this a try: Create this test and run it. It should fail with exceptions on the given statements that aren't implemented yet, and the then statements should fail because of incorrect values.

Let’s create a new AirportSteps class and implement the airport steps such that they fail without an exception and the new flight step:

package bdd.steps;

import org.givwenzen.annotations.DomainStep;
import org.givwenzen.annotations.DomainSteps;

@DomainSteps
public class AirportSteps {
   @DomainStep("airport XXX")
   public boolean createAirport() {
      return false;
   }

   @DomainStep("airport XXX has a taxi time of 15 minutes")
   public boolean airportTaxiTimeIs() {
      return false;
   }
}

For this example, let’s create an Airport class and for the tests an AirportSteps class. We’ll create an AirportService class to give our FlightSteps and AirportSteps access to airports.

Next, we’ll create a fixture that uses an instance of the GivWenZenExecutor that knows about our AirportService. To do this, we can extend GivWenZenForSlim and override the default no-parameter constructor. In the constructor, we’ll create an instance of the GivWenZenExecutor using the GivWenZenExecutorCreator:

public class BookExampleGivWenZenFixture extends GivWenZenForSlim {
   public BookExampleGivWenZenFixture() {
      super(GivWenZenExecutorCreator.instance().
         customStepState(new AirportService()).
         create()
      );
   }
}

At this point, we would need to change the two places that use the GivWenZenForSlim fixture to our new BookExampleGivWenZenFixture. Ugh! FitNesse offers a way around this. Let's create a special wiki page called SetUp as a child of the BookExamples suite page. On this page, let's put the import table and the table that starts our new fixture, as shown here:

|import|
|org.givwenzen|
-|script|
|start|Book Example Giv Wen Zen Fixture|

This is a special wiki page that will be included at the top of every page under BookExamples. Let's remove the import and start tables from both of our current tests. After changing the properties of the BookExamples page to set the Page type to Suite, we'll save the properties. Running the tests from the BookExamples page again results in the original test still passing but the new test failing.

The AirportService can now be passed to any of our step classes. Create a constructor in the FlightSteps and AirportSteps classes that takes an AirportService as a parameter:

@DomainSteps
public class FlightSteps {
   private Flight flight;
   private AirportService airportService;
   public FlightSteps(AirportService airportService) {
      this.airportService = airportService;
   } ...

Now that we have both step classes with access to the AirportService, we can implement the new steps. Go ahead and do so; I won’t show that here because these are simple methods.

Some comments before we finish this section. I would probably not have implemented the state object in this case. The domain would probably have a real service for finding and creating airports, and I would have used that from an AirportEditor. That would have allowed me to have airport parameters instead of strings, but this example did show that you can share state between the steps.

It’s good practice to break the new test when it’s first coded and implements a new story. Tests, fixtures, and step classes are code, and they should be treated like all code. Your tests must be maintainable or they will stop being used and become useless.

 

Slicing Fixtures

An issue I have seen is fixtures that become too big or have an inheritance hierarchy that's too deep. This makes them difficult to reuse and definitely more difficult to understand. Your fixtures, step classes, and tests should have a logical organization that matches that of your application. The step class idea seems to help with this, but it doesn't guarantee it. As you refactor your application and your domain, reorganize your tests and the code that goes with them to match the current organization or structure of your domain. Not doing so will lead to confusion and will increase the difficulty of maintaining the tests.

 

8.4.4. Creating scenarios

Scenarios allow steps to be grouped and parameterized so you can use them multiple times with different parameters. A scenario table starts with a row that identifies the table as a scenario by putting the word scenario in the first column. The remaining columns work similarly to the method lookup of a script table: start with the second column and use every other column after that to build the scenario name. In between those columns are the names of the parameters.

The following listing shows an example scenario for the delay flight tests.

Listing 8.22. Delay flight with airport taxi time scenario
|scenario|delayed|delayBy|flight|origDepartTime||origArriveTime|with taxi time|taxiTime|should adjust departure|newDepartTime|and arrival|newArriveTime|times|
|Given|airport XXX|
|And|airport XXX has a taxi time of @taxiTime minutes|
|And|A flight departing at @origDepartTime|
|And|the flight departs from airport XXX|
|And|the flight arrives at @origArriveTime|
|When|the flight departure is delayed by @delayBy minutes|
|Then|the flight should depart at @newDepartTime|
|And|the flight should arrive at @newArriveTime|

The name of the scenario in this example is “delayed flight with taxi time should adjust departure and arrival times.” See how this is similar to the script table? But notice one strange thing: there's an empty column between origDepartTime and origArriveTime. I left it blank so that the scenario name reads well; the blank column is needed because of the “every other column” rule.

The parameters for the scenario are delayBy, origDepartTime, origArriveTime, taxiTime, newDepartTime, and newArriveTime. Looking down through the steps, notice that we replaced the exact values with the parameter names prefixed with an @ symbol.

You can include a scenario page by inserting an !include line at the top of your test page and then calling the scenario with a table of values:

!include DelayFlightWithAirportTaxiTimeScenario
|delayed flight with taxi time should adjust departure and arrival times|
|orig depart time|orig arrive time|taxi time|delay by|new depart time|new arrive time|
|0800|1000|0|20|0820|1020|
|0800|1000|15|15|0830|1030|

The include feature is a nice way to gain reusability and remove duplication. It's also possible to have scenarios automatically included in a page by putting them in a page called ScenarioLibrary. Figure 8.9 shows a more complex example, also using scenarios.

Figure 8.9. Passing scenario tests

To sum up, we have FitNesse up and running with GivWenZen. We have created a couple of tests in the BDD style with GivWenZen, a fixture, and a couple of step classes.

Automated acceptance tests add a lot of value to a project. Like unit tests, they increase the confidence you have in changes you’re making to the application. They describe the application to the whole team and are a meaningful way to collaborate on adding value to your application. But there’s a cost: You must maintain these tests.

8.5. Summary

In this chapter, we discussed requirements management and testing, and integrating these phases with the coding phase. Requirements management, development, and delivery are all part of the development lifecycle. This chapter introduced collaborative and barrier-free testing. We talked about data-driven tests, acceptance tests, and behavior-driven development. By discussing real-world examples, we learned how to integrate tools seamlessly.

The next chapter will explore another aspect of collaborative and barrier-free development and testing. With Groovy and Scala, we'll look at languages other than Java that also run on the JVM. They're part of the polyglot programming movement and bridge different technologies, including tools and languages.