
Chapter 6. Creating a productive development environment

 

This chapter covers

  • Approaches to accelerating development on the developer’s desktop
  • Strategies and tools for streamlining the development and build process
  • Tools like Mockito and Cargo that help with testing and deployment

 

This chapter illustrates strategies and tools for setting up controllable and highly maintainable environments that are isolated from those of other developers and production systems, and that include private workspaces (sometimes called sandboxes). But what does isolation have to do with collaborative team play? In fact, being able to work in isolation is essential for effective software development and team collaboration. Management has the obligation to foster reusability and protect investments, but it’s important for all stakeholders, such as developers, to be familiar with strategies for creating a productive development environment.

In this chapter, we’ll first look at what makes productive workspaces and review concepts from earlier chapters, including how to build code in sandboxes. Next, we’ll take a deeper look at a technique for creating testing stubs called mocking, which uses substitutes for real objects in tests. We’ll discuss mocking in general, and Mockito—a leading Java mocking framework—in particular. Lastly, we’ll talk about Cargo, which is a smart interface to application containers, and we’ll discuss TeamCity for running private builds on a remote build machine. We’ll also cover the importance of having consistent builds whether they’re triggered in a private workspace or on the official build server. I call this capability a congruent build.

6.1. Congruent builds and workspace management

Developers need consistent environments and private workspaces that are controlled and isolated from unexpected changes. This helps developers reproduce and detect bugs. Key practices include versioning the IDE configuration in a version-control system (VCS) and checking in all dependencies, not only code and components, but also tools like Tomcat and Ant.

6.1.1. Workspace management and the VCS

As a rule, put as much into your VCS as you can, and give control to the relevant stakeholders, where it makes sense. This makes it much easier to rebuild a development environment or to switch to another machine quickly. Additionally, workspaces that can be reproducibly set up from the VCS help to maintain consistent standards across the team. Because the workspace is the first rung of the staging ladder, it’s important to have workspaces in a defined state that can be reproduced automatically. You should also commit the default configuration settings of your developed software or of the tools you use to the VCS. These settings are valid for all developers and can be used by all developers to test and run the application from inside their workspaces. Personalized configuration settings, such as individual usernames or individual database schemas, shouldn’t be kept in the VCS as part of the developed software, because they can’t be shared across the team.

Many teams find it helpful to put the Jenkins configuration settings into the VCS. They’re stored on disk as plain files and can easily be added to version control. It’s also convenient to keep build-triggering snippets in the VCS. For example, if you run a CruiseControl build server, you’ll have a build machine automating builds for different components or projects. You could put the CruiseControl control script (config.xml) into the VCS, or this container script could be stored outside of any specific project that’s built with CruiseControl. You could put project-specific, build-related code snippets into the VCS folders belonging to the project, or keep these snippets on the build server, checking them out of the VCS and including them with native XML entities. Another solution is to use CruiseControl’s include.projects element, which includes the configurations of different build projects.

Another important aspect of productive environments is the ability to check out a version of the complete project and run all automated tests quickly. Among other things, this approach allows you to check out sources from the VCS and run tests in one step, without having to manually start servers or perform similar chores. This ability is often associated with a headless running mode: running tests without having to start a complete environment, IDE, or user interface. This gains even more significance with complicated technologies such as JBoss, Tomcat, and others. It’s also important to use the API and any related tool support where available, particularly in the context of testing. For instance, if you use Spring, get acquainted with the Spring support for JUnit.

6.1.2. Workspace management and integrating code

Integrating code can be difficult. Developers must work on their code in isolation and must keep up with the functions completed by other developers. As a centralized synchronization point, the VCS and the continuous integration (CI) server help with this effort. The VCS contains the successfully integrated code; think of it as being the authoritative single source of truth. During development, developers are continuously committing to and updating from the codebase in the VCS. A typical sequence of activities can look like this:

  1. Get up to date by synchronizing with the VCS and updating changes.
  2. Make your own changes in alignment with the tasks.
  3. Prior to each check-in, run a local build with all tests.
  4. Update the workspace with the latest version in the VCS.
  5. Rebuild and retest.
  6. If everything looks good, then check in or commit your changes to the VCS.

A merging conflict can result from synchronizing your changes with the most recent version in the VCS, but most of the time you can easily resolve the conflict and check in your changes to produce a new, consistent version of the software.

Although these kinds of merge conflicts can be painful, another category of conflict is even worse. Because you only check merged code into the VCS when it’s free of compile errors, the remaining category of errors concerns semantic correctness. Developers implement customers’ requirements, and the source code expresses functionality and semantic behavior. Merging different versions of the sources leads to one version that contains all the interdependencies between the components and their functionality. Merging code and checking the new version into the VCS can therefore produce a software version that’s free of compile errors but that no longer implements the customer’s requirements.

How can you minimize the probability as well as the risks of merge conflicts? When you check in small changes frequently, you minimize the chances of a conflict and ensure that any conflicts will be minor and easily resolved. Semantic changes can be detected early and often by writing automated tests and running them continuously. These tests compare the current behavior of the application with the expected one. We’ll look more at tests in chapter 8.

The more throughput your Agile team has, the more important it is to find bugs early, which means you want to integrate early and often and to work in a private, isolated sandbox. To deliver as much software as possible in short iterations, you need workspaces isolated from outside changes: places where developers can work on their code and concentrate on finishing their individual tasks.

 

Isolation and Database Systems

Isolation is a core feature in database systems. It defines how and when changes made by one operation become visible to other concurrent operations. Isolation is one of the ACID (Atomicity, Consistency, Isolation, and Durability) properties of database management systems. Because the term isolation suggests a noncollaborative approach, I prefer to talk about productive workspaces.

 

In the normal course of development, the work your colleagues do results in changes being committed to the VCS that could potentially impact your environment, and it’s difficult to get any work done if the code in your sandbox is changing constantly. For example, if you continuously refresh your sandbox by pulling the latest changes from the VCS, you’ll never have a reliable workspace, because integrating each atomic change into your workspace impacts the stability of your environment. Constant updates lead to churn and waste a lot of time. They also prevent you from reliably tracking the results of merging others’ changes with your own code (which hasn’t yet been committed to the VCS).

Agile teams do integrate, but integration can be done differently in different contexts:

  • On a central build server, teams integrate continuously. This may lead to broken builds and failing versions, but this is tolerated in order to identify bugs and integration issues early.
  • Developers work in their private workspaces and they have control over the version of the code that they’re working on. They control when and how their own isolated environment changes. Developers need to set up an environment and a flow that enables them to keep up with the code line, which is changing continuously, while empowering them to make progress without being distracted by changes made by others.

The build system on your local workspace should be as close as possible to the integration build system, with at least the same compiler and versions of external components that are required for the build. Typical features that may not be included locally are comprehensive test coverage and integration with or connection to all external resources, such as databases.

6.1.3. Workspace management and running tests

Besides source code and build scripts, all tests and the code for integrating with remote resources are maintained in the developers’ workspaces as well. But developers don’t run all of those tests locally and often don’t connect to remote resources from their workspaces. Typically, functional tests run exclusively on a CI system where remote resources are connected.

These differences are a good reason to keep up the development flow: Waiting too long for test feedback or for a full-fledged infrastructure slows you down. Instead of running the full set of tests as a single group, you should categorize your tests. By categorizing tests according to type, you make your builds more Agile, and tests run in a more focused way and more frequently.

TestNG[1] is a popular Java testing framework that was created with the intent of improving upon the earlier testing frameworks (such as JUnit). TestNG allows you to use test groups, and you can determine which groups of tests run where and when (see the following listing).

1 See Cédric Beust and Hani Suleiman, Next Generation Java Testing, TestNG and Advanced Concepts (Addison-Wesley, 2008).

Listing 6.1. Different test groups with TestNG
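As a rough sketch of what such a grouping can look like (the class and group names are illustrative assumptions, not taken from the original listing), TestNG lets you assign tests to groups via the groups attribute of the @Test annotation:

import org.testng.annotations.Test;

public class SmokeAndIntegrationTest {

    // Fast sanity check, suitable for every local build
    @Test(groups = { "smoke" })
    public void applicationStartsUp() {
        // ...
    }

    // Slow test against real infrastructure, run on the central build server
    @Test(groups = { "integration" })
    public void searchIsStoredInDatabase() {
        // ...
    }
}

You can then select groups per environment, for example by listing them in a testng.xml suite or by passing Surefire’s groups property on the command line (mvn test -Dgroups=smoke), so only the fast tests run locally.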

Test groups allow you to run smoke tests or sanity checks locally. Then, you can run a full set of tests on the centralized build server. We’ll discuss TestNG in more detail in chapter 8.

6.1.4. Workspace management and dependencies

Many developers find it difficult to manage isolation in a development workspace, and without isolation, you can have unexpected problems with build dependencies. Managing your code and the isolation of your workspace is essential. Automated build scripts can help to build your code and all required dependencies. If something isn’t part of your workspace, you must ensure that it can’t impact the work that you’re doing. Sometimes you may find that other developers have private copies of artifacts that, if left uncontrolled, can impact your build.

Ideally, your workspace should be isolated so that it contains only code that you’ve built locally. Source code of components built by other departments or third-party libraries shouldn’t be included in your workspace. In fact, having unnecessary parts of software in your workspace leads to less reusability and more possible variations, and both can prevent you from improving the whole development process. Sources of third-party libraries can be used to debug, but it’s not your job to build them. Remember, the component vendor or provider is responsible for the build script that constructs the reproducible component.

This is one reason why the Maven approach is appealing. It includes dependent artifacts as binaries; pulling their sources into the local workspace is optional. This way you can rely upon stable versions of the component dependencies. Maven describes components through coordinates consisting of a collection of attributes (groupId, artifactId, and version):

<dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.0</version>
    <type>jar</type>
    <scope>test</scope>
</dependency>

You include a dependency by declaring the dependency on another artifact in XML. The rest is done by Maven, eliminating manual and error-prone copying of JARs. The build system shouldn’t depend on where it’s started or how it’s used. For instance, you shouldn’t rely on an IDE’s native mechanisms to build your system. Instead, use build scripts that you can execute from the command line, or even better, from a CI server.

These build scripts can be triggered from the console (with Ant or Maven commands), from inside the IDE, or from an application (such as a build server). Ant has been integrated into IDEs for years, and the Maven integration is mature as well. For Eclipse, Sonatype offers the free m2eclipse plug-in, which provides dependency management, support for Maven archetypes, the ability to search and browse Maven repositories, automatic downloading of dependencies, and convenient POM editing, among other features (see http://m2eclipse.sonatype.org/).

 

Maven and IDEs

All major Java IDEs have native Maven support, including Eclipse, NetBeans, and IntelliJ IDEA.

 

Dependency management, including the illustration of the dependencies, is a big benefit of Maven and m2eclipse. Figure 6.1 shows an example from a workspace.

Figure 6.1. Dependencies visualized in real time by Maven and m2eclipse. The color of the background expresses the scope. Dependencies with compile scope are displayed with a darker background. Dependencies with a white background have other scopes, such as test or runtime.

All artifacts have dependencies on other artifacts, including transitive dependencies (dependencies of their dependencies)—this is part of the Java classpath approach. Using Maven and m2eclipse, all dependencies are described as part of the POM. M2eclipse fetches this information and provides POM editing support in your IDE, as well as a dependency visualization feature. This allows you to immediately see which artifact (identified through groupId, artifactId) depends upon which other specific artifact. The transitive dependencies are of special interest, as are the conflicts. (A conflict can occur, for example, when two artifacts depend on different versions of the same third artifact.)

 

Sonatype Professional

Sonatype Professional (www.sonatype.com) is a commercial product that ties together open source products like Hudson, Nexus, m2eclipse, and Eclipse into an easy-to-install, supported, out-of-the-box solution.

 

In order to manage projects in the workspace, m2eclipse must deal with Eclipse classpaths, which you’d otherwise configure by hand to enable compiling and development. The POM and its specified dependencies already contain the information needed to set up the classpath (and related dependencies), and a central build system works with the POM rather than with IDE classpaths. M2eclipse keeps the Eclipse classpath in sync with the POM for you: Add a dependency to your POM, and m2eclipse automatically downloads the artifact and adds it to the classpath.

This approach is efficient. To accomplish it, m2eclipse uses a Maven classpath container (see figure 6.2). All artifacts (in binary format) are referenced automatically, and they’re stored in your individual local repository. Consequently, you don’t need to check the JARs into the project repeatedly; you only manage the meta model (the POM) and put this document into the VCS. In the figure, the local repository is located under C:\app\maven_repo.

Figure 6.2. M2eclipse manages the Eclipse Maven classpath container. In your Eclipse project, all JARs are referenced as binary dependencies. No manual referencing or checking of artifacts is needed, and the approach is congruent, both in your workspace (IDE) and in your build script.

6.1.5. Workspace management and bootstrapping the development

M2eclipse also supports using Maven archetypes, which is a feature that helps to enable productive environments. The Archetype plug-in allows you to create a Maven project from an existing template called an archetype. As a result, you get a basic build script skeleton derived from the common template. You can also create an archetype from an existing project.

Archetypes can be described, built, and delivered with Maven. An archetype project has a special structure; the main difference from a normal Maven project is its archetype-resources and META-INF folders. The META-INF folder contains an archetype.xml file that determines which resources are put into a new project created from this archetype. It typically consists of references to source and test folders and files:

<archetype>
  <id>archetype</id>
  <sources>
    <source>src/main/java/App.java</source>
  </sources>
  <testSources>
    <source>src/test/java/AppTest.java</source>
  </testSources>
</archetype>

The folders and files are put into the archetype-resources folder, which also contains a template for the resulting POM. You can pass parameters when executing the archetype by calling the archetype:generate goal, as shown in the example after this paragraph; parameters fill placeholders in your resulting POM. For many open source projects, archetypes already exist, offering you a smart, quick start. Additionally, archetypes are great for organizations that want to provide a set of standard builds based on a central, parent POM, which helps new products set up their environments quickly and consistently.
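For example (a sketch with placeholder coordinates, not a specific real archetype), generating a new project from an archetype and filling placeholders via parameters might look like this on the command line:

mvn archetype:generate \
    -DarchetypeGroupId=com.example.archetypes \
    -DarchetypeArtifactId=company-webapp-archetype \
    -DarchetypeVersion=1.0 \
    -DgroupId=com.example.shop \
    -DartifactId=shop-web \
    -DinteractiveMode=false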

Another great way to work more efficiently is to use a technique known as mocking, which sets a placeholder for functions that aren’t yet completed.

6.2. Using Mockito to isolate systems

This section contributed by Szczepan Faber

Mocking is a technique of using substitutes for real objects in tests. Mocking lets you test code that’s ready and leave a test stub in place for other functions as they become ready. Additionally, mocks are helpful for executing tests more quickly, because they can use mocked-out infrastructure. Mocks allow you to isolate the code under test so that a test case can fail only for the reason intended.

Gerard Meszaros (in his book, xUnit Test Patterns) defines the following types of objects that simulate behavior:

  • Dummy object— Keeps the compiler happy. Dummy objects are placeholder objects passed to the system under test but that are never used.
  • Test stub— Can be configured to return predictable, canned values. In Meszaros’s terms, test stubs provide the indirect input.
  • Test spy— Can be asked what happened; for instance, “Was this method called on a spy?” Spies are helpful because they can also be stubbed (akin to a spy’s disguise). Officially, the test spy provides a way to verify that the system under test performed the correct indirect output.
  • Mock object— Can be configured to receive expected method calls. Mock objects can also return canned values if needed. Mock objects provide the system under test with both indirect input and a way to verify indirect output.

The main reason to use mocks is to isolate the code under test so that the test case can fail only for the reason intended, and not because a collaborator has a defect or because an external resource is in an unexpected state. If the code under test has dependencies on collaborating classes, environmental configuration settings, or external resources, then the test is fragile and can’t reliably tell you whether the asserted behavior is occurring. Depending on the scope of the code under test, you might isolate larger or smaller subsets of the application code. The purpose of isolation in this context is to guarantee that the test tells you what it’s intended to tell you, and not to merely work around difficulties in using real objects.

It’s useful to know the official language that experts in the mocking world speak. But often I adjust the language for the sake of simplicity and use the mock term, as it’s fairly easy to grasp. As a friend once said, “At the end of the day, a mock is a mock.”

The differences between Mockito and conventional mocking frameworks revolve around the use of the terms spy versus mock, but there are also a couple of interesting developer-friendly Mockito features. Some of these aren’t unique to Mockito; other tools might do something similar, and if they don’t, I encourage their authors to implement them! What I am certain of is that when Mockito was developed, I couldn’t find these features in other tools:

  • Clean stack trace
  • Clickable locations in failure information to optimize debugging (debugging is necessary, although debugging time should be reduced to a minimum)
  • Handy annotations that make tests more readable and DRY (Don’t Repeat Yourself, which means that your tests should be made modular to avoid repetition errors)
  • Feedback on what you did wrong if you misuse the API, and how to fix it

Let’s start by discussing the importance of isolation in the testing process.

6.2.1. Isolation and dependency injection

Mocks enable you to test certain pieces of your system in isolation, but it’s wise to remember that isolation itself isn’t a goal. The underlying principle is to write readable and maintainable tests and to gain from fast feedback cycles. That’s it. Isolating is one of the ways to achieve highly maintainable tests.

The fundamental use case for mock objects is a situation where using real objects is impractical. For a trivial example, consider the following listing.

Listing 6.2. Code to be tested with mocks
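A minimal sketch that matches the discussion below (an approximation based on the surrounding text, not necessarily the book’s exact listing) could look like this:

interface Translator { String translate(String word); }
interface History { void rememberSearch(String word); }

class OnlineTranslator implements Translator {
    public String translate(String word) {
        // calls a remote translation service
        return "...";
    }
}

class DatabaseHistory implements History {
    public void rememberSearch(String word) {
        // writes the searched word to a database
    }
}

public class SmartDictionary {
    // collaborators are hard-wired, so tests can't replace them
    private final Translator translator = new OnlineTranslator();
    private final History history = new DatabaseHistory();

    public String search(String word) {
        String translated = translator.translate(word);
        history.rememberSearch(word);
        return translated;
    }
}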

When testing SmartDictionary, you have to take care of its collaborators: OnlineTranslator and DatabaseHistory. Both are problematic from the standpoint of testing. OnlineTranslator uses a remote service, so managing this external source that you can’t control may be problematic. DatabaseHistory talks to the database: It makes the test cumbersome and slow and forces you to consider the leftover and initial database state. To test SmartDictionary, it’s impractical to use real Translator and History objects.

Let’s refactor the code so we can replace instances of collaborators. This is shown in the following listing.

Listing 6.3. Code with dependency injection enabled
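A sketch of the refactored class (again an approximation, not necessarily the original listing) might look like this:

public class SmartDictionary {
    private final Translator translator;
    private final History history;

    // Dependencies are injected, so a test can pass in mocks
    public SmartDictionary(Translator translator, History history) {
        this.translator = translator;
        this.history = history;
    }

    public String search(String word) {
        String translated = translator.translate(word);
        history.rememberSearch(word);
        return translated;
    }
}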

The new constructor enables you to inject dependencies into SmartDictionary. In general, dependency injection is what enables mocking: You need to be able to inject mocked instances into the system under test.

Given that the API now allows injecting dependencies, the test looks like this:

Listing 6.4. Your first sip of Mockito
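A sketch of such a test follows (JUnit 4 is assumed, and the canned values and method names are illustrative):

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.*;

import org.junit.Test;

public class SmartDictionaryTest {

    // test setup outside the test method, for clarity
    Translator translator = mock(Translator.class);
    History history = mock(History.class);
    SmartDictionary dictionary = new SmartDictionary(translator, history);

    @Test
    public void shouldFindTranslationAndRememberSearch() {
        // given
        when(translator.translate("mock")).thenReturn("substitute");

        // when
        String result = dictionary.search("mock");

        // then
        assertEquals("substitute", result);
        verify(history).rememberSearch("mock");
    }
}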

The static import of Mockito makes it easy to access the entire Mockito API. To maximize clarity, the test setup is placed outside the test method. In the given part, the mock is stubbed (via the stubbing API) to return a canned result when a particular method is called with a particular argument. In the then part, we verify (via the verification API) that a particular method with particular arguments was called on the collaborator.

6.2.2. Mocks in test-driven development

I can’t imagine test-driven development (TDD) without mocks. The use of mock objects comes from extreme programming (XP), so it’s no wonder there’s a bond between TDD and mocks. Mockito’s implementation was test-driven from day one, and its API and error handling have been continuously optimized for TDD.

I’ve already mentioned that the fundamental use case for mocks is substituting unwieldy objects. But mocks do more than that. Mocks in TDD play a crucial role in interface discovery, a technique that allows you to design the communication between the collaborators from the test. It sounds difficult, but it isn’t: you may not yet know which collaborators your tested object needs, and as you continue implementing a test, you gradually figure out which collaborating roles are required. You can learn more about this in an interesting book called Growing Object-Oriented Software, Guided by Tests by Steve Freeman and Nat Pryce.

 

Test-Driven Development

TDD is a technique that relies on the repetition of a short development cycle: The developer first writes a failing test that defines a functionality, then produces code to pass that test, and finally refactors the new code. Refactoring means changing the code without modifying its external functional behavior, in order to improve its internal quality. TDD, initially defined by Kent Beck, encourages simple designs.[2]

2 See Kent Beck, Test-Driven Development (Addison-Wesley, 2002).

 

Let’s do a TDD exercise that I like to call “ping-pong programming”: I write a test, and you write the code that makes the test pass. The test for the first feature is shown in the following listing.

Listing 6.5. The test for finding word feature
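A possible shape for that test, reusing the setup from the earlier test-class sketch (the method names are assumptions):

@Test
public void shouldFindTranslatedWord() {
    // given
    when(translator.translate("mock")).thenReturn("substitute");

    // when
    String translation = dictionary.search("mock");

    // then
    assertEquals("substitute", translation);
}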

If you run the test now, it will fail. It’s pretty easy to implement the code so that the test passes.

Here’s another test, this time for a different feature; again, it starts with a test method that initially fails:

Listing 6.6. The test for keeping history feature
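Again as a sketch (not the original listing), a test for the history feature might verify the interaction with the collaborator:

@Test
public void shouldRememberSearchedWord() {
    // when
    dictionary.search("mock");

    // then
    verify(history).rememberSearch("mock");
}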

It’s your turn to make this test green—go ahead and implement the missing code.

Once you see the green bar, you can do some refactoring. You should watch for duplication in the complete test and then remove it to get the test DRY (Don’t Repeat Yourself). You should make your tests modular to avoid repetition errors and so you have less code to maintain.

6.2.3. The flavor of behavior-driven development

Did you notice the given, when, and then comments in the preceding examples? Those comments are the simplest possible technique for increasing the quality of your tests. Start writing them in your tests for one month, and you’ll never go back. Every test should consist of those three components, given, when, and then, to clearly describe the setup, the action being tested, and the expected result.

Behavior-driven development (BDD) is so much more than those three comments, but this chapter is too short to fully describe the BDD technique. Suffice to say (quoting J. B. Rainsberger), “BDD is TDD done correctly, nothing else.” We’ll get into a detailed discussion of BDD in chapter 8.

Explicit given, when, and then comments are great, but unfortunately, the Mockito stubbing API doesn’t play nicely with them. Mockito stubbing starts with when(), but stubbing is a part of the given component of the test. Mocking rookies often become confused, so Mockito added aliases for stubbing:

//note the different static import
import static org.mockito.BDDMockito.*;
//given
given(translator.translate("mock")).willReturn("substitute");

BDDMockito is a base class from the Mockito framework that allows you to work easily with given, when, and then comments. Many other helpful features are available, including a powerful API.

6.2.4. Other handy features of Mockito

Most of the Mockito API is available via static methods. To take the most advantage of static imports, you can apply two configuration tweaks to your IDE:

  • Favorite static imports— Some IDEs (for example, Eclipse) can’t figure out the static import if you only type the method name (for instance, verify or mock). It’s useful to instruct your IDE about the static methods you often use. Search in your Eclipse preferences for “favorite static imports” and add org.mockito.Mockito as one of your favorites.
  • Organize static imports smartly— It’s useful to configure your IDE to always use a wildcard (*) for static imports. Decent IDEs allow you to configure the number of imports before a wildcard is used. I usually configure the number to 1 for static imports. This way, import static org.mockito.Mockito.*; is always at the top of my tests, and I can take advantage of intelligent features (such as IDE auto-completion).

Often, interaction with collaborators means passing specific method arguments. It’s easy to verify and stub interactions that take simple arguments like primitives or strings. The trouble occurs when complex types are parameters of interactions. In that situation, you have the following options:

  • Make sure the complex type implements the equals() method. This method is used by Mockito to match arguments passed to collaborators. This technique is the most natural, but in some cases it may be impractical to set up expected parameters in the test (for example, there may be too much irrelevant setup).
  • Use the ArgumentCaptor to store the arguments of an interaction. This way, you can explicitly and selectively assert certain properties of the argument. It’s a useful technique, as it may provide a more readable and focused test. Here’s an example:
    ArgumentCaptor<Person> argument = ArgumentCaptor.forClass(Person.class);
    verify(registry).delete(argument.capture());
    assertEquals("John", argument.getValue().getName());
    Bear in mind that it doesn’t make sense to use ArgumentCaptor for stubbing.
  • Implement an ArgumentMatcher. It’s a Boolean function that will match the arguments for the purposes of stubbing or verification. ArgumentMatcher is mostly useful for stubbing or when an argument is repetitively matched the same way across many tests (see the sketch after this list).
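Here’s a brief sketch of that last option. It assumes a Mockito version in which ArgumentMatcher is a single-method interface that can be written as a lambda (Mockito 2 and later), and it reuses the hypothetical Person and registry from the ArgumentCaptor example; find() and john are made-up names for illustration:

ArgumentMatcher<Person> namedJohn =
    person -> "John".equals(person.getName());

// verification with the matcher
verify(registry).delete(argThat(namedJohn));

// the same matcher reused for stubbing (find() and john are hypothetical)
when(registry.find(argThat(namedJohn))).thenReturn(john);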

The @Mock annotation helps you to DRY the test and clarify what’s being mocked in a given test. Using annotations has additional benefits: The field name is used in verification failures, making them more descriptive. That’s shown in the following listing.

Listing 6.7. Using the @Mock annotation
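A sketch of how this can look (JUnit 4 assumed; the original listing’s details may differ):

import static org.mockito.Mockito.verify;

import org.junit.Before;
import org.junit.Test;
import org.mockito.Mock;
import org.mockito.MockitoAnnotations;

public class SmartDictionaryTest {

    @Mock Translator translator;   // field names show up in verification failures
    @Mock History history;

    SmartDictionary dictionary;

    @Before
    public void setUp() {
        MockitoAnnotations.initMocks(this);   // instantiates all @Mock fields
        dictionary = new SmartDictionary(translator, history);
    }

    @Test
    public void shouldRememberSearchedWord() {
        dictionary.search("mock");
        verify(history).rememberSearch("mock");
    }
}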

You can see the clarity provided by the @Mock annotations that mark the class fields to be mocked. The initMocks method initializes the fields marked by the annotation. Alternatively, you can use MockitoJUnitRunner for this task.

Now that we’ve covered a number of helpful tips and use cases, let’s discuss some antipatterns that you should avoid.

6.2.5. Antipatterns

Although the Mockito API is fairly straightforward, some users may find it difficult to use unless they read the Javadocs carefully. I’ve also noticed that users who come from other mocking tools tend to occasionally misuse the API.

ASK and TELL (verifying stubs) is the first antipattern we’ll look at. Mockito makes a clear distinction between ASK-style interactions (as in translator.translate(word)) and TELL-style interactions (as in history.rememberSearch(word)):

public String search(String word) {
  String translated = translator.translate(word);
  history.rememberSearch(word);
  return translated;
}

The API is designed this way in order to improve test readability and to make the process of writing tests more natural. In object-oriented design, we usually prefer TELL interactions because they promote pushing the responsibilities and complexities into separate objects. We tell the collaborator to do something and we forget about it. It’s their responsibility to deal with it. This leads to better design and more single-responsibility objects. If you write a lot of tests, you already know that TELL interactions are more convenient from the standpoint of testing with mocks. If you’re interested in the subject, you can look up more information on the “Tell don’t ask” principle in object-oriented design.[3]

3 On their website, Cunningham and Cunningham provide an article about the “tell don’t ask” principle: http://c2.com/cgi/wiki?TellDontAsk.

Verifying all surrounding interactions is the second antipattern. Mockito enables you to verify interactions explicitly and selectively. You can verify exactly what you want, which means you can write readable, maintainable, and focused tests. Occasionally, though, you might be interested in verifying all surrounding interactions:

verify(listener).notify(event);
verifyNoMoreInteractions(listener);

This ensures that no other interaction was made with the listener collaborator. Some users tend to exercise verifyNoMoreInteractions() often, even in every test method, but this isn’t recommended; verifyNoMoreInteractions() is a handy assertion from the interaction testing toolkit. Use it only when it’s relevant. Abusing it leads to overspecified, less maintainable tests.

The third antipattern is unnecessary verification in order. Mockito supports verifying interactions in a specific order, and this might be useful on occasion, but it’s certainly not a way to implement all your tests. You can create the inOrder object to pass any mocks that need to be verified in order:

InOrder inOrder = inOrder(firstMock, secondMock);
inOrder.verify(firstMock).add("was called first");
inOrder.verify(secondMock).add("was called second");

Just because you can doesn’t mean you should. Developers tend to test implementation details rather than behavior; I’ve observed that some developers feel better writing more defensive tests, overusing in order verification. In the majority of cases, it doesn’t make sense to verify the order of calls. It makes the test overspecified because the reader feels the order is relevant. It also makes the test less maintainable, as it can break for invalid reasons (such as refactoring of code).

Mockito assumes a lenient mock definition by default, which makes it easier for you to take a behavior-oriented approach to TDD (which is an accepted good practice). If unit tests “know” too much about the internal operations of the code’s interaction with collaborators, the tests become fragile when the application code is refactored. Refactoring shouldn’t break tests, because the behavior of the application shouldn’t change. When refactoring breaks a test, you must be able to depend on the test to tell you that you made a mistake when refactoring. Otherwise, you’ll ignore test failures, and that leads to the creation of defects. Tests that “know too much” make it harder to refactor the application code, and when it becomes harder, developers tend to stop doing it.

Cargo is often used to manage application deployment to containers in a standard way, and we’ll discuss it in the next section.

6.3. Interfacing application containers with Cargo

Cargo, in combination with Maven, provides a multipurpose utility to help manage Java containers in a build environment. You can download, start, stop, and configure Java containers, and you can deploy modules into them. Because there are so many different containers (for example, JBoss, Jetty, Tomcat, WebLogic), Cargo sees itself as a thin standard wrapper around them.

Suppose we need a clean Tomcat server to deploy our web archive (WAR) file and run our tests against this server. The Cargo Maven2 plug-in is a good integration of Cargo into the Maven lifecycle. The plug-in can easily be included in the build section of your POM.

The following listing shows the configuration of Cargo to use Tomcat. We configure where Tomcat is downloaded from and where it’s unzipped during the build.

Listing 6.8. Configuration of Cargo to use Tomcat
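A sketch of what such a plug-in configuration can look like (the Tomcat version, download URL, and directory names are assumptions, and element names can vary slightly between Cargo versions):

<plugin>
    <groupId>org.codehaus.cargo</groupId>
    <artifactId>cargo-maven2-plugin</artifactId>
    <configuration>
        <container>
            <containerId>tomcat6x</containerId>
            <!-- download Tomcat and unpack it below the build directory -->
            <zipUrlInstaller>
                <url>http://archive.apache.org/dist/tomcat/tomcat-6/v6.0.32/bin/apache-tomcat-6.0.32.zip</url>
                <downloadDir>${project.build.directory}/downloads</downloadDir>
                <extractDir>${project.build.directory}/extracts</extractDir>
            </zipUrlInstaller>
        </container>
    </configuration>
</plugin>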

Now that we have Tomcat in place, we can start it in Maven’s pre-integration-test phase and deploy the artifact. In the following listing, Cargo starts Tomcat, deploys the WAR, and blocks execution until it can ping the URL of the deployed artifact.

Listing 6.9. Configuration of execution
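Such a binding to the Maven lifecycle might be sketched as follows (the phases, goals, and pingURL are illustrative; consult the Cargo documentation for the exact elements of your plug-in version):

<plugin>
    <groupId>org.codehaus.cargo</groupId>
    <artifactId>cargo-maven2-plugin</artifactId>
    <configuration>
        <!-- container configuration as in the previous sketch -->
        <deployables>
            <deployable>
                <groupId>${project.groupId}</groupId>
                <artifactId>${project.artifactId}</artifactId>
                <type>war</type>
                <!-- Cargo blocks until this URL responds -->
                <pingURL>http://localhost:8080/myapp/</pingURL>
            </deployable>
        </deployables>
    </configuration>
    <executions>
        <execution>
            <id>start-container</id>
            <phase>pre-integration-test</phase>
            <goals>
                <goal>start</goal>
            </goals>
        </execution>
        <execution>
            <id>stop-container</id>
            <phase>post-integration-test</phase>
            <goals>
                <goal>stop</goal>
            </goals>
        </execution>
    </executions>
</plugin>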

As you see in the listing, Cargo provides an API that allows you to easily use and configure your application server. You can deploy your application into the container and run your tests. This dramatically increases your productivity, and it can be integrated into your continuous build script.

Cargo helps to manage containers. In the next section, we’ll discuss a continuous integration server called TeamCity, which has several helpful features, particularly for running remote builds.

6.4. Remote builds with TeamCity

In section 6.1, I mentioned that you should use the same build scripts locally that you use for the central build, but running builds and tests locally has drawbacks. One major drawback is that the procedure ties up your local desktop: You can’t work on other things, or your work is at least delayed until the build is finished. Another drawback is that your private environment probably isn’t identical to the central one that hosts the build server. But you don’t want to trigger the central build server during your development, because you don’t know whether your software contains bugs or fails to integrate. Besides that, the central build server normally pulls sources out of the VCS that are already committed. You don’t want to commit your untested changes to the VCS merely to see if they’ll build successfully. One solution is to run your private build on a centralized build server.

To test how successful your changes are, you can create personal builds in JetBrains’ TeamCity (www.jetbrains.com/teamcity/) using its remote run feature. The modified files are submitted to the server, bypassing the VCS. In addition, with the pretested commit feature, the project codebase always stays clean: If the tests fail, the code isn’t integrated into the codebase, the developer can safely work on a fix, and the team’s work isn’t interrupted. If the build is successful, the changes are committed to the VCS automatically (if you opted in to this behavior). From there, the changes will be automatically integrated into the next regular integration build (see figure 6.3).

Figure 6.3. Continuous integration with remote run and delayed commit. A build doesn’t block the IDE, because it runs on the central build server. If the private build passes, the underlying code changes are committed to the VCS. If the build fails, the central VCS isn’t affected.

Like other continuous integration servers, TeamCity is based on a central job that starts builds and a web application for managing build plans. You must extend your IDE to support the remote run feature. For Eclipse, you must download the dedicated Eclipse plug-in if it’s not part of the TeamCity standard distribution you may have already downloaded. Support for IntelliJ IDEA and Microsoft Visual Studio is similar to the Eclipse support.

 

YouTrack Bug-Tracking Integration

Because JetBrains’ bug tracker, YouTrack, integrates with TeamCity, you can easily see how your code changes align with your bug-fixing activity, and you can determine which bugs have been fixed in a particular product build.

 

What does it look like when you use TeamCity’s remote run feature? Figure 6.4 shows Eclipse’s TeamCity Remote Run dialog box, which is opened by selecting Team > Remote Run in a project’s context menu. You can enter the username and password for connecting to TeamCity and Subversion, if necessary.

Figure 6.4. Eclipse’s TeamCity Remote Run dialog box: You see all your local changes (compared to the central VCS), and you can choose to commit them either while the build runs or only after it succeeds; the changes are sent to the build server rather than straight to the VCS.

After the build is finished and special post conditions are met, you can enter a commit message and decide whether you want to commit your changes to the VCS. Besides the traditional condition that no test failed, you can also configure changes to be put into the VCS as long as no new tests failed. In figure 6.4, we’ve chosen to be asked whether we want to commit after the build runs.

The Changes pane at the bottom of the dialog box lists all local changes compared to the central VCS. In this example, we changed one file (the POM). In a subsequent dialog box (not displayed here), you must specify which TeamCity build configuration you want to link to your build request. TeamCity lists the existing build plans and suggests which ones have applicable configurations. Your selection is saved locally, so you need to do this linking only once.

After starting the private build on the remote server, TeamCity documents its activity on its web interface. Figure 6.5 shows the UI, illustrating that personal changes are transferred and the remote build will start soon. Although the TeamCity dashboard does show the current activities (including personal builds), the personal build isn’t added to the public build history. You must go into the build’s detail page (by clicking on Build Me) to see a full list of all builds (regular builds and private builds). They’re differentiated by the icons there.

Figure 6.5. The TeamCity web interface, documenting that a personal build has started

When the private build completes successfully (according to the configured metrics), a new dialog box opens in the Eclipse IDE and asks if you want to commit (see figure 6.6). If you confirm at this point, a commit is executed: Your changes will be checked into the VCS. This may trigger a new central build, if you’ve configured new builds to run after VCS changes occur.

Figure 6.6. In Eclipse, TeamCity asks you to commit your changes to the VCS. The remote run of your build completed successfully, and you must now click Yes to commit the changes. All information (changes, commit messages) should already be known.

TeamCity provides several useful features and is the preferred CI server for many development teams. Remote runs (together with delayed commits) are another practice that will help your team enjoy a productive development environment.

6.5. Summary

In this chapter, we talked about productive environments. You learned what it means to work in isolated developer workspaces and which strategies and tools you can use. We talked about general strategies and best practices and how they’re covered by Maven. We talked about mocking with Mockito, keeping tests DRY, wrapping application servers with Cargo, and running private builds with delayed VCS commits using TeamCity. This chapter provided a number of helpful examples and use cases to explain what it means to have a productive environment and how to implement one.

In the next chapter, we’ll focus on CI recipes and tools.