List of Figures – Agile ALM: Lightweight tools and Agile strategies

Chapter 1. Getting started with Agile ALM

Figure 1.1. Agile ALM enriches ALM with Agile strategies. ALM is heavily inspired by and based on configuration management, which in turn is based on version control.

Figure 1.2. ALM bridges the development disciplines and phases of requirements definition, design, code, test, and run.

Figure 1.3. Development phases such as design and development are often unreliable, with unpredictable results, tools, and data. The phases are isolated and only loosely linked, as illustrated by the dotted lines in the figure. SCM activities such as build/deploy are orthogonal to the phases and span them; they, too, are often unreliable.

Figure 1.4. The first evolution of SCM toward an approach that can be called ALM, spread over phases and synchronized. Unlike the design in figure 1.3, single phases now contain aspects of ALM, but they often have disparate data stores and processes, and sharing data and knowledge remains a challenge.

Figure 1.5. ALM is an implicit, pluggable hub: barrier-free engineering without redundant activities or redundant data. We now have neither orthogonal (as in figure 1.3) nor fragmented ALM aspects (as in figure 1.4).

Figure 1.6. Pyramid of steadiness: People and culture own and drive the processes and tools. All four aspects are important.

Figure 1.7. Transparency, people, changes, risk, and concrete business value are essential factors that influence software engineering and that should be stressed in an ALM project.

Figure 1.8. Interdependency of complexity and tool usage: In complex environments, it’s essential to use an integrated toolchain that glues together the best-of-breed tools to serve all stages of development in an end-to-end approach. Each circle must build on the previous one, so the end-to-end focus needs to be integrated, best-of-breed, and pragmatic.

Figure 1.9. Agile ALM integrates platforms, tools, and languages, all driven by people.

Chapter 2. ALM and Agile strategies

Figure 2.1. The Agile ecosystem: Agile processes, values, and strategies

Figure 2.2. The magic barrel. The limited content of the barrel is allocated among four pitchers. If one pitcher is full, then another pitcher will have less liquid. In Agile projects, the quality pitcher is filled to some reasonable amount, with the rest being balanced among the three other pitchers.

Figure 2.3. The integration effort spent on repairing errors increases exponentially. This illustration isn't based on an empirical study; the data is based on the experience of the author and others.

Figure 2.4. Version management servers store sources, and distribution management servers store binary artifacts

Figure 2.5. Artifacts in configuration management: Artifacts that are updated continuously and are of special interest are put into configuration management. Sources and tests are put into version control; libraries are put into distribution management.

Chapter 3. Using Scrum for release management

Figure 3.1. Software engineering goes through a workflow cycle: requirements management kicks off the development process. Requirements belong to releases. Release management triggers the design/development of software, which in turn creates the artifacts implementing the requirements. Those implementations are then put into version and build management and are provided in releases.

Figure 3.2. The Scrum process includes assigning items from a product backlog to a release backlog, and the items are then implemented as part of the release. The output of the release is a working increment (built and delivered software). A typical release duration is 30 days. The team synchronizes daily in a Daily Scrum meeting.

Figure 3.3. The Scrum process supports the user in transforming prioritized customer requirements into consistent releases and working increments.

Figure 3.4. Feature team and component team in coexistence. The feature teams work cross-functionally on a feature, while the component teams work on a component. A developer may work as part of the component team and additionally be drafted into a feature team to work on a specific feature. The developer may work on both teams, or may solely work as part of the feature team for a defined period. A caretaker looks out for impediments and mentors a team.

Figure 3.5. The frozen zone and the code freeze for stabilizing and finally releasing the software before the next release is developed (F = feature; B = bug). The way testing is done in an Agile environment often eliminates the need to have a project phase dedicated to detecting and fixing last-minute bugs. There may be a few last-minute bugs, but the dedicated phase should be as short as possible. In the best case, the number of last-minute bugs doesn’t justify a scheduled phase in the lifecycle.

Figure 3.6. Staging software: requirements are implemented in the developers' workspaces. The central development environment integrates all respective configuration items and is the basis for releasing. Software is staged over different environments by configuration, without rebuilding. All changes go through the entire staging process.

Figure 3.7. Software may be staged only if it successfully passes the defined quality gates. Quality gates are obligatory; they validate the defined quality requirements (tests and metrics).

Figure 3.8. The release calendar defines the dates and activities. It's the single view of the timeboxed releasing, which may include activities and deliveries.

Figure 3.9. Release screenplay balanced at day X (release day), aligning times, actions, responsibilities, and stakeholders.

Figure 3.10. Release screenplay, aligned with one role.

Figure 3.11. Accessing Subversion on Windows with TortoiseSVN. Menu action items are part of Windows Explorer after installing TortoiseSVN.

Chapter 4. Task-based development

Figure 4.1. The Agile ALM infrastructure: the unique ticket number of a task connects the different participants (nodes) in the system. This results in traceability and transparency, and it ensures the alignment of activities with specific requirements.

Figure 4.2. This toolchain enables task-based development. It's based on JIRA, FishEye, Subversion, Bamboo, Eclipse, and Mylyn. Requirements are managed with JIRA. GreenHopper enriches JIRA with further features for Agile development. The CI server pulls sources from the VCS to build the software. FishEye is a VCS browser that makes changes visible in a convenient way. Both Bamboo and FishEye are integrated with JIRA. Developers use Eclipse with Mylyn to work on code; they access code with Eclipse and the plug-in that's available to connect to the VCS that's used.

Figure 4.3. The ticket AGILEALM-4 is of type New Feature and is linked to three subtasks.

Figure 4.4. GreenHopper’s planning board: Visualizing tickets as cards enables you to create new ones or change their status. Cards can be filtered to show only cards that are open or only your tickets. The GreenHopper view also visualizes target versions and uses different icons to express information such as priorities.

Figure 4.5. Mylyn adds a task-based view in Eclipse. In the Eclipse task view, important information about the tasks (such as issue type, status, and priorities) is shown directly, but double-clicking will open the complete ticket in your Eclipse editor. Tickets can be grouped by relationships of subtasks. In the left area, incoming and outgoing changes made by you or other developers are identified with icons. Unread tickets—tickets that haven't been opened yet—are identified with a question mark.

Figure 4.6. Committing changes to VCS in Eclipse. In the commit dialog box, developers add a reference to the tickets they’ve worked on. In this case, the ticket AGILEALM-10 motivated the code changes that are committed now.

Figure 4.7. The FishEye web application: browsing the repository. Changes (activities) are displayed in a timeline. Revisions with their changes (for instance, how many lines were added) are visualized and can be compared with each other. You can zoom in to see the respective versions of the sources.

Figure 4.8. FishEye integrated into JIRA: A dedicated tab shows source changes associated with each ticket. One Subversion commit resulted in this entry: 1 file was changed, 15 lines were added in the file, and 2 were removed. Icons provide links to FishEye so you can zoom in if you’re interested in more information.

Figure 4.9. Bamboo integration in JIRA: builds associated with this ticket. If you’re interested in more information, you can easily navigate to the Bamboo web application by clicking a link.

Figure 4.10. The build result summary of the Bamboo web application. The summary page shows details about the build, such as why the build was triggered and what changes were newly integrated with this build. A history feature and further tabs allow you to navigate through builds and to zoom in to respective test results, generated artifacts, logs, and so on. Besides its continuous integration and build server features, it links to issues in the ticketing system.

Figure 4.11. A system for task-based development, based on Trac. Trac is the ticket system that integrates with the VCS and CI. Eclipse and Mylyn are used on developers’ desktops. This example shows Hudson/Jenkins being used as the CI server.

Figure 4.12. The start page of Trac: the edited wiki entry page, including access to the timeline, roadmap, source browser, and ticket viewer

Figure 4.13. The timeline view showing who changed what in the system, including changes to tickets, the Subversion repository, wiki pages, and milestone planning

Figure 4.14. A changeset, including the changes to the code base made in this atomic commit

Chapter 5. Integration and release management

Figure 5.1. Integration and release management system: Sources and build scripts are shared in a VCS; a CI server builds, tests, and deploys versions; and a component repository stores ongoing versions as well as releases of the software (as binaries). Several roles and responsibilities exist; a release manager takes care of creating and staging releases.

Figure 5.2. Maven repository topology: developer repositories, a central proxy repository, and a remote public repository. The public repository is an external system, normally on the internet. All other repositories are internal systems.

Figure 5.3. Extract of a local developer’s repository

Figure 5.4. Branching strategies and the deployment of binaries (snapshot and release) are used in conjunction with each other. The frozen versions that were tagged from the head are 1.0.0 and 1.1.0. Version 1.0.0 had two bugs, so a branch was created, where the two bugs were fixed; the bug fixes were merged into the head as well. At specific moments (the diamonds in the figure), the versions are frozen. The head is frozen to version 1.1.0 and the branch is frozen to version 1.0.1. Both new versions contain the two new bug fixes.

Figure 5.5. Artifacts in the Artifactory repository browser

Figure 5.6. Artifactory’s search facility: POM/XML search

Figure 5.7. Property search: searching repositories for properties. One example property is build.number.

Figure 5.8. Working on search results, and copying an artifact set to a different repository. This allows smart artifact staging and promoting.

Figure 5.9. Sample Subversion folder layout

Figure 5.10. This expanded repository tree contains various deployed artifacts.

Figure 5.11. Copy artifacts to a staging repository with Artifactory

Chapter 6. Creating a productive development environment

Figure 6.1. Dependencies visualized in real time by Maven and m2eclipse. The color of the background expresses the scope. Dependencies with compile scope are displayed with a darker background. Dependencies with a white background have other scopes, such as test or runtime.

Figure 6.2. M2eclipse manages the Eclipse Maven classpath container. In your Eclipse project, all JARs are referenced as binary dependencies. No manual referencing or checking of artifacts is needed, and the approach is congruent, both in your workspace (IDE) and in your build script.

Figure 6.3. Continuous integration with remote run and delayed commit. A build doesn’t block the IDE, because it runs on the central build server. If the private build passes, the underlying code changes are committed to the VCS. If the build fails, the central VCS isn’t affected.

Figure 6.4. Eclipse’s TeamCity Remote Run dialog box: You see all your local changes (compared to the central VCS) and can choose to have them committed to the VCS only once the remote build succeeds.

Figure 6.5. The TeamCity web interface, documenting that a personal build has started

Figure 6.6. In Eclipse, TeamCity asks you to commit your changes to the VCS. The remote run of your build completed successfully, and you can now click Yes to commit the changes. All the information (changes, commit messages) is already known.

Chapter 7. Advanced CI tools and recipes

Figure 7.1. Advanced CI scenarios for Agile ALM that are covered in this chapter: building and integrating platforms or languages (.NET and integrating Cobol by using Java and Ant), enabling traceable deployment of artifacts, building artifacts for multiple target environments (staging these artifacts by configuration, without rebuilding them), bridging different VCSs, and performing audits.

Figure 7.2. The processing of Cobol sources is based on Ant scripts that are dynamically generated and triggered by the CI server. Files are transferred from the CI server to the host and vice versa via FTP. CI with Cobol is similar to how we integrate Java applications: Cobol sources are managed by the VCS. The CI server checks out Cobol sources, triggers Cobol compilation on the host, and then monitors its success. Compiled Cobol sources can be loaded into libraries on the host and transferred back to the CI server to store them in the VCS or a component repository for further reuse.

Figure 7.3. Configuration in TeamCity: selecting a build runner (such as MSBuild or NAnt)

Figure 7.4. Example TeamCity output screen showing test results

Figure 7.5. Selecting a VCS for the .NET project build (with Subversion)

Figure 7.6. TeamCity showing agents

Figure 7.7. Server configuration, defining a cloud profile for EC2

Figure 7.8. Directory structure, including configuration for different environments (dev, prod, test)

Figure 7.9. Jenkins dashboard listing the configured jobs and information about them (result of the last build, trend, duration). You can also start new builds by clicking the buttons at the far right.

Figure 7.10. On the build detail pages, Jenkins provides more information, including build artifacts, details on why the build was triggered (here a change in Subversion, revision 255, detected by Jenkins), and an overview of static code analysis violations.

Figure 7.11. On the build detail pages, Jenkins links to test results, dedicated reporting pages for code violations (here Checkstyle, FindBugs, and PMD), an aggregation page of violations (static analysis warnings), Artifactory, and individual modules of the Maven build.

Figure 7.12. Checkstyle found an antipattern: this method isn’t designed for extension.

Figure 7.13. PMD detects an empty catch block.

Figure 7.14. FindBugs points to a repeated conditional test, which is most likely a coding defect.

Figure 7.15. Code coverage breakdown by package, showing packages and their files, classes, and methods coverage

Figure 7.16. A project inspected by Sonar, showing the results of FindBugs, Checkstyle, and PMD inspections, and the results of code coverage

Figure 7.17. Configuring the integration of Jenkins with Artifactory. Jenkins resolves the central settings that you’ve set on the Jenkins configuration page and suggests valid entries for Artifactory server and target repositories. In this example, Jenkins will deploy artifacts to Artifactory after all single Maven modules are built successfully. It will also capture build information and pass it to the Artifactory server. Help texts are available on demand for all configuration settings (by clicking the question marks).

Figure 7.18. The default way of deploying Maven artifacts in a multimodule project: All single Maven projects are deployed one by one. If the multimodule build fails, some artifacts are deployed to the component repository, and others aren’t. The result is an inconsistent state.

Figure 7.19. Deploying Maven artifacts in a multimodule project, with Artifactory and Jenkins: All single Maven projects are installed locally. If the complete multimodule build succeeds, all artifacts are deployed to the component repository. In the case of a build failure, no module is deployed. The result is a consistent state.

Figure 7.20. Artifactory’s Build Browser lists all builds for a specific build name (in this case, Task-based). The build name corresponds with the name of the Jenkins job that produced the builds. You can click on one specific build to get more information about it.

Figure 7.21. Artifactory shows published modules for all builds, including in which repositories the artifacts are located (in the Repo Path column).

Figure 7.22. Artifactory shows the producers and consumers of artifacts built by Jenkins.

Figure 7.23. Configuring the Jenkins build job to use the Jenkins/Artifactory release management functionality. The VCS base URL for this Jenkins build job must be specified. Among other options, you can force Jenkins to resolve all artifacts from Artifactory during builds.

Figure 7.24. Staging the artifacts that were produced by a past Jenkins build. Before starting the staging process, you must configure versions and a target repository.

Figure 7.25. Promoting a build from inside Jenkins requires selecting a target promotion repository. You can configure it to include dependencies and specify whether you want to copy the artifacts in Artifactory or move them.

Figure 7.26. Mainline CI without feature branching

Figure 7.27. Feature-branching CI

Chapter 8. Requirements and test management

Figure 8.1. A test matrix (a skeleton based on Lisa Crispin’s version of Brian Marick’s diagram) that arranges acceptance tests and BDD in quadrants. Tests can be divided into business-facing and technology-facing tests, as well as tests that support the team and tests that critique the product.

Figure 8.2. Swing application with a table and a corresponding TableRowSorter

Figure 8.3. The HTML specification in the Fit format, viewed in a browser

Figure 8.4. The Fit result document shows successful checks with a green background.

Figure 8.5. ReportNG aggregating TestNG tests, including the functional tests in the common test suite

Figure 8.6. The Maven site configured to include the Fit specifications and results page as links in the left sidebar

Figure 8.7. Our first, simple test setup, including a fixture

Figure 8.8. Steps implemented (with default method bodies), no exceptions; all tests failing

Figure 8.9. Passing scenario tests

Chapter 9. Collaborative and barrier-free development with Groovy and Scala

Figure 9.1. A Forms sample, BDD with Scala