
Chapter 1. Getting started with Agile ALM

 

This chapter covers

  • An introduction to Agile ALM
  • The evolution in software engineering leading to Agile ALM
  • The aspects of ALM that are covered in this book

 

This book is about Agile application lifecycle management (ALM) and brings together the best of two worlds, Agile and ALM. I’ll discuss ALM as a way to develop and release software in a coherent, integrated way, spanning all development phases, artifact types, roles, and business units. Bringing ALM and Agile together and using the right tools leads to a modern, efficient way of developing software. Consequently, you’ll reduce costs, boost your productivity, and accelerate your team’s collaboration. And you can make developing software a lot more fun.

Agile ALM enriches ALM with Agile strategies. In my opinion, ALM is based on software configuration management (SCM). SCM, in turn, is based on basic version control (see figure 1.1).

Figure 1.1. Agile ALM enriches ALM with Agile strategies. ALM is heavily inspired by and based on configuration management, which in turn is based on version control.

Agile ALM

  • Helps overcome process, technology, and functional barriers (such as roles and organizational units).
  • Spans all artifact types as well as development phases and project roles.
  • Uses and integrates lightweight tools, enabling the team to collaborate efficiently without any silos.
  • Makes the relationship of given or generated artifacts visible, providing traceability and reproducibility.
  • Defines task-based activities that are aligned with requirements. This means that the activities are linked to requirements and that all changes are traceable to their requirements.

Agile ALM can be used with all kinds of process models and methodologies, including traditional ones, such as waterfall or spiral models. There are also ALM approaches that can hardly be called Agile or that are based on large-scale commercial tools; these can be difficult and expensive to implement. Agile ALM focuses on driving the process through people and not merely through tools. Where tools would be of benefit, such as a continuous integration server, they should be lightweight and primarily open source. The Agile ALM approach results in processes and lightweight toolchains that are flexible, open to change, and of high quality. This approach helps to make the ALM more Agile and leads to what I call Agile ALM.

This chapter introduces the concepts that are essential to understanding Agile ALM, including the evolution of software engineering with its migration to Agile ALM. I’ll also discuss the essential impact that SCM (and version control) has had on ALM, including some of the first pilot projects to use Agile and ALM together. In addition, I’ll explain my view that SCM is the basis of ALM and how these practices help develop ALM today. Many Agile books make the case that one doesn’t adopt Agile practices, but rather one becomes Agile. It’s important to establish an effective ALM through people, culture, processes, and tools. This chapter will also focus on open source and lightweight tooling along with the building blocks of Agile ALM. That’s not to say that some large-scale commercial tools aren’t worth using, but they won’t be the focus of this book.

It’s essential to take a stakeholder focus in any Agile effort; one must consider the role of releasing code in Agile ALM as well as the service orientation and application architecture. I place a strong emphasis on task-based development and the Agile ALM premise of aligning work to the customer’s requirements (including setting up the most effective toolchain for a given context). I’ll also explain one approach called “outside-in,” which takes the customer’s point of view in some specific (and important) ways. We’ll consider the importance of configuration, customization, plug-ins, and our ever-growing, multilanguage, polyglot world, including my view that we can’t forget the existing legacy systems that are often valuable to the organization. But first, let’s take a step back and consider Agile ALM at a glance.

1.1. Agile ALM at a glance

ALM describes the coordination of development lifecycle disciplines, including the management of requirements, changes, configurations, integrations, releases, and tests. These functions span development phases, including requirements definition, design, code, test, and run, as shown in figure 1.2.

Figure 1.2. ALM bridges the development disciplines and phases of requirements definition, design, code, test, and run.

Application lifecycle management

ALM is aligned with the engineering process, spanning development phases. This results in releases that are functionally and technically consistent. ALM also manages the relationships between various artifact types, including requirement documents, coding artifacts, and build scripts that are used or produced by the engineering process. By organizing, linking, and referencing activities and artifacts, you can track the development progress as a whole. Through the use of integrated toolchains, ALM helps you to overcome the biggest challenge in the software creation process: the technological and functional barriers that make it difficult to implement a transparent and consistent development process.

ALM is a task-based approach in which all activities are linked to requirements, and the relationships between all artifacts are visible; therefore, artifacts can be traced to the requirements they are based on.

 

Agile ALM

The Agile ALM approach

  • Is the marriage of business management to software engineering
  • Targets processes and tools working together seamlessly, without silos
  • Covers the complete software development lifecycle, including requirements management, coding, testing, and release management
  • Enriches ALM with Agile strategies
  • Is based on software configuration management and version control
  • Is based on a set of tools, enabling a team to collaborate efficiently

 

By using an Agile ALM approach, you’ll gain from improved productivity that helps keep costs down, reduces time to market, and improves return on investment (ROI). All stakeholders have easy access to the information they need and can collaborate efficiently. They have real-time visibility and participation in the process lifecycle. This means that the technical infrastructure is aligned with business and business value. It also means that, through interactions between business and technical personnel, questions can be answered quickly in a user-friendly way. That leads to concrete, positive business outcomes.

Agile ALM’s integrated approach leads to the protection of software assets, improved reuse, better requirements traceability, cleaner code, and improved test results. A high level of automation, seamless integration, and service orientation leads to a successful project and better team awareness.

An Agile ALM enriches a traditional ALM with Agile values and strategies. With a focus on communication and collaboration, ALM processes already have the prerequisites to support Agile software development. An Agile ALM focuses primarily on human interaction (“peopleware”), increasing people’s communication and interaction by implementing Agile strategies (like continuous integration) and always weighing value against effort. The Agile approach uses lightweight tools as needed, based on concrete requirements. In this book, I will refer to large, bureaucratic, heavyweight systems as being monolithic, as discussed later in this chapter. A nonmonolithic approach uses open standards and helps to implement an Agile software development process. Agile ALM can also support other kinds of development processes.

In summary, Agile ALM consists of the following four major fundamentals:

  1. Collaboration— All team members are aware of what others are doing. That way, choices can be made that best facilitate the progress of the entire project. This is achieved by focusing on personal interactions, the outside-in (customer-focused) development approach, and task-based development supported by tools.
  2. Integration— Achieving business targets requires an enterprise infrastructure to integrate roles, teams, workflows, and repositories into a responsive software delivery chain. People must be connected wherever they are located (distributed, collocated) and must have the assets they need to get the information they seek. Integration occurs at several levels, including developer builds and integration builds, and is seamlessly maintained with comprehensive testing throughout the lifecycle.
  3. Automation— The streamlining of the full lifecycle is heavily based on end-to-end automation.[1] For example, all of the steps in a build, including preparing the build system, applying baselines to source control systems, conducting the build, running technical and functional tests as well as acceptance tests, packaging, and deploying and staging the artifacts, are automated with the appropriate tools.

    1 The end-to-end process, its people, and its processes are sometimes called the “value stream.”

  4. Continuous improvement— You can improve only what you can see and measure, so building and delivering software that minimizes manual work is a requirement for easily identifying where you are in your process. Comprehensive testing, regular retrospectives (where you discuss what went well and what needs improvement), project transparency, and project health (balancing work to eliminate work peaks) allow you to improve continuously.

To better understand ALM and its features and benefits, it’s helpful to look at the history of software engineering in the context of ALM. We’ll now take a quick tour through the evolution, from the pragmatic approach to software configuration management to ALM.

1.2. Evolution of software engineering: moving to Agile ALM

Software engineering has always focused on improving quality and productivity. This may involve reusing well-defined requirements or software components. Many companies spend years developing applications and continuously extend their portfolio with new ones. A development team that repeatedly implements the same requirements instead of reusing existing assets, and that has no strategy for tracking artifacts (such as builds, test results, and packages), will be ineffective and inefficient. Developing software in a suboptimal way leads to poor quality, missed customer needs, and late arrival to the market.

From a technical view, without any comprehensive strategy for managing artifacts, integration is often a game of roulette that involves changing an existing application rather than merely transferring a set of changes into another stable and well-known system state. This is why technology professionals have worked hard to improve the application-development process and have tried to find answers to these common questions:

  • How can I accelerate activities and exclude error sources?
  • How can I improve communication in the team?
  • How can I significantly improve the quality of my software?
  • How can I keep in touch with the current state and quality of the developed software?
  • Which tools fit my requirements and basic conditions best?
  • How do the single building blocks of my infrastructure interact with each other?
  • How can I set up a flexible infrastructure to secure the assets of my company?
  • Which changes (requirements, bugs) are implemented in which artifacts?
  • Which changes are provided in which builds/releases?

In order to appreciate this effort, we need to review software configuration management and its essential practices.

1.2.1. SCM and the first ALM trial balloons

Software engineering, from a historical perspective, is a young discipline. In the early stages, software development was seen more as a factory process and not as a complex activity aimed at implementing business objectives. In recent decades, software development has been evolving, initially to make life easier for developers and finally to answer internally and externally driven needs for productivity and quality.

The Basics of SCM

Running an unreliable, fragmented development process doesn’t empower a team to track and improve their work continuously. This is where software configuration management (SCM) can help. SCM helps improve the development process by tracking and controlling changes in the software. These processes are commonly implemented using version-control systems (VCS). SCM makes changes in the VCS visible, and it adds meta-information to the system to track why the change was made. Above all, baselines (identifying a specific version of the code suitable for release) and change-sets (linked to artifacts or work items) provide traceability to the development process and accelerate release management.
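For example, with a version-control tool such as Subversion, a baseline can be expressed as a tag: a cheap, read-only copy of the trunk at a known-good revision. A minimal sketch (the repository URL and version number are invented for illustration):

  # Tag the current trunk as a baseline that can be rebuilt or branched later
  svn copy http://svn.example.com/myapp/trunk \
           http://svn.example.com/myapp/tags/1.2.0 \
           -m "Baseline 1.2.0: passed integration tests"

Because the tag names a precise configuration of all artifacts, anyone can later check it out to reproduce exactly what was released.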

The following are the goals and values of SCM:

  • Identifying and administering configuration items
  • Controlling, versioning, and resolving conflicts for artifacts of any type (which impact the release) that may change over project time
  • Documenting changes of all configuration items and their degree of maturity
  • Supporting branching and merging
  • Configuring software for use (deployment) on different target environments
  • Supporting functional auditing, reproducibility, and traceability, and controlling configuration item dependencies
  • Minimizing bad builds and detecting them early
  • Reducing communication mistakes
  • Providing a history of builds and releases in order to investigate issues
  • Defining processes and tools
  • Eliminating redundant tasks and streamlining processes
  • Improving efficiency while orchestrating, packaging, and distributing software
  • Establishing access control
  • Saving time and money and satisfying the customer

 

Software configuration management

Software configuration management consists of four major functions:

  • Configuration identification— Select and identify all configuration items and establish baselines on them so you can then control, audit, and report changes.
  • Change control— Control changes to configuration items.
  • Configuration audit— Ensure correctness, completeness, and consistency of baselines by examining the baselines, configuration items, and related processes.
  • Status accounting— Report on the status of all configuration items throughout their lifecycle.

 

Using SCM, you can track incremental changes and compare and analyze stable baselines of the software. The SCM focus is primarily on physical changes as opposed to business objectives, as illustrated in figure 1.3.

Figure 1.3. Development phases like design and development are often unreliable, with unpredictable results, tools, and data. Phases are isolated and only loosely linked, as illustrated by the dotted lines in the figure. SCM activities like build/deploy are orthogonal to the phases and span them, but they are often unreliable too.

 

Configuration items

The term configuration item refers to artifacts that are produced or consumed (used), or to (environmental) artifacts that help create the final artifacts. Depending on the point of view and the context, artifacts go by different names, such as products or deliverables (project management) or deployment units (software architecture). The rules for identifying a configuration item may vary and depend on individual requirements. As a basic rule of thumb, a configuration item is any item delivered to a stakeholder. Examples include coding artifacts, design documents, user manuals, requirements, technical specs, test cases, build scripts, and so on.

For a detailed discussion of configuration items, and configuration management in general, see the following sources: Alexis Leon, A Guide to Software Configuration Management (Artech House, 2000); Mario E. Moreira, Software Configuration Management Implementation Roadmap (Wiley, 2004); Mario E. Moreira, Adopting Configuration Management for Agile Teams (Wiley, 2010); Larry Klosterboer, Implementing ITIL Configuration Management (IBM Press, 2008); Anne Mette Jonassen Hass, Configuration Management Principles and Practices (Addison-Wesley, 2003); Bob Aiello, Configuration Management Best Practices: Practical Methods that Work in the Real World (Addison-Wesley, 2010).

 

The Development of SCM

In the early years of software development, the main challenges related to the fact that a team worked on software and data concurrently. Classic problems included the following:[2]

2 See Wayne Babich, Software Configuration Management (Addison-Wesley, 1986), 9ff.

  • The double maintenance problem, which arises from keeping multiple copies of software
  • The shared data problem, arising from many people simultaneously accessing and modifying the same data
  • The simultaneous update problem, arising from multiple people changing a piece of software at the same time

Database management systems and version-control software help us manage the daily challenges of software development, and the solutions are found in almost every project toolchain in use today. Yet the challenge of accelerating the development process and improving software traceability and quality remained difficult to achieve.

SCM strategies were developed and refined to further optimize the engineering process.[3] Over the years, it became increasingly clear that this wasn’t enough. Focusing on only the technical view didn’t improve the quality of software development.

3 See Stephen Berczuk with Brad Appleton, Software Configuration Management Patterns (Addison-Wesley, 2003).

Tracking the progress of the development efforts and the changes to artifacts, including source code, design, and requirements documents, isn’t always easy, because it usually requires working with manual lists and accessing multiple data repositories. What makes implementing SCM even more difficult is the amount of manual work required, particularly for validating the current status of the software at any given point.

Technical release management, performed as an explicit activity, is time-consuming. The solution was to focus on automating every aspect of SCM, from application builds to release packaging and deployment. For example, development managers need procedures for continuous auditing and change tracking. The release management process should be implicit, integrating across artifact types and development phases, with uniform or highly integrated tools and data repositories.

As a result of improved procedures for continuous auditing and change tracking, developers found themselves with a more integrated approach that was free of common obstacles. Instead of each organizational group suboptimizing its own work, barriers between areas were broken down and all stakeholders worked as a team for the company.[4] I refer to this approach as being “barrier-free.” Figure 1.4[5] shows that SCM became more and more an implicit task embedded in all development phases. At this point, it becomes more appropriate to talk not only about SCM, but also about management of the application lifecycle (ALM).

4 See W. Edwards Deming, Out of the Crisis (The MIT Press, 1982), pp. 62–65.

5 Compare Carey Schwaber, The Changing Face of Application Life-Cycle Management (Forrester Research Inc., 2006).

Figure 1.4. The first evolution of SCM toward an approach that can be called ALM spread over phases and synchronization. Unlike the design in figure 1.3, single phases now contain aspects of ALM but often have disparate data stores and processes, and there are challenges to sharing data and knowledge.

 

The Test Phase

There are many situations that warrant a separate testing phase, as shown in figure 1.4; other situations don’t. In many projects and companies, the test activities can be part of the release/build/deploy/config block or part of the development block.

 

ALM features and activities are implicit throughout the entire lifecycle, meaning that all phases of the process include ALM features and activities now, instead of taking an orthogonal approach, as in the first evolution step. Early approaches to ALM resulted in significant improvements, but these early efforts were not always successful because many roles had their own tools, which were not integrated with the tools used by other team members. For example, there are many tools that are specific to requirements engineering but that don’t integrate well with other toolsets. In addition, many development teams grow attached to their specific tools and find it difficult to switch to one integrated toolchain. The toolchain is often compared to a Swiss army knife, which has many clever tools but may not be completely effective for a specific task. As a result of all these issues, the wrong tool infrastructure sometimes slowed teams down instead of speeding them up! Additionally, some tools had their own proprietary data-storage approach, so tools often weren’t integrated and their database repositories couldn’t share related information.

To span separate roles and phases, it’s necessary to synchronize data among the various tools. This results in a more complex technical solution. Another way to improve the tracking of software development is to place additional tools (such as a dashboard, or a tool that aggregates the individual outcomes of other tools) or manual activities on top of the infrastructure. In this way, information can be extracted or collected and offered at a central entry point.

In summary, the combined complexity and the missing “single point of truth” increase costs and may lead to inaccurate information. Tools that aren’t aligned properly may lead to unneeded complexity. The lack of integration and collaborative features leads to a lack of transparency, because requirements can’t be traced throughout the lifecycle. All of these points make it a lot harder to align tools and processes.

Let’s summarize the key issues in how ALM was commonly organized before the next evolutionary step occurred:

  • Alignment of tools, roles, and organizational borders— The approach was mainly aligned at organizational borders and roles. As a result, there often was a single tool for each role or business unit. For example, let’s consider a big integration project that has a central build/config/release business unit. In another business unit, the test management runs in isolation, starting its main activities only after the software is released. The isolated tool used by test management may enable writing test cases and tracking tests, but it isn’t integrated into the other part of the toolchain; the organization may even have different accounts for those tools and micro workflows.
  • Suboptimal collaboration— Another point where the toolchain wasn’t balanced was the lack of collaboration features for team members and of interaction features with other tools. For example, among the available (commercial) tools, none had a central wiki in which different users could discuss topics and exchange information. None enabled a shared view of information collected from multiple units, phases, or roles. This is like having a software application open on your desktop in multiple isolated windows, with no way to see the information as a whole or in different views.
  • Lack of transparent processes— If you analyze how the workflows in companies and projects are implemented with tools—how the processes are implemented—you’ll see that there are many proprietary scripts that drive and configure each tool, but that those individual tools are connected in clumsy and often proprietary ways. Single scripts can be versioned, but the whole integration itself can’t be versioned or managed explicitly. This is suboptimal, because the companies’ processes are part of their core assets, and you can’t improve anything that you can’t identify, describe, and measure.
  • Unreliable data synchronization— Tools and workflows extract information out of the development process, but tools often have their own proprietary data-storage mechanisms. Using a collection of unrelated tools (for instance, a requirement-tracking tool and a test tool) can quickly lead to a data-integration nightmare. Many tools have an open API to facilitate the sharing of data, or specific features that will handle data integration programmatically (for example, import/export via XML), but the results of programmatic synchronization are often cumbersome and error-prone, increasing the complexity significantly. The lesson learned is that just because you have a requirement-tracking tool and a test tool doesn’t necessarily mean there is an ALM connection between these tools.

We need to consider these key issues, which have certainly impacted ALM.

1.2.2. The dawn of ALM

Years of experience with implementing software development lifecycles (SDLC) made many people realize there had to be a better approach. These improvements have evolved into what we call application lifecycle management (ALM) today. ALM is implicit in every phase of the lifecycle and impacts all roles, organization units, and development engineering phases, as shown in figure 1.5.[6]

6 Compare Carey Schwaber, The Changing Face of Application Life-Cycle Management (Forrester Research Inc., 2006).

Figure 1.5. ALM is an implicit, pluggable hub: barrier-free engineering without redundant activities or redundant data. We now have neither orthogonal (as in figure 1.3) nor fragmented ALM aspects (as in figure 1.4).

All phases (and the stakeholders in those phases) should be involved with the complete ALM. All stakeholders have connection points to the uniform, comprehensive information hub. Let’s look at three major facets of modern ALM:[7]

7 Also see Carey Schwaber, The Changing Face of Application Life-Cycle Management (Forrester Research Inc., 2006).

  • ALM is both a discipline and a product category. There are many vendors selling full-fledged ALM suites, and others claim they have them in their portfolio. Lightweight, open source tools are also available and are often much easier and more cost-effective to implement. ALM doesn’t rely on using any specific ALM tools suite. Your work with ALM should start with the concepts and ideas behind it, such as traceability, automation, and reporting. Also, ALM activities should be strictly based on the requirements of “task-based development,” which we’ll discuss later.
  • ALM keeps lifecycle activities in sync. ALM doesn’t introduce any specific new methods of developing software. It’s more about introducing a supportive and implicit discipline to reduce complexity, keeping the people and processes in sync.
  • ALM integrates tools. ALM isn’t only about tools and using them but also about picking the right tools, using them effectively, and, above all, integrating them. Integration implies a barrier-free chain of tools that share a high-level workflow and consolidated data sets.

These aspects of the ALM approach led to the following key benefits:

  • Traceability of relationships between artifacts— Created artifacts such as documentation, requirement documents, tests, build scripts, change requests, and source code (for example, changesets) are synchronized and traced automatically. A unified view of them is provided to gain continuous insight into the current status of the development process; this clarifies which requirements were implemented where and which were tested with what results.
  • Automation of high-level processes— Technical people have been talking about automation for years, and continuous integration and other development-centric strategies are gaining momentum. ALM has continuous integration, but that’s not all. When we talk about ALM, we also talk about high-level processes and workflows (such as those for releasing software) that are automated. These workflows should be unique and barrier-free across tools and organizational units. This is an evolved step in integration. ALM deals with maximizing business value, efficiency, flexibility, and the protection of company assets. This can be achieved only through a high-level approach that connects business and technology.
  • Visible progress of development efforts— Often there’s a big gap between the real status of the development and the view available to managers and developers. This gap often increases the higher you climb up the management chain. Frequently, the technical staff reports an overly optimistic view of the current software development status. Managers also do that when they report to their superiors, as they are eager to show they have reached forecasts, objectives, or milestones. But the end result is ugly: Deadlines are missed because risk management was removed from the process, and a lack of transparency conceals progress right up to the end. The goal of ALM is to collect the relevant information, transform that information into knowledge, and generate high-level insight into problems and progress. ALM circumvents old processes that extracted the view manually; communicated it personally; or generated project reports, status meetings, and the like. Instead, the ALM system provides the information continuously.

Obviously, there’s a huge benefit to adopting an effective ALM. It’s also essential to understand that one must become Agile in order to truly be successful in implementing Agile ALM.

1.2.3. Becoming Agile: Agile ALM

Agile teams produce higher-quality work, deliver results more quickly, and are more flexible in responding to changes in requirements (as those requirements are understood by all stakeholders), which makes them more likely to create a greater (and often quicker) return on investment (ROI). The dominance of single large projects is gone. In recent years, IT projects have become smaller and smaller. It’s increasingly important to deliver low-cost solutions quickly, in small to midsize projects, or to use scoped milestones in big projects.

It’s also important to set up an efficient, lightweight infrastructure in order to gain the benefits of knowledge and synergies. There’s no “one size fits all” infrastructure for an ALM, mainly because every company and every project has its own basic conditions and culture. A purely process- or tool-centric approach obscures the fact that software is made by and for human beings, and therefore requires constant oversight by a human being. ALM can provide that oversight. This is one of the ways in which ALM helps to provide structure for Agile.

In this book, we’ll also focus on the processes and tools that play a major role in supporting the ALM, but in the center of an Agile ALM project, people, culture, processes, and tools are important for establishing stability, or what I will refer to as steadiness. Figure 1.6 illustrates these relationships, with tools and processes at the top of the steadiness pyramid. People are the foundation of the steadiness pyramid, followed by culture. You don’t want to use tools that will force you to practice specific processes. It needs to be the other way around: You identify and define the processes and decide on the tools to help you in applying those processes. To take the full path, you identify and define your goals as a first step and then use the processes best suited to achieving those goals.

Figure 1.6. Pyramid of steadiness: People and culture own and drive the processes and tools. All four aspects are important.

Culture is heavily affected by former projects, historical events, and the collective experience and knowledge of the company. Though it’s hard, culture must change if the organization is going to succeed with software development in the long run. Persuasion doesn’t work. Management has to commit to delivering high-quality software by building in learning time and providing support for studying new practices and processes that will work better. They need to provide the right intrinsic motivators, such as autonomy and ability to innovate. The development team has to commit to building high-quality software. The development team needs to understand the definition of quality, and management needs to value quality preferably over time, scope, and cost. As people start to understand, they’ll be allowed to “do things right,” and then they’ll be motivated to choose and learn the tools that best fit the process. Choosing the tools and the process is the easy part—it’s easy to implement a framework like Scrum. But it’s hard to flesh it out with a real commitment to quality and to business–technical collaboration.

All of this means that if you want to change a software development aspect in your company, you need both a bottom-up and a top-down approach. At the bottom level, you should persuade people to support the goals that you have articulated. It’s much easier to overcome resistance to change when you have support from the key stakeholders, such as an experienced programmer who recommends or already uses a specific tool (often an open source tool). Relying on the opinions of experienced people is better than having a bureaucracy deploy large, cumbersome tools. At the top level, you also need a strong commitment from management to change processes or tools, because people become attached to using certain processes and tools and won’t want to change without good reason.

Generally, stability is an advantage, so there should always be good reasons for changing something that’s successfully in place. But developing software is about change, and Agile addresses exactly that. It can be hard to change to an Agile environment, so it’s imperative to focus on selecting the tools best aligned to your flexible processes (not the other way around). These tools should have an open architecture, be simple to use, interchangeable, extensible, and interoperable.

Being flexible and agile in the classic sense requires an openness to change. Additionally, an integral, continuous risk management and review process is needed to quickly identify issues and their potential consequences. Modern software development consists of managing change and understanding all development activities as a defined and traceable process. Agile helps with change and risk management, independent of the overall development process you are using. Agile also focuses on the importance of transparency and on alignment with business value, as illustrated in figure 1.7.

Figure 1.7. Transparency, people, changes, risk, and concrete business value are essential factors that influence software engineering and that should be stressed in an ALM project.

We’ll learn more about concrete Agile strategies in chapter 2, and we’ll discuss lightweight, primarily open source tools throughout the rest of this book. Right now, we’ll look at the building blocks of Agile ALM. This information will give you the necessary preparation for subsequent chapters.

1.3. Building blocks of Agile ALM

What exactly is Agile ALM and what value does it add? In this section, we’ll consider that question in the context of software releases and service orientation. We’ll also discuss how important it is to be focused on the stakeholders’ needs and to use a task-based approach. We’ll consider configurable, pluggable systems and standards. Finally, we’ll talk about what it means to use and cope with “polyglot” environments, with their many languages and technologies, and how to apply open source methods and automation. The tools we’ll cover in this book will enable you to implement and support these building blocks. Tools are important, but it’s also important to start with a stakeholder focus.

1.3.1. Stakeholder focus

Developing a software application isn’t just about writing code. Once developed, code must be tested, approved, and deployed to the live environment where it must be maintained. Many programmers will expect their code to migrate to the live environment as soon as it’s completed to see that their deliverables are used, whereas others understand that the release will be promoted when it’s tested and approved.

 

Interdisciplinary Roles

In this book, the developer is a person who not only develops or programs code but also has an interdisciplinary skill set. Developers are skilled in coding, but they should also know how to test, configure, and ship features. As a consequence, some people don’t like to see the word developer used for programmer. They argue that everyone involved in delivering software is a developer, including the testers. The DevOps movement (the word is a blend of development and operations) similarly brings development and operations together regarding communication, collaboration, and integration. All developers—programmers, testers, database administrators (DBAs), and other people on the team—need to take responsibility for quality and for ensuring that all testing, configuration, and deployment activities are completed at the same time as the coding takes place.

 

Depending on their role within the organization, each person may have a different focus.[8] For instance, a developer should usually work in an isolated environment (with an IDE) and then commit code only if it won’t break the build for the rest of the team. Continuous integration (CI) provides immediate feedback to developers if code committed to the trunk can’t successfully build. The release manager needs to have a clear overview of the status and must know whether the latest code (on the trunk) will build successfully and pass all relevant unit and automated tests. The release manager should also be kept advised on the state of QA testing and should know the current version in production. Production operators prefer an automated deployment process, where they can control the environment variables and flawlessly release a specific baseline of the code (or revert to a prior one if necessary). Finally, the CIO and CEO of a corporation (among other senior managers) want to see an automated and repeatable process with an audit trail.

8 In The Art of Project Management (O’Reilly, 2005), Scott Berkun talks about three different perspectives: the business perspective, the technology perspective, and the customer perspective (chapter 3).

ALM consists of several steps, including traditional versioning, and ends with deployment, always weighing the importance of the underlying business (the target domain). All those aspects are important for individual stakeholders, as outlined in table 1.1.

Table 1.1. Stakeholder focus in an Agile ecosystem

Why versioning?

  • Developer: Keep track of the changes
  • Production*: Easily revert to a prior version
  • Management: No loss of data
  • Customer: Reliability

Why a continuous integration process?

  • Developer: Concentrate on developing software; early feedback; integrate with code from others
  • Production*: Get high-quality production code
  • Management: Fewer errors; repeatable process; faster and shorter release cycle; early feedback
  • Customer: Working software

Why an automated build?

  • Developer: No loss of valuable time
  • Production*: Everything is coordinated by a script
  • Management: Prevents mistakes
  • Customer: Fast feedback cycles

Why automated deployment?

  • Developer: Guarantee that production will receive the quality code
  • Production*: Consistent and reliable process for deployment; no manual intervention reduces risk
  • Management: Increases the possible release cycle frequency and productivity
  • Customer: High quality

Why a defined process?

  • Developer: Easier to build code for the test or production environment
  • Production*: Automate production deployment
  • Management: Reduce rework; answers questions of who, when, why, and what occurred; comprehensive view
  • Customer: Bridging technology and business

* For example, deployment, delivery, maintenance

We need to consider all stakeholders and their interests in an Agile environment (stakeholder focus). We also need to consider both the functional and technical views of release management.

1.3.2. Views on releasing and Agile ALM

Agile ALM can be split into a functional view and a technical view.

The goal of a functional view of ALM is to assign and track the implementation of requirements. Effective release management is at the core of successful Agile ALM, and this can be implemented with the help of Agile process frameworks like Scrum. Even if you don’t use an overall Agile process model, applying Agile strategies can still improve the development process.

On the other hand, a technical view of ALM deals with integrating components (integration management) and increasing productivity by improving the development process, such as with continuous integration and installing productive development environments. A technical process and infrastructure hub enables automatic building and releasing, and incorporates testing, quality auditing, and integrating requirements.

In chapter 3, we’ll discuss the functional view by implementing Scrum. In the same chapter, we’ll bridge the gap between the functional and technical views.

Agile ALM also places a strong value on understanding the impact and value of the service orientation.

1.3.3. Service orientation, SaaS, IaaS, PaaS

Providing a service on demand (or Software as a Service—SaaS) isn’t new. There have been a number of successful SaaS systems, including customer relationship management (CRM) systems such as Salesforce (www.salesforce.com), which introduced this approach long ago. Today, there are many successful SaaS services, including several from Google: Gmail, Google Docs, and Google Calendar.

SaaS applications are often hosted on the providers’ web servers to be used whenever the customer needs the service, and the vendor usually provides an API with a well-defined interface to make these services available. Normally, the functionality can be used by web services through a service-oriented architecture (SOA). With this approach, you distinguish business services from technical, isolated services. Technical services, for instance, encapsulate data (data access). The SOA approach introduces producers of services and consumers of those services. Reusing services and assets can help improve productivity. A repository of services on a consumer level is as important as a repository of technical components on a detailed, technical level. We’ll look at this in more detail in our discussion of component repositories.

Besides SaaS, Infrastructure as a Service (IaaS) is also relevant today. A popular example is Amazon Web Services (AWS), a set of web services that includes Elastic Compute Cloud (EC2), a big pool of hardware that can be used dynamically. A standard use case is to install your own images on those remote computers to extend processing power. Meanwhile, IaaS has become easy to use (a commodity). For instance, tools from VMware make it possible to set up and roll out full images of computer systems and virtual machines.

Finally, Platform as a Service (PaaS) focuses on a platform that itself (the runtime environment) is hosted and scaled dynamically. An example of this is the Google App Engine, which lets you deploy and run your own applications.

Cloud computing is an example of IaaS that can also include PaaS and SaaS. Cloud computing describes scaled, configured, and dynamically provisioned infrastructure. The cloud can be publicly accessible on the internet, private (internally accessible), or a hybrid of the two. One cloud computing scenario in an Agile ALM context is to run agents (slaves) of a build server in the cloud, adding additional, temporary power to your build grid as needed.

I refer to SaaS, IaaS, and PaaS collectively as (X)aaS. They are affected by Agile ALM, and vice versa. Although outsourced, the hosted items should be included in the ALM and not treated separately, though this isn’t mandatory. You can use (X)aaS without any ALM in mind.

Agile ALM also involves slicing functionality into services, letting you focus on core expertise while reducing costs and ensuring reproducibility. You need to know which services and assets your project or company has in order to decide what you could add to this service “zoo” with an (X)aaS. This doesn’t depend on the scope: on a business level, you benefit from knowing your services and the functions they provide.

You may think this is obvious, but many companies don’t know what functionality they have built up over the years. Identifying the services when starting with (X)aaS is a big value to begin with. You can’t distribute a service into the cloud if you don’t know which services you have. The same is true on a more technical level. You can gain huge benefit by identifying the components and their dependencies. Many companies don’t know which technical assets they have, nor do they know the asset dependencies. They ship a big package of deployment units containing an opaque meshwork of objects, without knowing whether those units are necessary in that context. This is a use case where a build and release tool, such as Maven, can help to identify what units you have in your technical portfolio, including specific versions and dependencies.
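As a quick illustration of what that can look like, Maven’s standard dependency report lists a project’s deployment units together with their versions and transitive dependencies. The command is real; the project coordinates and versions shown here are made up:

  mvn dependency:tree

  [INFO] com.example:shop-webapp:war:1.3.0
  [INFO] +- com.example:shop-core:jar:1.3.0:compile
  [INFO] |  \- commons-lang:commons-lang:jar:2.5:compile
  [INFO] \- junit:junit:jar:4.8.2:test

A report like this makes the technical portfolio explicit: you see at a glance which versions of which units an application really drags into production.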

1.3.4. Task-based and outside-in

Working in a task-based way means that, first, all activities are based on specific requirements or tasks, often called work items. Task-based also means tracing each task and the changes it creates, and this becomes even more appealing if you span the tracing over all roles, phases, and organizational units, including production. When are you close to the maximum of improving your process? When the production crew (and any other stakeholders) not only host the final deployment units but also know exactly which units are based on which sources. Additionally, you know which sources were touched in response to which change requests. This holds regardless of languages, systems, and organizational barriers.

You can link requirements and defects to coding items, and vice versa. This referencing makes it much easier to validate that the work is done to plan and that the plan is getting done. This end-to-end referencing scales much further than the plain story cards used by some Agile approaches, although story cards may be sufficient in many circumstances. A common method is to add a ticket number to the check-in command so tools can cross-reference requirements with coding artifacts. An essential method is using change-sets. A change-set is a group of changes made to the system but processed as an atomic unit. Consider the different changes a developer must make to implement a new feature. Instead of checking in each change separately, they can be checked in as a single atomic transaction. This way, the system can verify that all changes are traced to their respective requirements and can update the status of the baseline.
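A minimal sketch of what this looks like at the command line, using Subversion (the ticket ID, message, and file names are invented for illustration):

  # One change-set for one work item: both files go in as a single atomic
  # transaction, and the ticket number in the message lets the toolchain
  # cross-reference the change-set with requirement PROJ-123
  svn commit -m "PROJ-123: add discount calculation to checkout" \
      src/Checkout.java test/CheckoutTest.java

Tracking tools that integrate with the VCS can then scan commit messages for ticket IDs and attach each change-set to its work item automatically.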

A basic premise of Agile ALM is that work should be aligned to the customer’s requirements. One approach to doing this is called outside-in. Too often, work isn’t based on specific customer requirements, and sometimes requirements aren’t defined at all or aren’t tracked through the process. Other times, the technical staff and the customer may be speaking different languages when defining the requirements of the software. The outside-in approach takes the right focus, and it leads to a different way of measuring success; it values customer satisfaction and other soft attributes. Its main drivers are as follows:[9]

9 See Carl Kessler and John Sweitzer, Outside-in Software Development (IBM Press, 2008).

  • Understanding your stakeholders and the business context
  • Mapping project expectations to outcomes more effectively
  • Building more consumable software, making systems easier to deploy and use
  • Enhancing alignment with stakeholder goals continuously

In this way, the customer requirements are implemented in the software development system.

Another important approach is known as a balanced scorecard, and it has much in common with outside-in. The customer (internal or external) requests a job and wants the job completed in a way that meets the business requirements. The customer pays for functionality, not for technical solutions, design patterns, or prefactoring patterns on their own. The customer is interested in the resulting software and values working software more than comprehensive documentation. An unfortunate consequence of this approach is that you may deliver the software late. If you can deliver the product only when it’s completed, the customer will expect that the release has been rigorously tested and is production-ready. Failing to communicate the status of the software and the development of the requirements to the customer is a missed opportunity at best, and will likely result in poorer quality software, as measured by the many bugs that will be detected late in the process. The outside-in approach is driven by communicating the status of the software to the customer early, which enables you and the customer to make decisions sooner rather than later.

You should communicate with the customer and the whole team in real time by setting up a task-based infrastructure. With the help of this infrastructure, all stakeholders, including the developer (in his workspace), the project or release manager, and the quality team, are kept informed about the implementation progress. The most important stakeholder—the customer—is also able to get honest answers about the project’s current status.

 

Outside-in and Balanced Scorecard (BSC)

There are parallels between the outside-in and balanced scorecard (BSC) approaches. Robert S. Kaplan and David P. Norton introduced BSC as a strategic performance management tool. It also values nonfinancial measures and adds them to project reporting. BSC has four perspectives: financial, customer, internal business, and innovation/learning.

 

In chapter 4, I’ll describe task-based development, and in chapter 8, I’ll explain collaborative development and testing and provide a concrete implementation of outside-in. Acceptance tests and behavior-driven development (BDD) are properties of collaborative testing. Part 4 covers this major aspect of Agile ALM.

1.3.5. Configuration, customization, and plug-ins

The days of proprietary, heavyweight, monolithic tools that constitute the one-size-fits-all solution are ending. Tools that can be orchestrated and configured according to individual needs are the new trend. They provide features in an open, standardized way (for example, as a service), but they can also be configured and extended as needed.

Tools don’t have to be reimplemented or extended programmatically to fit the latest project needs. Continuous reimplementing is a nice business strategy for tool vendors, because it generates steady sales revenue, but companies change their minds and therefore require flexibility. Tools nowadays can be reconfigured extensively without touching the sources and without needing upgrades or replacement. Customization is easy enough that the project members can implement it on their own, without a long learning curve.

Moving away from a more monolithic infrastructure, we can turn to a development system involving fine-grained modules. Application suites are customized as needed, and functionality is added where necessary with the help of plug-ins. These plug-ins may be part of the tool vendor’s product portfolio, or they may come from a third-party institution or from the open source community. The overall tool integration infrastructure is evolving into what is known as a mashup, which refers to a toolset that combines data, user interface, and other functionality from two or more sources to create a new service.

Other key issues with monolithic infrastructures include the effort users must exert to personalize an application to their needs and the number of UI controls they must navigate to access their data. Today, role-based applications with complex dashboard functionality are state of the art. Dashboards can be configured and customized to individual needs, and they offer many customization features out of the box, without the need to contact the vendor. Dashboards offer views of aggregated data and allow you to zoom in to get more details.

1.3.6. The polyglot programming world

We have already discussed the need to provide integrated access to the requirements and tests. But what happens with all those coding artifacts themselves? Many big companies still use Cobol, for example. Others use only Java. To protect company assets, businesses integrate their legacy applications or partially enrich them with new technologies and components, such as providing new, more convenient user interfaces. Those different language sources have to be developed and should be managed in an integrative fashion. To enable that, integrated development environments (IDEs) increasingly support development with different artifact types. You can store all of those artifacts in one version-control system (VCS).

 

Some Words about Legacy Code

Legacy code can sound pejorative, but legacy code, written in older languages such as Cobol, may be an essential company asset. Billions of lines of Cobol code have been written, and applications based on Cobol continue to do their job and are still being extended with new Cobol code. For other people, legacy code means code written by someone else last week. A third meaning of legacy code is code that lacks significant test coverage. For a detailed discussion of legacy code, see Michael Feathers, Working Effectively with Legacy Code (Prentice Hall, 2004).

 

Challenges in software development can be complex and individual. Having the choice of using a special programming language to solve a specific problem can be valuable. You can be more effective when you have an open landscape where you can use the technology and programming language best fitted to your task.[10] Whatever technology you use, it should be the best fit for the task.

10 See Andrew Hunt and David Thomas, The Pragmatic Programmer (Addison-Wesley Professional, 1999).

For example, it can be better to write a simple file-copying script with the Ant tool than to set up a full-fledged Java application. Or you might use a dynamically typed language like Groovy to write tests easily, or a statically typed language like Scala to enhance your software system, because it can be smoother to use these languages than Java, even though all these languages (Groovy, Scala, and Java) compile to bytecode and share the same JVM runtime environment. We look at Groovy and Scala in chapter 9, but the point isn’t that you need to learn new languages, but rather that you may have to cope with multilanguage environments.
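To give a taste of why a language like Groovy can make tests easier to write, here is a minimal sketch of a Groovy unit test. It assumes JUnit 4 is on the classpath; the class and method names are invented for illustration:

  import org.junit.Test

  class NameNormalizerTest {
      @Test
      void namesAreTrimmedAndCapitalized() {
          def raw = '  alice  '
          // Groovy's power assert prints each subexpression on failure
          assert raw.trim().capitalize() == 'Alice'
      }
  }

Because Groovy compiles to bytecode, a test like this runs alongside Java production code in the same build.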

In The Productive Programmer (O’Reilly, 2008), Neal Ford defines polyglot programming as “building applications using one or more special-purpose languages in addition to a general-purpose language” (p. 169). In this book, we’ll talk about how you can integrate different artifact types in a continuous integration context. Furthermore, we’ll discuss and look at examples of how to use and integrate other languages to accomplish special tasks within the overall process.

1.3.7. Open source culture

Development tends to rely more and more on lightweight, primarily open source tooling, which supports using Agile strategies. Companies have learned they can’t cope with time and cost pressures by focusing only on heavyweight processes and tooling. Lightweight tools can help here, and we’ll discuss and integrate a lot of them in this book. But lightweight, open source tooling can require a cultural rethink within the company to overcome the tendency to resist change. Don’t be afraid to change processes and tools where needed. Keep your solution aligned to your requirements as they evolve. Many tools don’t evolve rapidly enough, others are evolving rapidly, and new tools are continuously entering the field.

With a lightweight toolchain, you should watch the market continuously and acquire new tools with better features as they become available. There are many open source tools available, but only successful open source products have a broad supportive community. If the community is supportive and the products are powerful, as well as easy to use, they’ll maintain a leading market position by attracting more people to invest time in further developing the product.

If new open source competitors surpass a former market leader, it can be dangerous to ignore this development. A good approach is to be flexible in your decisions, to continuously monitor the market, and to focus on the tool mainline consisting of de facto standards and popular tools.

Buying commercial tools also requires you to watch the market. But once you’ve bought an expensive tool, you’re often stuck with it for a long time. Another problem is that all vendors of commercial products claim their products are the best. Running an open source culture means being open-minded and preferring open, flexible solutions that can replace approaches and tools quickly with new and better ones. It also means you should constantly evaluate whether what you did yesterday still works best today and experiment with alternatives.

1.3.8. Open technology and standards

We’ve already talked about how Agile ALM facets and services should be orchestrated on demand, driven by the specific needs of a project. Tools have their interfaces, and ALM encourages the seamless integration of tools without barriers. Now the question is, how can we integrate those tools efficiently across different vendor products to provide services for the customer? For instance, it’s pretty common to have multiple independent databases in your infrastructure. The minimum solution is to have open standards, such as internet protocols, to connect them. Integration shouldn’t be done via data import/export routines, though; rather, data should be integrated where it’s located.

What standards address these kinds of questions? The Open Services for Lifecycle Collaboration (OSLC, http://open-services.net) is a community-driven effort, mainly sponsored by IBM, to improve the integration of lifecycle tools. Members of the alliance are commercial vendors of ALM tools and other stakeholders, including IBM, Oracle, Accenture, Shell, Citigroup, Siemens, and many others. Commercial vendors drive the OSLC, which is aligned with feature-rich tools (like the IBM Rational product family), but the program is tool-category agnostic, meaning that it also encompasses open source tools.

The OSLC program established special interest groups working on individual ALM areas, including change management, requirements management, and software configuration management, and providing public descriptions of interfaces for integrating these features. The interfaces are specified in a REST web service style.
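To make this more concrete, here's a minimal sketch of what consuming such a REST interface could look like from Java. The resource URL is a hypothetical placeholder, and the exact media types and paths depend on the service provider and the specification version, so treat this as an illustration rather than a definitive client:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class OslcClientSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical URL of a change request resource on an OSLC service provider
        URL resource = new URL("https://alm.example.com/oslc/changerequests/4711");
        HttpURLConnection conn = (HttpURLConnection) resource.openConnection();
        conn.setRequestMethod("GET");
        // OSLC resources are typically offered as RDF; which representations
        // are supported is up to the provider
        conn.setRequestProperty("Accept", "application/rdf+xml");
        if (conn.getResponseCode() == HttpURLConnection.HTTP_OK) {
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream(), "UTF-8"));
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line); // dump the resource representation
            }
            in.close();
        } else {
            System.err.println("Request failed with HTTP status " + conn.getResponseCode());
        }
    }
}

Because the interfaces are plain REST over HTTP, any language or tool that speaks HTTP can integrate this way; no vendor-specific client library is required.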

The OSLC Open Source Project aims to encourage the creation of further components and contributions that can help support the OSLC community's goals. As part of the project, reference implementations, sample code, and test suites for testing OSLC service provider implementations are provided.

In this book, we’ll discuss the concepts and solutions for ALM with lightweight, primarily open source tools and how to integrate them seamlessly. One benefit of choosing best-of-breed, lightweight, open source tools is that integrating them is often easier than integrating monolithic commercial tools. I call this the “Agile way.” It’s also based on technology standards, but without any cross-tooling interface standards.

OSLC has had a slow beginning. Although the first specifications have been finalized, many are still in development, and the community is growing continuously. A prominent implementation of OSLC is available with IBM's Jazz platform (http://jazz.net). Jazz is based on OSLC and extends it with its own Jazz Integration Architecture. IBM wants to use this approach to further integrate its individual Rational products, including Rational Requirements Composer (a requirements management tool) and Rational Quality Manager (test management), and to incorporate them with Rational Team Concert.

The impact of OSLC on Agile ALM development has to be monitored. Some people have reservations about big, traditional product vendors and their motivations. The question that many people will ask is, will OSLC have any significant influence on open source or commercial tools at all, or is it a “founder’s toy”? If the latter is true, then only the original participants will benefit from this latest attempt at creating open standards.

1.3.9. Automation

Automation is the use of solutions to reduce the need for human work. “Automation can ensure that the software is built the same way each time, that the team sees every change made to the software, and that the software is tested and reviewed in the same way every day so that no defects slip through or are introduced through human error.”[11] In software development projects, a high level of automation is a prerequisite to quickly delivering the best quality and getting feedback from stakeholders early and often.

11 Andrew Stellman and Jennifer Greene, Applied Software Project Management (O’Reilly, 2006), p. 165

It's most essential to automate the most error-prone, repetitive, and time-consuming activities. Additionally, automation is necessary in all areas where you're interested in objective, reproducible results. Another good impulse to start automating is when some parts of the process aren't transparent to the team. But you shouldn't automate parts of the process that you don't yet understand; you can only automate what you understand and are able to describe. Finally, automation helps in areas where manual work is annoying—good developers have always automated the repetitive aspects of their work.
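As a minimal sketch of this idea, the following Java snippet wraps an existing build script so that it's invoked identically every time and fails loudly on errors. The command line and working directory are assumptions; the point is that the invocation is the same on every run and the result is an objective exit code rather than a human impression:

import java.io.File;

public class BuildRunner {
    public static void main(String[] args) throws Exception {
        // Invoke the existing build script the same way every time;
        // "ant clean dist" and the project path are hypothetical examples
        ProcessBuilder builder = new ProcessBuilder("ant", "clean", "dist");
        builder.directory(new File("/projects/myapp"));
        builder.inheritIO(); // make the build output visible to the team
        int exitCode = builder.start().waitFor();
        if (exitCode != 0) {
            // Fail loudly and reproducibly instead of relying on
            // someone to notice a broken build
            throw new IllegalStateException("Build failed, exit code " + exitCode);
        }
    }
}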

A system can evolve to a high level of automation if the process is based on the building blocks of Agile ALM, as illustrated in this book. Continuous improvement should be part of your process, and for improving the level of automation (or anything else), self-reflection is essential. You can best improve what you measure, and to measure something, you need a process that delivers results in a reproducible way.

1.4. Comprehensive Agile ALM with lightweight tooling

Complexity can take many forms. Organizational aspects, such as team size, distributed development, or antipatterns like entrenching people in silos, create complexity. Technical and regulatory requirements are basic conditions that also influence complexity.

Small teams with low organizational or technical complexity can completely self-organize and choose the tools they want. But a loosely managed infrastructure may be unmanageable as soon as complexity increases. To improve awareness across the team in complicated scenarios, it’s necessary to use leading tools and their powerful features. High demands for traceability and full automation, as well as for accelerating knowledge sharing, can only be fulfilled using integrated toolchains consisting of best-of-breed tools while driving an end-to-end approach.

As complexity increases, integration becomes the focus. Communication becomes more difficult, as does extracting knowledge from information and aggregating information from data. To reproduce and audit the full process at its most complex point, you should use a comprehensive end-to-end approach that includes all stakeholders, workflows, and configuration items. Tools that are seamlessly integrated immediately add considerable value to the system. Understanding the overall process and the status of both the project and the artifacts becomes a necessary part of coordinating the work.

In software development projects, an end-to-end approach delivers the best results: you automate and integrate activities across phases, including building, developing, testing, releasing, deploying, and staging (configuring) artifacts with appropriate tools. Figure 1.8[12] shows the increasing importance of tools in the context of organizational, technical, and regulatory drivers.

12 Inspired by and derived from Scott Ambler, Collaborative Application Lifecycle Management with IBM Rational Products (IBM Redbooks, 2008), p. 41, Figure 2-14.

Figure 1.8. Interdependency of complexity and tool usage: In complex environments, it’s essential to use an integrated toolchain that glues together the best-of-breed tools to serve all stages of development in an end-to-end approach. Each circle must build on the previous one, so the end-to-end focus needs to be integrated, best-of-breed, and pragmatic.

1.4.1. Toolchains and accidental complexity

The benefits of an integrated end-to-end tooling approach extend beyond coping with complexity itself; the approach must also minimize accidental complexity. Accidental complexity is complexity that's nonessential to the specific task to be performed. Whereas essential complexity is inherent and unavoidable, accidental complexity is caused by the approach chosen to solve the problem. An effective toolchain helps reduce accidental complexity by providing traceability: it shows what has changed, when it was changed, who changed it, and who approved the change for promotion. A good toolchain also accelerates communication by providing transparency and visibility into the current state of the software, and it serves as the communication vehicle for all stakeholders.

The toolchain is both the glue that holds together the various components and phases of the application lifecycle and the oil that lubricates the smooth and efficient interaction of those components. It delivers an automated workflow, drives a continuous stream of activity through the development lifecycle, and efficiently coordinates and streamlines development changes.

 

Vendors

There are many proprietary, commercial (and expensive) tools and tool suites on the market, such as AccuRev AgileCycle, CollabNet TeamForge, codeBeamer, MKS Integrity, Synergy, IBM Rational Team Concert, PDSA Agile ALM, Rally ALM, Visual Studio Team, and Borland Management Solution that can help (or that claim to help) implement an (Agile) ALM. Others, such as StarTeam and DOORS, provide support for single aspects in the overall process.

 

Chains of lightweight tools help you to deliver solutions across development phases, addressing even more stakeholders and keeping businesspeople and developers on the same page. Lightweight tools offer the features you need based on your project’s requirements. They are customizable and straightforward to use, they have an open architecture, they’re mostly free or moderately priced, and they can be easily integrated with other tools. My definition of Agile ALM results in processes and toolchains that are flexible, open to change, and high in quality. But always keep in mind that Agile ALM isn’t only a product category, but also a discipline and a mental approach. Working with Agile ALM should start with values and people as well as the concepts behind it.

 

Free and Open Source Tools

This book doesn’t strictly distinguish between free software and open source software, and it contrasts both to commercial software. There are many possible variations and license models, but in this book, I use open source in its classic sense: when the sources of the tool are available and the tool is free. We’ll discuss lightweight and primarily open source tools. Open source tools covered in this book are considered to be lightweight too; some lightweight tools covered here aren’t open source and cost money, but they’re cost-effective and low-priced in comparison to feature-rich products from big traditional vendors. Examples of lightweight commercial tools are those from Atlassian, such as JIRA. Consult the individual tool licenses for the details on each tool.

 

1.4.2. Agile ALM tools

Some software development tools are too heavyweight or monolithic, or they offer functionality you'll seldom use. Often these tools are expensive and difficult to roll out. Depending on your particular requirements, commercial, feature-rich tools or one-stop-shop tool suites may be a good fit for you, but these tools aren't the focus of this book. Recently, the ALM space has seen a surge of integration with Agile concepts. Tool vendors increasingly understand that it's crucial to become agile in order to cope with continuously changing requirements and contexts, and as a result, more and more companies use the term "Agile ALM" to describe their ALM suites. The origin of this book is different: here, Agile strategies are introduced and implemented with lightweight tools, and chains of integrated tools lead to tailored, orchestrated ALM solutions.

An Agile ALM tool is one that fosters an Agile process. There’s no strict checklist to categorize whether a tool is an Agile ALM tool, but the tool must enable you to become Agile—the tool must help the team do its job better, aggregating and providing information in an integrated, interdisciplinary way. An Agile ALM tool must add value to the system and improve the collaboration of the stakeholders. In my opinion, an Agile ALM toolchain must implement the essential Agile ALM strategies discussed in this book.

Some organizations use Agile ALM single-point solutions; others feel more comfortable with an orchestration of single tools. Both scenarios have their advantages and drawbacks. Too much complexity is a potential risk in both cases; the goal should be to minimize accidental complexity. Relying on lightweight toolchains can dramatically improve flexibility, because you can easily replace small units of the overall infrastructure without touching other parts. Many companies achieve their best results (the optimal ratio of minimized complexity to maximized flexibility) by driving an open source culture: they use a mashup of configurable tools that offer exactly the features needed to solve a given task, and they evolve the infrastructure incrementally. Configurability, service orientation, and an open architecture (such as a plug-in system) help to decrease complexity and increase flexibility. For "ready to go" tool suites, configurability is even more important. The market doesn't offer an Agile ALM tool suite that could serve as a golden hammer for all projects without any configuration capability. Using a comprehensive one-stop-shop solution that can't be customized or extended as needed leads directly to "shadow processes" or to retrofitting your process to work with the tool, which is a pretty bad approach.

There are Agile ALM tools or tool suites that cover (or claim to cover) many development phases. But it’s not mandatory for a single tool to span all phases. Agile ALM tools can’t and shouldn’t automate everything. For example, consider build scripts: Tools should be able to trigger existing build scripts. But it’s not the one-stop-shop Agile ALM tool suite that compiles the code; rather, it’s the underlying solution that’s already in place and successful.

1.4.3. Effective and efficient tooling

The process of picking the right tool should be aligned with your particular requirements. You may find that an out-of-the-box suite best fits your individual context. Alternatively, you might prefer to orchestrate individual tools in a flexible way, where each tool focuses on a special task and can easily integrate with the overall tool infrastructure. A toolchain that spans different development phases is sometimes called software development lifecycle (SDLC) tooling. Integration management integrates the work of your team and leads to technically and functionally consistent software. From a tool perspective, an Agile ALM tool integrates with other tools. An isolated, standalone tool, acting as a silo and satisfying only a minor subset of your stakeholders, will probably neither accelerate collaboration nor improve the time to market of your software product. That said, you can also use tools successfully without connecting them to an overall Agile ALM ecosystem. Additionally, there are many great tools, market leaders in their fields, whose users would never hit on the idea that they're using a tool that could be an essential part of an Agile ALM toolchain. I'll cover examples of those tools throughout the book.

Figure 1.9 gives an overview of the tools, languages, and platforms we’ll discuss and integrate in this book. Lightweight tools are used throughout the complete development chain. In terms of languages and platforms, we’ll mainly talk about Java, but we’ll also discuss Cobol, Scala, Groovy, and .NET.

Figure 1.9. Agile ALM integrates platforms, tools, and languages, all driven by people (illustration © iStockphoto.com/sellingpix).

Normally, there’s one leading tool that drives the process. It’s the central entry point, which is generally also responsible for the workflow or that acts as a central dashboard. Building and releasing a software product involves many complex processes, roles, and deliverables, which need to be managed so they fit together, and streamlining these processes is a major effort, particularly when there are many people involved. An Agile ALM solution manages not only the simple versioning of your source code files, but it also facilitates support for continuous integration and build management. An Agile ALM solution also enables you to deploy the end result, and it offers approval processes and can manage complex runtime dependencies. Agile ALM tools have much more flexibility than the first-generation library tools that once enabled you to pump out a single software version at a time to a target library.

In modern Agile settings, the whole lifecycle is managed and tracked. With effective and efficient tooling, it’s much easier to determine which requirements are already implemented in which artifacts and which bugs can be traced to specific artifacts. The artifacts can be compiled and deployed as a repeatable process. Continuous integration and audits can show the status of the development and provide synchronization points. But this doesn’t happen manually—a toolchain should be used to connect and integrate both functional and technical release management.

In order to understand Agile ALM more clearly, let’s consider an example use case.

1.5. Example use case

Let’s look at an example Agile ALM use case. Before committing a change to the central VCS, the developer runs a private build on his desktop. This developer build is comparable to a nightly build (in continuous integration), which is often the same as an integration build. The difference is that the developer build runs in a local, isolated environment instead of the shared integration environment.

The required versions of identical, properly sliced component dependencies or transitive component dependencies are used, and if necessary they’re replaced with mocks. Component developers include in their workspaces only those external components that are necessary to do their work. The external components are included as clean dependencies, which means that binaries, rather than sources, are included in their development environments. The dependencies are tested at a central place and are provided via a central component repository.

During the private builds, tests are run, including unit tests, component tests, functional tests, and smoke tests. The integration builds also run these tests, but they enrich them with more detailed integration tests.
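As a small illustration of replacing a component dependency with a mock in such a private build, consider the following JUnit test. The RateService interface and the calculator are hypothetical stand-ins for a real component and its consumer:

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class InvoiceCalculatorTest {

    // Hypothetical component dependency, normally consumed as a tested
    // binary from the central component repository
    interface RateService {
        double taxRateFor(String country);
    }

    // Hypothetical unit under test
    static class InvoiceCalculator {
        private final RateService rates;
        InvoiceCalculator(RateService rates) { this.rates = rates; }
        double gross(double net, String country) {
            return net * (1 + rates.taxRateFor(country));
        }
    }

    @Test
    public void grossAmountUsesTaxRateOfTheCountry() {
        // Replace the real component with a deterministic mock, so the
        // private build doesn't depend on the shared integration environment
        RateService mockRates = new RateService() {
            public double taxRateFor(String country) { return 0.19; }
        };
        assertEquals(119.0, new InvoiceCalculator(mockRates).gross(100.0, "DE"), 0.0001);
    }
}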

 

Note

A smoke test (or sanity test) is a first test that provides some assurance that the system won't fail catastrophically. In an IT project, a smoke test could be an automated test that starts the application and simulates a first user interaction, such as opening a visual control on the user interface.
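As a minimal sketch, such a smoke test could look like the following JUnit test. The Application class here is a trivial Swing stand-in; in practice it would boot your real system:

import static org.junit.Assert.assertTrue;
import javax.swing.JFrame;
import org.junit.Test;

public class SmokeTest {

    // Hypothetical stand-in for the real application under test
    static class Application {
        private final JFrame mainWindow = new JFrame("MyApp");
        static Application start() {
            Application app = new Application();
            app.mainWindow.setVisible(true); // simulate opening a visual control
            return app;
        }
        JFrame mainWindow() { return mainWindow; }
        void shutdown() { mainWindow.dispose(); }
    }

    @Test(timeout = 30000) // fail if startup hangs
    public void applicationStartsAndShowsMainWindow() {
        Application app = Application.start(); // boot the whole system
        assertTrue("main window should be visible after startup",
                app.mainWindow().isVisible());
        app.shutdown(); // leave a clean environment behind
    }
}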

 

 

Builds

A build is a standardized, repeatable, and measurable compilation and packaging process that runs automatically. The specific steps of this process may vary: some argue that building means compiling; others include preparing the system, working on baselines in VCSs, compiling sources, running different sorts of tests, and packaging and distributing the configuration items. Build can also refer to the broader automation flow, including static code analysis, unit testing, and deployment. The output (the deliverables) of this process is often called a build as well.

 

The changes from multiple developers are integrated continuously and automatically. Each version of the build is the basis of later releases and distributions.

The build is completely configurable and can produce multiple versions that run on different target environments, including Windows and Linux machines. But be aware that even early in the process you should use environments (including operating systems) that match those you’ll use in production.
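A small sketch of such configurability: the target environment can be passed into the build or the application as a property, so the same process produces versions for different environments instead of hard-coding one. The property name and configuration file layout here are assumptions:

import java.io.FileInputStream;
import java.util.Properties;

public class EnvironmentConfig {
    public static void main(String[] args) throws Exception {
        // Passed in, for example, as -Dtarget.env=linux-test; the default
        // value and the config file layout are hypothetical
        String env = System.getProperty("target.env", "linux-test");
        Properties config = new Properties();
        FileInputStream in = new FileInputStream("config/" + env + ".properties");
        config.load(in);
        in.close();
        System.out.println("Target application server: "
                + config.getProperty("app.server.url"));
    }
}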

This task-based build process overlaps different phases and roles. The developer codes according to tasks that are tracked and assigned. The application comprises a bouquet of languages: company assets developed in commonly used, mainstream programming languages, and targeted languages chosen to solve specific programming tasks. For testing, Groovy and Scala are used, and all the tests are integrated into a seamless infrastructure without any cross-media conversions or friction losses.

The status of the software is always visible across the system. The toolchain is highly integrated and connects all developers and customers. In addition to the technical release infrastructure, the functional release management slices work items into more fine-grained tasks and assigns features to releases.

We’ll discuss the different aspects of this example use case throughout the rest of this book.

 

Field report: “ALM at Siemens CT”
By Rainer Wasgint, program manager at Siemens Corporate Technology

At Siemens Corporate Research and Technologies, we address cross-business sector topics and applied science and share best practices. Within the Global Technology Field (GTF) System Development Technologies, we look for the top tools and technology to give our development teams the best possible support. Over the last several years, we have clearly seen software development infrastructures move away from ad hoc toolchains and toward carefully considered, well-managed, and integrated solutions, now called ALM. For us, this meant finding an integrated solution that offers seamless project support to increase efficiency and provide transparency within the development process; impact analysis of potential changes and identification of the involved parties; and project status reporting, including dashboarding for customers or upper management.

For example, when we were faced with the task of meeting the CENELEC/EuroNorm regulatory quality standards for motors, we needed a process and toolchain that was adaptable to different kinds of environments, ranging from small, colocated development teams to large, worldwide distributed project teams. It also needed to be applicable to different project processes, from heavyweight processes like RUP or the V-Model to Agile approaches.

To support such a huge range of potential ALM solutions, our approach is to implement scenarios based on a "meta-model" specific to each project. The meta-model describes the interconnection of the involved disciplines and the associated development artifacts, such as requirements, models, tests, and configurations. This allows us to control the resulting interaction paths in a structured manner. Rather than claim overall completeness for every conceivable relation, the model focuses on the information needed for traceability, impact analysis, and the automation of tasks within ALM.

An optimal instantiation of this meta-model requires flexible and configurable interaction between the software development tools used. Today, mainly interface-based, plug-in, or middleware-bus-based approaches dominate the market, and unfortunately, there is no major standard for intertool communication and data exchange. We see a promising movement toward service-based and on-demand tools built on open standards, such as the Open Services for Lifecycle Collaboration (OSLC), which is community hosted and driven by IBM Rational to develop a specification for vendor-independent tool integration.

 

1.6. Summary

In this chapter, you learned about challenges common to software development. You saw how this prompted the evolution in software engineering that led to Agile ALM. You also learned what Agile ALM is for and what its features are.

The evolution of software engineering has moved from a cumbersome, fragmented approach to a more comprehensive, lean, integrated, crosscutting discipline that supports the whole development process. ALM evolved from software configuration management; it’s a comprehensive activity spanning the entire development lifecycle, from requirements engineering to maintenance and production, iterating over those phases continuously. It can be understood as a continuous, comprehensive task that incorporates several management disciplines, including build management, release management, configuration management, test management, quality management, and integration management. This integrated approach is called applied change management. The goal is to provide high-quality, functionally and technically consistent software. Integration also means the ALM infrastructure provides a single view of the truth as opposed to multiple views.

Using Agile strategies, Agile ALM enriches ALM with further features like peopleware and transparency. Agile ALM is agnostic concerning process models, development categories (product versus project development), and project types (from greenfield to ongoing maintenance projects). It's also agnostic concerning tools and is applicable to both open source and commercial products. In this book, we'll focus on lightweight, primarily open source toolchains.

The remaining chapters provide more details on the different aspects of ALM and on how to implement them with lightweight tooling using Agile strategies. In chapter 2, we’ll take a deeper look into Agile in the context of the ALM, and I’ll describe my preferred Agile ALM approach.