CHAPTER 3: TRACKING IT PERFORMANCE

Operations in today’s IT department are typically measured against a wide variety of technical standards: network uptime, incidents logged and resolved, workstation failure rates, and so on. While this is an excellent start, these technical metrics are only the beginning of a comprehensive reporting framework. If your goal is to run your IT function like a business, then you need to track the performance of your function in the same ways that any business measures its performance. To make informed decisions as you manage the IT function, you need accurate information on what is working. To articulate IT performance to your enterprise leadership and to stakeholders, you must be able to translate technical metrics into credible measures of value creation. In this chapter, we will examine the tools and techniques of performance measurement.

Accurate metrics and reporting are invaluable in helping IT executives make good decisions. As managers, we need to know how the IT function is performing in order to know where to focus improvement efforts and new investments. Tracking IT performance is also critical for demonstrating the business value of your IT function; without such measures, it is difficult to communicate the value of your function, or to make a case for additional funding and investment.

The commitment you bring to performance measurement is critical for its success. Anyone can overlay a veneer of performance metrics on top of an existing operation, showing improvement year after year, and many do precisely that. It is another matter to measure your shop rigorously against the relevant benchmarks in your industry or sector, to be prepared to acknowledge shortcomings, and to commit your IT operation fully to a discipline of accountability.

So, in addition to examining performance measures, we will also explore how to gather and utilize benchmarking data that adds credibility to your internal performance measurements.

Performance measurements

There is no shortage of tracking and reporting data in IT operations. But how do you make sense of all the reporting you already have? How do you identify the additional tracking you should be doing? And how do you put everything into a framework that supports your goal of running IT like a business?

Each enterprise will answer these questions with its own uniquely engineered reporting system and structure. The reporting framework utilized in Accenture’s IT organization takes the shape of a funnel, as depicted in Figure 11:

Figure 11: Accenture IT organization’s reporting framework

Working from the top of the funnel down, the lowest level of abstraction – operational Level 3 reporting – includes project-level reporting on individual initiatives, status reporting, product and service reporting, and other reports generated on a daily or weekly basis.

Data from this level is then rolled up into Level 2 organization and function reporting, grouped in the three major areas of Accenture’s IT function: business operations, infrastructure services and business applications. Reports at this level synthesize data from multiple sources into more coherent portraits of the IT function’s performance, and are typically produced on a monthly timetable.

Examples of Level 2 reports include:

  • Benefits realization reporting
  • Application mock bills/invoices
  • Infrastructure operations scorecard and status reporting
  • Satisfaction reporting
  • CIO organizational reporting (financial, people)
  • Business capability scorecards
  • CIO project management office (PMO) reporting
  • CIO leadership reporting.

Level 1 strategic enterprise reporting examines the IT function holistically and constitutes the highest level of abstraction and synthesis. Individual scorecards for IT strategy, IT performance and priorities in the current fiscal year track multiple metrics that are meaningful to the overall business. These reports are submitted to the IT steering committee and to Accenture’s chief operating officer, and also inform reporting to Accenture’s executive leadership team (ELT).
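To make the rollup mechanics concrete, the sketch below shows, in simplified Python, how daily Level 3 records might be aggregated into monthly Level 2 summaries and a single Level 1 view. It is a minimal illustration only: the record fields, figures and function names are hypothetical, not a depiction of Accenture’s actual reporting systems.

```python
from collections import defaultdict

# Hypothetical Level 3 records: one row per daily operational report.
level3_records = [
    {"area": "infrastructure services", "month": "2010-06", "incidents": 42, "cost": 18_000},
    {"area": "infrastructure services", "month": "2010-06", "incidents": 37, "cost": 17_500},
    {"area": "business applications", "month": "2010-06", "incidents": 12, "cost": 25_000},
]

def roll_up_level2(records):
    """Aggregate daily operational data into monthly totals per IT area (Level 2)."""
    summary = defaultdict(lambda: {"incidents": 0, "cost": 0})
    for rec in records:
        key = (rec["area"], rec["month"])
        summary[key]["incidents"] += rec["incidents"]
        summary[key]["cost"] += rec["cost"]
    return dict(summary)

def roll_up_level1(level2_summary, month):
    """Synthesize the Level 2 summaries into one enterprise-level view (Level 1)."""
    rows = [v for (area, m), v in level2_summary.items() if m == month]
    return {
        "month": month,
        "total_incidents": sum(r["incidents"] for r in rows),
        "total_cost": sum(r["cost"] for r in rows),
    }

level2 = roll_up_level2(level3_records)
print(roll_up_level1(level2, "2010-06"))
```

Whatever tooling you use, the essential property to preserve is that every Level 1 number can be traced back through Level 2 to the operational records beneath it.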

Figure 12 displays one of the Level 1 scorecards, illustrating the level of detail and the metrics tracked.

Figure 12: Strategic IT performance scorecard (sample)

This sample scorecard brings together metrics in three key areas (a brief structural sketch in code follows the list):

1. The contribution IT is making to the success of the business, as measured by:

  • Satisfaction among business sponsors, employees and critical processes/roles
  • Business case benefits enabled or realized
  • Market image, or IT’s contribution to Accenture’s new business development effort.

2. IT operational excellence, as measured by:

  • IT cost as percentage of net revenue
  • IT expense per employee – overall and by distinct workforce
  • Improvements in productivity, service levels, delivery and other targets.

3. Measures of Accenture’s IT function as a best-in-class workplace, as seen in:

  • Employee attrition
  • Employee satisfaction
  • Percent of training budget spent.
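By way of illustration only, the three areas of such a scorecard might be captured in a simple typed structure. Every field name below is a hypothetical rendering of the metrics listed above, not Accenture’s actual schema.

```python
from dataclasses import dataclass

@dataclass
class BusinessContribution:
    sponsor_satisfaction: float       # survey score, e.g. on an assumed 1-5 scale
    benefits_enabled: float           # realized business case benefits, in dollars
    market_image_engagements: int     # contributions to new business development

@dataclass
class OperationalExcellence:
    it_cost_pct_net_revenue: float    # IT cost as a percentage of net revenue
    it_expense_per_employee: float    # overall; could also be split by workforce
    productivity_improvement_pct: float  # improvement against service/delivery targets

@dataclass
class WorkplaceQuality:
    attrition_pct: float              # annual employee attrition
    employee_satisfaction: float      # survey score
    training_budget_spent_pct: float  # percent of training budget spent

@dataclass
class StrategicScorecard:
    contribution: BusinessContribution
    operations: OperationalExcellence
    workplace: WorkplaceQuality
```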

Accenture’s reporting structure for its IT organization balances considerable detail with strategic synthesis. But we did not start out here. We developed the levels of reporting depicted here over a two-year period, then refined and adjusted the metrics for several more years before arriving at the current framework. Its maturity is evident in the fact that it has remained fairly stable over the last six years.

Given that Accenture is a highly centralized enterprise, we found it comparatively easy to assemble this framework, and to ensure that its scorecards are aligned with overall IT strategy and governance. The reporting structure for a decentralized or federated enterprise would, undoubtedly, look quite different. In a distributed enterprise, the key metrics at the centralized or federated level may be primarily financial in nature, while divisional measures would focus on service and satisfaction levels. If your enterprise is a holding company and its subsidiaries operate in different sectors or markets, consolidating lower-level measures to the corporate level may be less informative. You will want to avoid rolling up operational metrics wherever this tends to obscure meaningful distinctions. For example, if your enterprise operates manufacturing businesses and financial services units, IT spending as a percentage of net revenue may be in the 1-2% range for the former, and in the 7-10% range for the latter. To combine comparable metrics from such dissimilar IT operations would serve little purpose.

Allowing for such variances, many of the metrics used by Accenture can be applied to any IT operation. Having worked with many government agencies at the local, state and national levels, as well as with non-profit organizations, I can attest to the utility of these metrics in those enterprises. All IT organizations exist to serve their businesses, and all deliver the same basic services. So while benchmark levels may differ from sector to sector – as in the difference between manufacturing and financial services – it is still important to understand what you are doing and to track whether your trends are moving up or down.

As you assemble the reporting framework suited to your enterprise, the important lesson to keep in mind is that you should not expect to get it perfect right away. Many companies strive for perfection, wasting valuable time and energy debating nuances, when only time and experience will reveal the right metrics and structure for your enterprise.

Measuring satisfaction

Collecting data for performance measurement is normally a clear-cut process of quantification – until you come to measures of user satisfaction. Asking customers, “How are we doing?” can be eye-opening, jaw-dropping or mind-boggling, depending on your performance and the user providing the response. Whatever the nature of the responses, this inquiry is essential – so much so that at Accenture we actively solicit customer opinions on our performance. Not being afraid to ask your customers what they think of you is an indispensable part of running IT like a business.

At Accenture, we divide the customer universe into two broad groups: sponsors, those who hire IT and purchase our services; and users, the employees who actually use our services to do their jobs.

For sponsor satisfaction surveys, we further divide this group into business process sponsors, who buy applications and infrastructure services from IT, and geographic sponsors – the executives responsible for all of Accenture’s business activities in specified geographic areas. All business and geographic sponsors are surveyed annually on the quality of IT services they are receiving and their perceptions of IT’s overall performance, including measures of reliability, cost effectiveness and business value.

With business process sponsors, our survey instrument is ourselves. Executives from Accenture’s CIO Organization personally interview several functional leads in each major unit every quarter. In face-to-face conversations, we engage in a dialogue, hearing feedback firsthand, responding on a real-time basis, and building personal relationships along the way.

The advantage of in-person interaction is obvious, but it is not always practical; therefore, with geographic sponsors, we rely on surveys to minimize time and travel costs.

When it comes to gathering responses from those Accenture employees who are our end users, we actually survey one twelfth of our end-user universe each month. In effect, we survey every line employee in Accenture once a year, every year. These surveys do not elicit 100% response rates, but this blanket approach to the user base ensures a continuous stream of user feedback.
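One simple way to implement such a rolling one-twelfth sample – a sketch under assumed conventions, not a description of Accenture’s actual survey tooling – is to assign each employee to a fixed monthly cohort by hashing a stable identifier:

```python
import hashlib

def survey_month(employee_id: str) -> int:
    """Map an employee to one of 12 monthly cohorts (1-12).

    Hashing a stable identifier keeps each employee in the same cohort
    every year, so the whole population is surveyed exactly once per
    twelve-month cycle.
    """
    digest = hashlib.sha256(employee_id.encode("utf-8")).hexdigest()
    return int(digest, 16) % 12 + 1

# Employees whose cohort matches the current month receive that month's survey.
this_month = 6  # June, for example
due_now = [e for e in ["emp001", "emp002", "emp003"] if survey_month(e) == this_month]
```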

When we first began conducting these surveys, our IT product managers and directors tended to regard the survey as a nuisance. Our persistence has turned that attitude around completely; IT employees now clamor for the opportunity to receive direct feedback on issues and concerns in technology. We devote a considerable amount of effort to refining the questions that are being asked, and to reviewing the answers and comments that come in, reading between the lines for the product and service insights that only customers can provide. Just as if we were running a global food franchise and saw a sudden drop-off in same-store sales in a particular locality, rising numbers of technology complaints from a particular region signal trouble that needs to be addressed immediately. The fact that we conduct the user survey on a rolling monthly schedule helps us spot changing business requirements or service delivery issues early, and respond quickly.

Metric management

The metrics you collect on your IT performance will enable you to track operations over time and determine whether your team is making progress and, if so, how much. As with any data-gathering process, consistency of methodology over time is essential to ensure meaningful comparisons.

Another important facet of what might be termed metric management is retaining your ability to drill down through the data you gather, so that you can begin to answer the question of what is happening, and why.

Figure 13 illustrates the scorecard we use to track our performance at Accenture around the specific area of core IT architecture – which includes network connectivity, remote access, data center hosting and application monitoring. We use a typical red/yellow/green classification of results to aid with visual assessment of data. As you begin to collect data like this on your own operation, and then roll it up into higher-level reporting to senior management, it is vital to retain the ability to drill down from the highest level back to the operational level, so that you can effectively diagnose the reasons why your results may be varying from your targets.

Figure 13: Core architecture scorecard (sample)
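The red/yellow/green classification itself reduces to a small rule: compare each result to its target and apply a tolerance band. The thresholds below are illustrative assumptions, not the bands Accenture actually uses.

```python
def rag_status(actual: float, target: float, tolerance: float = 0.005) -> str:
    """Classify a metric where higher is better (e.g. availability).

    green  : at or above target
    yellow : within the tolerance band (an assumed 0.5% here) below target
    red    : more than the tolerance band below target
    """
    if actual >= target:
        return "green"
    if actual >= target * (1 - tolerance):
        return "yellow"
    return "red"

# Against a 99.5% network-uptime target:
print(rag_status(actual=99.1, target=99.5))  # prints "yellow"
```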

Another important dimension of data gathering concerns differences in the data across geographies. Managers of global enterprises know how costs for applications and labor can vary from location to location. So your metric management should include the capability of analyzing data on a geographic basis, as well as on the other relevant parameters of your enterprise.

To illustrate the implications of data-gathering on a global scale, Figure 14 depicts the way we look at Accenture’s businesses along three dimensions: by geography, by operating groups, and by our major growth platforms, with corporate functions underpinning all these operations.

Figure 14: Accenture’s organizational structure

This structure is used to guide many of the metrics we gather on IT operations, so that we can report on IT performance in the corporate aggregate, and also drill down to report on IT performance at lower levels of the enterprise. The ability to look at metrics with detail and precision will help you identify outliers – those products or services you offer that may be costing your operation significantly more than the norm.

For instance, we all understand that many IT products and services have a local labor component, and that the costs for these can vary widely. The cost of local in-person support in the US is typically several times the cost of the same level of support in a country such as India. So you will want to be able to isolate and identify those cost components. This level of detail in your metrics will also enable you to compare costs from region to region. “Why is our IT support in the US costing so much more than support costs in the UK?” is just one example of the kinds of useful questions that will lead to illuminating discussions as you review your metrics within the IT organization and with your senior leadership.
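Mechanically, such a drill-down amounts to grouping cost records by geography and flagging regions that deviate from the norm. The sketch below uses invented field names and placeholder figures purely for illustration.

```python
from statistics import mean

# Hypothetical cost-per-support-ticket samples, grouped by geography.
costs_by_region = {
    "US": [58.0, 61.5, 57.2],
    "UK": [44.0, 46.3, 43.1],
    "India": [12.5, 11.8, 13.0],
}

def flag_outliers(costs_by_region, threshold=1.5):
    """Flag regions whose average cost exceeds `threshold` times the overall norm."""
    overall = mean(c for costs in costs_by_region.values() for c in costs)
    return {
        region: round(mean(costs) / overall, 2)
        for region, costs in costs_by_region.items()
        if mean(costs) > threshold * overall
    }

print(flag_outliers(costs_by_region))  # {'US': 1.53}: US runs well above the norm
```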

Benchmarking

Metrics create the first half of the case you make to corporate or organizational leadership that you are running an effective and successful IT operation. Benchmarking against relevant industry and IT standards provides the other half of the value equation.

With benchmarking, you can make the case that you are measuring your operation with rigorous methods, and that your performance compares favorably with the operations of comparable IT functions in other companies, or with prevailing standards in the IT marketplace.

Without benchmarking, your argument carries little weight. You may be able to demonstrate that your operational performance has improved over last year’s results, but such improvements merely invite the question, “As compared to what?”

Benchmarking lends credence to the performance measurements you present to senior management, and credibility to the value proposition you make to each of your internal customers. Benchmarking is so central to what we do at Accenture that we conduct this process on two distinct levels.

First, we use major metrics to benchmark our work against our competitors – other IT service firms with annual revenues in excess of US$6 billion. We measure three variables (computed as in the sketch that follows this list):

  • IT costs as a percentage of net revenue
  • IT costs per person supported
  • IT workforce as a percentage of total workforce.
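Each of these three variables is a straightforward ratio, as the sketch below illustrates. The input figures are placeholders, and the peer comparisons would come from the independently commissioned benchmark survey rather than from anything computed here.

```python
def benchmark_ratios(it_cost, net_revenue, people_supported, it_headcount, total_headcount):
    """Compute the three benchmark variables as plain ratios."""
    return {
        "it_cost_pct_net_revenue": 100 * it_cost / net_revenue,
        "it_cost_per_person_supported": it_cost / people_supported,
        "it_workforce_pct_total": 100 * it_headcount / total_headcount,
    }

# Placeholder inputs for illustration only (dollars and headcounts).
ours = benchmark_ratios(
    it_cost=250_000_000,
    net_revenue=21_000_000_000,
    people_supported=200_000,
    it_headcount=1_600,
    total_headcount=200_000,
)
# Each ratio would then be set alongside the surveyed peer figures
# to judge whether the operation is competitive.
```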

We retain an independent industry-consulting specialist to conduct a custom IT benchmark survey for Accenture, comparing these three metrics across companies and industry sectors. Industry analysts have access to IT cost information by industry, so it is far more feasible to commission these surveys than to attempt to conduct them directly ourselves.

Accenture also conducts product- and service-level benchmarking, which is the second level of comparative measurement. We examine many individual IT components, such as e-mail provisioning and help desk service, seeking to determine if our costs – as identified through internal metrics – are competitive with those of other companies in our field. If not, we seek to learn why not, and what must be done to correct a negative cost differential.

What do we expect to learn from the benchmarking of all our services? Sometimes, we learn very little, particularly when the product or service is mature and there are few opportunities for performance improvement. In other circumstances, we may discover that while our service has not changed, the marketplace has – a discovery that forces us to reconsider how we are providing a given service.

For example, as a result of Accenture’s annual benchmarking exercise, we noticed that typical industry costs for a commodity-type service were declining fairly rapidly. Upon further investigation, we determined that several technological innovations were combining to make the outsourcing of this service feasible. Even though our metrics over the past several years indicated that we were doing a good job on this service, we had to recognize that we would not be able to compete as prevailing market costs continued to fall. So we are actively exploring the outsourcing of a central service not because we changed, but because the world did.

As with any metric, it is imperative to establish consistent definitions for what you are measuring and a consistent way of developing the benchmark you want to use. While you can never be absolutely certain that you are getting an “apples-to-apples” comparison, you cannot argue with data trends that hold over several years.

When Accenture began its IT performance measurement program in earnest back in 2001, we focused on the three metrics cited above, and we maintained this focus year after year. Today, we can see in Figure 15 the fruits of these labors: the significant declines in these metrics that our IT transformation has achieved for Accenture.

Figure 15: Major Accenture IT metrics, 2001-2010

Even more instructive are the benchmarking results that compare our performance on these three metrics against results from comparable companies. These results give us the ability to state, with a high degree of confidence, that Accenture’s IT organization performs at, or substantially below, the lowest cost levels recorded in our industry.

Benefits realization

Even after you have completed your performance measurements and your annual benchmarking exercise, something more remains to be done. Inside Accenture, we describe this with the innocuous term “benefits realization.” In reality, IT performance measurement and value creation remain incomplete until everything discussed in this chapter is linked back to the investment decisions made through the IT governance structure first discussed in Chapter 1.

Figure 16: Life cycle of IT investment benefits realization

Figure 16 illustrates a full life cycle view of IT investments. It is instructive to trace the four stages of the life cycle in order to understand the importance of benefits realization when running IT like a business.

Recall from our original discussion of IT governance that, as part of any decision to proceed with an IT investment, it is necessary to create a business case for the investment and to establish the baseline benefits the investment is expected to deliver. This business case, with its associated benefits, is then reviewed with the IT steering committee before investment commitments are finalized.

After an investment decision is taken and a business case approved, we track that investment for three years beyond implementation, in order to determine whether or not we are achieving the benefits promised by the business case. The return on Accenture’s IT investment is measured with rigor to determine the actual benefits realized in relation to the original business case. The results of this performance tracking are then fed back into the investment planning process, in order to inform future decision making based on actual results.
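At bottom, that three-year tracking is a comparison of documented benefits against the approved baseline. The sketch below, with hypothetical names and placeholder figures, shows the shape of such a tracker.

```python
from dataclasses import dataclass, field

@dataclass
class BusinessCase:
    """Tracks an IT investment against its approved business case baseline."""
    name: str
    baseline_benefits: float  # benefits promised at approval, in dollars
    documented_benefits: dict = field(default_factory=dict)  # year -> audited amount

    def record(self, year: int, amount: float) -> None:
        """Only fully documented (auditable) benefits count as realized."""
        self.documented_benefits[year] = amount

    def realization_pct(self) -> float:
        """Realized benefits over the tracking window, as a percent of baseline."""
        return 100 * sum(self.documented_benefits.values()) / self.baseline_benefits

case = BusinessCase("ERP consolidation (hypothetical)", baseline_benefits=12_000_000)
case.record(2008, 3_500_000)
case.record(2009, 4_200_000)
case.record(2010, 3_900_000)
print(f"{case.realization_pct():.0f}% of baseline realized")  # feeds future planning
```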

It is vital to note that only benefits that can be fully documented are considered “realized.” In the early years of our own governance process, we discerned a tendency toward what might be called “fiction writing” when it came to documenting the business cases achieved. To counteract that natural tendency, Accenture engages its own internal audit group, which is wholly independent of the IT function. Internal Audit examines a randomly selected sample of business cases, including physical visits to locations and the examination of documented evidence of benefits achieved. If objective evidence of a benefit is not confirmed during the audit, the business sponsor accountable for the realization of the business case gets no credit for it. As an additional safeguard against degradation of benefits targets, any revision to the benefits to be achieved requires the approval of the IT steering committee.