Performance Measurement – From Weakest Link to Driving Force

The use of performance measurement clauses in outsourcing contracts has become very common. That’s good news. There is increasing use of service level agreements, benchmarking and customer satisfaction measurement. The bad news is that, in many cases, implementation of these clauses is being done very badly.

Our clients today report an all too common service outsourcing scenario in which established service levels are being met, yet customer satisfaction is low. TBI sees many deals in need of reframing, several years after they were initially signed, where service expectations are not being met and the potential of the deal to bring value to both the vendor and the customer is not being fulfilled.

This white paper will explore both the positive value of use of performance metrics and the pitfalls, since we need to understand what’s going wrong before we can fix it. It will end with a practical model for establishing effective performance measurement processes.

Why Measure?

  • To assure that service levels provided by the vendor effectively support the business
  • To monitor cost-effectiveness of the outsourcing solution
  • To gain understanding of performance problems
  • To motivate performance improvement

So, what is the potential value of using performance metrics in outsourcing arrangements?

First, service level metrics help to establish expectations for vendor performance – both on the part of the vendor and on the part of the users. Note that, in TBI’s experience, as much customer dissatisfaction is caused by unrealistic customer expectations as by vendor failure to meet business requirements (e.g., where the customer expects problem resolution in 2 hours but the vendor has agreed to, and is being paid to, provide it in 4 hours). Regular use of SL metrics allows you to monitor success in meeting performance expectations.
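As a minimal sketch of what such monitoring might look like (the function name, the 4-hour contracted target and the ticket times below are all hypothetical), SL attainment can be computed as the share of incidents resolved within the agreed window:

```python
# Hypothetical sketch: measure attainment against the contracted 4-hour
# resolution target, and contrast it with the customer's informal
# 2-hour expectation. All figures are invented for illustration.

def sla_attainment(resolution_hours, target_hours):
    """Percentage of incidents resolved within target_hours."""
    if not resolution_hours:
        return 0.0
    met = sum(1 for h in resolution_hours if h <= target_hours)
    return 100.0 * met / len(resolution_hours)

tickets = [1.5, 3.0, 3.8, 2.5, 5.0, 3.9]  # sample resolution times, in hours

# Against the contracted 4-hour service level the vendor looks strong...
print(round(sla_attainment(tickets, 4.0), 1))  # 83.3
# ...against the customer's unstated 2-hour expectation it looks poor.
print(round(sla_attainment(tickets, 2.0), 1))  # 16.7
```

The same data yields two very different verdicts, which is exactly the expectation gap that regular SL reporting is meant to surface.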

Second, periodic benchmarking of service costs against “industry averages” for such services provides important information about the cost effectiveness of a deal. If prices rise above average, negotiations for higher service levels or for lower pricing can be initiated.

Third, on-going measurement during the term of the contract provides insight into opportunities for improvement – both on the part of vendors and their customers. It is important to acknowledge that not all performance problems are under full control of the vendor. For example, a high job failure rate in a mainframe data center run by a vendor might be caused by poor quality application support work performed by the client organization or another third party.

Finally, when an effective measurement process is put into place, it serves to motivate performance improvement. It is not, in TBI’s opinion, the financial penalties often associated with low performance that do this. Instead, motivation to improve is the simple result of drawing attention to performance needs and results. Both sides benefit from improved performance – vendors’ profit margins increase and customers’ satisfaction with services increases.

Problems with Common IT Measurement Practice

  • Poorly selected metrics
  • Poorly or incorrectly specified and/or executed metrics
  • Performance standards set too low or too high
  • Inadequate price and performance benchmarking
  • Insufficient process/follow through

As noted earlier, it is important to understand the problems with the use of performance measurement clauses in outsourcing contracts. The next part of this white paper will focus on these.

Key Problems Include:

  • Using the wrong metrics (e.g., too operational, or insufficiently relevant to the business)
  • Not paying enough attention to how metrics are calculated
  • Setting performance standards (e.g., service availability 99.9% of scheduled time) at the wrong levels – levels below those actually required by the business or levels well above those that can reasonably be met
  • Not employing benchmarking as an outside check (the term “benchmarking” refers to the systematic comparison of results of current practices to the results of other organizations for the same business processes). Benchmarking can be used as a “reality check” and a means of identifying opportunities for improvement that might otherwise be missed (e.g., industry average help desk call response time is <20 seconds, but our average is 45 seconds).
  • Finally, and most commonly, the use of the metrics is not institutionalized: nobody looks at the reports, there is no action planning, and so on.

Each of these will be examined next in more detail – what we should be doing vs. what TBI sees going wrong in what is actually being done.

Metrics Selected Should:

  • Focus evenly on all key service concerns
  • Monitor service levels that are “critical” to the business
  • Have business information value
  • Aid in root cause analysis

In the rush to contract, some pretty sloppy work is being done in identifying a proper set of metrics for performance monitoring. Throughout industry, organizations are setting themselves up for nothing but a lot of rework, renegotiations, and likely customer dissatisfaction by carelessly specifying what they plan to measure, without appropriate up-front analysis.

Focusing evenly on all key service concerns requires a good understanding of all aspects of service that the vendor is to supply. In many cases, the metrics TBI sees written into contracts miss the mark; they often over-emphasize some areas of performance and ignore others.

Not all aspects of service are equally worth measuring, and not all are realistically “business critical”. Since it is common to attach penalties to below-par performance of “critical” aspects of service, it is essential to distinguish between those services that, if not performed to standard, impede business performance, and those that are only annoying (e.g., timeliness of calling card order delivery vs. timeliness of desktop hardware and software problem resolution).

Service Level Metrics selected should be aimed at monitoring services from the perspective of the business. A common problem seen is reliance on what we call “operational” metrics – tactically oriented, focused on day-to-day process and root cause analysis – vs. “strategic” metrics that are business oriented, used for positioning and evaluation, and aimed at the information needs of senior management. At the same time, measures must be specific enough to permit root cause analysis.

Metrics Definition and Implementation should:

  • Consider the reliability of data sources
  • Include specification of service expectations and exception handling, but minimize allowable variance
  • Include testing of calculation methods
  • Consider the impact of interfacing services provided by other parties
  • Provide an audit path

Metrics need to be clearly defined for all stakeholders – so everyone knows what they mean – and assumptions for how measures will be obtained need to be verified. The lack of detailed metrics specifications that capture the agreement of what is to be measured and how is a weakness that is very prevalent in how organizations use metrics today.

Data sources need to be documented – and validated. Once you begin probing into available data, it’s not uncommon to learn of flaws in reliability (e.g., logs that record problems, but don’t always capture problem resolution). Service level metrics need to be as reliable as possible, but many metrics have practical limitations because of data availability and/or measurement calculation methods. These limitations should be documented, to assure that all parties understand what is covered by the results and what is not and also to guide metrics improvement activities over time.

Service expectations, service boundaries and exception handling also need to be documented. For example, the WAN service availability expectation may be 7×24 but occasionally there will be need for scheduled maintenance and how that will be agreed upon and factored into the calculation will need to be documented. A common pitfall to be avoided in this area, though, is the use of a too general disclaimer about accountability for performance; e.g., a blanket statement that if work volume rises above x level, the vendor will not be expected to meet service levels. These circumstances should be handled through contract change management processes, instead, and involve negotiation of adjustments to the vendor’s pricing and staffing levels or to the previously agreed upon service levels.
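As a concrete illustration (the function name and all figures below are hypothetical), an availability calculation that factors an agreed maintenance window out of the denominator might look like:

```python
# Hypothetical sketch: availability measured against scheduled service
# time, with the agreed maintenance window removed from the denominator
# so that only unplanned outages count against the vendor.

def availability_pct(period_hours, unplanned_outage_hours, agreed_maintenance_hours):
    """Availability as a percentage of agreed (scheduled) service time."""
    scheduled_time = period_hours - agreed_maintenance_hours
    if scheduled_time <= 0:
        raise ValueError("maintenance window exceeds the measurement period")
    return 100.0 * (scheduled_time - unplanned_outage_hours) / scheduled_time

# A 30-day month (720 h) with a 4-hour agreed maintenance window and
# 2 hours of unplanned downtime:
print(round(availability_pct(720, 2, 4), 3))  # 99.721
```

Documenting the formula at this level of detail is what makes it possible for both parties to agree on how a maintenance window affects the reported number.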

Make sure you test calculation methods. It is hard to foresee all of the possible ramifications of metrics design, particularly for very complicated end-to-end types of metrics. The best plans will sometimes get unexpected results — e.g., we have seen cases where so many sub-measures were built into a single Service Level Metric in an attempt to be inclusive, that the resulting metric barely varied, despite wide fluctuations in the actual customer experience of service.
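The masking effect is easy to reproduce with a toy example (all scores below are invented): average enough stable sub-measures into one composite, and a collapse in the single measure users actually feel barely registers.

```python
# Hypothetical sketch: a composite Service Level Metric built as the
# equal-weighted average of ten sub-measures, nine of which are stable.

def composite(sub_scores):
    """Equal-weighted average of sub-measure scores (0-100 scale)."""
    return sum(sub_scores) / len(sub_scores)

stable = [95.0] * 9                       # nine steady back-office sub-measures
good_month = composite(stable + [98.0])   # user-facing sub-measure: excellent
bad_month = composite(stable + [55.0])    # user-facing sub-measure: collapsed

print(good_month, bad_month)  # 95.3 91.0 – a 4.3-point dip in the composite
                              # hides a 43-point drop in what users experience
```

Dry runs against scenarios like this, before the metric goes into the contract, are the cheapest way to catch such design flaws.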

Finally, the ability to audit metrics processes periodically is very important. Metrics data collection and calculation can veer “off track” over time. Sufficient documentation is needed to allow an independent party to review the process and note any variance from plan.

The Standards Setting Process should:

  • Be a collaborative process between vendor and customer
  • Consider baseline performance of the organization
  • Consider industry performance as well as individual business needs
  • Recognize that as business needs change, performance standards will need to change

Setting appropriate standards poses a problem for many organizations, simply because they don’t understand their current level of performance and don’t know what levels of service are common in the industry at large. All too often, they simply guess at standards.

What works best in TBI’s experience is a process that is:

  • Collaborative – on an on-going basis (starts, perhaps, with some agreed-upon defaults, but continues throughout the contract to bring vendor, management and business users together to improve upon these)
  • Data-gathering – collects data on baseline performance (the minimum that customers will accept)
  • Benchmarked – compares against industry performance (what’s commonly achievable)
  • Customer-driven – asks customers what their business requires

Service Level Standards that are challenging, but realistic can be identified in this manner. Standards that are set at challenging, but achievable levels, are those most likely to motivate improvement. When negotiating a Service Level Agreement, it is important to understand that service levels beyond those required are not necessarily worth paying for, in terms of their business value. Blanket clauses that require a service provider to increase service levels year in and year out will not necessarily help the organization, but they will surely add cost.

Finally, a process needs to be defined by which different Service Level Standards and metrics can be negotiated as business needs change throughout the life of the contract. Performance needs shouldn’t be viewed as static – if they are, you can end up with meaningless measures and inadequate services as business needs and industry capabilities change.

Price and Performance Benchmarking

  • Should employ well-defined and repeatable processes
  • Should use a carefully selected benchmarking database

TBI has worked with several companies to implement price and performance benchmarking in order to do periodic “health checks”.

Two keys to using benchmark databases for this purpose are:

  1. The benchmarking process (e.g., data collection and comparison) must be well enough defined so that it can be repeated and yield trustworthy results that all parties interpret in the same manner
  2. The benchmarking comparison cases must be carefully selected to ensure that results will be valid. Benchmarking services offered by vendors use different data points and calculation methods, and their databases offer different profiles.

It is likely that the standardized benchmarking process offered by vendors won’t be fully applicable, so that you’ll need to work with a benchmarking service that has good understanding of your goals and data needs. The best approach is to specify what your data needs are and to request proposals from multiple vendors — requesting that they describe their suggested benchmarking process in detail as part of their service proposal.
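As a simple, repeatable comparison step (the peer figures are invented, echoing the help desk response time example earlier), one might rank the organization’s result against the benchmarking database:

```python
# Hypothetical sketch: rank our help desk call response time against a
# peer benchmark database. Peer figures are invented for illustration.

def percentile_rank(our_value, peer_values, lower_is_better=True):
    """Percentage of peers whose result we match or beat."""
    if lower_is_better:
        matched = sum(1 for v in peer_values if our_value <= v)
    else:
        matched = sum(1 for v in peer_values if our_value >= v)
    return 100.0 * matched / len(peer_values)

peer_response_secs = [12, 15, 18, 20, 22, 25, 30, 40, 50, 60]

# Our 45-second average matches or beats only 2 of 10 peers:
print(percentile_rank(45, peer_response_secs))  # 20.0
```

Writing the comparison down this explicitly is what makes the process repeatable and its results interpretable in the same way by all parties.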

Effective Performance Management Also Requires:

  • Weekly attention to measurement results
  • Action planning when performance is below expectation
  • Follow-up improvement activity
  • Effectively targeted reporting of results and status
  • A formal service level agreement maintenance process

Insufficient process and follow-through can render the best planned performance measurement useless. Common mistakes include:

  1. Waiting until the end of the month or quarter to review performance
  2. Not requiring action plans each time performance falls below standard
  3. Not following up on performance problem correction, allowing it to drift, so that continuous improvement is not motivated
  4. Ineffective reporting – unclear, too detailed, too high level, not sent to stakeholders who have an interest, or sent to those with none. We note that the “slickest” distribution methods are not necessarily the best. The thought process when crafting the reporting process should emphasize distribution in a manner that is easy to use and which will provide information where it’s actually needed.
  5. And, finally, all too many contracts lack a statement of the process by which the targets and substance of service level agreements can be modified as service requirements change. We see many cases where, several years into a contract, the services provided are different from those for which the original contract was written, yet SLAs have not been modified.

How can we transform Service Level Measurement into an Enduring and Beneficial Management Process?

Transforming performance measurement into an enduring and beneficial management process is the challenge. Clearly it won’t be simple; strong organizational commitment is needed to do it well. But it’s clearly worth the effort. TBI has conducted studies of the effectiveness of outsourcing initiatives, and one key finding in our research is that the most successful of the large outsourcing agreements rely on performance measurement to monitor the deal and communicate about performance issues.

So, how do we get there from here? The remainder of this paper will present the steps that TBI recommends you take to transform service level measurement into an enduring and beneficial management process.

Transform Measurement…

  • Establish a “collaborative process,” including business, vendor and IT management participants
  • Establish a strategic framework; identify where metrics can help to monitor critical success factors
  • Review existing metrics against the framework; conduct gap analysis
  • Identify new metrics and performance measurement processes to fill in the “gaps”

Step 1: Name the players and their roles and responsibilities (vendor collects data, develops metrics and reports; customer management specifies what should be measured and uses results to manage, but doesn’t do the measurement for the vendor; business management monitors, calls for adjustment, and reviews proposed changes; all parties participate equally in defining metrics and negotiating standards).

Step 2: Begin with organizational strategy and goals and critical success factors (CSFs) for the service area of interest; note where processes in the service area must contribute in order to achieve CSFs (and where metrics would therefore be helpful).

Step 3: Once processes and CSFs are cross referenced, map existing metrics against the framework. Analyze the distribution and coverage.

Transform Measurement…

  • Prepare and test specifications for all metrics
  • Gather data on baseline and industry average levels of performance, as well as on business need, to consider in performance standard negotiation
  • Establish and document process and goals for customer satisfaction surveys, quality reviews, industry performance and price benchmarking, and service level metrics use
  • Develop or enhance performance reporting processes

Step 4: Where coverage is weak, develop measurement processes to fill the gaps; eliminate metrics where there is overlap or overkill for an area.

Step 5: Document and test all metrics before full implementation.

Step 6: Do baseline studies (2-3 months minimum); gather benchmark data; identify business needs; and take all of the results into consideration in proposing service level standards.

Step 7: Make sure that the goals for all forms of performance measurement are documented – people need this to stay oriented (e.g., we’re entering into this agreement to benchmark price with the goal of assuring cost competitiveness equal to or better than the industry’s throughout the life of the contract; or we’re measuring service levels here not to penalize the vendor, but to get a better idea of where we have performance issues in an area over which the vendor has only partial control).

Step 8: Finally, don’t forget the importance of reporting processes. Three key things to remember in designing performance reporting processes that maximize the benefits of the service level measurement process are as follows:

  • Performance information should go to stakeholders at the level at which they need it to support their work and in a form they can understand
  • Use a performance report distribution mechanism that stakeholders find easy to use (i.e., not a technology with which they are unfamiliar or uncomfortable) and…
  • In order to institutionalize thinking about performance management, consider presentation of performance measurement results to stakeholders in regular forums where results can be further discussed (e.g., in staff meetings or business steering committees).