Normalizing Your I&O RFP Proposals: Solid Application Management Metrics You Can Operationalize

Best Practice Metrics for Tower-based I&O Services

As we continue our 5-part series on Normalizing Your I&O RFP Proposals: Solid Metrics You Can Operationalize, we turn our attention to Application Management metrics.  As a review, Part 1 covered help desk outsourcing, Part 2 covered hosting metrics, and Part 3 covered network metrics.

While it may seem odd to add application management to an infrastructure outsourcing agreement, companies on occasion take this opportunity to bundle outsourcing services together.  It’s very common to see I&O services that include SAP infrastructure and related Basis support extend to include a request for providers to assume application management support services as well.  This allows for a seamless support model across the applications and the infrastructure they reside on.  There are also efficiencies in help desk support that can be achieved when these services are bundled together.

As we continue the evaluation of all potential towers of I&O metrics, we share some typical examples of how your prospective providers will attempt to position their services and their approaches to pricing.  Based on client engagement experience, we recommend the metrics that align most directly with use, value and traceability.

As a reminder, the 5 primary towers of I&O are:

  • Help Desk (sometimes referred to as Common Services)
  • Hosting
  • Network
  • Application Management
  • Security

Part 4: Application Management   

When going to market to evaluate I&O service providers where application management services are in scope, the strength of your asks and the value of your provider responses will be directly impacted by the level of detail you can provide.  Key factors and supporting details that, if provided, will have the greatest influence on providers’ responses to application management service requests include:

  • Applications in Scope:
    • Technologies: # of enterprise applications and related skillsets, # of home-grown applications, # of different development tools (as they form the basis for home-grown applications and related support).
    • Versions: Versions of third-party software applications (applications on legacy versions of software or those behind on upgrades, updates or patches will impact the effort to support).
    • Availability: Percentage of uptime required by application.
    • Pace of Change/Application Updates: # of releases to production of updates, changes to applications and the frequency of those updates.
    • Batch Job Processing Requirements: # of batch jobs, responsibilities and batch frequencies.
    • Stability of Application: # of tickets by application is the key metric providers like to see to help them gauge the level of effort required to support that application.
  • WRICEFs:
    • WRICEFs: # of workflows, reports, interfaces, conversions, extensions and forms associated with the applications in scope that will need to be supported.
    • WRICEF Categorization: Breakdown of WRICEFs into categories of complexity (low, medium, high or simple, standard, complex).
  • Tickets:
    • By Application: # of historical tickets by application; projected tickets over term of agreement.
    • By Severity: # of historical tickets by severity (Sev1, Sev2, Sev3, …).
  • Other Factors:
    • Support Hours: Hours of support across geographies where user activity requires support.
    • Policies / Procedures / Restrictions: In many instances, companies have specific policies, procedures or restrictions that should be communicated to all prospective providers to ensure they factor these elements into their effort estimates and RFP responses.
  • General Requirements:
    • Specific Tasks: Key to providers responding with pricing per your requested metrics will be their ability to estimate the effort to support your scope. The richer and more detailed the list of responsibilities, tasks, and frequency, the more accurate the estimate.
    • SLAs: There are two schools of thought here:
      • The first school of thought is to provide your expectations for service levels up front and request your providers to agree or provide justification for why they cannot.
      • The second or alternate school of thought is to request provider-proposed SLAs and then evaluate their value and breadth against your requirements.

Neither approach is wrong.  The first approach will get you further faster in your evaluation of SLA commitments and disconnects, but it comes at a cost: you risk capping the level of commitment and losing the opportunity to see what the provider might have offered if left open-ended.  In some instances, we see providers proposing SLAs higher than what the client would initially have requested.

  • Productivity Improvements: Requesting providers respond to your metrics over the term of the agreement will ensure transparency to the provider’s commitment to increased productivity. Setting clear expectations of provider commitments via reduced unit pricing over the term will aid in your evaluation of both the provider’s understanding of your environment and the confidence they have in their ability to generate efficiencies over the term.
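To make the productivity commitment concrete, here is a minimal sketch of the declining unit-price glide path described above.  All figures (the year-1 unit price, the 5% annual reduction, and the 5-year term) are invented for illustration, not drawn from any actual provider response.

```python
# Hypothetical example: a provider commits to a 5% year-over-year
# reduction in the unit price of a ticket over a 5-year term.
year1_unit_price = 100.00   # assumed year-1 price per ticket
annual_reduction = 0.05     # assumed 5% productivity commitment
term_years = 5              # assumed agreement term

# Apply the compounding reduction to produce each year's unit price.
unit_prices = [year1_unit_price * (1 - annual_reduction) ** year
               for year in range(term_years)]

for year, price in enumerate(unit_prices, start=1):
    print(f"Year {year}: {price:.2f} per ticket")
```

Tracking provider responses against a schedule like this makes the productivity commitment visible and auditable rather than buried inside a fixed fee.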

Common Provider Responses to Application Support

Provider responses to application support RFPs don’t vary a great deal, and providers often propose fixed-fee support models.  The differences in responses tend to be below the surface.  Depending on the depth of detail requested in your RFP, providers will look to position their services in one or more of the following ways:

  • Fixed price amount per month for the term, committing to address and respond to all tickets regardless of volume
  • Limited visibility and transparency into the underlying effort
  • Limited-to-no visibility into productivity
  • Targets for ticket reductions over the term and/or efficiency initiatives with limited detail
  • A host of assumptions and statements of out-of-scope services (not uncommon and to be expected but always requiring a thorough review).

Packaging your Detailed Information into Cost Metrics You Can Operationalize

Quantify and categorize your application management support needs by providing discrete baseline quantities of applications (with detailed attributes as noted above), historical ticket volumes, and specific requirements and responsibilities of support, as early in the evaluation process as possible.  This, in turn, will require your providers to decompose their fixed-fee responses into support costs aligned with these baselines, helping ensure the pricing obtained at these levels allows for an apples-to-apples comparison across provider responses.

In addition, if you elect to request specific supporting service level expectations, be prepared to provide context on how they align with your own historical metrics.  Affording each provider a view into current service levels will increase the credibility of your SLA requirements.

Translating these baselines into key metrics operationally aligned to your organization will support your ability to track and manage the variable consumption of services over or under your baselines.  They provide a blueprint from which each provider can assign a cost that can be compared directly to other providers and to market data.  Common metrics in the market that companies should be able to readily quantify include:

  • Tickets by severity
  • Business system requests (BSR)/Minor enhancements
  • WRICEFs by complexity, for both modifications and new development
  • Applications classed into categories by size, technology, complexity, etc.

These are the base metrics that most tightly align to the areas where your provider will expend effort.  Your ability to align on baseline quantities for these metrics, and on the related estimating effort, will allow your providers to respond with their most accurate cost estimates.
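The apples-to-apples comparison described above can be sketched as applying each provider’s decomposed unit prices to the same shared baseline quantities.  The metric names, baseline volumes, and unit prices below are all hypothetical, chosen only to illustrate the mechanics.

```python
# Assumed annual baseline quantities, shared across all providers.
baselines = {"sev1_tickets": 120, "sev2_tickets": 600,
             "sev3_tickets": 3000, "wricef_medium": 40}

# Hypothetical decomposed unit prices from two provider responses.
provider_unit_prices = {
    "Provider A": {"sev1_tickets": 250.0, "sev2_tickets": 140.0,
                   "sev3_tickets": 60.0, "wricef_medium": 1800.0},
    "Provider B": {"sev1_tickets": 300.0, "sev2_tickets": 120.0,
                   "sev3_tickets": 55.0, "wricef_medium": 2000.0},
}

def annual_cost(prices: dict, volumes: dict) -> float:
    """Apply one provider's unit prices to the shared baseline volumes."""
    return sum(prices[metric] * qty for metric, qty in volumes.items())

for name, prices in provider_unit_prices.items():
    print(f"{name}: {annual_cost(prices, baselines):,.0f} per year")
```

Because every provider is priced against identical volumes, the resulting annual figures are directly comparable, and the same unit prices can later be used to true-up consumption over or under baseline.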

In addition to requesting these price points, request each supplier provide you with a detailed resource breakdown by role, effort, role location, and rate.  This will provide you, or an engaged third-party advisor, the ability to do an FTE efficiency analysis to ensure the staffing levels support the efficient utilization of resources tied to your projected ticket volumes.  In addition, this detailed resource breakdown will allow you to compare each provider’s proposed staffing models to your own to determine the level of efficiency each brings to your organization.
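The FTE efficiency analysis mentioned above reduces, at its simplest, to dividing projected ticket volume by each provider’s proposed support headcount.  The ticket volume and staffing figures below are invented for illustration.

```python
# Assumed projected annual ticket volume across all severities.
projected_annual_tickets = 3720

# Hypothetical total support FTEs proposed by each provider.
proposed_staffing = {
    "Provider A": 12.0,
    "Provider B": 9.5,
}

# Tickets handled per FTE per year: a simple proxy for staffing efficiency.
for provider, ftes in proposed_staffing.items():
    tickets_per_fte = projected_annual_tickets / ftes
    print(f"{provider}: {tickets_per_fte:.0f} tickets per FTE per year")
```

A materially low tickets-per-FTE figure can signal padded staffing; a materially high one can signal under-resourcing risk.  Either way, the detailed resource breakdown by role, location, and rate lets you probe the difference.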

Leveraging prescriptive metrics validated by detailed effort estimates, and tied to effort expended against current and future demand, will ensure you can do a true apples-to-apples comparison of provider responses; it will also ensure you can operationalize the management of your chosen provider.

Value in Establishing Organizationally Meaningful Yet Standard Metrics

Preparing to go to market for application management services (AMS) against your own metrics will ensure more accurate estimates and ground your providers’ responses in costs that tie to what and how you consume services.  With these metrics established and standardized in your RFP, provider responses will allow for a true apples-to-apples comparison and provide transparency into true productivity commitments as you assess the declining price points for the same metrics year over year.  Once implemented, your ability to track and manage your agreement with your chosen provider will be based on metrics you know and already track today.

Post a comment below, find my other UpperEdge blogs and follow UpperEdge on Twitter and LinkedIn.
