Next week marks the anniversary of one of the most infamous failures of the Cold War: The Bay of Pigs fiasco. Looking back at the events that led up to this black eye for President Kennedy’s administration and US foreign policy, it is clear that an environment of overconfidence and groupthink gave rise to a series of biased, short-sighted decisions that doomed this undertaking almost from the outset. And today, depending on whom you listen to, your ERP initiative could be in the same boat.
While the likelihood of a catastrophic ERP implementation that disrupts business operations and breaks your supply chain is relatively small, there is nonetheless a good chance that when you finally cross the finish line your solution will fall short of expectations: it will cost more to implement than expected, cost more to maintain than originally estimated, or fail to deliver the promised benefits. Doom-and-gloom notwithstanding, there is no question that opportunities for failure abound on complex, transformational projects like ERP. And since there isn’t a systems integrator out there without a significant blemish on its record, you can’t avoid problems simply by picking the right partner.
So what’s the key to a successful implementation?
The common thread that we see across all successful projects is the ability to make sound, well-informed decisions, especially in the heat of battle. To put it differently, the reason the lists of “lessons learned” look eerily similar from one ERP project to the next is simply this: having a robust methodology does not guarantee good decisions. Why not? Because as projects begin to strain both budgets and timelines, it is highly likely that your project will be derailed by the temptation to take shortcuts.
Some shortcuts, especially those related to process and methodology, occur at the “macro” level of the project and are usually very visible. They are the “intentional” shortcuts that are driven primarily by pressures to stay within budget and on schedule:
- Reduce the number of mock conversions
- Shorten user acceptance testing
- Eliminate a dress rehearsal
- Defer functionality to later waves/deployments by reducing scope
- Curtail status reporting, cross-team meetings, etc., to create some additional capacity
These tactics will reduce costs; however, they also have the potential to put the operational integrity of the solution at risk. How can you know the difference before it’s too late? Adding operational continuity metrics to your report-outs early in the project is one way to keep your risk management framework comprehensive and focused on the big picture.
Shortcuts that are equally problematic, and harder to spot, are those that effectively hide behind the metrics you are using to manage the project. These are the result of “unintentional” or “uninformed” decisions that happen at the “micro” level of the project, driven largely by individual blind spots and/or inexperience. For example:
- Data Validation – Running mock conversions alone won’t guarantee data quality if your data validation scripts are too high-level, or if your validators are rushing through the process to hit milestone dates. More than one project has been stung by metrics that seemed to indicate that everything was on track…until it tried to actually use the data for the first time.
- Testing – Similar to data quality, the effectiveness of your testing process has as much to do with the thoroughness of the test plan and the quality of the test scripts as it does with the number/duration of testing cycles. Passing 95% of your test scripts won’t matter much if your test plan is missing critical business processes/scenarios or the individual scripts only address 80% of the functional requirements.
- Project Status – Nobody likes to be the bearer of bad news, and there can be a tendency to downplay the significance of issues/risks until it’s too late to react. This is especially true when the heat is on; I’ve seen more than one project go from “green” to “red” literally overnight, only to find, once you start digging in, that the problem had been building for weeks.
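The testing gap described above is easy to quantify. As a hypothetical illustration (the function name and all numbers below are my own assumptions, not figures from any actual project), the share of requirements a test cycle actually verifies is the script pass rate multiplied by the scripts’ requirements coverage:

```python
# Hypothetical illustration: a headline pass rate can mask gaps in what the
# test scripts actually cover. All names and numbers here are assumptions.

def effective_coverage(pass_rate: float, requirements_covered: float) -> float:
    """Share of functional requirements actually verified by passing tests."""
    return pass_rate * requirements_covered

# A dashboard showing "95% of scripts passed" sounds healthy...
headline_pass_rate = 0.95
# ...but if those scripts only exercise 80% of the functional requirements:
requirements_covered = 0.80

print(f"Effective coverage: {effective_coverage(headline_pass_rate, requirements_covered):.0%}")
# Only 76% of requirements are verified; the remaining 24% is untested risk
# that never appears on the status report.
```

The point of the sketch is that the two factors multiply: a green-looking 95% pass rate on an incomplete test plan still leaves roughly a quarter of the solution unverified.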
Fortunately, there are techniques and methods you can use to mitigate these kinds of issues and thereby ensure the integrity of your reporting processes and metrics. One simple yet effective technique that applies in almost every situation is getting multiple perspectives on every metric. In the same way that you use multiple mirrors when driving, you need to “check your blind spots” when interpreting the metrics in your status reports: don’t just ask the functional teams about progress of testing, ask the cross-functional teams and the power users, too. Want to be certain your data is as clean as your metrics say it is? Do your testing on converted data.
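The “multiple perspectives” check can even be built into a status dashboard. As a minimal sketch (the group names, readings, and tolerance threshold are all assumptions chosen for illustration), compare the same metric as reported by different groups and flag any pair that diverges beyond a tolerance:

```python
# Hypothetical sketch of "checking your blind spots": the same metric
# (e.g., testing progress) reported by several groups, with large
# disagreements flagged for follow-up. All names/values are assumptions.

def flag_divergence(readings: dict[str, float], tolerance: float = 0.10) -> list[str]:
    """Return a warning for each pair of perspectives whose readings of the
    same metric differ by more than `tolerance` (an absolute fraction)."""
    names = list(readings)
    warnings = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if abs(readings[a] - readings[b]) > tolerance:
                warnings.append(f"{a} vs {b}: {readings[a]:.0%} vs {readings[b]:.0%}")
    return warnings

# The functional team reports testing nearly done, but the cross-functional
# teams and power users see much less progress:
testing_progress = {
    "functional_team": 0.92,
    "cross_functional_teams": 0.70,
    "power_users": 0.65,
}
for warning in flag_divergence(testing_progress):
    print("Check blind spot:", warning)
```

Here the functional team’s reading diverges from both other groups, which is exactly the kind of gap a single-source status report would hide.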
UpperEdge Project Execution Advisory Services
If you are about to embark on your own ERP journey, or if you need to redirect a project that has veered off course, UpperEdge’s Project Execution Advisory Services can provide an independent perspective that helps you steer clear of dangerous shortcuts. Our risk management framework incorporates specific techniques to identify risks well in advance of them turning into issues, giving you an added level of confidence that your project will be remembered for its success rather than its failure.