The importance of ERP testing before going live

Enterprise resource planning (ERP) testing is a complex and time-consuming process. Companies need to ensure the right people are involved and the right questions are being asked.

By Sam Graham January 29, 2019

An enterprise resource planning (ERP) system should never go live without comprehensive testing. Even small companies implementing Tier 3 systems are urged to carry out “conference room pilots” to ensure there are no nasty surprises. And yet, many companies still have not figured out how to avoid ERP failure. There are several possibilities that need to be considered, including:

  • The testing is not being done
  • The testing is being done by the wrong people
  • The testing is being done badly.

The testing is not being done

There are several reasons why go-live testing is sometimes overlooked or is, at best, performed perfunctorily. One is the feeling that the old system is simply being replaced by more up-to-date software, so nothing is really changing. This gives the impression that spending time and money on comprehensive testing is wasteful. This view is even more prevalent when the new system is from the same provider as the old one – “Isn’t it just an upgrade?”

There is no such thing. A new system is different, no matter how much it seems like the old one. It is impossible for the people who wrote it, or implemented it, to understand all of the nuances of how people work, and all software has bugs that occur only under very specific circumstances. Add the fact that even electronically transferred data can contain errors, gaps and inconsistencies, and there is a recipe for disruption and failure. Other issues, such as cybersecurity threats, can undermine projects as well.

The testing is being done by the wrong people

There are different aspects to system testing. At the basic level, programmers and developers will test the software they have written to ensure it works. But, when they do that, they are really just checking that it works when the data is correct and the right keys are pressed.

As testing moves downstream (i.e. from the programmers to the system integrators) other factors come into play. Obviously, the ERP system integrators want to prove the system works. However, all projects have planned go-live dates and pressure to hit these dates can be great. That can mean pressure to review test results optimistically – “Yes, there was a problem, but we can get around it with extra training.”

Regardless of time pressures, all software should be strenuously tested by people who really understand both how end users are going to use it and what mistakes they are likely to make when they do (not all software is used by MIT graduates; some is used by people on minimum wage).

The testing is being done badly

To an extent, this is linked to the previous point, in that testing carried out by the wrong people is testing done badly. However, even good people can fail to do a good job if they don’t know how to properly approach the task. They must prepare a detailed test plan that accurately reflects the way the system is intended to be used. It’s not sufficient, for example, for it to say, “Test creation of a purchase order” because, in real life:

  • Some orders may be created from requisitions; others may not
  • Some purchase orders may be raised on foreign suppliers, with prices in foreign currencies, while others may not
  • Some may be for one item, but others may have dozens or hundreds of line items
  • In a standard costing environment, all items should have standard costs, but some may not, and
  • Some items will be bought in the same unit of measure as they are stocked, but others may be bought in units decreed by the supplier.
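A sketch of how such a test plan grows: each variation above is a dimension, and every combination of them is a distinct scenario to exercise. The dimension names below are illustrative, not drawn from any particular ERP system.

```python
from itertools import product

# Hypothetical dimensions, one per purchase-order variation listed above.
dimensions = {
    "source": ["from_requisition", "created_directly"],
    "currency": ["domestic", "foreign"],
    "line_items": ["single_item", "many_items"],
    "standard_cost": ["present", "missing"],
    "unit_of_measure": ["stocking_unit", "supplier_unit"],
}

def build_test_matrix(dims):
    """Cartesian product of the variation dimensions: every
    combination is one purchase-order scenario to test."""
    keys = list(dims)
    return [dict(zip(keys, combo)) for combo in product(*dims.values())]

matrix = build_test_matrix(dimensions)
print(len(matrix))  # 2*2*2*2*2 = 32 scenarios, not one "create a PO" test
```

The point of the sketch is the arithmetic: five binary variations already turn “test creation of a purchase order” into 32 distinct cases, which is why a one-line test plan is not sufficient.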

Taking that last example further: a company might have a standard cost of $1 per unit and get an invoice for boxes at $50 each. The team needs to check that the receipt, invoice and payment all work as expected. They also should be asking questions and doing tests to find out what happens if, say, there was not a valid standard cost at the time of receipt. How can this be identified? What should the procedure be for correcting the resultant postings?
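The example can be made concrete with a small sketch. The box size, valuation logic and error handling here are assumptions for illustration only, not taken from any real ERP system:

```python
# Hypothetical figures: each $50 box is assumed to contain 50 stocked units,
# so at a standard cost of $1 per unit the receipt and invoice should agree.
UNITS_PER_BOX = 50
BOX_PRICE = 50.00

def receipt_posting(boxes, standard_cost):
    """Value a goods receipt at standard cost and flag a missing standard."""
    if standard_cost is None or standard_cost <= 0:
        # This is the case the test plan must probe: what does the real
        # system do, and how are the resultant postings corrected?
        raise ValueError("no valid standard cost at time of receipt")
    units = boxes * UNITS_PER_BOX
    inventory_value = units * standard_cost      # stock valued at standard
    invoice_value = boxes * BOX_PRICE            # supplier bills per box
    variance = invoice_value - inventory_value   # purchase price variance
    return inventory_value, variance

inv, var = receipt_posting(1, 1.00)
print(inv, var)  # 50.0 inventory value, 0.0 variance when the costs agree
```

A well-built test plan exercises both branches: the happy path where the postings balance, and the missing-standard-cost path where the team must already know how the error is identified and corrected.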

How to prevent ERP implementation issues

Talking to the people who implement ERP systems can help ensure there are contingency plans in place in case anything goes wrong when the new system is switched on.

A second consideration is that ERP systems are integrated, so ripples can travel a long way. For example, a purchasing receipt will affect inventory, accounts payable and the general ledger (GL) at a minimum. When an error or bug is found and corrected, all of the affected areas need to be re-tested to ensure fixing one problem has not caused another. Effective business process management can help mitigate this risk.
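The ripple effect can be sketched as a regression check. The ledger names and posting logic below are invented for illustration; the idea is simply that every area a transaction touches gets re-tested after a fix, not just the one that was repaired:

```python
# Hedged sketch: a purchasing receipt ripples into inventory, accounts
# payable and the GL, so a fix in any one of them triggers checks on all.

def post_goods_receipt(ledgers, qty, unit_cost):
    """Post one receipt; the same value lands in several places."""
    value = qty * unit_cost
    ledgers["inventory"] += value
    ledgers["accounts_payable"] += value
    ledgers["gl_inventory"] += value
    ledgers["gl_ap"] += value
    return ledgers

def regression_checks(ledgers):
    """Re-test every affected area after any correction."""
    assert ledgers["inventory"] == ledgers["gl_inventory"], "GL out of step with stock"
    assert ledgers["accounts_payable"] == ledgers["gl_ap"], "GL out of step with AP"

ledgers = dict(inventory=0.0, accounts_payable=0.0, gl_inventory=0.0, gl_ap=0.0)
regression_checks(post_goods_receipt(ledgers, qty=50, unit_cost=1.0))
print("all downstream postings still balance")
```

In a real implementation, the equivalent of `regression_checks` is the full integration test suite, re-run after every fix rather than only the test that originally failed.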

Go-live testing must be about testing the system and not just the software. The system includes the processes, procedures and people that are essential to making the software work. It is pointless having software that does not produce the results the organization needs. It also is pointless having software that is not backed up by clear and unambiguous instructions for proper use.

And it is pointless having software that, although functioning as intended, can’t be used efficiently and effectively because the key components in the system (the people) have either not been trained properly or have failed to retain the knowledge imparted during training. This is why it is a mistake for companies to underestimate organizational change management, which is the #1 key to digital transformation success.

The software, the processes, the procedures, and the people must be rigorously tested before the system goes live. They also should be tested together in an environment as close to real-life operation as possible. Testing properly will identify whether or not it is wise to go live (if any element fails during testing, it obviously is not). However, when the tests show up problems, the correct thing to do is delay until the problems have been addressed. This can cause problems if the organization has not prepared a contingency plan.

ERP systems take a long time to implement. The implementation team, to have gotten this far, will have had to overcome all kinds of obstacles. When go-live testing shows up problems, there is great pressure to review the results through rose-tinted glasses and believe it’ll be fine when everything goes live. While some small problems may well be resolved in time, going live without 100% successful testing is a gamble. Even small problems will put the implementation team under great stress and pressure during the early days of live running. Problems inevitably occur and organizations always discover some users make mistakes faster than those mistakes can be remedied.

The test’s purpose must be communicated well in advance to everyone. The possibility of failure, and the possible reasons for that failure, must be openly discussed. There must be a contingency plan; that plan should include being honest about the reasons for postponing go-live. If software has not been sufficiently tested, then that must be communicated. If some user departments are not ready, that also must be communicated.

There is no need to panic when go-live testing throws up problems. That is precisely what it is intended to do. Proper testing is a small price to pay when the system is intended to provide functionality for years.

Sam Graham, Third Stage Consulting Group, a CFE Media content partner. This article originally appeared on Third Stage Consulting’s blog.
