UAT Test Plan Template: Structure, Sections, and What to Include
Most teams arrive at UAT with a spreadsheet of test cases and call it a test plan. It is not. A UAT test plan is the document that governs the entire testing phase: what is in scope, who does what, when testing can start, how defects are managed, and what the finish line looks like. Without it, UAT becomes a fire-fighting exercise rather than a controlled process.
This guide gives you the structure and content of a practical UAT test plan for ERP implementations. It covers the seven sections every plan needs, how to write test cases that business users can actually follow, how to track progress toward go-live, and why spreadsheets fail when the pressure is on.
1. What a UAT Test Plan Is (and Is Not)
There is persistent confusion between a UAT test plan and a list of test cases. They are not the same thing. Test cases live inside a test plan; they are not the plan itself.
Definition:
A UAT test plan is not a list of test cases. It is the document that defines scope, responsibilities, timeline, entry and exit criteria, and how test results will be communicated. The test cases are one component of the plan, not the whole thing.
Think of the test plan as the project management document for your UAT phase. It answers: what are we testing, who is responsible, when does testing start and end, what counts as a pass, and what happens when something fails. Without a test plan, each of those questions gets answered ad hoc, usually at the worst possible moment.
On ERP implementations, a missing or thin test plan is one of the most reliable predictors of a troubled go-live. When scope is not defined, business users test random things. When entry criteria are not specified, UAT starts before the system is ready. When exit criteria are not agreed, sign-off becomes a political negotiation rather than an evidence-based decision.
A well-structured UAT test plan solves all of this before UAT begins. It is worth the time to write it properly.
2. The 7 Sections Every UAT Test Plan Needs
The following sections should appear in every UAT test plan, regardless of the system being tested or the size of the implementation.
Section 1: Scope and Objectives
Define precisely what is being tested and why. This section prevents scope creep during UAT and gives the steering committee a clear picture of what the testing phase covers.
Include: the business processes in scope, the modules or system areas covered, any processes or areas explicitly out of scope, and the primary objective of the UAT phase (for example, confirming the system supports go-live for the finance and purchasing functions by the end of April).
Out-of-scope items are just as important as in-scope ones. If reporting is out of scope for UAT round one, write it down. Otherwise you will spend three days fielding report-related defects that no one planned to fix before go-live.
Section 2: Entry Criteria
Entry criteria are the conditions that must be true before UAT can begin. This section protects your business users from being thrown into a half-configured system and protects your timeline from being consumed by a testing phase that should not have started yet.
Example: Entry Criteria for a Business Central UAT
- System Integration Testing (SIT) completed and signed off by the implementation partner
- UAT environment loaded with migrated data from the agreed data migration run
- Business users trained on core processes relevant to their test responsibilities
- All test cases reviewed and approved by process owners
- Defect management process agreed and tooling in place
- Test environment access confirmed for all testers
If any entry criterion is not met, UAT should not start. This sounds obvious. In practice, it requires someone with authority to hold the line.
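Holding the line is easier when the criteria are tracked as a simple pass/fail checklist rather than a judgment call. As a minimal illustration (the criterion names below are taken from the example list, and the gating function is a sketch, not part of any specific tool):

```python
# Illustrative sketch: gate the start of UAT on every entry criterion being met.
# Criterion names mirror the example entry criteria above.

ENTRY_CRITERIA = {
    "SIT completed and signed off by implementation partner": True,
    "UAT environment loaded with agreed migrated data": True,
    "Business users trained on core processes": True,
    "Test cases reviewed and approved by process owners": False,
    "Defect management process agreed and tooling in place": True,
    "Environment access confirmed for all testers": True,
}

def can_start_uat(criteria: dict[str, bool]) -> tuple[bool, list[str]]:
    """Return (ready, unmet). UAT should not start with any unmet criterion."""
    unmet = [name for name, met in criteria.items() if not met]
    return (not unmet, unmet)

ready, unmet = can_start_uat(ENTRY_CRITERIA)
if not ready:
    print("UAT blocked by:", "; ".join(unmet))
```

The point of the sketch is the all-or-nothing rule: one unmet criterion blocks the start, and the output names it, which gives the person holding the line something concrete to point at.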
Section 3: Test Cases and Scripts
This section references the test cases that will be executed during UAT. It does not need to reproduce them in full in the test plan document, but it should define their structure, how they are organised, and how many there are.
Organise test cases by business process, not by module. Users do not think in modules. They think in processes. Group your test cases under headings like Procure-to-Pay, Order-to-Cash, Record-to-Report, and Inventory Management. For each business process, document the number of test cases, the tester responsible, and the planned execution date.
For guidance on writing individual test cases, see the section below on writing test cases business users can follow.
Section 4: Roles and Responsibilities
UAT involves multiple parties, and the responsibilities of each need to be explicit. Ambiguity here causes delays: nobody raises defects because they assume someone else will, or the implementation partner ends up executing tests that should be run by business users.
UAT Coordinator / Test Manager
Owns the test plan, tracks execution progress, manages the defect log, chairs daily stand-ups during UAT, and produces the readiness report for the steering committee.
Business Process Owners
Review and approve test cases for their process area, participate in or oversee test execution, and provide process-level sign-off when their area is complete.
Business Testers
Execute the test cases, record results (pass, fail, blocked), and raise defects with sufficient detail for the implementation team to investigate. These should be the people who will use the system day-to-day.
Implementation Partner
Available to support testers with system questions, investigates and resolves raised defects, deploys fixes to the UAT environment, and advises on test data and environment issues. Should not be executing tests on behalf of business users.
Steering Committee
Receives regular readiness reports, makes the go/no-go decision based on exit criteria, and resolves escalations where business priorities conflict with UAT findings.
Section 5: Environment and Data Requirements
UAT requires a dedicated environment. Testing in the production environment is not UAT. Testing in the same environment as development or SIT creates contamination and undermines the validity of your results.
Document: the URL or access details for the UAT environment, the data migration run that populates the environment, any reference or configuration data that needs to be present (number series, posting groups, chart of accounts), test user accounts and the roles assigned to each, and the process for refreshing the environment if a test run needs to be repeated from a clean state.
Test data quality matters more than most teams acknowledge. Testing a procure-to-pay process with a fictional vendor on a clean database tells you almost nothing about whether your migrated vendor master works correctly. Use real, migrated data wherever possible.
Section 6: Defect Management Process
Define how defects are raised, categorised, assigned, and resolved. Without a documented process, defects get reported by email, lost in chat threads, or raised in conversations that leave no audit trail.
A minimal defect management process needs four things: a severity scale, a defined workflow, a single location where all defects are logged, and a clear owner for each open defect.
Defect Severity Levels
- Severity 1 (Critical): a business process cannot be completed at all. Go-live cannot proceed with any open Severity 1 defects.
- Severity 2 (High): a significant part of a process is broken or produces incorrect results, but a workaround exists. Must be resolved or have an approved workaround before go-live.
- Severity 3 (Medium): a non-critical issue that impacts user experience or efficiency but does not prevent the process from completing. Can be deferred to post-go-live with a documented plan.
- Severity 4 (Low): cosmetic issues, minor wording problems, or enhancement requests. Logged and scheduled for a future release.
Section 7: Exit Criteria and Sign-Off
Exit criteria define what "done" looks like. They are the conditions that must be met before the UAT phase is formally closed and the go-live decision can be made. Without exit criteria, sign-off becomes subjective.
Typical exit criteria for an ERP UAT phase:
- 100% of planned test cases executed
- 95% or above pass rate on Severity 1 and Severity 2 test cases
- Zero open Severity 1 defects
- All open Severity 2 defects have an approved workaround or a confirmed fix date before go-live
- All business process owners have provided written sign-off for their area
- Data migration validation completed and signed off by finance
These criteria should be agreed with the client and the steering committee before UAT begins, not negotiated at the end. For more on how the UAT sign-off process works, including how to handle conditional sign-offs and last-minute escalations, see our dedicated guide.
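Because these criteria are objective, the readiness check can be mechanical. A minimal sketch of the evaluation, using the example thresholds above (the record field names are assumptions for the illustration, not a prescribed schema):

```python
# Illustrative exit-criteria check: 100% executed, >=95% pass rate on
# Severity 1/2 cases, zero open Sev 1 defects, and every open Sev 2 defect
# carrying an approved workaround or a confirmed fix date.

def exit_criteria_met(test_cases: list[dict], defects: list[dict]) -> tuple[bool, str]:
    executed = [t for t in test_cases if t["status"] in ("pass", "fail")]
    if len(executed) < len(test_cases):
        return False, "not all planned test cases executed"

    sev12 = [t for t in executed if t["severity"] in (1, 2)]
    if sev12:
        pass_rate = sum(t["status"] == "pass" for t in sev12) / len(sev12)
        if pass_rate < 0.95:
            return False, f"pass rate on Sev 1/2 cases is {pass_rate:.0%}, below 95%"

    open_defects = [d for d in defects if d["open"]]
    if any(d["severity"] == 1 for d in open_defects):
        return False, "open Severity 1 defects remain"
    for d in open_defects:
        if d["severity"] == 2 and not (d.get("workaround") or d.get("fix_date")):
            return False, "Severity 2 defect without workaround or fix date"

    return True, "exit criteria met"
```

A check like this does not replace the sign-off conversation, but it makes the conversation start from evidence rather than opinion.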
Replace the Spreadsheet
Manage Your UAT Test Plan in LogicHive
Structured test cases, real-time pass rate tracking, defect management, and evidence-based sign-off. Built for ERP implementation teams.
3. Writing Test Cases Business Users Can Follow
The quality of your test cases determines whether UAT produces useful results. Vague test cases produce vague results. If a tester cannot tell whether they have passed or failed a test, the test case has failed before they started.
The most common problem is test cases written at the wrong level of abstraction. "Test the invoice posting process" is not a test case. It is a topic. A test case tells the tester exactly what to do, step by step, and exactly what result to expect.
Example: Well-Written UAT Test Case
Test Case: Process a purchase invoice from a migrated vendor
- Navigate to Purchase Invoices and create a new invoice
- Select vendor V00210 (migrated from legacy system)
- Enter invoice reference INV-2026-0042, posting date 12/04/2026
- Add a line: GL account 5200 (Purchases), quantity 1, unit cost £1,200.00, VAT product posting group STANDARD
- Post the invoice
- Open the posted purchase invoice. Confirm the GL entries show £1,200.00 debit to account 5200 and £1,440.00 credit to the vendor payables account (including 20% VAT)
- Check vendor V00210 in the vendor ledger. Confirm the outstanding balance has increased by £1,440.00
Expected result: Invoice posts without error. GL entries are correct. VAT entry shows £240.00 to the VAT payables account. Vendor balance increases by £1,440.00.
Actual result: [tester records what actually happened]
Status: Pass / Fail / Blocked
Note the specifics: a named vendor from the migrated data set, a real GL account number, a specific posting date, and a precise expected result including the VAT calculation. This level of detail means the tester cannot "pass" the test without actually validating what the test was designed to check.
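The expected amounts in the example are derivable from the line values, which is exactly what makes the test case verifiable. A quick sanity check of the arithmetic (the net amount and 20% VAT rate come from the example above):

```python
from decimal import Decimal

net = Decimal("1200.00")      # invoice line: GL 5200, qty 1, unit cost £1,200.00
vat_rate = Decimal("0.20")    # STANDARD VAT product posting group, 20%

vat = (net * vat_rate).quantize(Decimal("0.01"))  # expected VAT payables entry
gross = net + vat                                  # expected vendor balance movement

print(vat, gross)  # 240.00 1440.00
```

If an expected result cannot be derived like this before the test is run, the test case is not specific enough to produce a defensible pass or fail.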
For a full guide to writing test cases at this level of quality, including how to handle edge cases and negative tests, see how to write effective UAT test cases. If you want to understand the difference between a test case, a test suite, and a test run, see what is a test case.
4. Tracking Progress: Metrics, Pass Rates, and Steering Committee Reporting
A UAT test plan without progress tracking is just a document. Once UAT begins, you need to know at any point: how many test cases have been executed, how many have passed, how many defects are open by severity, and whether you are on track to meet the exit criteria.
The Metrics That Matter
Keep the reporting simple. Steering committees do not need to see granular defect logs. They need answers to three questions: are we on track, what is blocking us, and are we ready to go live?
- Execution progress: total test cases planned vs. executed, broken down by business process area. This tells you whether the testing pace is sufficient to complete within the planned window
- Pass rate by area: the percentage of executed test cases with a pass result, by business process. A 94% pass rate in finance and a 72% pass rate in warehouse management are very different conversations
- Open defects by severity: the count of open Severity 1, 2, 3, and 4 defects. Severity 1 count going up is a red flag regardless of overall pass rate
- Defect resolution rate: defects raised vs. defects resolved. If the implementation team is resolving defects slower than they are being raised, you have a capacity problem
- Blocked test cases: test cases that could not be executed because a dependency was missing (environment down, data not loaded, prerequisite test not yet run). These need active management or they silently inflate your "not yet executed" count
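All of these metrics can be derived from one flat list of test-case results, which is why a single source of truth matters so much. A minimal sketch (the field names and sample records are assumptions for the illustration):

```python
# Illustrative computation of the core UAT metrics from flat test-case records.
test_cases = [
    {"area": "Finance",   "status": "pass"},
    {"area": "Finance",   "status": "pass"},
    {"area": "Finance",   "status": "fail"},
    {"area": "Warehouse", "status": "pass"},
    {"area": "Warehouse", "status": "blocked"},
    {"area": "Warehouse", "status": "not_run"},
]

executed = [t for t in test_cases if t["status"] in ("pass", "fail")]
progress = len(executed) / len(test_cases)                        # execution progress
blocked = sum(1 for t in test_cases if t["status"] == "blocked")  # needs active management

pass_rate_by_area = {}
for area in {t["area"] for t in test_cases}:
    done = [t for t in executed if t["area"] == area]
    if done:
        pass_rate_by_area[area] = sum(t["status"] == "pass" for t in done) / len(done)

print(f"executed {progress:.0%}, blocked {blocked}, pass rates {pass_rate_by_area}")
```

Note that the pass rate is computed over executed cases only; blocked and not-yet-run cases show up in the progress and blocked counts instead, so they cannot silently flatter the pass rate.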
Defect Severity in Practice
The severity levels you define in the test plan need to be applied consistently during testing. In practice, there is pressure to downgrade defects to keep the pass rate looking healthy. Resist this. A Severity 1 defect that gets classified as Severity 2 to avoid a conversation with the steering committee is a go-live risk that has been hidden, not resolved.
Establish a triage process for disputed severity ratings: the UAT coordinator and the relevant business process owner agree on the severity, and the rationale is documented. This prevents severity negotiation from becoming a political process.
Reporting to the Steering Committee
Produce a weekly UAT status report for the steering committee. Keep it to one page. It should show: execution progress vs. plan, current pass rate by area, open defects by severity, key risks and blockers, and whether you are currently on track to meet exit criteria by the planned date.
If you are not on track, say so clearly. Steering committees that receive optimistic status reports until the week before go-live make worse decisions than committees that receive accurate information early. The earlier a go-live date is known to be at risk, the more options there are to respond.
5. The Spreadsheet Problem: Why Excel Test Plans Fail on Real ERP Projects
Most UAT test plans and test case libraries start life as Excel spreadsheets. For small implementations with one tester and a handful of test cases, this works. For anything larger, the spreadsheet becomes the single biggest obstacle to a controlled UAT process.
Here is what happens in practice on a real ERP project with ten business users, 150 test cases, and a six-week UAT window:
- Version control chaos: within a week, there are four versions of the spreadsheet in circulation. Nobody knows which is current. Test results entered in one version are not visible in another. Defects raised in the wrong version get lost
- No real-time visibility: the UAT coordinator cannot see the current state of testing without chasing testers to update their local copy and email it back. The steering committee is always working from yesterday's data
- Defect management breaks down: defects raised in a spreadsheet have no workflow. There is no notification when a defect is resolved, no audit trail of comments, and no reliable way to link a defect back to the test case that found it
- Concurrent editing conflicts: multiple testers trying to update the same file at once creates merge conflicts or, worse, silent data loss
- Reporting is manual: producing a pass rate, defect breakdown, or readiness summary requires someone to count rows and build pivot tables. This takes time that should be spent managing the testing
The Real Cost:
The problem with spreadsheet-based UAT is not just inconvenience. It is that critical information gets lost, defects get missed, and the steering committee makes go/no-go decisions based on incomplete data. The spreadsheet looks like it is working right up to the point where it is not.
These are not edge cases. They are the standard experience of anyone who has managed UAT on an ERP project with a spreadsheet. The solution is not a better spreadsheet. It is a tool purpose-built for exactly this problem.
Free Download
UAT Readiness Checklist
A practical checklist covering all seven sections of a UAT test plan: entry criteria, roles, environment setup, defect management, and exit criteria. Use it to audit your own test plan or as a starting point for a new one.
6. How LogicHive Replaces the Spreadsheet
LogicHive is UAT management software built specifically for ERP implementation teams. It replaces the spreadsheet-based approach with a centralised platform where test cases, defects, and sign-off all live in one place, with real-time visibility for everyone involved.
The key differences from a spreadsheet:
- Single source of truth: all testers work in the same platform. There is no version control problem because there is only one version. When a tester logs a result, the UAT coordinator and the steering committee see it immediately
- Real-time readiness dashboard: pass rates, defect counts by severity, execution progress, and sign-off status are visible at a glance. No manual reporting
- Structured defect management: defects are raised against the specific test case that found them, assigned to the correct team, and tracked through to resolution. Full audit trail. Automatic notifications
- No per-seat fees for client testers: you can give access to every business user without paying per head. LogicHive is priced from £72.99 per month for the team running the project, not per tester
- Evidence-based sign-off: sign-off is recorded in the platform against the test cases and defects it references. The sign-off document is automatically generated from the test results, not manually assembled at 11pm the night before go-live
See the full LogicHive features or check the pricing page to see how it fits your team.
The Bottom Line
A UAT test plan is not bureaucracy. It is the difference between a testing phase that produces evidence of go-live readiness and one that produces a false sense of confidence. The seven sections in this guide are not optional. They exist because each one prevents a specific failure mode that recurs on ERP projects without it.
Build the plan before UAT starts. Agree the exit criteria before the first test case is executed. And consider whether the spreadsheet you are planning to manage it in is actually fit for the job.
Frequently Asked Questions
What should a UAT test plan include?
A UAT test plan should include: scope and objectives (what is being tested and why), entry criteria (what must be true before UAT can start), test cases and scripts, roles and responsibilities, environment and data requirements, a defect management process, and exit criteria with sign-off conditions. It is the governing document for the entire UAT phase, not just a list of tests.
How is a UAT test plan different from a test script?
A UAT test plan is the governing document for the entire UAT phase. It defines scope, responsibilities, timeline, entry and exit criteria, and how results will be communicated. A test script (or test case) is a single document that walks a tester through one specific scenario, step by step. The test plan contains references to all the test scripts, but it is not the same thing.
Who writes the UAT test plan?
On ERP implementation projects, the UAT test plan is typically written by the implementation partner or project manager in collaboration with the client. The client must own the sign-off criteria and the list of business processes to be tested. The implementation partner can provide the structure and technical content, but the client needs to validate the scope. UAT test cases within the plan should be reviewed by the business users who will actually execute them.
How many test cases do you need for UAT?
There is no fixed number. The right number of test cases depends on the scope of the implementation. A good starting point is to map every core business process that will run on the new system and write at least one happy-path test case and one edge-case test case for each. For a mid-market ERP implementation, this typically results in 50 to 200 test cases. Prioritise depth over breadth: a well-written test case that follows a complete end-to-end business process is worth more than ten shallow screen-click tests.
What is a good UAT pass rate?
A pass rate of 95% or above on priority-one and priority-two test cases is a common threshold for proceeding to go-live on ERP implementations. However, pass rate alone is not enough. You also need zero outstanding critical defects (severity 1) and no unresolved severity-2 defects without an approved workaround. A 98% pass rate with two open critical defects is not a go-live-ready position.
Stop Managing UAT in Spreadsheets
LogicHive gives your team centralised test case management, real-time pass rate tracking, and structured sign-off. From £72.99/month. No per-seat fees for client testers.
Start Free Trial