
Manual QA Test Case Discovery: A Systematic Guide to Finding What to Test

"What should I test?" is the most common question junior QA engineers ask. The answer is not a checklist. It is a set of thinking frameworks that help you systematically uncover test scenarios from requirements, user goals, and risk analysis.


"Good test case discovery is about asking the right questions, not following a script." — QA engineering principle

1. Why "What Should I Test" Is the Wrong Starting Question

When you stare at a Software Requirements Specification (SRS) or a Jira ticket and wonder where to begin, the problem is usually not a lack of ideas. It is that you are trying to enumerate test cases before understanding the system's purpose. Test cases are not random scenarios pulled from thin air. They are derived from a structured understanding of what the software is supposed to do, who uses it, and what can go wrong.

The better starting question is: "What are the user goals this feature serves?" From there, you can systematically expand outward into normal flows, alternate flows, error conditions, boundary conditions, and edge cases. Each of these categories has well-established techniques for generating test cases, and once you learn them, the "what should I test" question stops being paralyzing.

A common mistake is conflating test coverage with test case count. Having 200 test cases does not mean you have good coverage. Having 40 well-chosen test cases that cover distinct equivalence classes, boundary values, and risk areas will catch more bugs than 200 redundant tests that all exercise the same happy path with slightly different data.

2. User-Goal-Driven Test Discovery

Start with the user. Every feature exists to help a user accomplish something. For a login page, the user goal is "authenticate and access my account." For a shopping cart, the goal is "review and modify my selected items before purchasing." Write down the primary user goals, then derive test cases by asking three questions for each goal.

First: what is the happy path? This is the normal flow where everything works as expected. The user enters valid credentials, clicks login, and lands on the dashboard. This is the obvious test case, and most people start here. The problem is that most people also stop here.

Second: what are the alternate paths? These are valid flows that differ from the primary path. The user logs in with an email instead of a username. The user logs in on a mobile device. The user has two-factor authentication enabled. Each alternate path is a distinct test scenario that exercises different code paths.

Third: what are the failure paths? The user enters an incorrect password. The user enters a nonexistent email. The server returns a 500 error. The network connection drops mid-request. Each failure path should result in a specific, user-friendly error state, and your test verifies that the error handling works correctly.
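
Before writing a single test, the three questions above can be captured as a plain scenario list. Here is a minimal sketch in TypeScript for a hypothetical login feature; the names (`loginScenarios`, the scenarios themselves) are illustrative, not part of any framework:

```typescript
// Derive test scenarios for one user goal: "authenticate and access my account".
// Each scenario records which kind of path it exercises and the expected outcome.
type PathKind = "happy" | "alternate" | "failure";

interface Scenario {
  kind: PathKind;
  description: string;
  expected: string;
}

const loginScenarios: Scenario[] = [
  { kind: "happy",     description: "valid username and password",           expected: "dashboard is shown" },
  { kind: "alternate", description: "login with email instead of username",  expected: "dashboard is shown" },
  { kind: "alternate", description: "account with two-factor auth enabled",  expected: "2FA prompt is shown" },
  { kind: "failure",   description: "incorrect password",                    expected: "inline error message" },
  { kind: "failure",   description: "server returns HTTP 500",               expected: "friendly error page" },
];

// Quick completeness check: every path kind should have at least one scenario.
const kinds = new Set(loginScenarios.map((s) => s.kind));
const missing = (["happy", "alternate", "failure"] as PathKind[]).filter((k) => !kinds.has(k));
console.log(missing.length === 0 ? "all path kinds covered" : `missing: ${missing.join(", ")}`);
```

Listing scenarios as data before automating them makes gaps visible: a goal with five happy-path entries and zero failure entries is a coverage smell you can spot at a glance.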

3. Boundary Value Analysis and Equivalence Partitioning

Equivalence partitioning is one of the most powerful test design techniques, and it is surprisingly underused. The idea is simple: divide all possible inputs into groups (partitions) where every value in the group should produce the same behavior. Then test one representative value from each partition instead of testing every possible value.

For example, consider a field that accepts ages between 18 and 65. The equivalence partitions are: values below 18 (invalid), values from 18 to 65 (valid), and values above 65 (invalid). You do not need to test ages 19, 20, 21, 22, and so on. Testing one value from each partition (say, 10, 30, and 80) covers the behavior for the entire range.

Boundary value analysis refines this by focusing on the edges of each partition, because bugs disproportionately cluster at boundaries. For the age field, the boundary values are 17, 18, 65, and 66. Many off-by-one errors only manifest at exact boundaries: the developer wrote "greater than 18" instead of "greater than or equal to 18," and only a test with exactly 18 will catch it.
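
The age example translates directly into a small table of chosen values. The sketch below uses a stand-in `isValidAge` function for whatever validation the real system performs (hypothetical, for illustration only):

```typescript
// Stand-in for the system under test: accepts ages from 18 to 65 inclusive.
const isValidAge = (age: number): boolean => age >= 18 && age <= 65;

// Equivalence partitioning: one representative value per partition.
const partitionCases: Array<[number, boolean]> = [
  [10, false], // below-range partition
  [30, true],  // in-range partition
  [80, false], // above-range partition
];

// Boundary value analysis: values on both sides of each edge.
const boundaryCases: Array<[number, boolean]> = [
  [17, false], [18, true],  // lower boundary
  [65, true],  [66, false], // upper boundary
];

for (const [age, expected] of [...partitionCases, ...boundaryCases]) {
  const actual = isValidAge(age);
  console.log(`age ${age}: expected ${expected}, got ${actual}`);
  if (actual !== expected) throw new Error(`bug found at age ${age}`);
}
```

Note that a buggy implementation written as `age > 18` would pass every partition case and fail only at age 18, which is exactly why the boundary cases earn their place in the table.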

Apply this technique to every input in your system: text field lengths, numeric ranges, date ranges, dropdown selections, file sizes, and API parameters. The combination of equivalence partitioning and boundary analysis will generate dozens of targeted test cases that catch real bugs with minimal redundancy.

4. Decision Tables and State Transition Testing

When a feature involves multiple conditions that interact, decision tables help you systematically cover all combinations. Consider a shipping calculator that depends on package weight, destination zone, and whether the customer has a premium membership. Each combination of these three factors might produce a different shipping rate.

A decision table lists every combination of conditions and the expected outcome. For three binary conditions, you have eight combinations. For conditions with more values, the table grows, but you can use pairwise testing to reduce the combinations while still covering all interactions between any two factors. Tools like PICT (from Microsoft) or AllPairs can generate these reduced combinations automatically.
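
As a sketch, the shipping example can be expanded into an explicit decision table. The rate rules below are made up for illustration (base 5, surcharges for heavy packages and far zones, free shipping for premium members); the point is generating all eight rows mechanically so no combination is forgotten:

```typescript
// Three binary conditions -> 2^3 = 8 rows. Each row is one test case.
interface Row { heavy: boolean; farZone: boolean; premium: boolean; expectedRate: number }

// Illustrative pricing rules, not a real policy.
const expectedRate = (heavy: boolean, farZone: boolean, premium: boolean): number =>
  premium ? 0 : 5 + (heavy ? 5 : 0) + (farZone ? 3 : 0);

// Enumerate every combination so no interaction between conditions is missed.
const table: Row[] = [];
for (const heavy of [false, true])
  for (const farZone of [false, true])
    for (const premium of [false, true])
      table.push({ heavy, farZone, premium, expectedRate: expectedRate(heavy, farZone, premium) });

for (const row of table)
  console.log(`heavy=${row.heavy} far=${row.farZone} premium=${row.premium} -> ${row.expectedRate}`);
```

With more conditions the full table grows exponentially, which is the point at which pairwise generation (PICT, AllPairs) becomes worth the trade-off.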

State transition testing applies when the system behaves differently depending on its current state. An order can be in states like "pending," "confirmed," "shipped," "delivered," or "cancelled." Draw a state diagram, then derive test cases that cover every valid transition (pending to confirmed, confirmed to shipped) and verify that invalid transitions are rejected (delivered to pending should not be possible).
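
The order example can be encoded as a transition map, from which both kinds of test cases fall out mechanically. A minimal sketch, assuming the five states named above and an illustrative set of allowed transitions:

```typescript
// Allowed transitions for the order example; anything not listed is invalid.
const transitions: Record<string, string[]> = {
  pending:   ["confirmed", "cancelled"],
  confirmed: ["shipped", "cancelled"],
  shipped:   ["delivered"],
  delivered: [],
  cancelled: [],
};

const canTransition = (from: string, to: string): boolean =>
  (transitions[from] ?? []).includes(to);

// Derive test cases: every valid transition, plus every invalid pair to reject.
const states = Object.keys(transitions);
const validCases: string[] = [];
const invalidCases: string[] = [];
for (const from of states)
  for (const to of states)
    (canTransition(from, to) ? validCases : invalidCases).push(`${from} -> ${to}`);

console.log(`${validCases.length} valid transitions, ${invalidCases.length} invalid pairs to reject`);
```

Enumerating the invalid pairs is the half most teams skip, yet "delivered -> pending must be rejected" is exactly the kind of test that catches a missing guard clause.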

5. Risk-Based Test Prioritization

You will never have time to test everything. Risk-based testing helps you allocate your limited testing time to the areas most likely to contain bugs and most likely to cause damage if bugs exist. Rank each feature or component on two axes: likelihood of failure (based on complexity, recent changes, developer experience, and historical bug rates) and impact of failure (based on user impact, revenue impact, and data loss potential).

High likelihood combined with high impact gets the most testing attention. A payment processing flow that was recently refactored deserves deep testing. A static "About Us" page that has not changed in six months can receive minimal testing. This is not about ignoring low-risk areas entirely, but about ensuring your highest-risk areas receive thorough coverage before you move to lower-risk ones.
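
The two-axis ranking can be kept as a small, living spreadsheet or script. A minimal sketch with made-up scores (both axes rated 1 to 5; the area names and numbers are illustrative):

```typescript
// Score each area 1-5 on both axes; risk = likelihood * impact.
interface Area { name: string; likelihood: number; impact: number }

const areas: Area[] = [
  { name: "payment processing (recently refactored)", likelihood: 4, impact: 5 },
  { name: "user profile editing",                     likelihood: 3, impact: 2 },
  { name: "static About Us page",                     likelihood: 1, impact: 1 },
];

// Sort descending by risk score: test the top of this list first and deepest.
const byRisk = [...areas].sort(
  (a, b) => b.likelihood * b.impact - a.likelihood * a.impact
);
for (const a of byRisk) console.log(`${a.likelihood * a.impact}\t${a.name}`);
```

The multiplication is deliberately crude; its value is forcing an explicit, reviewable ordering rather than a precise number.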

Pair risk analysis with your other techniques. For the high-risk payment flow, apply boundary analysis to every input, build a complete decision table for discount and tax calculations, and test every state transition. For the low-risk about page, a quick smoke test confirming the page loads and displays correct content is sufficient.

6. Negative and Edge Case Discovery Techniques

Negative testing verifies that the system handles invalid, unexpected, or malicious input gracefully. A common framework is to ask "what happens when..." for each input and action: What happens when the field is empty? What happens when the input contains special characters, SQL injection patterns, or extremely long strings? What happens when the user double-clicks the submit button? What happens when the user navigates backward during a multi-step flow?
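
A reusable catalog of hostile inputs makes the "what happens when" questions repeatable across fields. The sketch below pairs such a catalog with a stand-in `validateUsername` function (hypothetical; the real assertion in practice is that the system under test rejects or sanitizes each input without crashing):

```typescript
// A reusable catalog of hostile inputs to throw at any text field.
const hostileInputs: Array<[string, string]> = [
  ["empty string",        ""],
  ["whitespace only",     "   "],
  ["very long string",    "x".repeat(10_000)],
  ["SQL injection shape", "'; DROP TABLE users; --"],
  ["HTML/script shape",   "<script>alert(1)</script>"],
  ["unicode and emoji",   "名前😀"],
];

// Stand-in for the system under test: must reject gracefully, never crash.
const validateUsername = (input: string): { ok: boolean; reason?: string } => {
  const trimmed = input.trim();
  if (trimmed.length === 0) return { ok: false, reason: "empty" };
  if (trimmed.length > 64) return { ok: false, reason: "too long" };
  if (!/^[\p{L}\p{N}_-]+$/u.test(trimmed)) return { ok: false, reason: "bad chars" };
  return { ok: true };
};

for (const [label, input] of hostileInputs) {
  const result = validateUsername(input); // the real test: this must not throw
  console.log(`${label}: ${result.ok ? "accepted" : `rejected (${result.reason})`}`);
}
```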

Edge cases live at the intersection of multiple conditions. They are the scenarios that nobody thinks of during development because they require a specific, unlikely combination of factors. A user with zero items in their cart who applies a percentage-based discount code. A form submission that arrives after the user's session has expired. A file upload that is exactly at the maximum allowed size.

One effective technique is error guessing, where experienced testers draw on their knowledge of common bug patterns. Null values, empty strings, maximum integers, dates around daylight saving time transitions, concurrent access to the same resource, and Unicode characters in text fields are all fertile ground for bugs. Keep a personal catalog of edge cases you have found in previous projects, because many of them recur across different applications.

7. Automated Test Discovery as a Complement

Manual test case discovery techniques are powerful, but they are also time-consuming and limited by human attention. As your application grows, it becomes increasingly difficult to manually identify every testable scenario across every feature. This is where automated discovery tools can supplement your manual techniques.

Some teams use crawlers that navigate their application and record every user flow they encounter, building a map of testable paths. Others use static analysis tools that examine the codebase to identify untested branches and conditions. AI-powered tools like Assrt take a different approach by automatically discovering test scenarios from your running application: you point it at your app, and it identifies user flows, form interactions, navigation paths, and edge cases that can be turned into Playwright tests. This does not replace the manual techniques described above, but it provides a baseline that catches the scenarios you might have missed.

The best workflow combines both approaches. Use manual techniques (boundary analysis, decision tables, risk-based prioritization) for your highest-risk features where precision matters. Use automated discovery to generate baseline coverage across the rest of your application. Then review the auto-generated scenarios and supplement them with the edge cases and negative tests that only a human tester would think of.

8. Putting It All Together: A Practical Workflow

Here is a step-by-step workflow you can follow when approaching a new feature or sprint. Start by reading the requirements and identifying user goals. For each goal, write down the happy path, alternate paths, and failure paths. Next, identify all inputs and apply equivalence partitioning and boundary analysis to each one. If the feature involves interacting conditions, build a decision table. If it involves states, draw a state transition diagram.

Then perform a risk assessment. Which parts of this feature are most likely to fail? Which would cause the most damage? Allocate your testing time accordingly. Run through your personal edge case catalog and add any relevant negative tests. Finally, consider whether automated discovery tools can fill gaps in areas you did not have time to cover manually.

This workflow will not make test case discovery instant, but it will make it systematic. You will stop wondering "what should I test" and start working through a structured process that consistently produces thorough, well-prioritized test suites. Over time, the techniques become intuitive, and you will find yourself naturally spotting boundary conditions, missing error handlers, and untested state transitions as you read requirements.

Ready to automate your testing?

Assrt discovers test scenarios, writes Playwright tests from plain English, and self-heals when your UI changes.

$ npm install @assrt/sdk