QA Wolf Alternative: Your AI Coding Agent Is Already a Better QA Engineer
QA Wolf charges $8,000 per month for a human team that writes and maintains Playwright tests on your behalf. Assrt gives your AI coding agent three MCP tools and lets it do the same thing, in the same terminal session where it just wrote your code. No dashboard. No managed service. No context switch. The agent that built the feature verifies the feature.
“QA Wolf's median annual contract is $90,000. Assrt is MIT licensed and free. Your LLM API costs a few cents per test run.”
QA Wolf pricing via Vendr public data, 2026
1. What QA Wolf Sells and Why Teams Buy It
QA Wolf is a managed QA service. You pay them a monthly fee (starting at $8,000/mo for 200 tests), and their engineering team writes Playwright tests for your application, maintains them when your UI changes, triages failures, and reports bugs. The value proposition is clear: you get end-to-end test coverage without hiring a QA team.
The tradeoff is also clear. You do not own the test code. You do not control the maintenance cycle. When you need a new test urgently, it goes into their queue. When a test breaks, you wait for their team to investigate. You are paying $96,000 or more per year for access to someone else's Playwright scripts running on someone else's infrastructure.
Every "QA Wolf alternative" page on the internet responds to this by listing other managed services (Bug0, Testlio) or DIY tools (Mabl, Testsigma, Testim). They all accept the same premise: QA is either outsourced to humans or handed to a separate testing platform with its own dashboard.
That premise is now outdated. AI coding agents can write code, review code, debug code, and deploy code. They can also test code, if you give them the right tools.
2. MCP Changes the Equation: Your Coding Agent Is the QA Team
MCP (Model Context Protocol) is the standard that lets AI coding agents use external tools. When you add an MCP server to Claude Code, Cursor, or Windsurf, the agent gains new capabilities it can call during its work.
Assrt is an MCP server. Add it to your agent config, and the agent gets three new tools: assrt_test, assrt_plan, and assrt_diagnose. Here is what changes:
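Registering an MCP server is a small config entry. The snippet below is an illustrative Claude Code-style entry; the `assrt-mcp` package name and the `ANTHROPIC_API_KEY` variable are placeholders, not confirmed by Assrt's docs:

```json
{
  "mcpServers": {
    "assrt": {
      "command": "npx",
      "args": ["-y", "assrt-mcp"],
      "env": { "ANTHROPIC_API_KEY": "sk-..." }
    }
  }
}
```

`mcpServers` is the standard key used by Claude Code and Cursor; substitute whatever command and environment variables Assrt's README actually specifies.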
You ask Claude to "add a signup form with email validation." The agent writes the React component, updates the route, and adds form validation. Then, without you asking, it calls assrt_test with your dev server URL and a test plan: "fill in the email field with an invalid address, submit, verify the error message appears." Assrt launches a real Chromium browser, executes the test, and returns a pass/fail report with screenshots. The agent reads the result. If something failed, it fixes the code and retests. All in the same terminal session.
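That build-test-fix loop is simple enough to sketch. The snippet below is a toy model with stubbed functions; `runAssrtTest` and `buildTestFix` are illustrative names, not Assrt's API:

```typescript
// Toy model of the agent's build-test-fix loop described above.
// runAssrtTest is a stub standing in for an assrt_test tool call.
type TestReport = { passedCount: number; failedCount: number };

function runAssrtTest(attempt: number): TestReport {
  // Stub: pretend the first run catches a bug and the retest passes.
  return attempt === 0
    ? { passedCount: 1, failedCount: 1 }
    : { passedCount: 2, failedCount: 0 };
}

function buildTestFix(maxRetries = 3): TestReport {
  let attempt = 0;
  let report = runAssrtTest(attempt);
  while (report.failedCount > 0 && attempt < maxRetries) {
    attempt += 1; // in reality the agent edits the code here, then retests
    report = runAssrtTest(attempt);
  }
  return report;
}

const finalReport = buildTestFix();
```

The point of the cap on retries is that the agent reports a persistent failure to you instead of looping forever.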
With QA Wolf, this loop takes days. You build the feature. You push to staging. Their team writes a test. They run it. They report the bug. You fix it. They retest. With Assrt, the loop takes seconds because the agent that built the feature is the same agent testing it.
This is not a different kind of managed service. It is not a different testing dashboard. It is a fundamentally different model: QA as a tool call inside your development workflow.
Add Assrt to your coding agent
One line in your MCP config. Your agent gets assrt_test, assrt_plan, and assrt_diagnose. MIT licensed, no signup.
Get Started →

3. The Three MCP Tools Inside Assrt
The Assrt MCP server (src/mcp/server.ts) registers three tools on a standard McpServer instance from the official MCP SDK. Each tool has a Zod schema that defines its parameters, so any MCP-compatible agent knows exactly what to pass.
| Tool | What it does | Key parameters |
|---|---|---|
| assrt_test | Runs test scenarios in a real Chromium browser. Returns structured pass/fail results, screenshots, and a video recording. | url, plan (markdown text) or scenarioId (UUID to replay a saved scenario) |
| assrt_plan | Navigates to a URL, reads the page, and auto-generates test scenarios. Useful when the agent does not know what to test yet. | url |
| assrt_diagnose | Takes a failed test report and returns root cause analysis: is it a bug in the app, a flawed test, or an environment issue? Includes a corrected test scenario. | url, scenario, error |
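The parameter contract for assrt_test in the table can be expressed as a plain TypeScript check. This mirrors the table's description, not Assrt's actual Zod schemas, and treating plan and scenarioId as mutually exclusive is an assumption:

```typescript
// Parameter contract for assrt_test as described in the table above:
// url is required; pass either plan (markdown) or scenarioId (UUID).
// Exclusivity of plan/scenarioId is an assumption, not Assrt's schema.
interface AssrtTestParams {
  url: string;
  plan?: string;       // markdown test plan
  scenarioId?: string; // UUID of a saved scenario to replay
}

function validateAssrtTestParams(p: AssrtTestParams): string[] {
  const errors: string[] = [];
  if (!/^https?:\/\//.test(p.url)) errors.push("url must be http(s)");
  if (!p.plan && !p.scenarioId) errors.push("provide plan or scenarioId");
  if (p.plan && p.scenarioId) errors.push("plan and scenarioId are mutually exclusive");
  return errors;
}
```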
The agent uses these tools like it uses any other: it reads the tool description, decides when to call it, and processes the result. After calling assrt_test, the agent receives a JSON report with passedCount, failedCount, per-assertion results, and screenshots as base64 images it can inspect visually.
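A sketch of how an agent might consume that report. Only passedCount, failedCount, and per-assertion results are named in the text; the `assertions` field name and its shape here are assumptions:

```typescript
// Report shape based on the fields named in the text (passedCount,
// failedCount, per-assertion results); `assertions` is an assumed name.
interface AssertionResult { description: string; passed: boolean }
interface AssrtReport {
  passedCount: number;
  failedCount: number;
  assertions: AssertionResult[];
}

// The agent's first question after a run: which assertions need fixing?
function failuresToFix(report: AssrtReport): string[] {
  return report.assertions.filter(a => !a.passed).map(a => a.description);
}

const exampleReport: AssrtReport = {
  passedCount: 1,
  failedCount: 1,
  assertions: [
    { description: "success message visible", passed: true },
    { description: "error shown for invalid email", passed: false },
  ],
};
```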
Under the hood, each test run launches a Playwright browser instance with video recording enabled. The test agent (an AI agent with 15 browser automation tools including navigate, click, type_text, assert, and screenshot) executes your test cases by reading the accessibility tree and deciding what to do next. When done, Assrt generates an HTML video player with speed controls and saves everything to /tmp/assrt/.
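The test agent's execution loop can be imagined as a dispatcher over typed actions. This toy version replaces the real accessibility-tree reading with a stubbed page string; the tool names match those listed above, but the logic is illustrative only:

```typescript
// Toy dispatcher over a few of the browser automation tools named above.
// The real agent drives Playwright against the accessibility tree; here
// the "page" is just a string we check assertions against.
type Action =
  | { tool: "navigate"; url: string }
  | { tool: "type_text"; field: string; text: string }
  | { tool: "click"; target: string }
  | { tool: "assert"; expect: string };

function execute(actions: Action[], pageText: string): { log: string[]; passed: boolean } {
  const log: string[] = [];
  let passed = true;
  for (const a of actions) {
    switch (a.tool) {
      case "navigate":
        log.push(`goto ${a.url}`);
        break;
      case "type_text":
        log.push(`type "${a.text}" into ${a.field}`);
        break;
      case "click":
        log.push(`click ${a.target}`);
        break;
      case "assert": {
        const ok = pageText.includes(a.expect);
        if (!ok) passed = false;
        log.push(`assert "${a.expect}" -> ${ok}`);
        break;
      }
    }
  }
  return { log, passed };
}
```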
QA Wolf also uses Playwright internally, but you never see the tests. They live on QA Wolf's infrastructure, maintained by QA Wolf's team. With Assrt, the Playwright browser runs on your machine, and the test logic is in open source files you can read and modify.
4. Tests as Plain Markdown Files You Own
When assrt_test runs, it writes the test plan to /tmp/assrt/scenario.md. This is not a proprietary format. It is a plain markdown file:
#Case 1: Submit signup form with valid email
1. Navigate to http://localhost:3000/signup
2. Type "user@example.com" into the email field
3. Type "SecurePass123" into the password field
4. Click the "Create Account" button
5. Verify the success message "Account created" is visible
#Case 2: Reject signup with invalid email
1. Navigate to http://localhost:3000/signup
2. Type "not-an-email" into the email field
3. Click the "Create Account" button
4. Verify an error message about invalid email is visible

Your coding agent can read this file, edit it, and re-run the scenario. A file watcher in scenario-files.ts monitors the file and auto-syncs edits back to cloud storage with a one-second debounce. Metadata lives in /tmp/assrt/scenario.json (scenario ID, name, URL), and results go to /tmp/assrt/results/latest.json.
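The one-second debounce means a burst of rapid edits produces a single sync. Here is a deterministic model of trailing-debounce behavior (not the actual scenario-files.ts code): given edit timestamps in milliseconds, it returns when syncs would fire:

```typescript
// Trailing debounce: a sync fires debounceMs after the last edit in a
// burst. Deterministic model of the behavior described, not Assrt's code.
function syncTimes(editTimesMs: number[], debounceMs = 1000): number[] {
  const syncs: number[] = [];
  for (let i = 0; i < editTimesMs.length; i++) {
    const next = editTimesMs[i + 1];
    // Fire only if no further edit arrives within the debounce window.
    if (next === undefined || next - editTimesMs[i] >= debounceMs) {
      syncs.push(editTimesMs[i] + debounceMs);
    }
  }
  return syncs;
}
```

Three edits 200ms apart coalesce into one sync; edits more than a second apart each get their own.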
You can commit these files to git. You can diff them in a PR. You can write them by hand if you want. They contain no vendor-specific syntax, no proprietary identifiers, and no references to Assrt APIs. If you stop using Assrt tomorrow, you still have readable test plans that describe exactly what to verify.
QA Wolf's tests live on QA Wolf's servers. When your contract ends, the tests stay with them. The Playwright scripts their team wrote are not portable because they depend on QA Wolf's infrastructure, their parallelization layer, and their proprietary test runner configuration.
5. Assrt vs QA Wolf: Side by Side
| Dimension | Assrt | QA Wolf |
|---|---|---|
| Model | MCP tools for your existing coding agent | Managed service with a human QA team |
| Price | Free (MIT license, pay your own LLM API costs) | $8,000+/mo ($40-44 per test per month) |
| Feedback loop | Seconds (agent builds, tests, fixes in one session) | Days (push to staging, wait for QA team, get report) |
| Test ownership | Plain markdown files on your filesystem | Playwright scripts on QA Wolf's servers |
| Infrastructure | Local Chromium via Playwright, your machine | QA Wolf's cloud infrastructure |
| Integration | MCP (works with Claude Code, Cursor, Windsurf, any MCP client) | CI/CD webhooks, their dashboard |
| Test format | Markdown (#Case N:), git-committable | Proprietary Playwright scripts, not portable |
| License | MIT | Proprietary SaaS |
| Video recording | WebM with HTML player, saved locally | Available in their dashboard |
QA Wolf is a legitimate service for teams that want fully outsourced QA and can budget $96,000+ per year for it. But most teams searching for a "QA Wolf alternative" are driven by the price, the slow feedback loop, or the lack of control. Assrt addresses all three by eliminating the managed service entirely and making QA a tool your coding agent already knows how to use.
6. Frequently Asked Questions
How does Assrt actually integrate with my coding agent?
Assrt runs as an MCP (Model Context Protocol) server. You add it to your Claude Code, Cursor, or Windsurf config, and the agent gets three new tools: assrt_test, assrt_plan, and assrt_diagnose. When the agent finishes implementing a feature, it calls assrt_test with your dev server URL and a test plan. Assrt launches a real Chromium browser, runs the tests, and returns structured pass/fail results with screenshots. The agent never leaves your IDE.
What does Assrt cost compared to QA Wolf?
Assrt is MIT licensed and free. You pay for your own LLM API calls (Claude or Gemini) to power the test reasoning. A typical test run costs a few cents in API usage. QA Wolf starts at $8,000 per month for 200 tests and scales from there, with a median annual contract of $90,000 according to public pricing data.
Can I run Assrt without any cloud dependency?
Yes. Assrt runs entirely on your machine. The MCP server launches a local Chromium instance via Playwright. Your application's DOM, screenshots, and test data never leave your infrastructure. The only external call is to the LLM API for test reasoning. If you have an on-premises LLM that supports function calling, you can point Assrt at it for a fully air-gapped setup.
What happens to my test scenarios after a run?
Assrt writes the test plan to /tmp/assrt/scenario.md as a plain markdown file using a #Case N: format. You can commit it to git, edit it in any text editor, or let your coding agent modify it with standard file tools. A file watcher auto-syncs edits back to cloud storage if you use the optional Assrt web app. The scenarios contain no proprietary syntax or vendor-specific identifiers.
Does Assrt record video of test runs?
Yes. Every test run records a WebM video of the browser session. After the run completes, Assrt generates an HTML player with playback speed controls (1x through 10x) and opens it in your browser. The video, screenshots, and execution logs are all saved to a local directory under /tmp/assrt/ where you or your coding agent can access them.
Can I use Assrt if I already have Playwright tests?
Assrt is complementary to existing Playwright tests. It adds an AI agent layer on top of Playwright that navigates by reading the accessibility tree instead of relying on CSS selectors. You keep your existing Playwright suite for regression, and use Assrt for exploratory testing, new feature verification, and catching issues that selector-based tests miss.
How do I migrate from QA Wolf to Assrt?
There is no formal migration because Assrt works differently. With QA Wolf, you had a managed team writing and maintaining Playwright tests on your behalf. With Assrt, your AI coding agent generates and runs tests as part of its normal workflow. You do not need to port QA Wolf's test scripts. Instead, describe your test scenarios in plain English using the #Case format, and the AI agent executes them in a real browser.
Replace Your QA Service with Three Tool Calls
Add Assrt to your coding agent's MCP config. The agent that writes your code will test it in a real browser, fix what breaks, and move on. No dashboard, no monthly invoice.