Tool ranking, 2026 edition

The best QA automation tools, ranked by exit cost. Not feature count.

Every other list for this keyword compares the same ten platforms on the same five feature boxes. None of them answers the question you will actually care about a year in: when you stop paying the vendor, what do you still own? This page ranks the field on a single axis, and shows why the most portable option in the room stores its tests in plain Markdown parsed by an eleven-line function built around a single regex.

Matthew Diakonov
14 min read
Rated 4.9 by engineers running Assrt locally
Tests stored as plain Markdown, parsed by an 11-line parser built around a single regex
MIT licensed, runs locally, no vendor cloud required
Runtime is real Playwright MCP, not a closed engine
$900/mo

Minimum price for a private testRigor workspace. Free tier is free because it publishes your tests under The Unlicense, making every case publicly searchable.

testRigor pricing page, April 2026

The whole page, one sentence

Rank QA automation tools by what you keep when you cancel, not by what the vendor wants to sell.

Features look good in a comparison grid and are mostly fungible across the top ten tools. Exit cost is the dimension that diverges by an order of magnitude and is almost never written down. This page writes it down.

The tools everyone is comparing

These are the sixteen names that appear across the top SERP lists for this keyword. The lists differ on ordering, but the set is effectively closed. What the lists do not do is separate them by the property that matters at year two.

Selenium, Playwright, Cypress, Robot Framework, Assrt, testRigor, Mabl, Applitools, ACCELQ, Katalon, Tricentis Tosca, Testsigma, TestCafe, LambdaTest, Panto AI, BrowserStack

Every name in that strip does the core job: it kicks off a browser, runs assertions, and reports pass or fail. The interesting divergence is upstream of execution, in how the test artifact is stored and under what terms.

The five kinds of lock-in you are actually buying

Lock-in is not a single property. When a platform says "your tests are yours" it might mean one of these five is open and the other four are closed. Read the five definitions below and match them against any tool you are evaluating.

Format lock-in

Tests stored in a proprietary .yaml, .ks, or cloud-only record. You cannot grep them, cannot diff them, cannot run them with any other tool. Migration means rewriting every case.

Runner lock-in

The platform's closed execution engine is the only thing that understands your tests. If the vendor sunsets the product, your suite stops working the day the API does.

History lock-in

Screenshots, videos, and pass/fail history live in the vendor's cloud. Export is manual or not offered. Audit trails for regulated teams vanish on cancellation.

Credential lock-in

Managed test accounts, saved browser fingerprints, and OAuth tokens are stored server-side. You cannot take the session, so even portable tests need re-setup.

Org-knowledge lock-in

The 'test writer' role becomes a specialist in the vendor's DSL. Onboarding takes weeks. When the specialist leaves, the suite rots. Plain-English or plain-code suites survive turnover.

The anchor: Assrt's entire test grammar is this regex

Before the ranking, the anchor fact. Every tool that claims "our format is portable" should be able to show you a grammar this small. Assrt's can fit in an SMS.

assrt-mcp/src/core/agent.ts:568-579

That is the entire test parser. One regex, two array operations, one optional fallback. There is no compile step, no bytecode, no proprietary file extension. A scenario is a Markdown block that starts with #Case N: and contains 2-5 sentences. If Assrt disappears tomorrow you can implement a replacement runner in an afternoon, because the format is regex-level simple. Compare that to the reverse-engineering bill for a testRigor Private workspace or a Katalon .ks archive.
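To make the claim concrete, here is what a parser of that shape looks like. This is a hedged sketch, not the verbatim source: the regex is the one Assrt documents, but the function name and the exact fallback behavior are illustrative assumptions.

```typescript
// Sketch of an Assrt-style scenario splitter. The real parser lives in
// assrt-mcp/src/core/agent.ts; splitScenarios and its fallback are
// illustrative, not copied from the source.
const CASE_RE = /(?:#?\s*(?:Scenario|Test|Case))\s*\d*[:.]\s*/gi;

function splitScenarios(plan: string): string[] {
  const parts = plan
    .split(CASE_RE)        // cut the plan at every "#Case N:"-style header
    .map((s) => s.trim())  // drop surrounding whitespace
    .filter(Boolean);      // discard the empty chunk before the first header
  // Fallback: an empty split result is treated as a single scenario.
  return parts.length > 0 ? parts : [plan.trim()];
}
```

That really is the whole grammar: one split, one trim, one filter. Any language with a regex engine can reimplement it in minutes.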

Anchor fact, in numbers

Assrt's test-format cost, line-by-line:

11 lines in the entire scenario parser
1 regex that constitutes the grammar
0 proprietary file extensions
$0/mo minimum to keep tests private

Verifiable by opening assrt-mcp/src/core/agent.ts and assrt-mcp/LICENSE. No sales call required.

Ranked by exit cost

Criterion: what you still own when you stop paying or the vendor disappears. Lower exit cost is better (you keep more, migration is cheaper). The ranking is deliberately not a feature ranking; the same tool can score worst on this axis and best on a feature axis.

Tool | What the vendor keeps | What stays yours
Assrt | Optional hosted dashboard (app.assrt.ai) for sharing runs | Markdown .md scenarios, JSON reports, WebM videos, MIT runner, all local
Raw Playwright | None — fully yours | .spec.ts files, Apache-2 license, any CI you like
Cypress | Cypress Cloud dashboard is optional and paid | .cy.ts files in your repo, MIT core
Selenium | None — fully yours | Your own scripts, Apache-2, any language binding
Robot Framework | None — fully yours | .robot plain-text files, Apache-2, portable across Python runners
Testsigma | Cloud features, history, and AI require paid tier | Open-source core; test cases portable but cloud features gate them
testRigor (Public Open Source) | Your tests are public and searchable on the internet | Plain-English scenarios; fine for OSS, not for private SaaS
testRigor (Private, $900-$7,500/mo) | Workspace, scenarios, runs, history all in vendor cloud | Nothing portable without manual export; engine is closed
Mabl | Tests, runs, visual baselines, flow editor all in Mabl cloud | No source-of-truth artifact lives outside the platform
Applitools | Visual baselines are the value and they stay in their cloud | Your Playwright/Selenium code stays yours; the visual history does not
ACCELQ | Tests and flows in ACCELQ Universe, proprietary format | Export path exists but is manual; runner is closed
Katalon Studio | Test Cases in .ks, TestOps history in cloud, Katalon-specific keywords | Scripts partly portable but reports and Test Cases are not
Tricentis Tosca | Model-based tests in Tricentis format, full enterprise lock-in | Nothing portable; migration is a rewrite
LambdaTest / BrowserStack | Device and browser history stays vendor-side | Your tests stay yours (runner only); session recordings do not

Table reads: name | what the vendor keeps | what stays with you. "Stays yours" column is the full description of the portable artifacts; "vendor keeps" is the part that evaporates on cancellation. The three lowest exit-cost rows are Assrt, raw Playwright, and Selenium.

How to actually score a tool before you buy

Four checks, in order. Run them on every tool on your shortlist during the trial, not after the procurement signature. Each takes under ten minutes.

Four checks that catch exit-cost traps

1. Format check

Open one test file. Can a new hire read it in 30 seconds without the vendor's docs? If yes, format score is high. If it is a row in a database or a .ks blob, format score is zero.

Assrt: plain Markdown. Raw Playwright: .spec.ts. testRigor Private: rows in their cloud. Mabl: rows in their cloud.

2. Runtime check

Pick any scenario. Can I run it with an open-source tool I could install myself? Playwright, Selenium, and Cypress clear this bar. Closed vendor engines do not.

Assrt runs on Playwright MCP (open). testRigor's engine is closed. Mabl's engine is closed. ACCELQ's engine is closed.

3. License check

Read the LICENSE file of the runner. MIT or Apache-2: safe to fork. 'The Unlicense' on a public-only plan: fine for OSS, bad for private work. Commercial EULA: locked.

assrt-mcp/LICENSE is MIT. Playwright is Apache-2. Cypress core is MIT. testRigor free tier is The Unlicense but public.

4. Export check

Can I get runs out as JSON, screenshots as PNG, videos as MP4/WebM in under ten minutes? If the answer involves a CSV export wizard, your history is stranded.

Assrt writes JSON reports and WebM videos to /tmp/assrt directly. No UI export step. The filesystem is the export.
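Because the export is just a directory, a migration script stays trivial. A minimal sketch in Node/TypeScript: the idea of grouping artifacts by extension is mine, and the layout of the run directory is an assumption, not documented structure.

```typescript
import * as fs from "node:fs";
import * as path from "node:path";

// Group every artifact in a run directory by file extension so it can
// be copied into a repo or pushed to object storage. Assumes a flat
// directory of .json reports, .png screenshots, and .webm videos.
function collectArtifacts(dir: string): Map<string, string[]> {
  const byExt = new Map<string, string[]>();
  for (const name of fs.readdirSync(dir)) {
    const ext = path.extname(name).toLowerCase(); // ".json", ".png", ".webm"
    const files = byExt.get(ext) ?? [];
    files.push(path.join(dir, name));
    byExt.set(ext, files);
  }
  return byExt;
}
```

Point it at /tmp/assrt (or wherever your runs land) and you have the entire "export wizard" in a dozen lines.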

Portability self-check

Run this checklist against whichever tool you lean toward. Seven checks. Any that fail are real exit-cost liabilities you will pay for later.

What a portable QA suite looks like

  • Tests are in a format I can open in any text editor (Markdown, Gherkin, .spec.ts)
  • Tests live in my repo under version control, not only in the vendor's cloud
  • The runner is MIT, Apache-2, or another permissive open-source license
  • Screenshots export as PNG and videos as WebM or MP4, not a proprietary viewer
  • Pass/fail history exports as JSON or CSV I can load into any BI tool
  • No vendor-managed credentials the suite depends on (service accounts are in my vault)
  • The docs explicitly describe a 'leave the platform' path, not just onboarding

What the numbers look like at cancellation day

Assume a team running a mid-size suite that has accrued a year of history. The numbers below are the practical delta between the closed-cloud path and the portable path when the subscription ends.

800 scenarios in a mature suite
0 months of history retained after cancel on closed platforms
0 bytes of Assrt history you lose on cancel
11 lines needed to re-implement the Assrt parser
$900/mo testRigor Private minimum
$7,500/mo quoted top-end for hosted AI platforms
$0 Assrt MCP and CLI license cost
100% of Assrt plans stored in your own git repo

Assrt's 11-line parser and 0-byte history loss are the whole point. Plain Markdown, plain git, plain local runner.

When the closed-cloud path actually wins

Fair counter. There are real scenarios where the closed-cloud tools justify their exit cost: heavy enterprise compliance workflows tied to a vendor's audit trail, very large non-engineer QA teams that benefit from a vendor-specific keyword library, and visual regression workloads where Applitools' baseline infrastructure is genuinely hard to replicate. If you are in one of those buckets, pay the subscription and know exactly what you are paying for. If you are not, default to a portable format and revisit later.

Start with the zero-lock-in option.

Assrt is MIT, local, and stores tests as plain Markdown. Point assrt_plan at your URL, run the Markdown with assrt_test. If you later decide you need an enterprise platform, you walk away with everything you wrote.

Install: npx assrt-mcp

Specific questions about exit cost, pricing, and portability

Why rank QA automation tools by exit cost instead of features?

Features are what the vendor wants you to compare on. Exit cost is what you wish you had compared on two years in, when your suite is 800 scenarios deep and the procurement team is looking at next year's renewal. Every top SERP result for this keyword ranks by features, AI buzzwords, or pricing tiers. None of them answer: when the contract ends, what do you still own? The answer differs by an order of magnitude across this list. Tests stored as plain Markdown in your repo and driven by open-source runners (Assrt, raw Playwright, Selenium, Cypress) stay yours. Tests stored in Mabl's cloud, in a testRigor private workspace, in ACCELQ's Universe, or in Katalon's proprietary .ks files are the vendor's asset that you pay for access to. That is the first real decision.

What does Assrt actually store, and where?

A plan is plain Markdown. Each scenario is a block that starts with #Case N: and is followed by 2-5 sentences in English. The parser at assrt-mcp/src/core/agent.ts:569 is a single regex: /(?:#?\s*(?:Scenario|Test|Case))\s*\d*[:.]\s*/gi. That is the entire grammar. No schema, no compiler, no bytecode, no proprietary extension. You commit the .md files next to your source code. Runs and screenshots land in /tmp/assrt on your disk; videos are WebM. If you want them in a database, the open-source server has an optional Firestore sync; you can turn it off and nothing breaks.
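For concreteness, this is what a committed plan file can look like. The flows, URL, and wording below are invented for illustration; only the #Case N: framing and the 2-5 sentence shape come from the format described above.

```markdown
#Case 1: Open https://example.com/login. Type a valid email and password
into the form and click "Sign in". Verify the dashboard greeting is visible.

#Case 2: From the dashboard, open the account menu and click "Log out".
Verify the login form is shown again.
```

Note what is absent: no selectors, no page objects, no imports. The agent resolves the English against a live snapshot at run time, which is why the stored artifact can stay this small.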

testRigor has a free Public Open Source plan. Is that really free?

It is free as in price. It is not free as in private. Per testRigor's own pricing page, the Public Open Source plan licenses your tests under The Unlicense and makes all test cases and results publicly searchable on the internet. That is fine for OSS projects. It is not fine for a private SaaS whose test plan describes internal flows, admin endpoints, or regulated user journeys. To make that suite private you move to testRigor Private at $900/mo for one suite and one parallelization; scaling up runs you into higher tiers. You are paying a subscription to keep the tests you wrote out of search engines. With Assrt you write the Markdown, commit it to your private repo, and you are done.

Which tools score best on exit cost and which score worst?

Best exit cost (you keep the tests): raw Playwright, raw Cypress, raw Selenium, Robot Framework, and Assrt. In all five the artifacts are text you wrote (or that a tool generated and handed you in an open format) sitting in your repo. Worst exit cost: Mabl, testRigor Private workspaces, ACCELQ Universe, Tricentis Tosca, Applitools (for the visual baselines specifically), Katalon Studio Enterprise (scripts are portable, but Test Cases in the .ks format and the TestOps cloud history are not). Middle: testRigor Public Open Source (free but public), Testsigma (open-source core but cloud features gated), LambdaTest (runner only; your tests are still yours, but your device history is not). The ranking is specific, not hand-wavy — the criterion is what the vendor explicitly keeps if you stop paying.

What does Assrt actually cost and is it really free?

The MCP server and CLI are MIT-licensed and free. Running them costs zero. The only usage-based cost is the LLM API calls the agent makes while interpreting a plan: by default this is claude-haiku-4-5-20251001 with your own Anthropic API key. A typical #Case costs cents, not dollars. You can also point it at Gemini. There is an optional hosted dashboard at app.assrt.ai for sharing runs with teammates; that one is cloud and optional and your tests are not locked in it — the source of truth stays the Markdown files in your repo. Compare to testRigor Private at $900/mo minimum and AI-first enterprise platforms that quote $2k-$7.5k/mo for mid-size teams.

I already write Playwright specs. Is Assrt a replacement or a complement?

Complement. Raw Playwright is the gold standard for full-control E2E tests that a human engineer authors and reads. Assrt is for the layer above: the flows a product manager or a founder can describe in English but does not want to sit down and turn into a spec file. Under the hood Assrt drives real Playwright MCP calls (navigate, click, type_text, snapshot, press_key), so the runtime is the same browser automation you already trust. Many teams run both: a handful of Assrt #Case files covering critical smoke paths written by non-engineers, and a deeper Playwright suite written by QA engineers. Both are portable; both are yours.

What about self-healing tests? The AI tools all advertise that.

Self-healing has two flavors. Vendor flavor: when a selector drifts, the platform's closed model rewrites the test in their cloud and you never see the diff. That is fast but you lose the review step. Assrt flavor: when an assrt_test run fails, you call assrt_diagnose which returns a four-section response, the last section of which is a literal #Case block written in the same grammar as your plan. You paste it over the failing one and re-run. The repair is a diff you can read and a text edit you can commit. That is not as flashy but it is auditable, reviewable, and survives the vendor disappearing. The diagnose contract lives in assrt-mcp/src/mcp/server.ts around line 240.

How do I actually benchmark tools on exit cost before buying?

Four questions, in order. (1) In what file format are tests stored? If the answer is a vendor DSL, a proprietary extension, or a row in a cloud database, exit cost is high. (2) Where do tests execute? If the runner is open-source (Playwright, Selenium, Cypress) you can always run them yourself; if it is a proprietary engine, you cannot. (3) What license? MIT or Apache-2 means you can fork if the vendor dies. 'The Unlicense' is fine for public tests, not private ones. A commercial EULA with a cloud-only dependency is the highest exit cost. (4) Does the vendor let you export runs, screenshots, and videos in open formats (JSON, PNG, WebM, MP4)? If not, your history is stranded. Apply these four to every tool on your shortlist before the feature comparison.
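The four questions reduce to a rubric you can encode and run over your whole shortlist. A sketch; the field names and the equal weighting are my own conventions, not a published scoring model.

```typescript
// Score a tool's portability from the four exit-cost checks.
// All four fields and the equal weighting are illustrative assumptions.
interface ExitCostChecks {
  openFormat: boolean;        // (1) tests are plain text you can grep and diff
  openRunner: boolean;        // (2) executes on an OSS engine you can self-host
  permissiveLicense: boolean; // (3) runner is MIT/Apache-2, forkable
  openExport: boolean;        // (4) runs export as JSON/PNG/WebM without a wizard
}

// 0 = worst exit cost (everything locked), 4 = best (everything portable).
function portabilityScore(c: ExitCostChecks): number {
  return [c.openFormat, c.openRunner, c.permissiveLicense, c.openExport]
    .filter(Boolean).length;
}
```

Raw Playwright or Assrt would score 4 under this rubric; a closed-cloud platform with a manual export path lands at 0 or 1.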

Is a list of Playwright-compatible tools the same as a list of portable tools?

No. A number of AI-first testing platforms wrap Playwright internally and then store scenarios in their own format that you cannot export as Playwright code. The promise 'built on Playwright' is about execution, not portability. The portable tools on this list are the ones where the test source artifact (the thing you commit to git) is either raw Playwright/Cypress/Selenium code or a plain-text file (Markdown, Gherkin, Robot Framework .robot) that any compatible runner could execute. That distinction is what separates 'cheap to migrate off' from 'cheap to use this year.'

Which tool should a small team pick if they have never automated before?

If the team has no engineers or only one engineer, start with Assrt's three-tool MCP loop: point assrt_plan at your URL, run the returned #Case blocks with assrt_test, and paste the Corrected #Case from assrt_diagnose when something fails. Zero learning curve, zero lock-in. If the team has a QA engineer and expects to scale past a few hundred scenarios, use both: Assrt for the plain-English critical-path smoke tests, raw Playwright for the deeper suite. If the team is enterprise-sized and already has a TestRail or Jira workflow the QA lead will not abandon, evaluate Katalon or ACCELQ on their own merits but go in knowing the exit cost is real.

The portable option

Your tests are Markdown. Your runner is MIT. Your history is your filesystem.

11 lines of parser, 0 cents of subscription, full control.

Try Assrt free
