Playwright tools comparison, by artifact

The axis every Playwright tools comparison skips: what's left on your disk.

Search this keyword and you will get the same page ten times. Tools bucketed by category. Reporters. Orchestrators. Grids. AI assistants. Pricing tiers. None of them answer the one question that matters when you cancel a plan or the vendor folds: what exactly is the test file, and does it survive without the tool that made it? This page sorts the whole landscape into four artifact camps, then shows why Assrt is the only one where the test is a scenario.md on your own disk, parsed by one regex at agent.ts:621.

M
Matthew Diakonov
12 min read
4.9from Assrt MCP users
Tests saved as Markdown you can grep, diff, email
One regex at agent.ts:621 parses the entire format
MIT-licensed npm package, no vendor lock-in, no dashboard

The anchor fact

Assrt's whole test-file grammar is one regex, not a 100-page DSL reference.

Clone @assrt-ai/assrt, open node_modules/assrt-mcp/dist/core/agent.js, grep for scenarioRegex. You will land on a single line that splits your Markdown file on #Case 1:, Scenario 2., or Test:. That is the entire thing you have to learn. No YAML schema, no project file, no page object model.

The four artifact camps

Every Playwright-adjacent tool, whether it calls itself a test runner, a QA platform, or an AI copilot, ends up writing something to represent a test. That something lives in one of four places. Pick the camp first. The feature grid is secondary.

Code camp

You write a .spec.ts or .spec.py file. The tool runs it. The artifact is a text file your engineers own and review in pull requests.

Artifact on disk
tests/login.spec.ts
Tools in this camp
Playwright Testplaywright-ctPuppeteerCypress

Recording / YAML camp

A browser extension records clicks, serialises them to a vendor-flavoured YAML or JSON recording. You edit the recording, not the UI. Portable only within the same vendor.

Artifact on disk
flow-0342.recording.yml
Tools in this camp
Selenium IDEChrome RecorderQA Touch flows

Cloud DSL camp

Your test lives as a row in the vendor's database. Edit it in their dashboard, run it on their grid, export only if their plan allows. AI maintenance happens server-side.

Artifact on disk
row-8f2c in vendor DB (dashboard only)
Tools in this camp
QA WolfTestimMabltestRigorFunctionize

Markdown camp

The test is a Markdown file with #Case blocks. An agent reads it, drives Playwright, and writes a verdict JSON beside it. You can grep, diff, and email it. This is where Assrt sits.

Artifact on disk
/tmp/assrt/scenario.md
Tools in this camp
Assrt (@assrt-ai/assrt)

Intent to verdict: the pipeline Assrt wraps around Playwright

The other camps all share a pipeline shape. Something authors a test (hand-coded, recorded, generated), something runs it against a browser, something writes a result. The Assrt pipeline looks like this, with the Markdown file as the one stable interface between humans and the agent.

Your intent becomes a Markdown file, then a verdict JSON

You (or a PM)
assrt_plan
saved scenarioId
assrt_test
scenario.md
latest.json
recording.webm

Side by side: vendor YAML recording vs Assrt Markdown

These two files describe the same login flow. One is what a cloud-DSL vendor typically serialises a recorded session into. The other is what you would hand-write (or auto-generate with assrt_plan) for the same scenarios.

Same login flow, two artifacts

# vendor-platform flow recording
version: 2
flow_id: "8f2c4a1b-login"
metadata:
  recorded_at: "2026-04-12T14:22:09Z"
  recorded_by: "qa-intern-03@acme.test"
steps:
  - type: "visit"
    url: "https://app.example.com/login"
    wait_strategy: "smart_wait"
    smart_wait_timeout_ms: 30000
  - type: "click"
    selector:
      strategy: "vendor-ai-anchor"
      anchor_id: "ANC_8f2c_0003"
      fallback_xpath: "//button[contains(@class,'btn-primary')]"
    retry:
      max_attempts: 3
      backoff_ms: 500
  - type: "type"
    selector: { anchor_id: "ANC_8f2c_0007" }
    value: "{{ $env.TEST_EMAIL }}"
  - type: "assert_url"
    pattern: "/dashboard"
    timeout_ms: 15000
44% fewer lines of vendor syntax to maintain

The vendor YAML references anchor IDs (ANC_8f2c_0003) that only exist inside the vendor's object repository. The Markdown references nouns (the email field, the Sign in button) that exist in the page itself. When the app changes, the first file breaks silently; the second keeps working because the agent resolves the noun against the live accessibility tree on every run.

The portability row the other comparisons skip

Most “Playwright alternatives” grids show 20 rows: parallelisation, reporting, flaky-test detection, AI maintenance, SSO, audit logs. They are accurate. They also distract from the one row that decides what you own at the end of the contract.

FeatureTypical cloud-DSL vendorAssrt
Where the test livesRow in the vendor database, dashboard-only/tmp/assrt/scenario.md on your disk
What gets committed to gitA JSON export reference, if export is on your planThe Markdown file itself, diff-friendly, review-friendly
Parser grammar you have to learnProprietary DSL, 60+ action keywords, YAML schemaOne regex at agent.ts:621 that splits on #Case / Scenario / Test
Runtime when you want to re-run at 2amVendor grid; depends on their uptime and your plan quotaYour Chromium via playwright v1.49.1 on your own machine
License and source visibilityCommercial SaaS, closed source, per-seat pricingMIT on npm as @assrt-ai/assrt, readable in dist/ after install
What happens if the vendor foldsTests gone, export only if you paid for the plan tierYour Markdown still runs; the CLI is already on your disk
Who can read the test without trainingQA engineers in the vendor dashboardAnyone literate in English; Markdown is the grammar

The whole parser, on the screen, with line numbers

This is the function at assrt-mcp/src/core/agent.ts, lines 620 to 628. Every Markdown test you ever run passes through it before the agent sees anything. Copy it into a regex tester if you want; it is public, it is MIT, it is the whole grammar.

assrt-mcp/src/core/agent.ts

One run, seven files written, two of them yours to commit

Running assrt_test once writes this to disk. The paths are deterministic; the second Markdown file is the only artifact you actually need to carry forward.

assrt test

What each written file is for

1

/tmp/assrt/scenario.md

The last Markdown plan the CLI parsed. Identical to what you passed in (or what assrt_plan generated).

#Case 1: Sign in and land on the dashboard ...
2

/tmp/assrt/results/latest.json

Structured verdict. One object per scenario, each with name, passed, toolCalls[] in order, and paths to video and screenshots.

{ "scenarios": [ { "name": "...", "passed": true, "toolCalls": [...], "video": ".../recording.webm" } ] }
3

/tmp/assrt/<runId>/video/recording.webm

Full browser recording with the injected cursor dot, click ripple, and keystroke toast overlays. Every run, no flag needed.

4

~/.assrt/browser-profile

Persistent Chromium profile so Case 2 and onward stay logged in after Case 1 authenticates. Cookies and localStorage survive across runs.

0Regex that parses the whole test format
0Files written to disk on a typical run
0Lines of vendor YAML you maintain
0Files you can commit to git (md + json)

The full landscape, name by name

Every tool you will see recommended on a best Playwright tools page belongs in one of the four camps above. The chips below are the names that show up in practice; scroll them and place each one in the right column.

Playwright TestCypressPuppeteerSeleniumSelenium IDEChrome RecorderQA WolfTestimMablFunctionizetestRigorVirtuosoChecksumOctoMindACCELQCurrentsApplitoolsAppetizeAssrt

The uncomfortable thought experiment

Imagine your current test vendor shuts down on a Friday.

If you are in the code camp, Monday is fine; your .spec.ts files still run under Playwright. If you are in the YAML camp, you have hope; the recording format is at least a file you own, even if re-implementing the runner is a month of work. If you are in the cloud-DSL camp, Monday is a rewrite; your tests lived in a database you no longer have access to, and an export may or may not have been included in your plan. If you are in the Markdown camp, Monday looks identical to Thursday: /tmp/assrt/scenario.md is still there, and npx @assrt-ai/assrt still runs it.

1 regex

The axis this table cares about, and the only one most other grids skip.

Artifact portability

When the Markdown camp fits

  • You want the test file to live in the same repo as the product code.
  • Your PM, designer, or founder needs to read and edit tests without learning a DSL.
  • You run on a machine you control (local dev, self-hosted CI, air-gapped environment).
  • You care that the artifact survives the tool going away.
  • You already use Claude Code, Cursor, or Zed and can point them at an MCP server.

When it does not

  • You need a vendor-operated grid running thousands of parallel browsers today.
  • Your org standardised on a closed testing platform and will not approve new tooling.
  • You want a managed dashboard for non-engineers with SSO, SCIM, and SLA guarantees.
  • You cannot bring your own LLM key (Anthropic, Gemini) for any reason.

Want the artifact-portability pitch in a live demo?

Twenty minutes, one scenarios.md file, one real app. You keep the file at the end.

Book a call

Frequently asked questions

What do you mean by 'artifact camp' and why does it matter for a Playwright tools comparison?

Every Playwright tool eventually produces something that is 'the test'. In the code camp the artifact is a .spec.ts file you commit to git; in the YAML/recording camp it is a vendor-specific recording file with click coordinates and DOM paths; in the cloud DSL camp it is a row in a vendor database accessible only through their dashboard; in the Markdown camp (where Assrt lives) the artifact is a plain text file with #Case blocks that a human can read, edit, and run. Most comparison pages rank tools by reporter quality or per-seat price. The artifact camp is the axis that decides whether you still own your tests the day the vendor changes plan tiers or shuts down.

What exactly does Assrt save to disk after one test run?

Two files, deterministic paths. The scenario plan goes to /tmp/assrt/scenario.md as the Markdown you passed in (or the Markdown that assrt_plan generated). The verdict goes to /tmp/assrt/results/latest.json as a structured object with scenario name, pass/fail, tool calls in order, and paths to the video (webm) and screenshots taken during the run. You can cat either file, paste it into a PR, email it to a teammate, or diff it against yesterday. That is the whole storage layer. Nothing is synced to a vendor unless you opt in to app.assrt.ai cloud sync.

How does Assrt parse a Markdown file into runnable scenarios?

One regex, at /Users/matthewdi/assrt-mcp/src/core/agent.ts line 621: /(?:#?\s*(?:Scenario|Test|Case))\s*\d*[:.]\s*/gi. It splits your Markdown on any header of the shape '#Case 1:', 'Scenario 2.', 'Test:', etc. Each chunk between matches becomes a scenario. The matching header name is kept as the scenario label. That is the entire grammar you have to learn. There is no project file to maintain, no page object model to keep in sync, and no YAML schema to validate.

How is this different from 'best AI Playwright tools' lists that recommend QA Wolf, Testim, or Mabl?

Those tools are in the cloud DSL camp. They record your clicks in a browser extension, store the result as a proprietary object in their database, and give you a dashboard to view, edit, and re-run. You cannot grep them, diff them in git, or run them without their SaaS. Their AI does maintenance on their object format. Assrt's AI does execution on a Markdown file you own. Both ship video recordings; only one lets you keep the source when you cancel.

Which Playwright version does Assrt run under the hood?

It wraps @playwright/mcp v0.0.70 (Anthropic's MCP bridge for the official Playwright project) plus playwright v1.49.1 for the browser binaries. Chromium, Firefox, and WebKit are all available through the wrapper. The agent that picks which Playwright action to run for each Markdown step is Claude Haiku 4.5 by default, declared as DEFAULT_ANTHROPIC_MODEL at agent.ts line 9. You can swap in Gemini via --model or an env var.

Can I actually re-run a saved scenario without the AI deciding the steps again?

Yes. Every assrt_test call writes a UUID-keyed scenario file. Pass scenarioId instead of plan on the next call: assrt_test({ url, scenarioId: "uuid-from-last-run" }). Stored scenarios live on disk (and optionally in the app.assrt.ai cloud sync table if you opt in). The agent reads the same Markdown back and re-plays it. That is how you build a regression suite with this tool: run once with a plan, keep the UUID, re-run it on every PR.

Where does the $7,500/month number come from in the comparison?

It is the upper end of mid-market annual contracts for enterprise AI testing SaaS (Testim, Mabl, Functionize, ACCELQ, testRigor, Virtuoso, Checksum). Public marketing pages do not list these prices; the number is drawn from published procurement reviews and self-reported teardown threads. Assrt is MIT-licensed on npm as @assrt-ai/assrt. The baseline cost is the Anthropic API bill for Claude Haiku calls plus your own Chromium runtime on your own machine. The delta in per-run cost is measured in cents rather than enterprise seats.

What does the comparison page miss that I should still evaluate myself?

Three things. One, team handoff: if your testers do not read code, the Markdown camp is great but you still need to agree on a #Case style guide. Two, parallel scale: cloud tools throw hundreds of browsers at a suite; Assrt reuses one Chromium by default, so wide fan-out on a CI matrix is on you. Three, integrations: the Markdown + MCP story is strongest inside an editor that speaks MCP (Claude Code, Cursor, Zed). If your org standardised on a closed-box QA platform, migration is a people problem, not a tooling one.

Can I verify the claims on this page without taking your word for it?

Yes. npm install @assrt-ai/assrt, then grep -n 'scenarioRegex' node_modules/assrt-mcp/dist/core/agent.js (or the src/ file if you cloned). You will see the exact one-line regex quoted in the anchor section above. Run npx @assrt-ai/assrt test --url https://example.com --plan '#Case 1: Visit the homepage and check the title'. Look at /tmp/assrt/scenario.md afterwards; the file is the Markdown you passed in. That is the entire portability promise, reproducible on your own machine in under a minute.

How did this page land for you?

React to reveal totals

Comments ()

Leave a comment to see what others are saying.

Public and anonymous. No signup.