Reddit deep dive, r/QualityAssurance

Your AI regression suite should fit in one Markdown file.

Every vendor page about AI-generated regression tests stops at the word "generation." Nobody shows you the file. With Assrt the file is /tmp/assrt/scenario.md, the format is five lines of Markdown per case, and re-running it next week is one argument: a UUID.

Matthew Diakonov
11 min read
4.8 from Assrt MCP users
Regression plan stored as plain Markdown, not YAML
Re-run by scenarioId UUID, no recompile step
Open-source MCP server, self-hosted Chromium, zero cloud requirement

The one-line version

An AI code regression test, in Assrt, is just a Markdown #Case plan with a UUID. Generate it once, re-run it forever by ID.

No proprietary YAML, no compiled .spec.ts file to keep in sync, no vendor cloud to store it in. The file is yours, on your disk, in your git repo if you want it there.

What the top ten SERP pages leave out

Search "ai code regression test generation" and you get the same ten pages: Momentic, QA Wolf, Katalon, DeviQA, Parasoft, TestSprite, Virtuoso. Every one of them describes the category in the same four bullets. AI analyzes diffs. AI picks tests to run. AI self-heals broken selectors. AI generates cases from requirements.

Not one of them shows you what the generated test actually is. Is it a .spec.ts file? A YAML document? A record in a cloud database? Can you open it in VS Code? Can you grep it? Can you paste it in a pull request and have a teammate review it without logging into the vendor dashboard? The abstraction hides the artifact, and the artifact is the whole point of regression testing, because regression only works if the same test can run again next week.

This page does the opposite. We are going to show you the exact file Assrt writes, the exact format of each test case, the exact JSON metadata that makes it re-runnable by ID, and the exact file watcher that keeps the file in sync when an agent edits it.

The file: /tmp/assrt/scenario.md

Here is a real regression plan Assrt would write after a single assrt_plan call against a signup page. Plain Markdown. Every case is self-contained. Every step is an imperative instruction the browser agent can execute verbatim.

/tmp/assrt/scenario.md
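The file itself is not reproduced on this page, so here is an illustrative plan in the same shape. The case names and steps below are hypothetical, but they follow the pinned format: #Case headers, a few imperative steps each, and verifications you can observe in the page.

```markdown
#Case 1: Submit the signup form empty
1. Navigate to the signup page
2. Click the "Sign up" button without filling any field
3. Verify the text "Email is required" is visible

#Case 2: Sign up with a valid email
1. Type "qa+demo@example.com" into the email field
2. Type a valid password into the password field
3. Click the "Sign up" button
4. Verify the URL contains "/welcome"
```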

The format is pinned by PLAN_SYSTEM_PROMPT in assrt-mcp/src/mcp/server.ts, line 219. The prompt enforces: 3 to 5 actions per case, 5 to 8 cases per plan, each case independent, every verification observable (visible text, URL, element presence). Those constraints exist so a single human or a single agent can read the entire regression suite for a feature in under sixty seconds and tell you what it covers.

The sidecar: scenario.json

Alongside the Markdown, a tiny JSON file holds the metadata that makes the plan re-runnable without regeneration. This is the file that turns any test into a regression test.

/tmp/assrt/scenario.json
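The sidecar is not shown inline here; an illustrative version with made-up values follows, using the field names this page describes (id, name, url, updatedAt):

```json
{
  "id": "3f2c9b1e-8d4a-4f6b-9c2d-7e5a1b0c4d8f",
  "name": "Signup page regression",
  "url": "http://localhost:3000/signup",
  "updatedAt": "2025-01-12T09:14:03Z"
}
```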

The id is a UUID issued on first run. From that point on, any call to assrt_test can swap out the plan argument for scenarioId and the same Markdown will be fetched, materialized back to disk, and handed to the Playwright agent. Nothing gets re-generated. The test you wrote in week one is literally the test that runs in week fifty.

re-run.ts
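The snippet behind that caption is not reproduced here. As a minimal TypeScript sketch: the assrt_test argument names (url, plan, scenarioId) come from this page, while the helper function and type are hypothetical, added only to show the re-run call shape.

```typescript
// Hypothetical: the argument shape assrt_test accepts.
// First run passes `plan`; every later run passes only the UUID.
type AssrtTestArgs = { url: string; plan?: string; scenarioId?: string };

function buildRerunArgs(url: string, scenarioId: string): AssrtTestArgs {
  // No `plan` on purpose: the saved Markdown is fetched by UUID,
  // materialized back to /tmp/assrt/scenario.md, and handed to the agent.
  return { url, scenarioId };
}

console.log(
  JSON.stringify(
    buildRerunArgs(
      "http://localhost:3000/signup",
      "3f2c9b1e-8d4a-4f6b-9c2d-7e5a1b0c4d8f"
    )
  )
);
```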

How the regression loop actually flows

The reason this works without a compilation step, a self-healing pass, or a selector registry is that the Markdown is the spec and the Playwright MCP agent is the compiler, re-invoked fresh every run. Here is the plumbing.

One plan in, one pass/fail out

Claude Code, a CI job, or the CLI (npx assrt run) → assrt_test(scenarioId) → /tmp/assrt/scenario.md → Chromium via @playwright/mcp → /tmp/assrt/results/latest.json

The perception loop, step by step

1. Invocation: assrt_test is called with either a plan string or a scenarioId. If a scenarioId is provided, the plan is fetched from the central store, cached locally at ~/.assrt/scenarios/<uuid>.json, and written to /tmp/assrt/scenario.md.

2. Browser launch: a Playwright MCP session starts in headless Chromium at 1600x900 by default. If you pass extension: true, it connects to your running Chrome instead, so authenticated flows survive without replaying a login.

3. Agent reads the Markdown: the inner agent (Claude Haiku by default, Gemini optional) receives the #Case list as its user message. It works case by case, calling snapshot, click, type_text, and assert from the 18-tool Playwright MCP vocabulary.

4. File watcher runs in parallel: an fs.watch handler on scenario.md debounces for 1000ms and pushes any edits back to Firestore via PATCH /api/public/scenarios/<id>. A human or a second agent can tighten a step mid-run; the next invocation picks it up.

5. Results persist with the run ID: the final report writes to /tmp/assrt/results/latest.json and /tmp/assrt/results/<runId>.json. Historical runs are never overwritten, so you can diff week-over-week pass rates against the same scenarioId.

Try it in the next terminal you open

npx assrt-mcp installs the server. Your coding agent calls assrt_plan to generate the first plan, then assrt_test to run it. Every future run reuses the UUID. That is the whole regression workflow.
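For MCP clients in the Claude family, registration is typically a one-entry config block. The shape below is the common mcpServers convention rather than anything Assrt-specific; exact keys vary by client.

```json
{
  "mcpServers": {
    "assrt": {
      "command": "npx",
      "args": ["assrt-mcp"]
    }
  }
}
```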

Install from npm

The 1-second watcher that makes the Markdown alive

This is the mechanic that turns a static plan file into a collaborative regression suite. When writeScenarioFile runs, it calls startWatching, which registers a Node fs.watch handler on scenario.md. The handler debounces for one second after the last keystroke and then syncs to Firestore.

assrt-mcp/src/core/scenario-files.ts
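The file's contents are not reproduced on this page. Here is a minimal sketch of the mechanism the prose describes: the 1000ms debounce and the lastWrittenContent guard come from the article; the function bodies, the stubbed remote sync, and the unref call are assumptions of this sketch, not the real implementation.

```typescript
import * as fs from "node:fs";

// Sketch only: remembers our own last write so the watcher
// can ignore echoes of it (no file <-> remote feedback loop).
let lastWrittenContent: string | null = null;
let debounce: ReturnType<typeof setTimeout> | null = null;

function isEcho(content: string): boolean {
  return content === lastWrittenContent;
}

function writeScenarioFile(path: string, content: string): void {
  lastWrittenContent = content;
  fs.writeFileSync(path, content);
  startWatching(path);
}

function startWatching(path: string): void {
  const watcher = fs.watch(path, () => {
    if (debounce) clearTimeout(debounce);
    // Wait one second of quiet after the last edit before syncing.
    debounce = setTimeout(() => {
      const current = fs.readFileSync(path, "utf8");
      if (!isEcho(current)) pushToRemote(current);
    }, 1000);
  });
  watcher.unref(); // sketch-only: let short-lived processes exit cleanly
}

function pushToRemote(content: string): void {
  // Real server: PATCH /api/public/scenarios/<id>. Stubbed in this sketch.
  void content;
}
```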

Three things follow from this tiny handler. First, you can edit the Markdown in VS Code while a run is in progress; the running agent does not see the change (it already has the old plan in context), but the next run will. Second, another agent, say a Claude Code session that just shipped a feature, can programmatically append a new #Case to the file using fs.appendFileSync, and the case becomes part of the permanent regression suite one second later. Third, the lastWrittenContent guard on line 44 means the watcher ignores echoes of its own writes, so there is no feedback loop even though both the file and the remote store are being updated.

What it looks like in your terminal

Here is the full generate-then-regress loop you would run locally on a pull request branch. Every line below is a real step you can reproduce.

regression-loop.sh
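The script is not reproduced on this page. A sketch of the loop follows: the MCP calls happen inside your agent, so they appear only as comments, and the plan content written below is an illustrative stand-in, not real Assrt output.

```shell
#!/usr/bin/env bash
set -euo pipefail

# 1. Your agent calls assrt_plan against the branch's dev server,
#    which writes the suite to disk. Illustrative stand-in content:
mkdir -p /tmp/assrt
cat > /tmp/assrt/scenario.md <<'EOF'
#Case 1: Submit the signup form empty and verify the inline validation error
#Case 2: Sign up with a valid email and verify redirect to /welcome
EOF

# 2. Review the suite like any other file on disk:
grep -c '^#Case' /tmp/assrt/scenario.md   # how many cases the suite covers

# 3. Your agent re-runs via assrt_test with the scenarioId from
#    /tmp/assrt/scenario.json; per-run results land under:
ls /tmp/assrt/results/ 2>/dev/null || true
```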

The numbers, grounded in the code

None of these are invented. Every one comes from a constant in assrt-mcp or from the prompt that defines the plan format. You can grep them yourself.

1 Markdown file per regression suite
1000ms file watcher debounce
8 cases per plan, max
5 steps per case, max

And one more

0 lines of proprietary YAML or DSL in the regression artifact. The test is the Markdown. The runtime is Playwright. The storage is a file.

Assrt vs. the cloud-only regression platform

Most AI regression platforms ship as a SaaS tool where the test artifact lives in their database, selectors are patched by their model, and re-runs go through their scheduler. That stack works until you want to grep your tests, run them offline, check them into your repo, or leave the vendor. The row-by-row picture:

| Feature | Typical cloud AI regression tool | Assrt |
| --- | --- | --- |
| Test artifact format | Proprietary YAML or cloud-only record | Plain Markdown #Case file at /tmp/assrt/scenario.md |
| Re-run mechanism | Vendor scheduler, web dashboard | assrt_test({url, scenarioId}) from any MCP-aware agent or CLI |
| Edit after generation | Web form or visual editor, round-trip to cloud | Any text editor; fs.watch syncs in 1 second |
| Runtime | Vendor-hosted headless browser fleet | Local Chromium via @playwright/mcp; extension mode reuses your real Chrome |
| Offline execution | Not supported | ~/.assrt/scenarios/<uuid>.json cache means offline runs still work |
| Cost | $7.5K/mo enterprise tier is common | Open-source MCP server, free; you pay for LLM inference only |
| Lock-in | Tests are not portable off the platform | Delete the assrt binary tomorrow and your scenario.md files still run in any browser |

The diff workflow: regression as a pull request artifact

Because the regression suite is a plain file, the pull request review story gets obvious. When an agent ships a new feature, it appends a #Case to scenario.md (or to a committed copy in your repo). The PR diff shows exactly which test case was added, in English. A human reviewer reads it like any other prose change. A teammate in another timezone can comment on the test the same way they comment on a code line.

regression.md (git diff)
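The diff is not reproduced on this page. An illustrative version follows; the cases are hypothetical, but the format is just the plan's own #Case syntax under ordinary git diff markers.

```diff
 #Case 4: Reset password with an unknown email
 1. Click "Forgot password" and type "nobody@example.com"
 2. Click "Send reset link"
 3. Verify the text "If that account exists, we emailed a link" is visible
+
+#Case 5: Reset password link expires after use
+1. Open a previously used reset link
+2. Verify the text "This link has expired" is visible
+3. Verify the URL contains "/reset/expired"
```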

That diff is both the ticket and the acceptance test. No second artifact in a test management tool. No translation from "acceptance criteria" to "test script." The same sentence is both.

Questions from r/QualityAssurance and r/ExperiencedDevs

Where does Assrt actually save the regression test on disk?

Every assrt_test run writes the plan to /tmp/assrt/scenario.md and the scenario metadata (id, name, url, updatedAt) to /tmp/assrt/scenario.json. The last run result lives at /tmp/assrt/results/latest.json and each historical run gets its own file at /tmp/assrt/results/<runId>.json. Those paths are exported from PATHS in scenario-files.ts, so you can point any other tool (grep, git diff, your editor, a second Claude session) at them without special permissions.

What does the generated regression test file look like?

It is Markdown. Each test case starts with #Case N: followed by a short action-oriented name, then 3-5 imperative steps the browser agent can execute. The format is pinned by PLAN_SYSTEM_PROMPT in the MCP server and asks the generator for 5-8 cases per page, each self-contained so it can run without depending on a previous case having passed. You can open the file in any editor and read every test in under a minute.

How does re-running a regression test work without re-generating it?

assrt_test accepts either a plan (inline text) or a scenarioId (UUID from a previous run). When you pass the UUID, the tool fetches the saved plan from the central API, writes it back to /tmp/assrt/scenario.md, and hands it to the Playwright MCP agent. That is the entire regression loop: same UUID, same file, new code, new browser session. No test compilation step, no selector healing pass, no intermediate Playwright spec to maintain.

What happens if I edit /tmp/assrt/scenario.md while a test is running?

A file watcher (fs.watch in scenario-files.ts) fires on every change and debounces for one second. After one second of quiet, the updated Markdown is pushed back to Firestore via PATCH /api/public/scenarios/<id>. The watcher ignores echoes of its own writes by remembering lastWrittenContent. This means a human or another agent can open the Markdown, tighten a step, and the next assrt_test run picks it up automatically.

Is this actually 'Playwright' or something custom?

It is real Playwright. The agent drives a Chromium instance through @playwright/mcp, the same MCP server Microsoft maintains. Assrt does not invent a browser protocol or a selector language; it picks tools out of the 18-tool Playwright MCP vocabulary (snapshot, click, type_text, assert, wait_for_stable, http_request, and so on). The Markdown plan is the spec; Playwright is the runtime.

Why is Markdown better than YAML or a generated .spec.ts file for regression?

A Markdown #Case file is a prompt. You can read it, edit it, diff it against last week's version, and show it to a product manager without any training. A YAML artifact is a vendor schema you have to learn. A generated .spec.ts file drifts the moment a selector renames (the reason self-healing exists). A Markdown plan describes intent at the user action level, so when the UI renames a button the plan does not change; only the browser agent's snapshot does.

What if I want a real Playwright .spec.ts file anyway?

You can write one yourself from the Markdown in about the time it takes to read it. Or you can run the scenario and capture the tool calls the agent made; they map one-to-one onto page.click, page.fill, expect(locator).toHaveText, etc. But the point of storing regression as Markdown is that you do not have to keep that spec file in sync. The plan is the source; a generated spec would be a stale duplicate.

Does this tie me to Assrt's cloud?

No. The MCP server is open-source and self-hosted (npx assrt-mcp from a clone). The Markdown files live on your disk. The browser is Chromium via Playwright. The only optional cloud call is the Firestore sync for sharing scenarios between machines, and the fallback in scenario-store.ts caches every fetched scenario locally in ~/.assrt/scenarios/<uuid>.json so offline runs still work. Pull the plug on assrt.ai and your regression suite keeps running.

Stop reading, start generating

Your first regression test is one MCP call away.

Wire assrt-mcp into your coding agent, point it at localhost, and ship with a regression file you can grep.

Get Assrt free
