Free Alternative

The Best Free Alternative to Manual Playwright: 4 MCP Tools That Replace Your .spec.ts Files

Every "Playwright alternative" article recommends another framework you still write code in. Cypress instead of Playwright. Selenium instead of Playwright. You trade one set of selectors and assertions for another. The real alternative to writing manual Playwright tests is not writing test code at all. Assrt exposes 4 MCP tools that let your AI coding agent run real browser tests from English descriptions, on cloud VMs, for free.

By the Assrt team · 8 min read
4 MCP tools, 0 test files

Tests run on isolated cloud VMs, not your laptop. First boot ~11s, snapshot restores are faster.

Freestyle VM infrastructure

1. Why "Which Framework?" Is the Wrong Question

Search for "best Playwright alternative" and you get lists of 8 to 12 testing frameworks. Cypress, Selenium, TestCafe, Puppeteer, WebdriverIO, Katalon. Each article compares syntax, language support, and pricing. They assume you want to keep writing test code but in a different tool.

That assumption misses the actual problem. The pain of manual Playwright is not Playwright itself. It is maintaining hundreds of .spec.ts files, updating selectors when the UI changes, debugging flaky waits, and spending engineering hours on test infrastructure instead of product work.

Switching from Playwright to Cypress does not fix that. You still write code. You still maintain selectors. You still debug flaky tests. The real alternative is removing the test code layer entirely and letting an AI agent handle browser interaction from plain English instructions.

2. The 4 MCP Tools in server.ts

Assrt is an MCP server. MCP (Model Context Protocol) is the standard that lets AI coding agents call external tools. When you install Assrt, it registers 4 tools that your agent (Claude Code, Cursor, or any MCP-compatible client) can call directly:

assrt_test

The primary tool. Pass a URL and a test plan (plain text) or a scenarioId (UUID from a previous run). The agent launches a cloud VM, opens a real browser, executes each test case, and returns structured JSON with pass/fail results, screenshots, and video.

assrt_plan

Navigates to your URL, takes screenshots at multiple scroll positions, extracts the DOM accessibility tree, and generates 5 to 8 focused test cases. Output is markdown in #Case N: format. You do not write test cases; the tool analyzes your page and writes them for you.

assrt_diagnose

Pass a failed test scenario and the error. The tool performs root cause analysis and returns a corrected test. Instead of you reading stack traces and adjusting selectors, the AI figures out what went wrong and fixes the test.

assrt_analyze_video

Runs vision analysis on a recorded test session. Useful for catching visual regressions, layout shifts, or UX issues that pass/fail assertions would miss.

These 4 tools are defined in src/mcp/server.ts of the Assrt MCP package (894 lines). That one file is the entire interface between your AI agent and browser testing. No test runner binary, no config files, no plugin system.
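As a rough sketch, the inputs to these four tools can be modeled as a TypeScript union. The parameter names below are inferred from the descriptions above, not the real schema in src/mcp/server.ts, so treat them as illustrative assumptions:

```typescript
// Hypothetical input shapes for the four tools, inferred from the
// descriptions above -- the real schemas live in src/mcp/server.ts.
type AssrtToolCall =
  | { tool: "assrt_test"; url: string; plan?: string; scenarioId?: string }
  | { tool: "assrt_plan"; url: string }
  | { tool: "assrt_diagnose"; scenarioId: string; error: string }
  | { tool: "assrt_analyze_video"; scenarioId: string };

// assrt_test needs either a plain-text plan or a scenarioId from a
// previous run; a call with neither has nothing to execute.
function isValidTestCall(call: AssrtToolCall): boolean {
  if (call.tool !== "assrt_test") return true;
  return Boolean(call.plan ?? call.scenarioId);
}
```

The union makes the division of labor visible: one tool executes, one generates, one repairs, one reviews.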

Install in one command

Run npx @m13v/assrt in your project. Your AI agent gets 4 new tools for browser testing. No signup, no API key.

Get Started

3. What Happens When You Run assrt_test

When your AI agent calls assrt_test with a URL and a test plan, here is what happens under the hood:

  1. A Freestyle cloud VM spins up. Not a Docker container on your machine. An ephemeral VM at a unique *.vm.freestyle.sh domain. First boot takes about 11 seconds. If a snapshot exists for this configuration, the VM restores from snapshot with all services pre-warmed.
  2. A real Chromium browser opens inside the VM. The agent controls it via Playwright MCP, using tools like snapshot, click, type_text, scroll, press_key, wait, and assert. These are real Playwright actions, not simulated clicks.
  3. The AI agent (Claude Haiku or Gemini) interprets each test case. It reads the plain English step, decides which browser tool to call, executes it, observes the result via DOM snapshot, and moves to the next step. No hardcoded selectors. The agent finds elements by understanding the page.
  4. Results stream back in real time. Your IDE receives screencast frames (~15fps), step completions, assertion results, and UX improvement suggestions as they happen. A video is recorded for post-run review.
  5. Structured results are saved locally. The test plan goes to /tmp/assrt/scenario.md and results to /tmp/assrt/results/latest.json. Every run gets a unique scenario ID (UUID) so you can re-run the exact same test later.

The entire flow happens without you opening a terminal, writing a test file, or configuring a test runner. You describe what to test in your IDE chat. The agent handles everything else.
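Because results land in a local JSON file, the flow above is also scriptable. Assuming a result shape with pass/fail counts and per-scenario assertions carrying evidence strings (the field names here are guesses based on this article, not a documented schema), a small summarizer might look like:

```typescript
// Assumed shape of /tmp/assrt/results/latest.json; field names are
// illustrative guesses, not the real schema -- inspect the file itself.
interface AssrtResults {
  scenarioId: string;
  passed: number;
  failed: number;
  scenarios: {
    name: string;
    assertions: { passed: boolean; evidence: string }[];
  }[];
}

// Produce a one-line-per-scenario summary with an overall pass count.
function summarize(results: AssrtResults): string {
  const total = results.passed + results.failed;
  const lines = results.scenarios.map((s) => {
    const ok = s.assertions.every((a) => a.passed);
    return `${ok ? "PASS" : "FAIL"} ${s.name}`;
  });
  return [`${results.passed}/${total} scenarios passed`, ...lines].join("\n");
}
```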

4. Manual Playwright vs. Assrt: Same Signup Flow

Here is the same test written both ways. A user signs up, verifies their email, and lands on the dashboard.

Manual Playwright (TypeScript)

import { test, expect } from '@playwright/test';
import Mailosaur from 'mailosaur';

const mailosaur = new Mailosaur(process.env.MAILOSAUR_KEY!);

test('signup flow', async ({ page }) => {
  const email = `test.${Date.now()}@srv.mailosaur.net`;
  await page.goto('https://myapp.com/signup');
  await page.getByLabel('Email').fill(email);
  await page.getByLabel('Password').fill('Pass123!');
  await page.getByRole('button', { name: 'Sign Up' }).click();

  const msg = await mailosaur.messages.get('srv', {
    sentTo: email
  }, { timeout: 30000 });

  const code = msg.html.body.match(/\d{6}/)?.[0] ?? '';
  await page.getByLabel('Code').fill(code);
  await page.getByRole('button', { name: 'Verify' }).click();
  await expect(page.getByText('Dashboard')).toBeVisible();
});

This requires a Mailosaur subscription ($99+/month), an API key in CI secrets, regex parsing for the OTP code, and manual timeout handling. When your email template changes, the regex breaks silently.

Assrt (Plain English)

#Case 1: Full signup with email verification
1. Create a temporary email address
2. Navigate to the signup page
3. Type the temp email into the email field
4. Type a password into the password field
5. Click Sign Up
6. Wait for the verification code in the email
7. Type the verification code into the OTP input
8. Click Verify
9. Verify the dashboard loads
| Dimension | Manual Playwright | Assrt |
| --- | --- | --- |
| Test format | .spec.ts TypeScript files | Markdown (#Case N: steps) |
| Selector maintenance | Manual updates when UI changes | AI finds elements each run |
| Email testing | External service + custom code | Built-in (create_temp_email, wait_for_verification_code) |
| Execution environment | Local machine or CI runner | Isolated cloud VM (Freestyle) |
| Failed test debugging | Read traces, update test code | assrt_diagnose returns root cause + fix |
| Cost | Free (Playwright) + $99+/mo (email service) + eng time | Free (MIT) + ~$0.03/test (LLM API calls) |
| Vendor lock-in | Low (standard Playwright API) | Zero (markdown files, MIT license, self-hostable) |

Try the same comparison on your app

Point Assrt at your staging URL. It generates a test plan and runs it in under 2 minutes. Compare that to how long your last .spec.ts file took to write.

Get Started

5. No Vendor Lock-In: Your Tests Are Markdown

Most AI testing tools trap your tests in a proprietary format or a cloud dashboard you cannot export from. When you stop paying, your tests disappear.

Assrt tests are markdown files. The #Case N: format is plain text that any human can read. After each run, the plan is saved to /tmp/assrt/scenario.md on your local filesystem. Results go to /tmp/assrt/results/latest.json as structured JSON with pass counts, fail counts, duration, and per-scenario assertions with evidence strings.
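Because the format is plain text, round-tripping it takes only string handling, no Assrt code at all. A minimal sketch of building a #Case N: plan and splitting a saved one back into cases:

```typescript
// Build a "#Case N:" markdown plan from case titles and numbered steps.
function buildPlan(cases: { title: string; steps: string[] }[]): string {
  return cases
    .map((c, i) => {
      const steps = c.steps.map((s, j) => `${j + 1}. ${s}`).join("\n");
      return `#Case ${i + 1}: ${c.title}\n${steps}`;
    })
    .join("\n\n");
}

// Split a saved plan (e.g. the contents of /tmp/assrt/scenario.md)
// back into individual case blocks.
function splitPlan(plan: string): string[] {
  return plan.split(/\n(?=#Case \d+:)/).map((c) => c.trim());
}
```

Version these files in git like any other markdown, and diffs on test plans read like prose.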

The npm package (@m13v/assrt) is MIT licensed. The Dockerfile in the repo lets you self-host on your own infrastructure. You can run the MCP server locally, point it at your own LLM provider, and keep every piece of the stack under your control.

If you stop using Assrt tomorrow, you keep your markdown test plans, your JSON results, and your video recordings. Nothing is locked behind a login or an API key.

6. Frequently Asked Questions

What are the 4 MCP tools Assrt exposes?

assrt_test runs test scenarios against a URL and returns structured pass/fail results with screenshots. assrt_plan navigates to a URL, analyzes the page with screenshots and accessibility tree extraction, and generates 5 to 8 test cases automatically. assrt_diagnose takes a failed test scenario and performs root cause analysis with a corrected test. assrt_analyze_video runs vision analysis on a recorded test session. These are defined in the MCP server source at src/mcp/server.ts.

How is this different from Playwright codegen?

Playwright codegen records browser actions and outputs TypeScript test files you then maintain. When your UI changes, those generated files break and you update them manually. Assrt never generates .spec.ts files. Tests stay as English descriptions in markdown format (#Case N: steps). The AI agent interprets them fresh each run, adapting to UI changes without brittle selectors or hardcoded waits.

What does 'MCP integration' mean in practice?

MCP (Model Context Protocol) is a standard that lets AI coding agents call external tools. When you install Assrt (npx @m13v/assrt), it registers as an MCP server. Your AI agent in Claude Code or Cursor can then call assrt_test, assrt_plan, and the other tools directly from the chat interface. You describe what to test, the agent calls the tools, and results appear in your IDE. No separate test runner, no terminal switching.

How do the cloud VMs with snapshot boot work?

Assrt runs tests on Freestyle ephemeral VMs, not on your local machine. The first run for a given configuration boots a VM in about 11 seconds. After that first run, a snapshot is saved with all services pre-warmed. Subsequent runs restore from the snapshot, skipping the full boot process. Each test gets an isolated VM at a unique *.vm.freestyle.sh domain, so tests never interfere with each other or with your local environment.

Is Assrt actually free? What is the catch?

Assrt is MIT licensed and published as an npm package. There is no subscription, no seat pricing, and no usage limits on the tool itself. You pay only for the LLM API calls that power the AI agent, which typically costs a few cents per test scenario when using Claude Haiku. You can also self-host via Docker (a Dockerfile is included in the repo) and point it at your own LLM endpoint.

Can I use Assrt in CI/CD pipelines?

Yes. Since Assrt exposes standard MCP tools, you can call assrt_test programmatically in any CI environment that can run Node.js. Pass a URL and a test plan (or a saved scenarioId from a previous run), and it returns structured JSON with pass counts, fail counts, and per-scenario results. The cloud VM execution means your CI runner does not need a browser installed.
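Turning that JSON into a CI gate is a few lines of Node once the results file exists. This sketch assumes latest.json exposes numeric passed/failed counts, which is an assumption based on this article rather than a documented schema; check the real file on your machine:

```typescript
import { readFileSync } from "node:fs";

// Map an Assrt results file to a CI exit code: nonzero if anything
// failed. The "passed"/"failed" field names are assumptions about
// latest.json, not a documented schema.
function ciExitCode(resultsPath: string): number {
  const raw = readFileSync(resultsPath, "utf8");
  const results = JSON.parse(raw) as { passed: number; failed: number };
  console.log(`assrt: ${results.passed} passed, ${results.failed} failed`);
  return results.failed > 0 ? 1 : 0;
}

// In a CI step:
// process.exit(ciExitCode("/tmp/assrt/results/latest.json"));
```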

What happens to my tests if I stop using Assrt?

Your tests are markdown files with English descriptions. They are not locked into a proprietary format or stored in a vendor cloud. You can read them, version them in git, and translate them to any other testing approach. Assrt also saves test plans and results to local files (/tmp/assrt/scenario.md and /tmp/assrt/results/latest.json), so you always have access to your data.

Ready to automate your testing?

Assrt discovers test scenarios, runs browser tests from plain English, and self-heals when your UI changes.

$ npx @m13v/assrt