Open source API testing tool

The open-source API testing tool that also drives the browser in the same plan.

Every tool on the first Google page for this keyword (SoapUI, Bruno, Hoppscotch, Karate, REST Assured, EvoMaster) treats APIs as the whole world. Click a button in a real browser, see a success toast, and assert the webhook actually landed? Not their job. Assrt wires an http_request tool into the same TOOLS array as click, type_text, and navigate, so one plain-English #Case can do both halves in the same tool stream. This page is about the exact thirteen lines that make that possible, and why specs-first tools cannot add it without rewriting their runner.

Matthew Diakonov
10 min read
Open-source, self-hosted, MIT
One #Case, UI actions and API assertions in the same tool stream
Real Chromium via @playwright/mcp, not a mocked HTTP client
Free vs $5-15K / mo proprietary competitors

The claim in one paragraph

Real bugs live at the seam: the user clicked Send, the toast said OK, the webhook never fired. Every other open-source API testing tool can only see one side of that seam.

An HTTP client can fire the webhook request directly and assert 200, but it never knows whether the button in the UI actually triggers that code path. A UI runner can click the button and verify the toast, but it cannot read the downstream effect. Assrt puts both on the same agent: the http_request tool is registered next to click in a single TOOLS array (agent.ts:16-196). Scroll down for the definition and the execution path.

What every other result on this keyword actually tests

The ten or so blog posts that currently rank for "open source api testing tool" cover the same shortlist, more or less in the same order. Each of them is a real tool that does its job well. None of them can click a button.

SoapUI · JMeter · Bruno · Hoppscotch · Karate · REST Assured · EvoMaster · Postman (OSS tier) · Newman · Schemathesis · Pact · Hurl · Assrt (browser + API, same #Case)

SoapUI, Bruno, and Hoppscotch are HTTP clients. JMeter is load-focused. Karate is a DSL that replays recorded requests. REST Assured is a Java library for asserting on JSON responses. EvoMaster fuzzes OpenAPI specs. Every one of them takes an endpoint and treats it as the universe. Your users do not. Your users click a button first.

Four numbers worth knowing before you install anything

These are the four constants the http_request code path is built around. Everything downstream of this page is derivable from them.

17 tools in the agent TOOLS array, one of them http_request, so a single stream can mix browser actions and HTTP calls.
30-second AbortController timeout on every HTTP call, so a hung webhook cannot hang the whole #Case.
4000 characters of response body preserved before truncation, which keeps big payloads from eating the LLM context window.
$0 vendor lock-in cost: MIT license, self-hosted, no account required.

The anchor: thirteen lines that register the HTTP call as a tool

Below is the exact block from the Assrt MCP source that puts HTTP into the same tool vocabulary as click. It lives in TOOLS, the array the Anthropic SDK sees. From the LLM's point of view, calling http_request is indistinguishable in shape from calling click: same tool_use event, same tool_result handling, same position in the conversation loop.

assrt-mcp/src/core/agent.ts:171-184
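The embedded source block does not survive in this text, so here is a hedged sketch of what a TOOLS entry of that shape looks like. The field names (name, description, input_schema) follow the Anthropic Messages API tool schema; the description text and property details below are illustrative, not copied from agent.ts.

```typescript
// Illustrative sketch only; the real block lives at assrt-mcp/src/core/agent.ts:171-184.
// Shape follows the Anthropic Messages API tool schema: name, description, input_schema.
const httpRequestTool = {
  name: "http_request",
  description:
    "Make an HTTP request to any URL and return status and body. " +
    "Use this to verify that a browser action produced the expected API side effect.",
  input_schema: {
    type: "object",
    properties: {
      url: { type: "string", description: "Absolute URL to call" },
      method: { type: "string", description: "HTTP method, defaults to GET" },
      headers: { type: "object", description: "Extra request headers" },
      body: { type: "string", description: "Request body, only for non-GET/HEAD" },
    },
    required: ["url"],
  },
};
```

Because this object sits in the same array as the click and snapshot definitions, the SDK presents it to the model with no special casing: picking http_request is just another tool_use.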

And here is the execution branch that actually runs when the LLM picks that tool. Standard fetch, 30-second abort, 4000-char truncation. No DSL. No proxy. No recorded-request replay.

assrt-mcp/src/core/agent.ts:925-955
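That branch is not reproduced in this text either. Below is a minimal sketch of the behavior as described on this page: standard fetch, a 30-second AbortController, and a 4000-character body cap. The function names and return format are made up for illustration, not read from agent.ts.

```typescript
// Sketch of the described http_request behavior; runHttpRequest and truncateBody
// are illustrative names, not the actual assrt-mcp implementation.
export function truncateBody(text: string, cap = 4000): string {
  // Cap response bodies so big JSON blobs don't eat the LLM context window.
  return text.length > cap ? text.slice(0, cap) + "...(truncated)" : text;
}

export async function runHttpRequest(input: {
  url: string;
  method?: string;
  headers?: Record<string, string>;
  body?: string;
}): Promise<string> {
  const method = (input.method ?? "GET").toUpperCase();
  const controller = new AbortController();
  // A hung webhook endpoint aborts after 30 s instead of hanging the whole #Case.
  const timer = setTimeout(() => controller.abort(), 30_000);
  try {
    const res = await fetch(input.url, {
      method,
      headers: { "Content-Type": "application/json", ...input.headers },
      // Bodies are only sent for non-GET/HEAD methods.
      body: method === "GET" || method === "HEAD" ? undefined : input.body,
      signal: controller.signal,
    });
    return `${res.status}\n${truncateBody(await res.text())}`;
  } finally {
    clearTimeout(timer);
  }
}
```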

What a mixed plan.md looks like

The whole input to Assrt is one markdown file. Each #Case is a paragraph. Any English sentence that reads like a QA step is valid. The agent parses the file with a regex at agent.ts:620-631, runs each #Case in a shared browser session, and interleaves browser and HTTP tools as needed.

plan.md
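The embedded file does not survive in this text. Here is a minimal illustrative plan in the #Case style described above; the URL, UI copy, and channel details are hypothetical, and the {{SLACK_BOT_TOKEN}} placeholder follows the --variables convention covered in the FAQ.

```markdown
#Case: Sent message lands in Slack
Open https://app.example.test and sign in as the seeded test user.
Type "ping" into the message box and click Send.
Wait for the "Message queued" toast.
Call GET https://slack.com/api/conversations.history with header
Authorization: Bearer {{SLACK_BOT_TOKEN}}.
Assert the latest message in the JSON body contains "ping".
```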

The trace of one run

This is the actual stdout pattern of npx assrt run against the plan above. Notice how http GET lines sit between click lines in the same case, without an intermediate artifact or CI handoff.

assrt run (three mixed cases, trimmed)

The three actors, one timeline

Below is the shape of one #Case that uses both tool classes. The agent is the single orchestrator. The browser and the third-party API never talk to each other; they both talk to the agent, which decides the order and records the assertions.

One #Case: click, wait, http_request, assert

Actors: Agent · Browser (Playwright) · Third-party API

Agent → Browser: snapshot()
Browser → Agent: accessibility tree (42 refs)
Agent → Browser: click [ref=e17] (Send)
Browser → Agent: toast "Message queued"
Agent → API: http_request GET /conversations.history
API → Agent: 200 + JSON body (truncated at 4000 chars)
Agent: assert latest message contains "ping"
Agent: complete_scenario passed=true

Six bugs that live at the UI/API seam

Each card below is a concrete failure mode that is hard or impossible to catch with either a browser-only runner or an HTTP-only runner, and trivial once the two live in the same tool stream.

Webhooks that were supposed to fire

Click the button in the UI, then GET the inbox endpoint of Stripe CLI, ngrok, or a local webhook-catcher like localhost:4242/webhook-inbox. Assert the event lands with the right type. The UI showed a success toast; the assertion tells you whether the side effect actually happened.

Third-party APIs the UI integrates with

Connect a Telegram bot in-app, then call api.telegram.org/bot<token>/getUpdates to verify the registration landed. Same pattern for Slack conversations.history, GitHub repos, Linear issues. The agent can paste a token from env and poll the API directly.

Destructive actions actually destructive

User clicks Delete. The UI says "Deleted." You GET /api/users/me and assert 404. A spec-only tool cannot do the first half. A UI-only tool cannot do the second.

Idempotency of retried submits

Click Submit twice on a slow form. Then GET /api/orders and assert only one record landed. This bug is invisible to a pure UI test (the second toast may say success) and invisible to a pure API test (no one replays the double-click).
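As a #Case this is a handful of sentences. The endpoint and UI copy here are hypothetical:

```markdown
#Case: Double-click Submit creates exactly one order
Open https://app.example.test/checkout with the seeded cart.
Click Submit twice in quick succession, before the first response lands.
Wait for the confirmation screen.
Call GET https://app.example.test/api/orders.
Assert the JSON body contains exactly one order for this cart.
```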

OAuth callback actually authorizes

Click "Connect GitHub". Handle the OAuth redirect in a real browser. Then call the GitHub API with the issued token and assert a repo list comes back. Any tool that only speaks HTTP cannot do the redirect dance.

Analytics events at the right moment

Click the Pro plan card. Then POST a query to PostHog or Amplitude and assert the plan_selected event fired with plan=pro in the last 5 seconds. Closes the gap between what the PM saw in the UI and what the dashboard actually reports.

Specs-first tools vs. Assrt

Every line below is honest about what the incumbents do well. The differences start when the test needs both halves of the seam.

| Feature | SoapUI / Bruno / Karate / REST Assured | Assrt |
| --- | --- | --- |
| Input format | JSON collections, YAML specs, Gherkin DSL, Java/JS code | Plain English #Case blocks in a single .md file |
| What drives the test | A runner that replays recorded HTTP requests | An LLM agent that calls tools: snapshot, click, type_text, navigate, http_request, assert |
| Can it click a button? | No. HTTP clients do not have a DOM. | Yes. Real Chromium via @playwright/mcp under the hood. |
| Can it hit an API inline? | That is the only thing it does. | Yes. http_request is one of the tools in the same TOOLS array as click. |
| Where side-effect assertions live | A separate test file, run after the UI test by a CI step | Next tool call in the same #Case |
| Auth / captcha handling | Mock the API or inject a token | --extension connects to your real Chrome, real cookies |
| Cost | Free for OSS; proprietary platforms charge $5-15K / mo | Free, MIT, npx assrt-mcp |
| Vendor lock-in | Varies; many tools are free but tests are non-portable | Zero. Plans are a .md file. Runner is @playwright/mcp. |
| MCP / coding-agent support | None | First-class MCP server: Claude Code / Cursor drives it |

The honest tradeoff

If the only thing you need to test is a REST endpoint in isolation with a contract-grade DSL, a specs-first tool will be faster to author and easier to audit. That is not the job Assrt is optimized for.

Assrt is the right pick when the truth of the test lives across the seam: the user did something in the browser, an API was supposed to reflect it, and both halves have to pass for the feature to be real. For that job, specs-first tools need a second runner, a second artifact, and a CI step that glues them together. Assrt needs one #Case.

Why no specs-first tool can bolt this on without a rewrite

An HTTP client does not have a DOM. Bruno can shell out to Playwright, but the assertions then live in a script block that runs before or after the request, not interleaved with it. Karate has a UI driver addon, but it is a DSL: the step vocabulary is fixed. Postman can chain requests, but the chain is a linear sequence of HTTP calls, not a heterogeneous tool loop.

Assrt's architecture starts from the other end. The runner is an LLM agent. Tools are the unit of composition. Adding http_request is a thirteen-line dictionary entry in TOOLS and a thirty-line case in the executor switch. The same machinery that makes click work makes the HTTP assertion work. There is no second runner to maintain because there is no second runner, full stop.

Run one mixed #Case against your app in twenty minutes

Give us a URL and one user flow that should trigger an API side effect. We write the #Case, run npx assrt-mcp live, and show the browser click and the http_request assertion happening in the same tool stream. You keep the plan.

Book a call

Questions the specs-first guides leave unanswered

Is Assrt really an API testing tool, or a UI testing tool that happens to have http_request?

It is both, and the useful framing is that it refuses to separate them. Look at assrt-mcp/src/core/agent.ts. The TOOLS array between lines 16 and 196 has seventeen tools, all registered to the same agent. Thirteen of them drive the browser (navigate, snapshot, click, type_text, select_option, scroll, press_key, wait, screenshot, evaluate, wait_for_stable, plus two email helpers). Two are assertion helpers (assert, complete_scenario). One is http_request. From the agent's point of view there is no UI/API boundary: calling http_request is the same kind of tool_use as calling click. That is the entire angle of this page. The question "is it an API tool" presumes a boundary the code does not have.

How is this different from using Postman or Bruno alongside Playwright?

You can do that, and teams do. The cost is coordination: the Playwright test and the Postman collection are two artifacts, two runners, two report formats, and the ordering has to be maintained by CI. When the UI flow changes, the API collection and the Playwright script both need an update, and neither will fail loudly if only one is updated. With Assrt, a single #Case in plan.md owns both halves. If you delete the button but forget the API assertion, the #Case fails at the click step, not silently in a collection runner someone forgot to re-trigger.

What does http_request actually execute?

Standard Node fetch. The handler lives at agent.ts:925-955. It takes the URL, method (default GET), headers (default Content-Type: application/json plus anything you pass), and body (only for non-GET/HEAD). It wraps the call in an AbortController with a 30-second timeout. The response body is read as text and truncated at 4000 characters with a "...(truncated)" marker, which keeps big JSON blobs from eating the LLM context window. On failure, stepStatus is set to failed and the scenario records the error, so a refused connection is a test failure, not a silent skip.

How does the agent know what API to call from a plain-English plan?

The system prompt at agent.ts:243-247 gives it the pattern directly: "When testing integrations (Telegram, Slack, GitHub, etc.): Use http_request to call external APIs (e.g. poll Telegram Bot API for messages). This lets you verify that actions in the web app produced the expected external effect." So when a #Case says "assert the Slack message arrived," the agent consults that rule, picks conversations.history, inserts the bearer token from variables, and fires the request. If you want deterministic behavior, write the URL and method into the #Case; the agent follows instructions verbatim when you provide them.

How do I pass a secret like a Slack bot token without leaking it in plan.md?

Use the --variables flag or the variables field of assrt_test. The plan references {{SLACK_BOT_TOKEN}} as a literal string; assrt interpolates the value at runtime and never logs the expanded form. The substitution happens in agent.ts inside run() before scenariosText reaches the LLM, so the raw secret is only in memory during the fetch. For CI, set the variable from a secret store. For local dev, the --extension flag lets Assrt attach to your already-logged-in Chrome, so often you do not need the token at all.
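The exact interpolation code inside run() is not shown on this page. A minimal sketch of the {{NAME}} substitution it describes might look like the following; the function name and the choice to leave unknown placeholders untouched are assumptions, not read from agent.ts.

```typescript
// Illustrative {{NAME}} substitution, not the actual assrt implementation.
// Unknown placeholders are left intact here; the real runner's behavior for
// missing variables is not documented on this page.
export function interpolate(plan: string, variables: Record<string, string>): string {
  return plan.replace(/\{\{(\w+)\}\}/g, (match, name: string) =>
    name in variables ? variables[name] : match
  );
}
```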

Does Assrt replace Postman, Bruno, REST Assured, or Karate?

No. Each of those has a specific job Assrt does not do. Postman and Bruno are fantastic exploratory HTTP clients: you poke at an endpoint, iterate, build a request by hand. REST Assured is a library for writing Java unit tests around APIs. Karate is a DSL for contract-style assertions on REST responses. Assrt is not competing with any of them on that axis. Assrt owns a different job: the end-to-end seam where the UI triggers something and an API should reflect it. For pure spec testing of a REST endpoint in isolation, the specs-first tools are a better fit. For "the user clicked Send and the webhook was supposed to land," nothing else in the OSS landscape closes the loop in one test plan.

Self-hosted and open source? What is the actual install?

npx @assrt-ai/assrt setup installs the MCP server globally and wires it into Claude Code. The source is in two repos under ~/: assrt (the web UI, recorder, and dashboard) and assrt-mcp (the CLI and MCP server you actually drive tests with). Under the hood the browser is @playwright/mcp, so your tests run in real Chromium. No cloud dependency, no account required. If you want history and shareable run URLs you can sync to app.assrt.ai, but every run also writes locally to /tmp/assrt/ and the plan is just markdown.
