Vibe Coding and the Test Coverage Gap
AI coding assistants can scaffold an entire application in minutes. But the code they generate almost never includes meaningful tests. As the software job market shifts toward AI-augmented development, the ability to verify and test vibe-coded applications is becoming a critical skill.
1. What Vibe Coding Is and Why It Skips Tests
Vibe coding refers to the practice of building software by describing what you want to an AI assistant and accepting its output with minimal review. The developer provides high-level intent ("build me a dashboard with user authentication and a Stripe integration") and the AI generates hundreds or thousands of lines of working code. The focus is on speed and iteration, not on understanding every line.
The approach works remarkably well for prototypes, side projects, and MVPs. But it has a structural blind spot: AI coding tools optimize for making the application work, not for making it testable or tested. When you ask an AI to build a checkout flow, it generates the checkout flow. It rarely generates the tests that verify the checkout flow handles edge cases correctly. Even when you explicitly ask for tests, the generated tests tend to be shallow, testing that components render without testing that they behave correctly.
This is not a flaw in the AI tools. It reflects how they are trained and prompted. Most training data consists of application code, not test code. Most prompts ask for features, not verification. The result is a growing population of applications that work in the happy path but have unknown behavior in edge cases, error states, and concurrent usage scenarios.
2. The Real Cost of Untested AI-Generated Code
When vibe-coded applications move from prototype to production, the missing test coverage becomes a serious liability. Without tests, every change is a gamble. You cannot refactor safely because you have no way to verify that existing behavior is preserved. You cannot onboard new developers because there is no executable specification of what the application should do. You cannot deploy with confidence because there is no automated safety net.
The cost compounds over time. A vibe-coded application that ships successfully tends to grow quickly, with more features added through the same AI-assisted process. Each new feature adds code but not tests. After a few months, you have a substantial codebase where any change could break anything, and the only way to verify correctness is manual testing. This is exactly the situation that end-to-end testing was invented to prevent.
The irony is that developers who vibe-code are often building faster than their team can manually verify. They ship three features a day, but QA can only test one. The backlog of unverified changes grows until a production incident forces the team to slow down and add the testing they skipped.
3. Why Traditional Testing Doesn't Fit the Vibe-Coding Workflow
The traditional approach to testing assumes you understand the code you are testing. You write unit tests for functions you wrote, integration tests for APIs you designed, and end-to-end tests for flows you specified. This assumption breaks down with vibe coding. When an AI generates a 500-line component, the developer may not understand its internal structure well enough to write meaningful unit tests for it.
End-to-end tests are a better fit for vibe-coded applications because they test behavior from the user's perspective. You don't need to understand the internal implementation to write a test that says "when I click Add to Cart, the cart count increases by one." This is the level of specification that vibe-coding developers actually operate at: they describe what the application should do, not how it should do it.
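That plain-English specification translates almost word for word into Playwright. The sketch below assumes a running application, a hypothetical product-page URL, and a cart badge exposed via a test id; your selectors will differ.

```typescript
import { test, expect } from '@playwright/test';

test('clicking Add to Cart increments the cart count', async ({ page }) => {
  // Placeholder URL and selectors; adapt to your application.
  await page.goto('https://your-app.com/products/example');
  await page.getByRole('button', { name: 'Add to Cart' }).click();
  await expect(page.getByTestId('cart-count')).toHaveText('1');
});
```

Note that nothing in this test depends on how the cart is implemented internally, which is exactly why it survives AI-driven rewrites of the underlying component.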
The remaining problem is effort. Writing end-to-end tests manually is slow, and developers who chose vibe coding specifically because it is fast are unlikely to spend hours writing Playwright test files by hand. The testing approach needs to match the development speed, which is where automated test generation becomes essential.
4. Adding Test Coverage to Vibe-Coded Apps
The most practical way to add test coverage to a vibe-coded application is to generate tests from the running application itself. Instead of reading the source code and writing tests for it, tools can crawl the application, discover its pages and flows, and generate test files that verify the observed behavior. This approach works regardless of how the code was written, whether by a human, an AI, or a combination of both.
Assrt takes this approach. You point it at your running application with a single command (npx @assrt-ai/assrt discover https://your-app.com), and it crawls the application, identifies testable scenarios, and generates standard Playwright test files. The generated tests cover navigation, form submissions, authentication flows, and interactive elements. Because they are standard Playwright files, you can inspect them, modify them, and run them in any CI pipeline without vendor lock-in.
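In practice the sequence might look like the following, assuming the generated files land where your Playwright configuration looks for tests (the URL is a placeholder):

```shell
# Crawl the running app and emit Playwright test files (command from above)
npx @assrt-ai/assrt discover https://your-app.com

# The output is plain Playwright, so the standard runner executes it
npx playwright test
```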
Other approaches include using Playwright's built-in codegen tool to record tests manually, or using commercial services like QA Wolf (starting at $7,500/month) to outsource test creation entirely. The right choice depends on your budget and team size. For solo developers and small teams building with vibe coding, an open-source tool that generates tests automatically is usually the most practical starting point.
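For reference, Playwright's recorder is a one-liner: it opens a browser window, and as you click through the application it writes the corresponding test code for you.

```shell
# Record a test interactively; Playwright generates code as you click
# (the URL is a placeholder for your running app)
npx playwright codegen https://your-app.com
```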
5. The Job Market Angle: Testing as a Differentiator
As AI coding tools become ubiquitous, the ability to write code becomes less of a differentiator in the job market. What remains valuable is the ability to verify that code works correctly, handles edge cases, and can be maintained over time. Developers who can ship tested, production-ready code (whether they wrote it by hand or generated it with AI) are more valuable than developers who can only ship untested prototypes.
This shift is already visible in job postings. Companies increasingly list "test automation" and "CI/CD experience" as requirements, not because they love testing for its own sake, but because they have learned that untested code is expensive code. The developer who can set up automated test generation, configure a CI pipeline, and maintain a healthy test suite is the developer who keeps the team shipping safely.
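Setting up that pipeline is less work than it sounds. A minimal GitHub Actions workflow for a Playwright suite, following Playwright's documented CI setup, is only a few lines (repository details such as the Node version are assumptions):

```yaml
# Minimal workflow: run the Playwright suite on every push
name: e2e
on: [push]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npx playwright install --with-deps
      - run: npx playwright test
```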
For developers entering or re-entering the job market, learning to test AI-generated code is a strategic investment. The tools are accessible (Playwright is free, Assrt is open-source, CI platforms have generous free tiers), and the skill set is transferable across frameworks and languages. Whether you vibe-code your next side project or write every line by hand, the tests are what make it production-ready.
Ready to automate your testing?
Assrt discovers test scenarios, writes Playwright tests from plain English, and self-heals when your UI changes.