QA Velocity

The AI Velocity Gap: When Development Outpaces Testing

Teams that adopted AI coding assistants now produce 3 to 4x more code, but their testing capacity has stayed the same. The result: untested code ships, or a QA backlog grows.


Open-source test automation: generates standard Playwright files you can inspect, modify, and run in any CI pipeline.

1. The Velocity Gap Problem

AI coding assistants have fundamentally changed the rate at which features get written. A developer with Copilot, Claude Code, or Cursor can produce working code 3 to 4 times faster than without AI assistance. But the testing pipeline, QA review process, and deployment verification have not sped up at all.

This creates a widening gap between how fast code is produced and how fast it can be verified. The gap manifests as either shipping untested code (risk) or accumulating a growing backlog of features waiting for QA review (waste). Neither outcome is acceptable for teams that care about quality.

2. Shipping Untested vs. Growing Backlog

Most teams default to shipping with less testing because the business pressure to deliver features is stronger than the discipline to maintain quality. The short-term result is faster delivery. The medium-term result is a growing number of production bugs, customer complaints, and hotfixes that consume development time that should go to new features.

The alternative, maintaining full QA rigor while development accelerates, creates a bottleneck that frustrates developers and project managers. Features sit in a QA queue for days or weeks, losing the speed advantage that AI coding provided. The velocity gap makes the old approach to QA unsustainable.

Close the testing gap

Assrt auto-discovers and generates tests at the speed your team ships code. Open-source, free.

Get Started

3. Agentic Quality Engineering

Agentic quality engineering applies AI to the testing side of the pipeline, matching the speed gains on the development side. Instead of a human QA engineer manually testing each feature, an AI agent explores the application, identifies test scenarios, and generates automated tests that can run in CI.

The question is whether AI-generated tests are good enough to catch the real issues rather than just inflating coverage numbers. The answer depends on the tool and the approach. Tools that generate shallow tests (checking element visibility, verifying page loads) provide little value. Tools like Assrt that discover user flows and generate behavioral tests provide genuine coverage that catches regressions.
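To make the distinction concrete, here is a sketch of both kinds of test in Playwright. The first is the shallow kind: it passes as long as the page renders. The second exercises an actual user flow and asserts the outcome. The URL, selectors, and flow are hypothetical, for illustration only, not the output of any particular tool.

```typescript
import { test, expect } from "@playwright/test";

// Shallow: only proves the page loads and a button exists.
// A regression in the checkout logic would not fail this test.
test("checkout page renders", async ({ page }) => {
  await page.goto("https://your-app.com/checkout");
  await expect(page.getByRole("button", { name: "Place order" })).toBeVisible();
});

// Behavioral: walks the flow a real user takes and asserts the result.
test("user can complete checkout", async ({ page }) => {
  await page.goto("https://your-app.com/products/widget");
  await page.getByRole("button", { name: "Add to cart" }).click();
  await page.goto("https://your-app.com/checkout");
  await page.getByLabel("Email").fill("test@example.com");
  await page.getByRole("button", { name: "Place order" }).click();
  await expect(page.getByText("Order confirmed")).toBeVisible();
});
```

The behavioral test is the one that catches a broken coupon calculation or a failing order submission; the shallow test stays green through both.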

4. Test Generation at Development Speed

For testing to keep pace with AI-assisted development, test generation needs to be as fast and as automated as code generation. This means integrating test generation into the development workflow rather than treating it as a separate phase. When a developer generates a feature with AI, the tests should be generated simultaneously as part of the same workflow.

The practical implementation varies by team. Some teams include test requirements in their AI prompts (generate the feature and the Playwright tests together). Others use a separate test generation tool that runs automatically after code changes are committed. The key is that testing happens at development speed, not at manual QA speed.

5. Verification as the New Bottleneck

With AI coding, development speed is no longer the bottleneck. With AI test generation, testing speed is no longer the bottleneck. The new bottleneck is verification: confirming that both the generated code and the generated tests actually do what they should. This is a human judgment task that does not yet have a good automated solution.

The teams shipping fastest and most reliably are the ones who invest in making verification efficient. Clear specifications, readable generated code, meaningful test names, and consistent patterns all reduce the time required for a human to verify that the generated output is correct. Speed is important, but confidence is what actually lets you deploy.

Ready to automate your testing?

Assrt discovers test scenarios, writes Playwright tests, and self-heals when your UI changes.

$ npx @assrt-ai/assrt discover https://your-app.com