QA in vibe coding: lessons from an AI build
Alick, our guest author, shares his first-hand insights from building with AI and the realities of “vibe coding”.
Over the last few months, I've been exploring what it takes to build products using AI. The process, known as "vibe coding" (Collins Dictionary's 2025 word of the year!), allows a small development team, or even a single Product Manager, to bring ideas to life faster than ever. Working directly with AI to handle scaffolding, boilerplate, and even complex workflows is liberating.
But speed and flexibility don't replace the need for testing. Just as a developer tends to test the "happy path", the usage they envisaged when they wrote the code, the same is true of direction given to an AI: the prompt writer tests the feature as they conceived it. And that is not representative of real-world usage.
So even when code seems to "just work", untested or minimally tested features can create unexpected problems down the line.
AI can accelerate development, but it’s not perfect
In my experience building Track It, an AI-assisted football match tracker, I learned that AI can generate impressive functionality, effortlessly adding a level of polish that isn't always apparent in an MVP (minimum viable product).
But it can also misinterpret requirements or overlook edge cases. Just because a feature works in a demo or on a local branch doesn't guarantee it will work in production, for all users, or at scale.
Small mistakes can cascade quickly
Even in a vibe coding workflow, small errors can cascade quickly. Assume for a moment that your workflow is robust enough to monitor and alert on potentially expensive infinite loops running in cloud functions (because what vibe coder ever writes that bit first, right?). Even then, missing a detail in user input validation, overlooking a deployment configuration, or mismanaging a database call can all cause costly issues. By maintaining unit tests, integration tests, and regression tests, you ensure that features perform consistently, cost-effectively, and reliably.
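To make that concrete, here's a minimal sketch of the kind of input-validation unit test I have in mind. It's illustrative only: the names (validateMatchEvent, MatchEvent, the minute and player fields) are hypothetical stand-ins, not Track It's actual code. The point is that a few lines of assertions pin down both the happy path and the edge cases a demo never exercises.

```typescript
// Minimal sketch of input validation plus unit tests, runnable with ts-node
// or tsx. All names here (MatchEvent, validateMatchEvent) are hypothetical
// illustrations, not Track It's actual code.
import assert from "node:assert";

interface MatchEvent {
  minute: number; // minute of play the event occurred in
  player: string; // name of the player involved
}

// Reject the inputs a happy-path demo never sends.
function validateMatchEvent(event: MatchEvent): string[] {
  const errors: string[] = [];
  if (!Number.isInteger(event.minute) || event.minute < 0 || event.minute > 120) {
    errors.push("minute must be a whole number between 0 and 120");
  }
  if (event.player.trim() === "") {
    errors.push("player name must not be blank");
  }
  return errors;
}

// Happy path: the case the prompt writer (and the AI) already imagined.
assert.deepStrictEqual(validateMatchEvent({ minute: 43, player: "Shearer" }), []);

// Edge cases: the real-world inputs that cause expensive surprises later.
assert.ok(validateMatchEvent({ minute: -1, player: "Shearer" }).length > 0);
assert.ok(validateMatchEvent({ minute: 43.5, player: "Shearer" }).length > 0);
assert.ok(validateMatchEvent({ minute: 43, player: "   " }).length > 0);

console.log("all validation checks passed");
```

Tests like these are cheap to write and run on every deployment, so a regression in AI-generated validation logic surfaces immediately rather than in production.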
Confidence comes from knowing it works
When I deployed Track It across staging and production environments, I realised that rigorous testing isn’t just about catching bugs - it’s about confidence. Knowing that every deployment has been verified allows you to innovate freely, add features, and iterate on user feedback to build customer confidence, without fear of breaking critical functionality.
Iteration gets smarter with AI feedback
One unexpected benefit I found while working with AI is that testing - especially end-user testing - actually improves how you work with the AI itself.
Writing tests and identifying edge cases helps you frame better prompts and provide clearer guidance. In other words, testing doesn’t just protect your product - it improves your collaboration with AI and strengthens vibe coding practices.
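Here's a small, hypothetical example of that feedback loop. formatMinute is an invented helper, not Track It's real code: the idea is that once a failing edge-case test exists, it doubles as the clearest prompt you could give the AI.

```typescript
// Sketch: turning an edge case found in testing into precise AI guidance.
// formatMinute is a hypothetical helper, not Track It's actual code.
import assert from "node:assert";

// Prompt v1 (vague): "Show the match minute nicely."
// Prompt v2, derived from the edge-case tests below (precise):
// "Format minutes past regulation as stoppage time: past 45 in the first
//  half as 45+n, past 90 in the second half as 90+n."

function formatMinute(minute: number, half: 1 | 2): string {
  const regulationEnd = half === 1 ? 45 : 90;
  return minute > regulationEnd
    ? `${regulationEnd}+${minute - regulationEnd}`
    : `${minute}`;
}

assert.strictEqual(formatMinute(43, 1), "43");   // happy path
assert.strictEqual(formatMinute(48, 1), "45+3"); // first-half stoppage time
assert.strictEqual(formatMinute(93, 2), "90+3"); // second-half stoppage time
```

Handing the AI the assertions rather than a vague description removes the ambiguity that caused the bug in the first place.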
Conclusion
Vibe coding is transforming the way we build digital products, but it's not a shortcut around reliability. Testing remains essential to deliver a product that truly works for users. By combining rapid AI-driven development with structured QA and UAT (user acceptance testing) practices, you can enjoy the speed and flexibility of vibe coding while ensuring robustness.
If you want to see how I applied these principles while building Track It, including how AI helped accelerate development while still requiring careful oversight, check out my full video walkthrough here.
If you want to connect with Alick and follow his insights on AI-driven product development, head over to his LinkedIn profile.