Vibe Coding Won. Now What?

Vibe coding went from a niche provocation to the dominant paradigm of software development in less than 18 months. Andrej Karpathy, an OpenAI co-founder, coined the term in February 2025; Collins English Dictionary named it 2025 Word of the Year. By early 2026, approximately 92% of US developers were using AI coding tools daily, and 46% of all new code was AI-generated. The adoption battle is over: vibe coding won.

So why does it feel like the victory lap is getting complicated?

What “Winning” Actually Looks Like

The adoption numbers are real. JetBrains surveyed 24,534 developers and found 85% regularly use AI tools, with nearly 90% saving at least one hour per week. AI reduces time-to-pull-request by up to 58%. Teams complete 21% more tasks. On every input metric, AI coding tools have delivered.

But look past the inputs:

  • AI-generated code contains 1.7x more major issues per PR than human-written code
  • 45% of AI-generated code contains OWASP Top-10 security vulnerabilities
  • Code churn increased 41%, and code duplication rose 4x
  • 63% of developers report spending more time debugging AI-generated code than writing it from scratch would have taken
  • Trust in AI code accuracy fell from 43% in 2024 to 33% in 2026

Adoption won. Quality is losing ground.

The Vibe Coding Failure Mode

The original definition of vibe coding was pointed: describe tasks to the AI, accept the output if it runs, iterate through prompts rather than reading the code. The “vibe” is the absence of understanding. For throwaway scripts and solo side projects, that’s fine. For production systems at scale, it’s a compounding bet that you’ll never need to audit, debug, or maintain what you’ve shipped.

The real-world consequences are arriving. One security firm found 69 vulnerabilities, 6 of them critical, in just 15 applications built with popular vibe coding tools. The Enrichlead collapse became a case study in what happens when a vibe-coded product meets actual load and actual security scrutiny. When code was written with intent to accept rather than intent to understand, "debug the system" means reverse-engineering decisions that nobody consciously made.

The Accountability Gap

The biggest structural problem is that vibe coding has outpaced accountability. Non-technical founders and practitioners can now ship production apps without any engineering involvement. That’s a feature until it’s a bug—specifically, until something breaks, is breached, or needs to scale. At that point, there’s no developer with working knowledge of the system to call.

This isn’t a hypothetical for the future. It’s happening now. And for teams with actual engineers, the equivalent is accepting AI output for security-critical paths without review because “it seemed fine” or “we were moving fast.” The velocity is real; so is the exposure.

What “Post-Adoption” Requires

The framing “should we use AI coding tools?” is dead. The live question is: how do you structure the use of AI coding tools so that the output is something you can own?

That reframe changes what you do:

Review is not optional. It’s not about slowing AI down—it’s about staying accountable to what ships. If you can’t review it, you shouldn’t ship it. Tools like Atlassian’s Rovo Dev that enforce standards at generation time (not just at review time) reduce the burden without eliminating accountability.

Track quality separately from velocity. PR throughput went up. But what’s happening to defect escape rate, security scan hits, and time-to-resolve incidents? If you’re only measuring velocity, you’re flying blind on what vibe coding is actually trading away.
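One way to make that tracking concrete is to compute quality metrics such as defect escape rate alongside throughput. A minimal sketch, assuming hypothetical per-release defect counts (the `ReleaseStats` structure and numbers are illustrative, not from any real tool):

```python
from dataclasses import dataclass

@dataclass
class ReleaseStats:
    defects_found_in_review: int  # caught before merge
    defects_found_in_prod: int    # escaped to production

def defect_escape_rate(s: ReleaseStats) -> float:
    """Fraction of all known defects that slipped past pre-merge review."""
    total = s.defects_found_in_review + s.defects_found_in_prod
    return s.defects_found_in_prod / total if total else 0.0

# Example: PR throughput looks healthy, but 12 of 40 defects escaped review.
print(defect_escape_rate(ReleaseStats(28, 12)))  # 0.3
```

Plotting this per release next to PR throughput makes the velocity/quality trade visible instead of invisible.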

Define where vibe coding is in-bounds. Not all code is equal. Boilerplate, tests, docs, internal tooling: vibe away. Security-critical paths, data handling, authentication, payment flows: full understanding required before merge. Make the boundary explicit so it’s a standard, not a debate per PR.
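That boundary can be encoded as a simple path policy rather than relitigated on every PR. A minimal sketch, assuming hypothetical glob patterns and a repo layout invented for illustration:

```python
import fnmatch

# Hypothetical policy: paths where accept-if-it-runs is acceptable,
# versus paths requiring full understanding before merge.
VIBE_OK = ["tests/**", "docs/**", "tools/internal/**", "**/*.md"]
REVIEW_REQUIRED = ["src/auth/**", "src/payments/**", "src/data/**"]

def review_tier(path: str) -> str:
    """Classify a changed file into a review tier; strict patterns win,
    and unknown paths default to the strict tier."""
    if any(fnmatch.fnmatch(path, pat) for pat in REVIEW_REQUIRED):
        return "full-review"
    if any(fnmatch.fnmatch(path, pat) for pat in VIBE_OK):
        return "vibe-ok"
    return "full-review"

print(review_tier("src/auth/login.py"))  # full-review
print(review_tier("docs/setup.md"))      # vibe-ok
```

A CI step that runs this over the diff and requires an extra approval for any `full-review` file turns the standard into enforcement instead of convention.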

Treat AI output as first draft, not final product. The best teams aren’t using AI to replace engineering judgment—they’re using it to generate a starting point that engineers then own. The difference in culture shows up in incident response, in code reviews, in the knowledge that someone on the team can explain why the code does what it does.

Vibe coding won the adoption war. The next phase isn’t a victory lap—it’s figuring out how to use a tool that generates code faster than most organizations can understand it.
