Vibe Coding Won. Now What?

Vibe coding went from niche provocation to the dominant paradigm of software development in less than 18 months. OpenAI co-founder Andrej Karpathy coined the term in February 2025; Collins English Dictionary named it 2025 Word of the Year; and by early 2026, approximately 92% of US developers were using AI coding tools daily, with 46% of all new code AI-generated. The adoption battle is over. Vibe coding won.

So why does it feel like the victory lap is getting complicated?

What “Winning” Actually Looks Like

The adoption numbers are real. JetBrains surveyed 24,534 developers and found 85% regularly use AI tools, with nearly 90% saving at least one hour per week. AI reduces time-to-pull-request by up to 58%. Teams complete 21% more tasks. On every input metric, AI coding tools have delivered.

But look past the inputs:

  • AI-generated code contains 1.7x more major issues per PR than human-written code
  • 45% of AI-generated code contains OWASP Top-10 security vulnerabilities
  • Code churn increased 41%, and code duplication rose 4x
  • 63% of developers report spending more time debugging AI-generated code than writing it from scratch would have taken
  • Trust in AI code accuracy fell from 43% in 2024 to 33% in 2026

Adoption won. Quality is losing ground.

The Vibe Coding Failure Mode

The original definition of vibe coding was pointed: describe tasks to the AI, accept the output if it runs, iterate through prompts rather than reading the code. The “vibe” is the absence of understanding. For throwaway scripts and solo side projects, that’s fine. For production systems at scale, it’s a compounding bet that you’ll never need to audit, debug, or maintain what you’ve shipped.

The real-world consequences are arriving. One security firm found 69 vulnerabilities—6 critical—in just 15 applications built with popular vibe coding tools. The Enrichlead collapse became a case study for what happens when a vibe-coded product hits actual load and actual security scrutiny. When the code was written with intent to accept rather than intent to understand, “debug the system” requires reverse-engineering decisions nobody made.

The Accountability Gap

The biggest structural problem is that vibe coding has outpaced accountability. Non-technical founders and practitioners can now ship production apps without any engineering involvement. That’s a feature until it’s a bug—specifically, until something breaks, is breached, or needs to scale. At that point, there’s no developer with working knowledge of the system to call.

This isn’t a hypothetical for the future. It’s happening now. And for teams with actual engineers, the equivalent is accepting AI output for security-critical paths without review because “it seemed fine” or “we were moving fast.” The velocity is real; so is the exposure.

What “Post-Adoption” Requires

The framing “should we use AI coding tools?” is dead. The live question is: how do you structure the use of AI coding tools so that the output is something you can own?

That reframe changes what you do:

Review is not optional. It’s not about slowing AI down—it’s about staying accountable to what ships. If you can’t review it, you shouldn’t ship it. Tools like Atlassian’s Rovo Dev that enforce standards at generation time (not just at review time) reduce the burden without eliminating accountability.

Track quality separately from velocity. PR throughput went up. But what’s happening to defect escape rate, security scan hits, and time-to-resolve incidents? If you’re only measuring velocity, you’re flying blind on what vibe coding is actually trading away.
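A minimal sketch of what "tracking quality separately" can mean in practice: compute a defect escape rate alongside throughput, so rising velocity can't mask rising leakage. All numbers and names below are illustrative, not figures from this article.

```python
def defect_escape_rate(defects_found_in_prod: int, total_defects: int) -> float:
    """Fraction of defects that slipped past review and CI into production."""
    if total_defects == 0:
        return 0.0
    return defects_found_in_prod / total_defects

# A team completing more tasks per sprint can still be losing ground
# if escapes grow faster than throughput.
before = defect_escape_rate(defects_found_in_prod=8, total_defects=40)   # 0.20
after = defect_escape_rate(defects_found_in_prod=15, total_defects=50)  # 0.30
print(f"escape rate before: {before:.2f}, after: {after:.2f}")
```

The point isn't this particular formula; it's that quality gets its own dashboard line instead of being inferred from velocity.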

Define where vibe coding is in-bounds. Not all code is equal. Boilerplate, tests, docs, internal tooling: vibe away. Security-critical paths, data handling, authentication, payment flows: full understanding required before merge. Make the boundary explicit so it’s a standard, not a debate per PR.
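One way to make that boundary a standard rather than a per-PR debate is to encode it: a check that flags changed files in "understand before merge" zones. The path patterns below are hypothetical examples, not prescriptions from the article.

```python
from fnmatch import fnmatch

# Hypothetical "full understanding required" zones: security-critical
# paths, data handling, authentication, payment flows.
FULL_REVIEW_PATHS = [
    "src/auth/*",
    "src/payments/*",
    "src/data/migrations/*",
]

def requires_full_review(changed_path: str) -> bool:
    """True if this file falls inside the explicit review boundary."""
    return any(fnmatch(changed_path, pattern) for pattern in FULL_REVIEW_PATHS)

print(requires_full_review("src/auth/login.py"))    # True
print(requires_full_review("tests/test_utils.py"))  # False: vibe away
```

In a real setup the same idea is often expressed through existing tooling, such as required-reviewer rules keyed on file paths, rather than custom scripts; what matters is that the line is written down and enforced.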

Treat AI output as first draft, not final product. The best teams aren’t using AI to replace engineering judgment—they’re using it to generate a starting point that engineers then own. The difference in culture shows up in incident response, in code reviews, in the knowledge that someone on the team can explain why the code does what it does.

Vibe coding won the adoption war. The next phase isn’t a victory lap—it’s figuring out how to use a tool that generates code faster than most organizations can understand it.
