Vibe Coding: The Most Dangerous Idea in Software Development

Andrej Karpathy—former director of AI at Tesla and OpenAI co-founder—coined a term last year that’s become the most divisive concept in software development: “vibe coding.”

His description was disarmingly casual: an approach “where you fully give in to the vibes, embrace exponentials, and forget that the code even exists.” In practice, it means letting AI tools take the lead on implementation while you focus on describing what you want rather than how to build it. Accept the suggestions, trust the output, don’t overthink the details.

The post got 5 million views on X. A quarter of Y Combinator startups now report that over 95% of their codebases are AI-generated. Nearly half of surveyed software engineers say they’re “keeping up” with AI tools. The vibe coding movement is real, it’s growing, and it’s already producing casualties.

What Vibe Coding Actually Looks Like

Let me distinguish vibe coding from simply using AI tools. There’s a spectrum:

Traditional development with AI assistance: You write code, AI suggests completions. You accept useful suggestions, reject bad ones. You understand every line that ships.

AI-augmented development: You describe tasks, AI generates code. You review, modify, and verify. You understand the code before it ships, even if you didn’t write every line.

Vibe coding: You describe what you want. AI generates code. You run it. If it works, you ship it. If it doesn’t, you tell the AI what went wrong and let it try again. You don’t necessarily read or understand the code—you just care that it produces the right output.

The key distinction is that last point: in vibe coding, understanding the code is optional. You’re evaluating outputs, not implementations. The code is an artifact you don’t need to engage with.

For prototyping, this can be thrilling. You can build functional applications in hours that would have taken days or weeks. Ideas that would have died as “maybe someday” projects can become working demos in an afternoon.

But thrilling and safe aren’t the same thing.

The Open Source Casualty

The first major casualty of vibe coding is already visible: open source projects.

A recent paper titled “Vibe Coding Kills Open Source” documents how AI tools are disrupting the relationship between developers and open source maintainers. The mechanism is straightforward: when AI installs dependencies and writes integration code, developers interact less with the actual project documentation, community, and maintainers.

Tailwind Labs CEO Adam Wathan provided a concrete example: he attributed the layoff of three employees directly to AI coding tools. The reason was that traffic to Tailwind’s documentation had dropped 40% from its early-2023 levels. Developers weren’t visiting the docs because AI tools were handling Tailwind integration for them.

This matters more than it might seem. Documentation traffic is how many open source projects discover customers for their commercial products. It’s how they measure adoption, understand user needs, and sustain funding. When AI tools become the intermediary, the open source project becomes invisible—used but not seen, depended upon but not supported.

If this pattern continues, we could see a wave of open source projects losing funding and maintainer attention even as their actual usage increases. The projects become more critical to more codebases while simultaneously becoming less sustainable.

The Quality Problem

Vibe coding’s second problem is quality. When you don’t read the code, you can’t evaluate its quality—you can only evaluate its behavior in the scenarios you test.

This is fine for throwaway prototypes. It’s dangerous for anything that will be maintained, scaled, or relied upon.

AI-generated code tends to:

Solve the immediate problem without considering the broader system. The code works for your current test case but may not handle edge cases, concurrent access, or unexpected inputs.

Introduce subtle performance issues. The generated code often prioritizes correctness over efficiency. N+1 database queries, unnecessary memory allocations, and suboptimal algorithms are common.

Create security vulnerabilities that look correct. Input validation that covers most cases but not all. Authentication checks that work in isolation but have race conditions. SQL queries that are parameterized in some places but not others.

Accumulate technical debt invisibly. Each AI-generated solution is locally reasonable but globally incoherent. The codebase becomes a patchwork of different patterns, conventions, and approaches.
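The N+1 pattern is a good example of a flaw that survives output-only evaluation. As a minimal sketch, assuming a hypothetical SQLite schema of users and posts (the table names and data are invented for illustration), here is the per-row version next to its batched equivalent; both produce identical output, so only reading the code reveals the difference:

```python
import sqlite3

# Hypothetical schema for illustration: users who author posts.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE posts (id INTEGER PRIMARY KEY, author_id INTEGER);
    INSERT INTO users VALUES (1, 'alice'), (2, 'bob');
    INSERT INTO posts VALUES (10, 1), (11, 2), (12, 1);
""")

def author_names_n_plus_one(post_ids):
    # One query per post: N round trips for N posts. The output is
    # correct, so the cost is invisible to output-only testing.
    names = []
    for pid in post_ids:
        row = conn.execute(
            "SELECT u.name FROM posts p JOIN users u ON u.id = p.author_id "
            "WHERE p.id = ?", (pid,)).fetchone()
        names.append(row[0])
    return names

def author_names_batched(post_ids):
    # One query for all posts: same result, constant round trips.
    placeholders = ", ".join("?" * len(post_ids))
    rows = conn.execute(
        "SELECT p.id, u.name FROM posts p JOIN users u ON u.id = p.author_id "
        f"WHERE p.id IN ({placeholders})", post_ids).fetchall()
    by_id = dict(rows)
    return [by_id[pid] for pid in post_ids]
```

Against an in-memory database the two are indistinguishable; against a remote database under load, the first version is N network round trips per request.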

When you vibe code, these issues accumulate silently. The application works, so everything seems fine. But underneath, the codebase is becoming increasingly fragile and unmaintainable.
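To make the security point concrete, here is a hedged sketch (using Python’s sqlite3 with an invented users table) of a query that is parameterized in one place but not another. It passes the obvious happy-path test, which is all that output-only evaluation checks:

```python
import sqlite3

# Invented table and data, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, role TEXT);
    INSERT INTO users VALUES (1, 'alice', 'admin'), (2, 'bob', 'viewer');
""")

def find_user(name, role):
    # name is bound safely, but role is interpolated into the string.
    # At a glance the query "looks parameterized" -- it isn't, fully.
    query = f"SELECT name FROM users WHERE name = ? AND role = '{role}'"
    return conn.execute(query, (name,)).fetchall()

# The happy path works, so everything seems fine:
find_user("alice", "admin")           # [('alice',)]

# A crafted role string defeats both conditions and returns every user:
find_user("nobody", "x' OR '1'='1")
```

The injected value rewrites the WHERE clause to `... OR '1'='1'`, which is always true. Nothing in the function’s normal behavior hints at the hole; only reading the query reveals it.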

The Skill Atrophy Risk

There’s a deeper concern: what happens to developer skills when understanding code becomes optional?

Programming is a skill that requires practice. Reading code, understanding data flow, debugging unexpected behavior, reasoning about edge cases—these are capabilities that develop through repetition and struggle. When AI handles the implementation and you only evaluate outputs, those skills atrophy.

This might not matter if AI tools were perfectly reliable. But they’re not. They fail, they produce wrong answers confidently, and they struggle with complex systems. When an AI tool fails and you’ve lost the skills to understand and fix the code it generated, you’re stuck.

The survey data is telling: 17.5% of developers are opting out of AI tools entirely, citing that the tools aren’t advanced enough or require too much learning time to be effective. These aren’t technophobes—they’re experienced developers who’ve evaluated the tradeoff and decided it doesn’t work for them yet.

When Vibe Coding Makes Sense

I’m not saying vibe coding is always wrong. There are legitimate use cases:

Personal projects and prototypes: If you’re building something for yourself and the stakes are low, vibe code away. The speed advantage is real, and the downsides are limited to your own frustration when things break.

Throwaway scripts: One-time data transformations, quick automation tasks, scripts that will run once and be discarded. The cost of failure is low, and the speed of development matters.

Exploring ideas: When you’re trying to figure out whether an idea is worth pursuing, a vibe-coded prototype can answer the question quickly. Just don’t confuse the prototype with a foundation for production software.

Learning by output: For beginners, vibe coding can demonstrate what’s possible and inspire learning. The danger is stopping at “it works” instead of understanding why it works.

When Vibe Coding Is Dangerous

The danger zones are clearer:

Production applications: Any software that users depend on, that handles sensitive data, or that needs to be maintained over time should not be vibe coded.

Team projects: When multiple people need to work on the same codebase, everyone needs to understand the code. Vibe-coded contributions create maintenance burdens for the whole team.

Anything with security implications: Authentication, authorization, data handling, payment processing—these require careful, intentional implementation with full understanding.

Infrastructure and operations: Vibe-coded infrastructure can fail in ways that are hard to diagnose because nobody understood the configuration in the first place.

The Middle Path

The responsible approach is somewhere between vibe coding and refusing to use AI at all:

Use AI to accelerate, not to replace understanding. Let AI write first drafts, but read and understand the code before shipping it. This captures most of the speed benefit while maintaining quality.

Review everything that matters. Prototypes can be vibe coded. Production code should be reviewed. The bar for review should match the stakes.

Maintain your skills. Even when AI handles implementation, spend time reading code, debugging manually, and understanding systems. These skills are insurance against AI failure.

Support open source deliberately. If your AI tools are using open source dependencies, make an effort to engage with those projects—read their docs, file issues, contribute, and consider sponsoring. Don’t let AI make open source invisible.

Be honest about what you understand. If you ship code you can’t explain, acknowledge that risk. Don’t pretend vibe-coded output is the same as understood code.

The Bigger Picture

Vibe coding is seductive because it removes the hardest part of programming: the thinking. It lets you focus on what you want rather than how to build it. For many tasks, this is genuinely productive.

But programming is hard for a reason. The complexity isn’t artificial—it reflects the genuine difficulty of building reliable, secure, maintainable software systems. Abstracting away that complexity doesn’t eliminate it; it hides it. And hidden complexity has a way of emerging at the worst possible time.

The most dangerous thing about vibe coding isn’t that it produces bad code—it’s that it produces code that works until it doesn’t, built by people who can’t fix it when it breaks.

Use AI tools. Let them make you faster. But don’t give in to the vibes when the stakes are real. Software development requires understanding, and understanding requires engaging with the code—even when AI offers to handle it for you.
