JetBrains Air and the Case for the Agent-Native IDE

JetBrains Air, launched in public preview in early March, is one of the more interesting answers yet to a question the AI tooling market keeps circling: do we really want agents bolted onto traditional editors, or do we eventually need environments designed around them from the start?

Air is betting on the second path.

Why Air Is Interesting

Most current AI coding tools still inherit the shape of the pre-AI IDE. There is a primary editor, maybe a chat pane, maybe an agent sidebar, and the user is still clearly the central operator of a mostly traditional workspace.

Air takes a different angle. It is built around multiple concurrent agents and the orchestration of those agents inside the development environment. JetBrains is positioning it less like “an editor with AI” and more like “a workspace for directing and integrating agent work.”

That distinction may sound subtle, but product categories often shift this way. At first, new capabilities get added to old containers. Later, someone designs a new container around the capability itself.

Why This Might Matter

JetBrains has an advantage here because it understands deep code context extremely well. The company has spent decades building language-aware, refactoring-heavy tooling. Air combines that lineage with multiple agents, terminal integration, Git context, and preview workflows.

In practical terms, that points to an environment where the developer is less focused on typing every change and more focused on:

  • splitting work into the right parallel units
  • comparing outputs from different agents
  • reviewing and integrating results
  • managing context across multiple active threads

That is a better fit for where agentic development is heading than the classic “one prompt, one answer” interaction model.
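The orchestration loop described above can be sketched in a few lines. Air's actual agent API is not public, so the agent names and the `run_agent` signature below are invented purely for illustration; the point is the shape of the workflow: fan tasks out to concurrent agents, then collect everything for a single human review pass.

```python
import asyncio

# Hypothetical stand-in for a real coding agent; Air's API is not public,
# so this name and signature are illustrative only.
async def run_agent(name: str, task: str) -> dict:
    # A real agent would plan, edit files, and run tests here.
    await asyncio.sleep(0)  # placeholder for actual agent work
    return {"agent": name, "task": task, "result": f"{name} finished: {task}"}

async def orchestrate(tasks: list[str]) -> list[dict]:
    # Fan the work units out to separate agents in parallel, then gather
    # every result so the developer reviews and integrates them in one pass.
    jobs = [run_agent(f"agent-{i}", t) for i, t in enumerate(tasks)]
    return await asyncio.gather(*jobs)

results = asyncio.run(orchestrate(["fix flaky test", "update API docs"]))
for r in results:
    print(r["agent"], "->", r["result"])
```

Even in this toy form, the developer's job is visible: choosing the task split and reviewing the gathered results, not typing the changes.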

The Strategic Bet

Air also reflects a larger market split that is becoming clearer:

  • some vendors are making existing editors more agent-capable
  • some are building orchestration layers around agent workflows
  • some, like JetBrains here, are experimenting with environments where agents are first-class from the start

That matters because the UI and workflow assumptions of a traditional IDE may not be the best fit once multiple agents are active at the same time. If the real job of the developer increasingly becomes orchestration, then the tool should probably optimize for orchestration.

The same thing happened in other software categories. Early tools often extend the old paradigm for as long as possible. Eventually the new usage pattern becomes important enough that the old container starts to feel awkward.

What Could Go Wrong

Agent-native tooling also has risks. It is easy to over-index on concurrency and end up with a more complicated environment than most teams actually need. There is a difference between enabling multiple agents and making multi-agent work comprehensible.

The hard question is not just how many agents can run at once. The harder questions are:

  • how clearly can I tell what each agent is doing?
  • how do I compare outputs?
  • how do I merge work without creating chaos?
  • how do I preserve enough context that humans still understand the system?

An agent-native IDE that cannot answer those questions well is just a more futuristic way to get overwhelmed.
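One way to make those questions concrete: keeping multi-agent work comprehensible is largely a bookkeeping problem. The sketch below is not anything Air ships; it is a minimal, invented ledger showing the kind of per-agent state an operator would need to answer "what is each agent doing right now?"

```python
from dataclasses import dataclass, field

# Illustrative only: a minimal record of what one agent is doing.
@dataclass
class AgentRecord:
    name: str
    task: str
    status: str = "running"  # running | done | failed
    events: list[str] = field(default_factory=list)

class AgentLedger:
    """Tracks every active agent so multi-agent work stays legible."""

    def __init__(self) -> None:
        self.records: dict[str, AgentRecord] = {}

    def start(self, name: str, task: str) -> None:
        self.records[name] = AgentRecord(name, task)

    def log(self, name: str, event: str) -> None:
        # An audit trail per agent: the context humans need to review later.
        self.records[name].events.append(event)

    def finish(self, name: str, status: str = "done") -> None:
        self.records[name].status = status

    def summary(self) -> dict:
        # One glance answers: who is doing what, and how far along is it?
        return {n: (r.task, r.status) for n, r in self.records.items()}
```

A tool that surfaces something like `summary()` at all times is answering the comprehensibility questions; one that only exposes raw agent output is not.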

The Bigger Takeaway

Even if Air itself evolves significantly before broad adoption, its public preview is a useful market signal. It suggests the next phase of AI developer tooling will not just be a fight over models or extensions. It will also be a fight over what the primary development environment should look like when agents are normal.

That is a more fundamental competition than “whose autocomplete is better?”

JetBrains Air matters because it treats the agent not as an extra feature but as a core design assumption. Whether or not this exact product becomes dominant, that assumption is likely to shape the next generation of development tools.
