Cursor in JetBrains and the End of IDE Lock-In

One of the quietest but most important developer-tooling stories of March 2026 is that Cursor is now available directly inside JetBrains IDEs through the Agent Client Protocol (ACP) registry.

At first glance, this looks like a convenience feature. Cursor users can keep their preferred agent while staying in IntelliJ IDEA, PyCharm, or WebStorm. JetBrains users get access to a popular agentic workflow without switching editors. Nice, but not transformative.

The deeper story is that the market is slowly breaking the assumption that your AI agent and your IDE must come from the same vendor.

Why ACP Matters

JetBrains describes ACP as the equivalent of what the Language Server Protocol did for programming language tooling: an open standard that lets any compatible agent work in any compatible editor. Cursor’s implementation uses its CLI as the bridge, connecting over JSON-RPC and newline-delimited messaging. In plain terms, the editor becomes the client and the coding agent becomes a portable service.
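To make the transport concrete, here is a minimal sketch of that client-agent framing: one JSON-RPC 2.0 message per line. The method name and parameters below are illustrative placeholders, not the actual ACP schema.

```python
import json

def encode_message(method: str, params: dict, msg_id: int) -> str:
    """Frame a JSON-RPC 2.0 request as a single newline-delimited line,
    the style of framing ACP-like integrations use between editor and agent."""
    msg = {"jsonrpc": "2.0", "id": msg_id, "method": method, "params": params}
    return json.dumps(msg) + "\n"

def decode_line(line: str) -> dict:
    """Parse one newline-delimited JSON-RPC message from the other side."""
    return json.loads(line)

# The editor (client) asks the agent (service) to open a session.
# "session/new" and its params are hypothetical, for illustration only.
wire = encode_message("session/new", {"cwd": "/repo"}, msg_id=1)
reply = decode_line(wire)
```

The point of the sketch is the shape of the contract, not the vocabulary: because each side only needs to speak line-delimited JSON-RPC, the same agent process can sit behind any editor that implements the client half.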

That is strategically important because the first generation of AI coding adoption was defined by bundled experiences:

  • Cursor meant using the Cursor editor
  • Copilot meant living inside GitHub and VS Code
  • Claude Code meant terminal-native workflows

ACP starts to unwind that bundling. If an agent can move between environments, then developers can choose their editor for editing and their agent for reasoning. That sounds obvious, but it changes the competitive map.

What Changes for Teams

For individual developers, the value is flexibility. Many teams have JetBrains-heavy workflows because those IDEs are strong on refactoring, inspection, debugging, and language-specific depth. Cursor brings a more AI-native, agentic experience. With ACP, teams do not have to choose one set of strengths and give up the other.

For engineering leaders, the larger value is reduced tool lock-in.

If your team can switch agents without retraining everyone onto a new editor, procurement decisions get easier. If a model provider improves dramatically, or a security policy forces a tooling change, your AI workflow does not have to be rebuilt from scratch around a new IDE. That is a healthier market structure than the one we had a year ago.

What Does Not Magically Improve

Protocol portability does not solve the hard parts of AI adoption by itself.

Your agent can move across editors, but:

  • It still needs permissions and guardrails
  • It still needs repo-specific context to perform well
  • It still needs review and verification after it proposes changes

ACP is about interoperability, not intelligence. It gives teams more freedom to assemble a stack. It does not guarantee better output from the stack.

That said, interoperability matters more than people think. Standards change vendor incentives. Once users can move agents more easily, providers have to compete on capability, trust, workflow fit, and price instead of relying as heavily on captive ecosystems.

The More Interesting Competitive Angle

Cursor joining the ACP registry also hints at what the next phase of AI tool competition might look like.

Instead of “Which IDE wins?”, the question becomes:

  • Which agent works best for this team?
  • Which editor gives us the best human productivity?
  • Which protocol layer keeps the two loosely coupled?

That is a more mature framing than the current market, where teams often end up choosing a whole bundled workflow because one part of it is attractive.

This is why the JetBrains integration matters even if you never personally use Cursor. It is evidence that coding agents are becoming more like infrastructure components and less like monolithic products.

The AI tooling market still has plenty of lock-in. But March 2026 is starting to show what the exit ramps look like.
