Cursor in JetBrains and the End of IDE Lock-In

One of the quietest but most important developer-tooling stories of March 2026 is that Cursor is now available directly inside JetBrains IDEs through the Agent Client Protocol (ACP) registry.

At first glance, this looks like a convenience feature. Cursor users can keep their preferred agent while staying in IntelliJ IDEA, PyCharm, or WebStorm. JetBrains users get access to a popular agentic workflow without switching editors. Nice, but not transformative.

The deeper story is that the market is slowly breaking the assumption that your AI agent and your IDE must come from the same vendor.

Why ACP Matters

JetBrains describes ACP as doing for coding agents what the Language Server Protocol did for programming language tooling: an open standard that lets any compatible agent work in any compatible editor. Cursor’s implementation uses its CLI as the bridge, connecting over JSON-RPC with newline-delimited messages. In plain terms, the editor becomes the client and the coding agent becomes a portable service.
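To make the transport concrete, here is a minimal sketch of what newline-delimited JSON-RPC framing looks like, assuming the editor writes one JSON object per line to the agent process. The method name and parameters are illustrative, not taken from the ACP specification.

```python
import json

def encode_request(method, params, msg_id=1):
    # Frame a JSON-RPC 2.0 request as a single newline-terminated line,
    # the way an ACP client (the editor) might write it to the agent's stdin.
    # Method name here is hypothetical, for illustration only.
    msg = {"jsonrpc": "2.0", "id": msg_id, "method": method, "params": params}
    return json.dumps(msg) + "\n"

def decode_stream(text):
    # Split a stream of newline-delimited JSON into parsed messages,
    # skipping blank lines. Each line is one complete JSON-RPC message.
    return [json.loads(line) for line in text.splitlines() if line.strip()]

# Example round trip: the editor sends a prompt, the agent parses it.
wire = encode_request("session/prompt", {"text": "rename this function"})
messages = decode_stream(wire)
print(messages[0]["method"])
```

The appeal of this framing is its simplicity: any process that can read and write lines of JSON can participate, which is exactly what makes the agent swappable across editors.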

That is strategically important because the first generation of AI coding adoption was defined by bundled experiences:

  • Cursor meant using the Cursor editor
  • Copilot meant living inside GitHub and VS Code
  • Claude Code meant terminal-native workflows

ACP starts to unwind that bundling. If an agent can move between environments, then developers can choose their editor for editing and their agent for reasoning. That sounds obvious, but it changes the competitive map.

What Changes for Teams

For individual developers, the value is flexibility. Many teams have JetBrains-heavy workflows because those IDEs are strong on refactoring, inspection, debugging, and language-specific depth. Cursor brings a more AI-native, agentic experience. With ACP, teams do not have to choose one set of strengths and give up the other.

For engineering leaders, the larger value is reduced tool lock-in.

If your team can switch agents without retraining everyone onto a new editor, procurement decisions get easier. If a model provider improves dramatically, or a security policy forces a tooling change, your AI workflow does not have to be rebuilt from scratch around a new IDE. That is a healthier market structure than the one we had a year ago.

What Does Not Magically Improve

Protocol portability does not solve the hard parts of AI adoption by itself.

Your agent can move across editors, but:

  • It still needs permissions and guardrails
  • It still needs repo-specific context to perform well
  • It still needs review and verification after it proposes changes

ACP is about interoperability, not intelligence. It gives teams more freedom to assemble a stack. It does not guarantee better output from the stack.

That said, interoperability matters more than people think. Standards change vendor incentives. Once users can move agents more easily, providers have to compete on capability, trust, workflow fit, and price instead of relying as heavily on captive ecosystems.

The More Interesting Competitive Angle

Cursor joining the ACP registry also hints at what the next phase of AI tool competition might look like.

Instead of “Which IDE wins?”, the question becomes:

  • Which agent works best for this team?
  • Which editor gives us the best human productivity?
  • Which protocol layer keeps the two loosely coupled?

That is a more mature framing than the current market, where teams often end up choosing a whole bundled workflow because one part of it is attractive.

This is why the JetBrains integration matters even if you never personally use Cursor. It is evidence that coding agents are becoming more like infrastructure components and less like monolithic products.

The AI tooling market still has plenty of lock-in. But March 2026 is starting to show what the exit ramps look like.
