GitHub Copilot Goes Fully Agentic in JetBrains: Hooks, MCP, and Instruction Files

In mid-March 2026, GitHub promoted a major bundle of Copilot agentic capabilities to general availability in JetBrains IDEs, moving key features out of preview for day-to-day use.

The changelog reads like a checklist of what “serious agentic IDE support” now means:

  • Custom agents and sub-agents, plus a planning-oriented agent workflow for breaking down complex work
  • Agent hooks in public preview, so teams can run custom commands at defined points in an agent session
  • MCP auto-approve at server and tool granularity to reduce approval friction when policies allow it
  • Automatic discovery of AGENTS.md and CLAUDE.md instruction files during agent sessions
  • Auto model selection generally available, with Copilot choosing models based on availability and performance
  • An extended reasoning experience for models that expose more explicit thinking, such as Codex-class workflows

Why JetBrains Users Should Care

JetBrains IDEs are where many teams live for deep language support, refactoring, and navigation. Agent features only matter if they meet developers in that workflow, not as a separate tool they resent switching to.

GA here signals GitHub believes the integration is stable enough to be a default expectation, not an experiment.

Hooks Are the Quiet Power Feature

Hooks are easy to underestimate. They are how organizations turn generic agent behavior into org-specific guardrails:

  • run policy checks before tool use
  • inject logging after actions
  • block or warn on sensitive paths
  • attach internal metadata for audit trails

If your company is serious about agents, hooks are often the bridge between “cool demo” and “allowed in production repos.”
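
Copilot's actual hook configuration isn't documented here, so the following is only a sketch of the idea, assuming the contract common to agent hook systems: the hook script receives the tool call as JSON on stdin, exit code 0 allows the action, and a nonzero exit blocks it. The field names (`tool`, `path`) are illustrative, not Copilot's schema.

```python
"""Hypothetical pre-tool-use hook: block agent writes to sensitive paths.

Assumes a common agent-hook contract (tool call as JSON on stdin,
nonzero exit blocks the action). Field names are illustrative.
"""
import json
import sys

# Paths the agent should never touch without human review.
SENSITIVE_PREFIXES = (".github/workflows/", "secrets/", ".env")


def should_block(call: dict) -> bool:
    """Return True if this tool call writes under a sensitive path."""
    if call.get("tool") not in {"write_file", "edit_file"}:
        return False  # read-only tools pass through
    return str(call.get("path", "")).startswith(SENSITIVE_PREFIXES)


def main() -> int:
    """What a real hook script would run: read the call, allow or block."""
    call = json.load(sys.stdin)
    if should_block(call):
        print(f"blocked: {call.get('path')} is a sensitive path", file=sys.stderr)
        return 2  # nonzero exit tells the agent runner to stop the action
    return 0

# A real hook script would end with: sys.exit(main())
```

The same shape extends to the other bullets above: emit a log line instead of blocking, or attach metadata to the call before returning 0.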

Instruction Files Are Becoming Standard

Support for AGENTS.md and CLAUDE.md discovery is another step toward repo-native agent configuration. The repo becomes the source of truth for how agents should behave, not a pile of private chat prompts.

That is good for consistency and onboarding. It also means those files deserve review, versioning, and ownership like any other critical config.
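
A minimal sketch of what a repo-level AGENTS.md might contain; the commands, paths, and section names are placeholders, not a prescribed format:

```markdown
# AGENTS.md

## Build & test
- Build: `./gradlew build`
- Run tests before proposing a commit: `./gradlew test`

## Conventions
- Follow the repo's existing lint config; do not reformat unrelated files.
- Never edit files under `secrets/` or `.github/workflows/` without asking.

## Review expectations
- Keep changes small; one concern per pull request.
```

Because agents read this file on every session, a stale instruction here propagates into every generated change, which is exactly why it deserves the same review and ownership as CI config.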

MCP Auto-Approve Needs Policy, Not Optimism

MCP auto-approve can remove friction, but friction is sometimes the security model. Teams should pair it with:

  • least-privilege MCP servers
  • clear allow lists
  • logging and traceability
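
The pairing above can be made concrete as a small policy check. The server names, tool names, and allow-list shape below are assumptions for illustration, not Copilot's actual auto-approve configuration:

```python
"""Sketch of a least-privilege MCP auto-approve policy.

Maps server name -> tools approved to run without a prompt; anything
not listed falls back to manual approval. Names and shape are
illustrative, not a real Copilot/MCP configuration schema.
"""
# Read-only tools are reasonable auto-approve candidates; mutating ones are not.
AUTO_APPROVE = {
    "github": {"list_issues", "get_pull_request"},
    "internal-docs": {"search"},
}


def approval(server: str, tool: str) -> str:
    """Return 'auto' for allow-listed (server, tool) pairs, else 'ask'."""
    return "auto" if tool in AUTO_APPROVE.get(server, set()) else "ask"
```

Logging every auto-approved call alongside a check like this is what keeps the audit trail intact once the approval prompt is gone.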

GitHub’s own agent logging improvements elsewhere in March are part of the same puzzle: speed and control have to be designed together.

The Bottom Line

Copilot’s JetBrains GA is not just feature expansion. It is a statement that agentic workflows are becoming baseline IDE functionality for a large slice of the market.

If you lead an engineering org, the question is no longer whether JetBrains shops will use agents. It is whether your standards for hooks, instruction files, and MCP permissions are ready for that reality.
