The 32% Problem: Why Most Engineering Orgs Are Flying Blind on AI Governance

Here’s a statistic that should concern every engineering leader: only 32% of organizations have formal AI governance policies for their engineering teams. Another 41% rely on informal guidelines, and 27% have no governance at all.

Meanwhile, 91% of engineering leaders report that AI has improved developer velocity and code quality. But here’s the kicker: only 25% of them have actual data to support that claim.

We’re flying blind. Most organizations have adopted AI tools without the instrumentation to know whether they’re helping or hurting, and without the policies to manage the risks they introduce.

What “Governance” Actually Means

Let me be clear about what I mean by governance, because the word often triggers an allergic reaction in engineering organizations. Governance isn’t bureaucracy. It’s not about slowing things down or adding approval processes to every AI interaction.

Governance is about answering three fundamental questions:

  1. What are we allowing? Which AI tools are sanctioned, and for what purposes?
  2. What are we measuring? How do we know if AI adoption is actually helping?
  3. What are we protecting? What guardrails prevent AI tools from introducing unacceptable risks?

Without answers to these questions, you’re not making informed decisions about AI adoption—you’re just hoping for the best.

The Risks of Ungoverned AI Adoption

Let me walk through some of the risks that organizations without AI governance are implicitly accepting.

Security and Data Exposure

AI coding tools need context to be useful. They work best when they can see your codebase, understand your patterns, and access relevant documentation. But that context often includes sensitive information.

I wrote recently about Moltbot and the security considerations of giving an AI assistant access to your CRM, email, and calendar. The same concerns apply to coding tools. When developers paste code into ChatGPT, share proprietary algorithms with Copilot, or feed sensitive data into AI assistants, that information leaves your control.

Without governance, you have no visibility into what data is flowing to AI providers. You might be inadvertently training their models on your intellectual property, exposing customer data, or leaking security-sensitive implementation details.

Code Quality and Technical Debt

AI tools generate code quickly. That’s the point. But speed and quality aren’t the same thing.

Without governance, you have no way to track whether AI-generated code is accumulating technical debt faster than human-written code. You can’t measure whether AI suggestions are introducing subtle bugs, security vulnerabilities, or architectural problems that will cost you later.

The 91% of leaders who believe AI is improving quality might be right. But without measurement, they might also be watching technical debt accumulate invisibly until it becomes a crisis.

Intellectual Property Questions

When an AI tool generates code, who owns it? The answer isn’t as simple as you might think.

AI models are trained on vast amounts of code, some of it copyrighted or licensed under specific terms. When those models generate output, there’s an open question about whether that output might infringe on the intellectual property rights of the training data sources.

This isn’t a theoretical concern. GitHub Copilot has been the subject of class-action lawsuits over exactly this issue. Without governance policies that address IP considerations, your organization might be unknowingly accumulating legal risk with every AI-generated line of code.

Skill Atrophy

If developers rely heavily on AI tools for certain types of work, they may lose proficiency in those areas over time. This might be fine for truly commoditized tasks, but it’s concerning for skills that matter when AI tools fail or aren’t available.

Without governance that considers skill development, you might be trading short-term productivity for long-term capability loss. When the AI tools change, fail, or become unavailable, will your team still be able to function effectively?

A Practical Governance Framework

Governance doesn’t have to be heavy-handed. Here’s a practical framework that provides visibility and protection without creating bureaucratic overhead.

1. Establish a Tool Inventory

Start by knowing what AI tools your team is using. This sounds basic, but many organizations have no central visibility into AI adoption. Developers might be using ChatGPT, Copilot, Cursor, Claude, Gemini, and various other tools without any organizational awareness.

Create a simple registry of sanctioned AI tools. Define which tools are approved for which purposes. This doesn’t mean banning everything else—it means creating clarity about what’s expected.

For tools that handle code or sensitive data, establish minimum requirements:

  • Data handling and retention policies of the provider
  • Security certifications and compliance posture
  • Terms of service regarding training on customer data
  • Intellectual property provisions
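
A registry like this can be as simple as a structured file checked into a shared repo. Here's a minimal sketch in Python; the tool names are examples, and the field values (approved uses, certifications, training behavior) are illustrative placeholders, not statements about any actual vendor's policies:

```python
from dataclasses import dataclass, field

@dataclass
class SanctionedTool:
    """One entry in the AI tool registry."""
    name: str
    approved_uses: list                 # e.g. ["code completion", "doc drafts"]
    trains_on_customer_data: bool       # per the provider's terms of service
    certifications: list = field(default_factory=list)  # e.g. ["SOC 2"]

# Illustrative entries only; populate from your own vendor reviews.
REGISTRY = [
    SanctionedTool("ExampleCompletionTool", ["code completion"], False, ["SOC 2"]),
    SanctionedTool("ExampleChatTool", ["public documentation drafts"], True),
]

def is_approved(tool_name: str, use: str) -> bool:
    """Check whether a tool is sanctioned for a given purpose."""
    return any(
        tool.name == tool_name and use in tool.approved_uses
        for tool in REGISTRY
    )
```

Keeping the registry in code (or YAML) rather than a wiki page makes it easy to wire the same source of truth into onboarding docs, expense approvals, and CI checks.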

2. Define Acceptable Use Guidelines

Not everything should be fed to AI tools. Create clear guidelines about what types of information are appropriate to share with AI assistants:

Generally Acceptable:

  • Public documentation and open-source code
  • Generic coding patterns and algorithms
  • Non-sensitive internal documentation

Requires Caution:

  • Proprietary business logic
  • Internal architectural details
  • Code that handles sensitive data

Generally Prohibited:

  • Customer data, even in sanitized form
  • Security implementations and credentials
  • Legally privileged information

These guidelines don’t need to be exhaustive. The goal is to give developers a framework for making good decisions, not to anticipate every possible scenario.
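
The three tiers above can even be backed by a lightweight pre-flight check. This is a sketch of a keyword heuristic, not a real data-loss-prevention system; the patterns are illustrative and would need tuning for your codebase:

```python
import re

# Patterns map roughly to the tiers above. Illustrative, not exhaustive.
PROHIBITED_PATTERNS = [
    r"(?i)api[_-]?key",
    r"(?i)password\s*=",
    r"-----BEGIN [A-Z ]*PRIVATE KEY-----",
]
CAUTION_PATTERNS = [
    r"(?i)proprietary",
    r"(?i)internal[_ ]only",
]

def classify_snippet(text: str) -> str:
    """Classify a snippet a developer wants to paste into an AI assistant
    as 'prohibited', 'caution', or 'acceptable'."""
    if any(re.search(p, text) for p in PROHIBITED_PATTERNS):
        return "prohibited"
    if any(re.search(p, text) for p in CAUTION_PATTERNS):
        return "caution"
    return "acceptable"
```

A heuristic like this will never catch everything, and that's fine: its job is to prompt a moment of judgment, not to replace it.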

3. Implement Measurement

This is where most organizations fail. They adopt AI tools based on promises and anecdotes, without establishing baseline metrics or tracking actual outcomes.

At minimum, track:

  • Cycle time: Are you shipping faster?
  • Quality metrics: Bug rates, incident frequency, technical debt indicators
  • Developer experience: Do developers actually find the tools helpful?
  • Cost: What are you spending on AI tools, and what’s the ROI?

The goal isn’t to create a surveillance system. It’s to have data that supports informed decisions about AI adoption. If the tools are helping, the data will show it. If they’re not, you’ll know before too much damage is done.
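
The core measurement operation is simple: compare a metric's baseline to its post-adoption value and note the direction. A minimal sketch, assuming you already collect the metric (e.g. cycle time in hours per PR):

```python
from statistics import mean

def compare_metric(baseline: list, current: list,
                   higher_is_better: bool = False) -> tuple:
    """Compare a metric before and after AI adoption.

    Returns (percent change, 'improved' | 'regressed'). For metrics
    like cycle time, lower is better (the default)."""
    b, c = mean(baseline), mean(current)
    pct = (c - b) / b * 100
    improved = pct > 0 if higher_is_better else pct < 0
    return round(pct, 1), "improved" if improved else "regressed"
```

For example, `compare_metric([10, 12, 14], [9, 10, 11])` reports roughly a 17% drop in cycle time. The point isn't statistical sophistication; it's having any before/after comparison at all, which already puts you ahead of the 75% operating on anecdotes.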

4. Establish Review Processes

AI-generated code needs review, and that review might need to be different from traditional code review. Consider:

  • Explicit tagging: Should AI-generated code be marked as such for reviewers?
  • Review checklists: Are there specific things reviewers should check in AI-generated code?
  • Escalation paths: When should AI-generated code get additional scrutiny?

This doesn’t mean creating a separate review track that slows everything down. It means being intentional about how AI-generated code is evaluated.
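
One low-friction way to combine tagging and escalation is a CI check on commit metadata. This sketch assumes two conventions that are mine, not a standard: an `Assisted-by:` commit trailer for AI-assisted changes, and a list of sensitive path prefixes; adapt both to your repo's practice:

```python
def needs_extra_review(commit_message: str, changed_paths: list) -> bool:
    """Flag commits for additional scrutiny: AI-assisted commits
    (marked with an 'Assisted-by:' trailer) that touch sensitive paths.

    Both the trailer name and the path prefixes are assumed conventions."""
    ai_assisted = any(
        line.lower().startswith("assisted-by:")
        for line in commit_message.splitlines()
    )
    sensitive = any(
        path.startswith(("auth/", "billing/", "crypto/"))
        for path in changed_paths
    )
    return ai_assisted and sensitive
```

Run in CI, a check like this routes only the risky intersection (AI-generated and security-sensitive) to deeper review, leaving the common case untouched.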

5. Plan for Incidents

What happens when an AI tool causes a problem? Whether it’s a data exposure, a generated bug that reaches production, or an IP issue, you need a response plan.

Define:

  • How AI-related incidents are identified and categorized
  • Who is responsible for investigation and response
  • What documentation is required
  • How lessons learned are incorporated into governance updates
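
Even a minimal structured record covering those four points beats an ad-hoc email thread. A sketch of what such a record might look like (the category names are illustrative):

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIIncident:
    """Minimal record for an AI-related incident: identification,
    ownership, documentation, and follow-up in one place."""
    summary: str
    category: str        # e.g. "data-exposure", "generated-bug", "ip"
    owner: str           # who investigates and responds
    opened: date
    lessons: list = field(default_factory=list)  # feeds governance updates

    def close(self, lesson: str) -> None:
        """Record a lesson learned when resolving the incident."""
        self.lessons.append(lesson)
```

The `lessons` field is the part most teams skip, and it's the one that turns incidents into governance improvements rather than one-off firefights.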

The Measurement Gap

Let’s return to that troubling statistic: 91% of leaders say AI improves velocity, but only 25% have data to support it.

This measurement gap is the most critical problem in AI governance today. Organizations are making significant investments in AI tools—not just money, but workflow changes, skill development, and process adaptations—based on feelings rather than evidence.

Here’s what I’d recommend measuring:

Outcome Metrics (What Actually Matters)

  • Time from commit to production
  • Feature delivery cycle time
  • Bug escape rate
  • Customer-reported issues
  • Developer satisfaction and retention

Activity Metrics (Leading Indicators)

  • Pull request size and frequency
  • Code review turnaround time
  • Build and test pass rates
  • Deployment frequency

AI-Specific Metrics (Understanding Impact)

  • AI suggestion acceptance rates
  • Time spent on AI-assisted vs. traditional coding
  • Rework rates on AI-generated code
  • Developer-reported AI usefulness

The key is comparing these metrics before and after AI adoption, and ideally having control groups that can isolate the AI effect from other changes.
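
If you do have a control group, the standard way to isolate the AI effect is a difference-in-differences comparison: the change in the adopting team's metric minus the change in the control team's, which cancels out org-wide shifts. A minimal sketch:

```python
from statistics import mean

def diff_in_diff(treated_before: list, treated_after: list,
                 control_before: list, control_after: list) -> float:
    """Estimate the AI effect on a metric (e.g. cycle time in days)
    as the treated team's change minus the control team's change.
    Negative values mean the metric dropped more for the AI team."""
    treated_change = mean(treated_after) - mean(treated_before)
    control_change = mean(control_after) - mean(control_before)
    return treated_change - control_change
```

So if the AI-adopting team's cycle time fell by 2 days while the control team's fell by 1 day over the same period, the estimated AI effect is a 1-day improvement, not 2. This assumes the two teams are comparable, which is the hard part in practice.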

Getting Started

If you’re an engineering leader without AI governance today, here’s how to start:

Week 1-2: Discovery

  • Survey your team about current AI tool usage
  • Identify what data might be flowing to AI providers
  • Document your current baseline metrics (whatever you’re already tracking)

Week 3-4: Policy Development

  • Draft acceptable use guidelines
  • Create a sanctioned tool list with rationale
  • Define measurement approach

Month 2: Implementation

  • Communicate policies to the team
  • Set up measurement tracking
  • Establish review process modifications

Ongoing: Iteration

  • Review metrics monthly
  • Update policies based on learnings
  • Adapt as AI capabilities and risks evolve

The Bottom Line

AI governance isn’t about restricting AI use. It’s about enabling AI use responsibly and effectively.

The organizations that thrive with AI tools will be those that adopt them thoughtfully—with clear policies, meaningful measurement, and appropriate safeguards. The organizations that struggle will be those that adopted based on hype, never measured impact, and discovered the risks only when something went wrong.

You don’t have to be in the 32% with formal governance today. But you should be moving in that direction. The alternative—flying blind with tools that have real risks and uncertain benefits—isn’t a strategy. It’s a gamble.

And unlike most gambles, this one has your codebase, your team’s skills, and potentially your company’s intellectual property on the line.
