Getting Your Team Unstuck: A Manager's Guide to AI Adoption

You’ve got AI tools in place. You’ve encouraged the team to use them. But the feedback is lukewarm or negative: “We tried it.” “It’s not really faster.” “We don’t see the benefit.” As a manager, you’re stuck between leadership expecting ROI and a team that doesn’t feel it.

The way out isn’t to push harder or to give up. It’s to change how you’re leading the adoption: create safety to experiment, narrow the focus so wins are visible, and align incentives so that “seeing benefits” is something the team can actually achieve. This guide is for engineering managers whose teams are struggling to see any performance benefits from AI in their software engineering workflows—and who want to turn that around.

1. Stop Pushing Adoption; Start Defining Success

If the goal is “everyone use AI more,” you’ll get more usage and possibly more frustration. If the goal is “we get measurable benefit in these specific areas,” you give the team a target they can hit.

Do this: Pick 2–3 outcomes that matter (e.g. “faster docs,” “fewer ‘how does this work?’ questions,” “shorter cycle time for green-tier tasks”). Define what “better” looks like and how you’ll measure it. Then say it plainly: “We’re not judging you on how much you use AI. We’re judging whether we’re faster and better in these areas, and AI is one lever.”

That reframe alone helps. The team stops feeling like they’re failing for not “adopting” and starts working toward a shared definition of benefit.

2. Create Safety to Experiment and to Say No

People won’t try new workflows if they’re afraid that mistakes will be blamed on “using AI” or that skipping AI will be seen as resistance. They also need permission to say “this task is faster without AI.”

Do this: Say explicitly: “We’re experimenting. If AI slows you down on something, don’t use it there. If something goes wrong and AI was involved, we’ll fix it and learn—no blame.” Back it up: when something breaks, focus on process (review, tests, guardrails), not on “who used AI.” And when someone says “I’m not using AI for X because it’s slower,” treat that as useful data, not defiance.

3. Narrow the Use Cases First

Trying to use AI for everything makes it hard to see where it actually helps. Narrow the scope so the team can run a few clear experiments and see results.

Do this: Choose 2–3 workflows (e.g. “docs from code,” “test generation,” “daily briefing or ask-our-docs”). Run them for 2–4 weeks. Measure the outcome (e.g. “we have more docs,” “we added N tests,” “we answered X questions from the doc bot”). Share the results. Only then consider adding more use cases. This way, benefits are visible and attributable instead of lost in the noise.
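If “measure the outcome” sounds heavyweight, it doesn’t have to be. A minimal sketch, assuming you keep a simple per-workflow log of how many days each task took (the workflow names and numbers below are made up for illustration):

```python
from statistics import median

# Hypothetical task log: (workflow, days from start to done).
# Collect the same log before and after the experiment window.
task_log = [
    ("docs-from-code", 2), ("docs-from-code", 1), ("docs-from-code", 3),
    ("test-generation", 4), ("test-generation", 2),
]

def median_cycle_time(log, workflow):
    """Median days-to-done for one workflow, so before/after comparisons stay simple."""
    durations = [days for name, days in log if name == workflow]
    return median(durations) if durations else None

print(median_cycle_time(task_log, "docs-from-code"))  # 2
```

A median over a handful of tagged tasks is crude, but it’s enough to make “we got faster at docs” a claim with a number behind it instead of a feeling.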

4. Own the Verification and Guardrails

If the team doesn’t trust the tool, they’ll over-verify or avoid it. Trust is built when they see that mistakes are caught before they hurt.

Do this: Define simple rules: what must always be reviewed, what can be reviewed lightly, and what’s off-limits for AI. Put lightweight guardrails in place (e.g. required review for security-sensitive paths, tests for AI-touched code). When something slips through, improve the process without blaming the person. Over time, the team will feel safe using AI where you’ve said it’s okay.
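The “what must always be reviewed” rule can be made mechanical rather than left to memory. A minimal sketch of a pre-merge check, where the sensitive path prefixes are assumptions you’d replace with your own:

```python
# Hypothetical guardrail: flag changes that require mandatory human review.
# The path prefixes are illustrative; adjust them to your codebase.
SENSITIVE_PREFIXES = ("auth/", "payments/", "migrations/")

def needs_mandatory_review(changed_files):
    """True if any changed file touches a security-sensitive path."""
    return any(f.startswith(SENSITIVE_PREFIXES) for f in changed_files)

print(needs_mandatory_review(["auth/login.py", "README.md"]))  # True
print(needs_mandatory_review(["docs/setup.md"]))  # False
```

Wiring something like this into CI makes the rule visible and impersonal: the gate fires on the path, not on who (or what) wrote the code.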

5. Celebrate and Share Small Wins

If the only message the team hears is “use AI more,” they won’t see success. If they hear “we shipped these docs in a day” or “we added these tests with AI and they caught a bug,” they start to see benefit.

Do this: In standups or retros, call out concrete wins: “This week we used AI to generate the runbook for X—saved us a few hours.” “We added tests for Y with AI; one of them caught a regression.” Name the workflow and the outcome. That builds evidence that AI can help and gives the team something to point to when they’re skeptical.

6. Align Incentives With Learning and Outcomes

If people are rewarded only for velocity (story points, PRs), they’ll avoid anything that might slow them down in the short term—including learning new workflows or doing experiments. If they’re rewarded for outcomes and for improving how the team works, they have room to try AI and report honestly.

Do this: Make “we ran an AI experiment and here’s what we learned” a valid and valued outcome. Include quality, sustainability, and team effectiveness in how you assess the team (and yourself). Protect some time for experimentation so the team doesn’t have to “sneak” learning in.

7. Listen to the Skeptics

The people who are least convinced often have the most useful feedback. They’ve tried it, hit the friction, and can tell you exactly where AI is slowing them down or where the process is broken.

Do this: Ask: “Where did AI help? Where did it waste your time? What would have to be true for you to use it more?” Use that to adjust: narrow use cases, fix verification, or change expectations. Don’t treat skepticism as something to overcome; treat it as input that makes adoption better.

8. Tie It Back to the Broader Theme

Everything above is in service of one idea: helping the team see performance benefits from AI in their workflows. That only happens when:

  • Success is defined so the team can achieve it.
  • It’s safe to experiment and to say when AI isn’t helping.
  • Use cases are narrow enough that wins are visible.
  • Verification and guardrails make it safe to trust the tool.
  • Wins are celebrated and shared.
  • Incentives support learning and outcomes, not just throughput.
  • Skeptics are heard and their feedback is used.

As a manager, your job isn’t to make the team “use AI.” It’s to create the conditions where using AI in the right places leads to benefits everyone can see. When your team is stuck, start there—then iterate with them on the rest.
