Getting Your Team Unstuck: A Manager's Guide to AI Adoption

You’ve got AI tools in place. You’ve encouraged the team to use them. But the feedback is lukewarm or negative: “We tried it.” “It’s not really faster.” “We don’t see the benefit.” As a manager, you’re stuck between leadership expecting ROI and a team that doesn’t feel it.

The way out isn’t to push harder or to give up. It’s to change how you’re leading the adoption: create safety to experiment, narrow the focus so wins are visible, and align incentives so that “seeing benefits” is something the team can actually achieve. This guide is for engineering managers whose teams are struggling to see any performance benefits from AI in their software engineering workflows—and who want to turn that around.

1. Stop Pushing Adoption; Start Defining Success

If the goal is “everyone use AI more,” you’ll get more usage and possibly more frustration. If the goal is “we get measurable benefit in these specific areas,” you give the team a target they can hit.

Do this: Pick 2–3 outcomes that matter (e.g. “faster docs,” “fewer ‘how does this work?’ questions,” “shorter cycle time for green-tier tasks”). Define what “better” looks like and how you’ll measure it. Then communicate the shift: “We’re not judging you on how much you use AI. We’re judging whether we’re faster and better in these areas, and AI is one lever.”

That reframe alone helps. The team stops feeling like they’re failing for not “adopting” and starts working toward a shared definition of benefit.

2. Create Safety to Experiment and to Say No

People won’t try new workflows if they’re afraid that mistakes will be blamed on “using AI” or that skipping AI will be seen as resistance. They also need permission to say “this task is faster without AI.”

Do this: Say explicitly: “We’re experimenting. If AI slows you down on something, don’t use it there. If something goes wrong and AI was involved, we’ll fix it and learn—no blame.” Back it up: when something breaks, focus on process (review, tests, guardrails), not on “who used AI.” And when someone says “I’m not using AI for X because it’s slower,” treat that as useful data, not defiance.

3. Narrow the Use Cases First

Trying to use AI for everything makes it hard to see where it actually helps. Narrow the scope so the team can run a few clear experiments and see results.

Do this: Choose 2–3 workflows (e.g. “docs from code,” “test generation,” “daily briefing or ask-our-docs”). Run them for 2–4 weeks. Measure the outcome (e.g. “we have more docs,” “we added N tests,” “we answered X questions from the doc bot”). Share the results. Only then consider adding more use cases. This way, benefits are visible and attributable instead of lost in the noise.
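Measuring the outcome doesn’t need heavy tooling. A minimal sketch of the “before vs. during” comparison for one workflow might look like this (the cycle times, the `summarize` helper, and the “green-tier” framing are all illustrative, not from any specific tracker):

```python
from statistics import median

# Hypothetical cycle times (in hours) for green-tier tasks,
# collected before and during a 2-4 week AI experiment.
baseline = [10, 14, 9, 12, 16, 11]
experiment = [8, 12, 7, 11, 9, 10]

def summarize(before, after):
    """Compare median cycle times and report the relative change."""
    b, a = median(before), median(after)
    return {"before": b, "after": a, "change_pct": round((a - b) / b * 100, 1)}

print(summarize(baseline, experiment))
```

Medians are used rather than means so one outlier task doesn’t swamp a small sample. The point isn’t statistical rigor; it’s having a concrete number the team can look at together in the retro instead of arguing from impressions.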

4. Own the Verification and Guardrails

If the team doesn’t trust the tool, they’ll over-verify or avoid it. Trust is built when they see that mistakes are caught before they hurt.

Do this: Define simple rules: what must always be reviewed, what can be reviewed lightly, and what’s off-limits for AI. Put lightweight guardrails in place (e.g. required review for security-sensitive paths, tests for AI-touched code). When something slips through, improve the process without blaming the person. Over time, the team will feel safe using AI where you’ve said it’s okay.
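One lightweight way to enforce “required review for security-sensitive paths” is a CODEOWNERS file, if your team hosts on GitHub or a platform with an equivalent feature. The paths and team names below are placeholders, not a recommendation of which paths are sensitive in your codebase:

```
# CODEOWNERS: require review from named teams on sensitive paths,
# regardless of whether the change was AI-assisted.
/auth/      @example-org/security-reviewers
/payments/  @example-org/security-reviewers
*.tf        @example-org/platform-reviewers
```

This keeps the guardrail in the process rather than in people’s memory: the review requirement fires automatically, so nobody has to remember (or admit) that AI touched the code.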

5. Celebrate and Share Small Wins

If the only message the team hears is “use AI more,” they won’t see success. If they hear “we shipped these docs in a day” or “we added these tests with AI and they caught a bug,” they start to see benefit.

Do this: In standups or retros, call out concrete wins: “This week we used AI to generate the runbook for X—saved us a few hours.” “We added tests for Y with AI; one of them caught a regression.” Name the workflow and the outcome. That builds evidence that AI can help and gives the team something to point to when they’re skeptical.

6. Align Incentives With Learning and Outcomes

If people are rewarded only for velocity (story points, PRs), they’ll avoid anything that might slow them down in the short term—including learning new workflows or doing experiments. If they’re rewarded for outcomes and for improving how the team works, they have room to try AI and report honestly.

Do this: Make “we ran an AI experiment and here’s what we learned” a valid and valued outcome. Include quality, sustainability, and team effectiveness in how you assess the team (and yourself). Protect some time for experimentation so the team doesn’t have to “sneak” learning in.

7. Listen to the Skeptics

The people who are least convinced often have the most useful feedback. They’ve tried it, hit the friction, and can tell you exactly where AI is slowing them down or where the process is broken.

Do this: Ask: “Where did AI help? Where did it waste your time? What would have to be true for you to use it more?” Use that to adjust: narrow use cases, fix verification, or change expectations. Don’t treat skepticism as something to overcome; treat it as input that makes adoption better.

8. Tie It Back to the Broader Theme

Everything above is in service of one idea: helping the team see performance benefits from AI in their workflows. That only happens when:

  • Success is defined so the team can achieve it.
  • It’s safe to experiment and to say when AI isn’t helping.
  • Use cases are narrow enough that wins are visible.
  • Verification and guardrails make it safe to trust the tool.
  • Wins are celebrated and shared.
  • Incentives support learning and outcomes, not just throughput.
  • Skeptics are heard and their feedback is used.

As a manager, your job isn’t to make the team “use AI.” It’s to create the conditions where using AI in the right places leads to benefits everyone can see. When your team is stuck, start there—then iterate with them on the rest.
