AGIV

AI Champion Program

Champions over mandates.

We find the employees already experimenting with AI in your organization, equip them with the skills and visible wins to pull others in, and build the habits and networks that keep running after we leave.

Three-phase methodology
01
identify the believers
02
equip them with wins
03
let them pull others in
[The problem]

Why “roll it out to everyone” stops working

Most AI adoption programs start the same way: an executive buys in, licenses get distributed, a training session or two gets scheduled, and a target gets set (20% of workflows automated, everyone using Copilot by Q2).

Six months later, usage has ticked up slightly. A few people save time on isolated tasks. The organization looks the same.

The mandate model has a structural problem. The people driving it understand AI intuitively; their colleagues don't. There's a documented cognitive bias behind this, the curse of knowledge: once you know something, you lose the ability to imagine not knowing it. The experts become unable to explain to non-adopters what they're missing. Enthusiasm without structure produces compliance without adoption.

The people who actually change organizations aren't the ones who send mandates. They're the ones other employees listen to because they've already built something useful.

How the program works

Find the believers. Equip them. Get out of the way.

The program runs in three phases.

Phase 1

01

Identification

We find employees who are already experimenting with AI tools on their own time, who ask different questions about AI than their colleagues, and who show signs of wanting to build rather than just use. We do this through structured interviews, behavioral observation, and documentation signals. Manager nominations are a poor filter for this. Champions are often not the most senior people on the team. We find the ones who would build whether or not they were asked to.

Phase 2

02

Equipping

We run structured programs for identified champions, organized around 1–2 week build cycles. Each cycle ends with a working system the champion has built, documented, and can demonstrate to their team. The documentation matters as much as the system: champions who can show their work produce more adoption than champions who can only describe it.

Phase 3

03

Expanding

Champions with working systems and documented skills get visibility: demo time, internal presentations, Slack channels, whatever the organization’s natural sharing mechanisms are. This produces organic pull. When colleagues see a peer save three hours on a real task they also do, the conversation changes. We don’t push adoption. We create the conditions for it.

What we leave behind

The three things that sustain it

01

Skills library

Champions document how to use AI for specific workflows in your organization. We structure this documentation so it’s usable by colleagues, not just the person who wrote it. The library grows with each build cycle and is deployed org-wide.

02

Champion network

Champions across departments who know each other, share what’s working, and can support colleagues without routing everything through a central team. When new AI tools emerge, the network self-organizes rather than waiting for IT.

03

Build patterns

The architecture, governance, and project management approaches that let your organization keep launching AI initiatives without needing external support for every one. We use short cycles specifically because AI compresses execution time; six-week sprints were designed for a world where building took longer.

[Program health]

Three questions we use to assess every engagement

At the end of every engagement, we ask each champion three questions:

01 What would you build next without us?
02 Who would you teach?
03 What patterns could you leave behind?

The quality of those answers tells us whether the program produced compounding capability or just completed deliverables. We optimize for the first.

[What this is not]

We don't run lunch-and-learns. We don't produce a slide deck of AI use cases. We don't run a one-day workshop and call it an AI transformation.

Hackathons are part of our toolkit precisely because they're the best filter we've found for identifying champions. We've run 50+ of them, including events with 400+ participants. But hackathons alone don't produce organizational change. The champion program is what comes after.

[Best fit]


We work with organizations from 100 to 5,000+ employees. The program scales by department or can run org-wide.

The program delivers the most value for organizations where:

Individual employees are using AI but institutional processes haven’t changed
A previous rollout happened but adoption stalled after initial training
There’s a range of AI enthusiasm across departments and you want to spread it organically
You have engineering-heavy teams that want to get the rest of the organization building alongside them
[Engagement structure]

How an engagement runs

Engagement length depends on organization size and scope; the outline below reflects a typical twelve-week engagement.

Weeks 1–2

Discovery

We map current AI usage, identify champion candidates, and establish baseline metrics.

Weeks 3–8

Build cycles

Champions run 1–2 week build sprints, producing working systems and documentation. We facilitate, advise on architecture, and handle blockers.

Weeks 9–12

Expansion

Champions present their work, the skills library is deployed org-wide, the network is formalized, and we document the patterns for ongoing use.

We don't disappear after the engagement. Subscription engineering is available for organizations that want ongoing support while the champion network matures.

Start with a conversation

The most common starting point is a discovery call where we map your current state: who's already using AI, where the enthusiasm is, and what's blocked adoption so far. From there we scope an engagement that fits your organization's size and where you are in the journey.