The Ground-Up AI Framework: How to Adopt AI by Asking First and Deploying Second

Use a framework that starts with people, runs through authorization, and closes with communication.

Most AI adoption fails because it’s announced to employees. This framework flips that: it earns a mandate from the people who actually do the work, then uses leadership to approve what the organization has already authored.


Why top-down AI frameworks keep failing

The default shape of AI adoption in most organizations looks like this: leadership decides AI matters, a strategy gets written, tools get selected, pilots get launched, and then — somewhere between the pilot and production — the whole thing stalls.

The stall has a cause, and it isn’t technology. It’s that the people expected to use AI were never asked what they needed from it. The strategy was designed for them, not with them. The tools were chosen without them. The rollout was announced at them. By the time they’re expected to adopt, they’re being asked to embrace a solution to a problem they never agreed they had.

The Writer 2026 Enterprise AI survey captured the predictable result: 29% of employees, rising to 44% of Gen Z, admit to actively sabotaging their company’s AI strategy. That isn’t malice. It’s the organizational immune system responding exactly as you’d expect when a foreign object gets injected without consent.

This framework is built on a different premise: people support what they help author. If the use cases come from the people who do the work, the adoption problem is substantially solved before deployment begins. Leadership’s role shifts from issuing a mandate to ratifying one. Communication shifts from selling to confirming. Resistance shifts from a blocker to a source of signal.

Here’s how to run it.


Stage 1: Ask the two questions

Before any tool is selected, any strategy is written, or any budget is allocated, the organization asks its people two questions. Not in a survey that sits in an inbox, but in structured conversations: small groups with a stated purpose, a neutral facilitator, and a promise to report back.

Question 1: “What do you not want to be doing?”

This surfaces the drudgery — the repetitive, low-value, friction-filled parts of the job that drain time and energy without producing much. Report drafting. Meeting summaries. Expense coding. Ticket triage. Data reconciliation. The invisible tax of modern knowledge work.

Two things happen when you ask this question seriously. First, the answers are remarkably consistent across roles, which means you find real patterns you can build around. Second, people feel something most employees almost never feel when AI comes up: heard. That single experience does more to reduce resistance than any training program.

Question 2: “What would you want to be doing, if you could?”

This surfaces aspiration — the work people believe is more valuable but can’t get to because the drudgery absorbs their capacity. Deeper customer relationships. Better analysis. More creative projects. Mentoring. Strategic thinking. The work that got them into the role in the first place but that the job has drifted away from.

Ask these two questions together and a use case almost writes itself. The answer to what you don’t want to do becomes the automation opportunity. The answer to what you’d want to do becomes the business case for the automation. One without the other produces either bitter efficiency plays (cut drudgery, with no vision of what comes after) or untethered aspiration (vague ambition, with no concrete starting point).

How to run it: Small groups of 5–8 people, organized by function. One hour each. Managers should not run the sessions for their own teams — use a neutral facilitator, ideally from a different department. Promise a written summary back within two weeks. Keep that promise. Everything else in the framework depends on people experiencing that their input produced a visible response.

Expect surprises. The drudgery people name is rarely the drudgery leadership assumes. The aspirational work is often more commercially valuable than leadership realizes. Suppress the urge to filter answers during the conversation — filtering happens in the next stage.


Stage 2: Sanity check

Now comes the filter. Employee input is signal, not specification. Not everything surfaced will be feasible, wise, or valuable. This stage converts a raw list of ideas into a ranked portfolio.

Four criteria, applied to each idea:

1. Feasibility

Can AI actually do this reliably today? Not “could it theoretically?” but “can it do this in our environment, with our data, at acceptable quality?” An idea that fails this test isn’t dead — it goes on a “watch” list for when capabilities catch up. But it doesn’t move forward now.

2. Risk

What happens if the AI gets this wrong? Is the worst-case outcome embarrassment, cost, regulatory exposure, or customer harm? Higher-risk use cases aren’t automatically disqualified, but they require more governance, more human-in-the-loop design, and more executive attention. Match the containment to the stakes.

3. Value

What is the quantified business case? Time saved × hourly cost × number of people affected, or revenue impact, or error reduction value, or customer experience metric. Force a number, even a rough one. Ideas that can’t produce a number usually can’t produce value either.
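
To make “force a number” concrete, here’s a minimal back-of-the-envelope sketch in Python. Every figure in it is an illustrative assumption, not a benchmark:

```python
# Hypothetical value estimate for one idea (e.g., automated meeting summaries).
# All inputs are assumptions; replace them with your own measurements.
hours_saved_per_person_per_week = 3
fully_loaded_hourly_cost = 50       # salary plus overhead
people_affected = 40
working_weeks_per_year = 48

annual_value = (hours_saved_per_person_per_week
                * fully_loaded_hourly_cost
                * people_affected
                * working_weeks_per_year)
print(f"Estimated annual value: {annual_value:,}")  # 288,000
```

A number this rough is still useful: it puts every idea on the same scale, which is all the ranking in this stage requires.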

4. Readiness

Do we have what we need to deploy this? The data in the right place, the systems integrations, the governance structure, the workforce capacity to absorb the change. Ideas that are feasible, low-risk, and valuable can still fail if the organization isn’t ready. Readiness gaps go on a separate list — they’re work to be done before deployment, not reasons to abandon the idea.

What you end up with: A ranked portfolio, typically 5–15 ideas, sorted by combined score across the four criteria. The top 2–3 are candidates for immediate pilot. The next tier is the 12-month roadmap. The rest is the watch list.
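
For readers who want to see the mechanics, here is one lightweight way the combined score and sorting could work. This is a sketch, not a prescription: the weights, the 1–5 scale, and the example ideas are all assumptions to tune for your own portfolio.

```python
from dataclasses import dataclass

# Illustrative weights for the four criteria; risk is scored so that
# 5 means lowest risk. Adjust both to your organization's priorities.
WEIGHTS = {"feasibility": 0.30, "risk": 0.20, "value": 0.35, "readiness": 0.15}

@dataclass
class Idea:
    name: str
    scores: dict[str, int]  # criterion -> score on a 1..5 scale

    @property
    def combined(self) -> float:
        return sum(WEIGHTS[c] * s for c, s in self.scores.items())

ideas = [
    Idea("Meeting summaries", {"feasibility": 5, "risk": 4, "value": 3, "readiness": 4}),
    Idea("Ticket triage",     {"feasibility": 4, "risk": 3, "value": 4, "readiness": 3}),
    Idea("Contract review",   {"feasibility": 2, "risk": 2, "value": 5, "readiness": 2}),
]

# Ideas that fail the feasibility test go to the watch list, not the ranking.
watch_list = [i for i in ideas if i.scores["feasibility"] <= 2]
ranked = sorted((i for i in ideas if i not in watch_list),
                key=lambda i: i.combined, reverse=True)

pilots, roadmap = ranked[:3], ranked[3:]  # top 2-3 pilot now; next tier = roadmap
for idea in ranked:
    print(f"{idea.combined:.2f}  {idea.name}")
```

The arithmetic isn’t the point. The point is that every idea is judged on the same four dimensions, and the reasoning behind each ranking stays visible for the report-back in Stage 4.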

Critically, the portfolio should still show the ideas that didn’t make the cut and why. When you communicate back to employees in Stage 4, you’ll need this. Ideas that got dropped without explanation register as inputs that were ignored — which rebuilds the resistance this framework is meant to prevent.


Stage 3: Present to leadership for authorization

Leadership enters the process at Stage 3, not Stage 1. This is the structural inversion that makes the framework work.

Their job here is not to generate ideas. It’s to authorize — to say yes, no, or defer to a portfolio that has already been shaped by the people closest to the work and filtered by the people responsible for feasibility. This is a more useful role for leadership than idea generation, because it plays to what leadership actually has: budget authority, risk tolerance, strategic context, and the ability to clear organizational obstacles.

The presentation to leadership should include, in this order:

  1. What we asked, and who we asked. The number of people, the functions represented, the method.
  2. What we heard. The patterns in the drudgery and aspiration responses. Raw, honest, not filtered to flatter.
  3. The portfolio we built. The ideas, ranked, with the four-criteria scoring visible.
  4. Our recommendation. Which 2–3 ideas to start with, with pilot designs and success metrics.
  5. What we need. Budget, authority, governance decisions, sponsor names, timeline.

Leadership can approve, modify, or reject. What they cannot do — and this is the part that requires discipline — is add their own ideas to the portfolio at this stage. If leadership has use cases they want to run, those go through the same surface-and-filter process. Otherwise the framework collapses back into a top-down model wearing a bottom-up costume.

The test for Stage 3: Leadership should leave the meeting able to say, accurately, “this is what our people told us they want, and we’re backing them.” If leadership has to invent the rationale, the process failed earlier.

Expect some leaders to resist this. The framework requires them to accept a narrower role than they’re used to. The ones who grasp that holding a mandate is worth more than originating the ideas will adopt the role quickly. The ones who don’t will need to see the adoption results from an early pilot before they believe it.


Stage 4: Communicate back for buy-in

This is the stage most organizations skip, and it is the single most important stage in the framework.

After leadership approves, you go back to the people who answered the original questions and close the loop. The communication has a specific structure:

“You told us ___. We evaluated it using ___. Leadership approved ___. We’re starting with ___. Here’s why we’re starting there and not somewhere else. Here’s what to expect and when. Here’s how you’ll be involved.”

This is not a town hall. It’s a return visit to the same groups you ran the original sessions with, in the same format, with the same facilitators if possible. The message is consistent: you were asked, you were heard, this is what happened, you’re still in it.

This single communication does four things at once:

  • It validates the original ask. People who participated see that their input produced action, which makes them likelier to participate next time.
  • It builds organic advocates. The employees whose ideas made the cut become the pilot’s natural champions, not because they were recruited but because they authored it.
  • It neutralizes skeptics. The skeptics who predicted “nothing will come of this” are visibly wrong, publicly, on a schedule they didn’t control.
  • It pre-positions the pilot. When the pilot launches, it’s not a surprise — it’s the expected continuation of a process that’s already been running.

The test for Stage 4: If someone in the affected group is asked “why is the company deploying this AI tool?” they should be able to answer in terms of their own function’s needs, not in terms of corporate strategy. If they can’t, the communication hasn’t landed.

Organizations often rush past this stage because the “real work” of deployment feels more urgent. That’s a mistake. The deployment is a technical exercise that succeeds or fails based on whether the communication stage did its job. A technically excellent rollout with a skipped communication step underperforms a technically mediocre rollout with a thorough one. Every time.


What this framework gives you

Run this sequence and several things become true at once:

Your use cases are grounded in real pain. You’re solving problems people actually have, not problems a consultant’s deck said you should have.

Your adoption is pre-sold. The people expected to use the tool helped choose it. Resistance is dramatically lower before you even launch.

Your leadership is aligned without being overloaded. They’re approving a shaped portfolio rather than generating ideas in isolation.

Your communication is honest. You’re telling people what they told you, which is a different rhetorical position than telling people what’s good for them.

Your deployment has organic sponsors. Every pilot launches with champions who were part of its authorship, not trainees who received it.

And critically — you’ve established a reusable process. This isn’t a one-time exercise. The same four stages work for the next AI initiative, and the next. Over time, your organization builds the muscle of asking, filtering, authorizing, and closing the loop. That muscle is worth more than any individual deployment.


How this connects to deployment discipline

This framework handles the before of AI adoption: surfacing what to deploy, earning the mandate to deploy it, and pre-positioning adoption. Once you’re through Stage 4, you’re ready for the during — the deployment discipline that turns an approved initiative into a working production system.

That’s a different framework, covering governance, risk containment, pilot design, measurement, and scaling playbooks. The two work together: the Ground-Up Framework determines what to do and earns the mandate. The deployment framework determines how to do it well.

Most organizations try to run the deployment work without doing the ground-up work first. It almost always fails, because no amount of deployment discipline compensates for a missing mandate. You can’t project-manage your way out of an adoption problem that started as a consent problem.


A final observation

The most important thing this framework does is change who is in the conversation at what moment. Employees come first, not last. Leadership approves rather than originates. Communication is a return visit rather than a launch event.

These are small structural changes with disproportionate effects. They convert the core adoption question from “how do we get people to accept this?” into “how do we deliver what they already asked for?” Those are two completely different problems, and only one of them is solvable.

Most organizations are still trying to solve the first one. The ones that switch to the second one are the ones quietly pulling away from the pack.
