Inward: Where Your Team Meets AI

Most AI value comes from small internal wins that compound over time. Before looking outward, the smart move is making your existing people more effective with what you already have.



This is the second article in a five-part series called Success by a Thousand Paper Cuts, where I'm working through how organisations can approach AI adoption in a way that sticks. The theory is simple: organisations getting the most from AI are making a lot of small, deliberate moves that compound over time.

If there's one place most organisations should start, it's here. Inward. With their own people.

Illustration generated with AI, because of course it was.

The compound effect of small wins

I used an example in The AI Promise about someone running a client onboarding call - juggling a script, a demo, note-taking, and follow-ups all at once. Give that person a tool that handles the recording, transcription, and summary, and you've freed up a meaningful chunk of their cognitive load. They're not doing less work. They're doing better work.

Do the maths: your call recording and transcription tool saves someone 20 minutes a day. That's not a stretch - between not writing up notes, chasing action items, and re-listening to parts of a call, 20 minutes is conservative. 20 minutes a day across 30 people is 10 hours of recovered time per day. Over 250 working days, that's roughly 2,500 hours a year. That's more than a full-time person's worth of annual hours, reclaimed from admin and put back into the work you hired people to do.
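
If you want to sanity-check that maths, here's the back-of-the-envelope version as a quick sketch. The minutes saved, team size, and working days are all assumptions - swap in your own numbers.

```python
# Back-of-the-envelope recovered-time calculation.
# Every input is an assumption; replace with your own estimates.
minutes_saved_per_person_per_day = 20
team_size = 30
working_days_per_year = 250

hours_per_day = minutes_saved_per_person_per_day * team_size / 60
hours_per_year = hours_per_day * working_days_per_year

print(f"Recovered per day:  {hours_per_day:.0f} hours")    # ~10
print(f"Recovered per year: {hours_per_year:,.0f} hours")  # ~2,500
```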

No one's putting "we saved 20 minutes a day on meeting notes" on a board slide. But that's the kind of thing that moves the needle when it compounds across a team, across a year, across a business. A thousand paper cuts, working in your favour.

You're already paying for it

Something that bothers me about most AI adoption conversations: people jump straight to new tools, new platforms, new vendors. Meanwhile, the tools you already pay for have been quietly shipping AI features for the past two or more years.

Microsoft 365 has Copilot baked into Word, Excel, PowerPoint, Outlook, and Teams. Notion has AI built into its editor and databases. Slack has AI search and summaries. Atlassian has AI across Jira and Confluence. HubSpot, Salesforce, Zendesk - all have rolled out AI capabilities, either included in existing subscriptions or available as add-ons.

Most organisations I talk to are using maybe 10% of the AI features available in tools they're already licensed for. That's not a technology problem. It's an awareness problem. And it's the lowest-effort starting point you'll find.

Before you go shopping for something new, take stock of what you've got. Get someone to spend a few hours cataloguing the AI features in your existing stack. You'll be surprised how much is sitting there, already paid for, waiting to be switched on. Not all of it will be useful - some will be gimmicky, some won't fit your workflows. But some will be good, and the cost of trying it is close to zero because you're already paying the bill.

That said, some tools not already in your stack come with real price tags. Claude Code at $200 USD per user per month. GitHub Copilot Enterprise at $39. Microsoft 365 Copilot at $30. These aren't rounding errors - they're line items that procurement will notice, and they're easy to measure. The value, on the other hand, is harder to quantify. If someone doesn't quit because their job became less soul-crushing, that doesn't show up on a spreadsheet. If the QA team finds more issues through deeper testing in the same time, they're delivering real value, but the dollar benefit is harder to track.

The cost is concrete and the benefit is often fuzzy, which makes these conversations uncomfortable in budget meetings. But the organisations getting this right aren't the ones who found a way to make AI free - they're the ones who recognised that non-negligible price tags come with non-negligible benefits that far outweigh them.
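
To make the cost side of that conversation concrete, the same kind of quick sketch works. This one uses the Microsoft 365 Copilot price quoted above; the 30-seat team is an assumption.

```python
# Annual line item for a per-seat AI add-on.
# Price is the Microsoft 365 Copilot figure above; seat count is an assumption.
price_per_user_per_month = 30   # USD
seats = 30

annual_cost = price_per_user_per_month * seats * 12
print(f"Annual cost: ${annual_cost:,}")  # $10,800 - easy to put on a slide
# The benefit side - retention, deeper QA coverage, recovered hours - rarely
# collapses to a single number this cleanly, and that's the asymmetry.
```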

The cultural dimension

The tooling is the easy part. The harder bit (and the bit that determines whether any of this takes hold) is culture.

I've seen two approaches to rolling out AI internally, and they produce very different outcomes.

The first is the top-down mandate. Leadership announces that the organisation is "going all-in on AI." There's a strategy deck. There are KPIs. Someone gets the title of Chief AI Officer (or it's lumped in with CIO or CTO). Tools are selected centrally, rolled out broadly, and people are told to use them. This approach feels decisive and looks good from the outside, but it almost always creates resistance. People don't like being told their workflow is about to change, especially when the tools are half-baked or don't fit how they work.

The second is ground-up enablement. You make tools available. You give people permission to experiment. You create space for them to try things, share what works, and quietly drop what doesn't. You celebrate the small wins - someone automates a painful report, someone figures out a better way to draft client communications, someone builds a prompt that saves their team an hour a week. You let those stories spread.

The second approach is slower to show results on a dashboard, but it builds something the first one can't: genuine adoption. People use tools they choose, not tools they're told to use.

There's a related concept I touched on in The AI Promise that keeps coming up in practice: the "expert generalist." These are people who aren't deep specialists in every domain, but who know enough to guide AI well and catch it when it goes sideways. In an inward-facing AI rollout, these people are your catalysts. They figure out the useful applications and show their teammates what's possible. Find them. Support them. Give them room to experiment and share.

The permission to experiment matters more than most leaders realise. If your people feel like they need formal approval to try an AI feature in a tool they already use, you've already lost. The friction of asking permission will kill adoption faster than any technical limitation.

The uncomfortable truth is that your people are probably already experimenting without you. Shadow AI is real - people pasting customer data into ChatGPT, using personal subscriptions to tools the company hasn't approved, finding workarounds because the official process is too slow or too restrictive. The instinct is to crack down, but that's the wrong response. These are your most motivated people. They're breaking the rules because the rules aren't giving them what they need to do their jobs well.

The smart move is an amnesty. Acknowledge it openly: "We know some of you have been using AI tools outside of what's been approved. We're not here to punish that. We're here to flip the conversation - how do we give you what you need, safely?" That shift from enforcement to enablement changes the mindset: the shadow users become scouts. They've already figured out which tools work. Bring them into the light, give them guardrails instead of walls, and adoption will accelerate faster than any top-down mandate could.

The glass wall, revisited

In The AI Promise, I talked about a concept James Harvey introduced me to: the "glass wall." You start using AI tools, everything's moving fast, you feel brilliant, and then you slam into an invisible barrier. You've pushed things to a point where the AI can't take you further, and now you're in a worse spot than if you'd done the work yourself from the start.

That concept matters here because when you're rolling AI out across a team, the glass wall doesn't affect one person. It affects everyone at roughly the same time, in roughly the same way. If your sales team starts using AI to draft proposals and they all hit the wall at the same point - where the output is 80% right but the last 20% requires more effort to fix than writing from scratch - you'll see a wave of abandonment. People will try it, get burned, and go back to what they were doing before.

The glass wall is far less of a concern when you're not trying to do something huge. If you're making lots of small changes to improve little things - summarising a meeting, drafting a first pass at an email, triaging a support queue - you may never hit the wall at all. It's when you're deep in the trenches, asking AI to do something ambitious and sustained, that you run up against it. The thousand paper cuts approach naturally sidesteps the problem, because each cut is small enough to land cleanly.

When you do encounter the wall, be honest about where it is. Set expectations clearly: this will get you most of the way there, and you'll need to bring your expertise to the last stretch. That framing (AI as a starting point, not a finishing point) is the difference between tools that stick and tools that get tried once and forgotten.

The wall is also moving. It's further away than it was six months ago, and it'll be further still six months from now. The people learning to work with these tools today will be far better positioned when the tools improve. They understand the limits. They know the workarounds. That's another compounding effect, and it only comes from starting.

Patterns that work

The organisations we think will be most successful with inward AI share some common patterns.

They don't mandate specific tools. They make a range of options available and let teams gravitate toward what works for their context. A developer's AI needs are different from a CFO's, which are different from a business development manager's. Trying to standardise on one tool across all of those is a recipe for mediocre adoption everywhere.

They start with existing pain points. Not "what can AI do?" but "what's painful right now, and could AI help?" That reframing matters. It grounds the conversation in real problems instead of theoretical capabilities. When someone's spending two hours a day on something and AI cuts it to thirty minutes, they don't need convincing. They become an evangelist.

They use force multiplier framing instead of replacement framing. "This tool will make you faster" lands differently to "this tool will do your job." Even if the practical outcome is similar, the framing determines whether people lean in or push back. Nobody wants to train their replacement. Everyone wants a superpower.

They measure adoption. Switching on a feature isn't the same as people using it. The organisations that do this well track actual usage, gather feedback, and iterate. They treat internal AI rollout the same way a good product team treats a feature launch: ship it, watch what happens, adjust.
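
As a rough illustration of what "measure adoption" can look like in practice, here's a minimal sketch: given each user's last use of an AI feature (however your tool's admin console exposes it), check what share of licensed seats were actually active in a given week. The data shape, names, and dates here are all hypothetical.

```python
from datetime import date

# Hypothetical export from an admin console: user -> date they last used
# the AI feature. Real tools expose this in different shapes.
last_used = {
    "alex": date(2025, 6, 3),
    "bao": date(2025, 4, 18),
    "carla": date(2025, 6, 5),
}
licensed_seats = 30
week_start = date(2025, 6, 2)  # the week you're measuring

active = sum(1 for d in last_used.values() if d >= week_start)
print(f"Active this week: {active} of {licensed_seats} seats "
      f"({active / licensed_seats:.0%})")
```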

This is the foundation

Everything else in this series - the outward-facing applications, the operational changes, the strategic positioning - builds on top of what happens here. If your people aren't comfortable with AI, aren't finding value in it, aren't weaving it into how they work, none of the bigger plays will land.

Start with what you already have. Give people permission to experiment. Celebrate the small wins. Let it compound.

In the next article, Outward, we'll look at where AI meets your customers, your market, and the value you deliver to the world. But none of that works if this part doesn't come first.

Want to discuss this?

We'd love to hear your thoughts. Drop us a note and we'll get back to you.