The Compound Effect
Individual AI wins are small. But when internal fluency feeds customer decisions and product instincts, those small cuts compound into something competitors can't easily replicate.
This is the fifth and final part of a series called Success by a Thousand Paper Cuts. If you've followed along, thank you. If you haven't, the short version: meaningful AI adoption doesn't come from one big move. It comes from many small, deliberate ones.
The series started with The AI Promise, where I laid out a framing I keep coming back to: AI as amplification. Making people better at what they already do, not replacing them or automating them out of a job. That article ended without a neat conclusion. I'm still not sure one exists.
Inward covered what happens when your team starts using AI well — the fluency that builds when people get comfortable with these tools. Outward explored how that fluency shapes customer experience. Forward looked at what happens when AI moves from supporting your product to becoming part of it.
Each article described a different surface area. A different place to make incisions. This one is about what happens when they compound.
Small things feeding bigger things
The compound effect isn't complicated to explain; it's just hard to see while it's happening.
A team starts using AI for something unglamorous — meeting notes, maybe, or drafting internal docs. Nobody writes a press release. But people start noticing things. When meeting notes are captured and searchable every time, patterns emerge. The same customer pain point across three calls in a week. An onboarding gap you didn't realise was there, because nobody was capturing and collating that information before.
That observation leads to an outward change. Someone adjusts how they handle a support interaction. A product manager rewrites a user story based on a pattern they spotted. An improvement gets prioritised that would've sat in a backlog for months.
Then someone asks: "Could we build this into the product?" Not because they read a thought piece about AI-native products, but because they've lived with the tools long enough to know what they're good at and where they fall short. They've developed taste.
That progression from inward fluency to outward improvement to forward opportunity isn't a strategy someone designed. It's what happens when an organisation builds enough experience with AI to trust its own judgement.
The real shift
The most telling sign of compounding isn't any particular tool or project. It's how people talk about AI.
Early on, the question is "Should we be using AI?" Broad, slightly anxious, prompted by something a competitor announced or a board member read. A question about technology.
The compound version sounds different: "Is AI the right approach for this problem?" That's a question about judgement. It means someone has enough experience to realise there's a question to ask. They know what good looks like because they've seen it work and fail.
That shift is the compound effect. It's the difference between an organisation experimenting with AI and one getting better because of it.
The big bet anti-pattern
Some organisations skip straight to forward. They see the trajectory, read the research, and make a big bet on an AI-native product. On paper, it makes sense. The technology is there. The market opportunity is real. But they haven't built the muscle to support it.
Their teams don't have fluency with AI tools. Their customer-facing processes haven't been informed by AI-assisted insights. They're making the hardest decisions about AI (what to build into their product, how much to trust it and where human oversight matters) without the experience from hundreds of smaller experiments.
The big bet fails because the organisation wasn't ready. They didn't have taste. They couldn't tell the difference between AI output that was good enough and output that would erode customer trust. That distinction is learned by doing, not planning.
You don't run a marathon by deciding to run one; you run shorter distances first and build capacity over time. The big bet approach is signing up for a marathon having never run more than a kilometre. The ambition isn't wrong. The sequencing is.
Building muscle before the lift
Starting inward matters more than most organisations think.
Some teams are waiting for an AI strategy document before doing anything. They want the roadmap, the framework, the approved use cases. I understand the instinct, especially in regulated industries or organisations burned by hype before. But you can't write a good AI strategy without experience. You end up with a document full of hypotheticals, written by people who haven't used the tools enough to know what's realistic.
The organisations getting the most from AI aren't the ones with the biggest budgets or most ambitious strategies. They're the ones with the most paper cuts. They started small, learned what worked, and let that inform the next step. Their strategy emerged from practice, not the other way around.
That doesn't mean strategy doesn't matter. It means strategy gets better when informed by experience. Start with the inward cuts. Let your team build fluency and pay attention to what they learn. Then make your outward and forward decisions from understanding, not theoretical enthusiasm.
What we keep coming back to
At hps.gd, this is the work we care most about. Not the grand vision stuff, but the practical, compounding stuff: finding the cuts that matter for a specific organisation, helping those cuts stack, and watching what compounds.
Every organisation's version looks different: the tools change, and the order varies. But the pattern is consistent: small, deliberate improvements in how people work with AI lead to better decisions about where AI should touch customers, which leads to clearer thinking about where AI belongs in the product.
I wrote in The AI Promise that I didn't have a neat conclusion. I have something closer to one now, although it's still messy.
The AI promise is not one promise. It's a thousand small ones, kept day after day.
That's less exciting than most of the versions being sold. But it's the version that works, and the one I'd bet on every time.
If you've been making your own cuts — inward, outward, or forward — I'd like to hear what's compounding for you. What started small and turned into something you didn't expect? That's the conversation I find most valuable right now.
Want to discuss this?
We'd love to hear your thoughts. Drop us a note and we'll get back to you.
