
Outward: Where Your Customers Experience AI

When AI moves from internal tools to customer-facing experiences, the stakes change. Here's what shifts when your customers are on the receiving end, and how to get it right.



This is Part 3 of Success by a Thousand Paper Cuts. The previous article covered the inward wins: AI making your people better at what they do. This one's about what happens when AI turns outward and starts touching your customers.

It's a different game.

Illustration generated with AI, because of course it was.

The line between learning and losing trust

When AI is internal, mistakes are (hopefully) cheap. A summarisation tool gets a meeting note wrong, someone catches it, you fix it. Nobody outside your organisation knows. That's the beauty of starting inward: you learn in relative safety.

The moment AI interacts with customers, the stakes change. A bad recommendation, a tone-deaf automated response, a chatbot that confidently tells someone the wrong thing. These aren't learning opportunities. They erode trust, and once lost, it's difficult and expensive to rebuild.

Don't rush AI into customer-facing channels because it worked internally. The logic seems sound: "It's great for our team, let's give it to our customers." But tolerance for error is different. Your team spots when something's off and works around it. Your customers don't have that context.

It's already there, and mostly you can't see it

AI is already embedded in most customer experiences. It's not wearing a name tag.

In Success by a Thousand Paper Cuts I used the example of putting tomatoes on the scales at a Coles or Woolworths self-checkout. The screen suggests "Tomatoes" before you've touched anything. That's AI vision. The camera sees something red, roughly round, in a bag, weighing about this much, and guesses. It's worth a closer look, because it's a great example of outward AI done well.

The customer doesn't know AI is involved. They don't see a model confidence score. They don't get a disclaimer. They see a button that saves them navigating an alphabetical menu. If the suggestion is wrong, they ignore it and search manually — no harm done. If it's right, the checkout is faster and less frustrating. That's the entire interaction from the customer's perspective.

Behind the scenes, the work is sophisticated. It classifies produce from a camera feed in real time, factoring in colour, shape, size, and weight (probably; I'm guessing). It learns from corrections: every time someone rejects the suggestion, that feedback improves the model. It handles edge cases: limes that look like small avocados, loose ginger that could be anything. All of this across thousands of checkouts at once, getting better with every transaction.

The customer experiences none of that complexity. They experience a better checkout. That's the model for outward AI: invisible, useful, and gracefully degrading when wrong. The A-Z menu is still there as a fallback. The AI doesn't replace the old way; it makes the new way better most of the time.
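The pattern is simple enough to sketch in a few lines: only surface a suggestion when the model is confident, and always keep the manual path available. This is a minimal illustration, not how any retailer actually implements it; the function name, threshold, and return shape are all made up for the example.

```python
# A sketch of the "invisible, gracefully degrading" pattern:
# show an AI suggestion only when confidence is high, and always
# keep the manual fallback. Names and thresholds are illustrative.

def checkout_prompt(predictions, threshold=0.8):
    """predictions: list of (label, confidence) pairs from a classifier."""
    label, confidence = max(predictions, key=lambda p: p[1])
    if confidence >= threshold:
        # Offer a one-tap suggestion; the customer can still ignore
        # it and browse the A-Z menu as before.
        return {"suggestion": label, "fallback": "a_to_z_menu"}
    # Low confidence: don't guess in front of the customer.
    return {"suggestion": None, "fallback": "a_to_z_menu"}

print(checkout_prompt([("Tomatoes", 0.93), ("Red capsicum", 0.05)]))
# → {'suggestion': 'Tomatoes', 'fallback': 'a_to_z_menu'}
```

The point of the sketch is the second branch: when the model isn't sure, the customer never sees it was consulted at all. The old path is the default, not the exception.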

Once you start looking, this pattern is everywhere. Your bank flagging an unusual transaction before you notice it. A delivery app adjusting your ETA based on real-time conditions you can't see. An airline rebooking you onto the next flight before you've reached the service desk. In each case, AI is doing work in the background that would've been too slow, too expensive, or impossible to do manually at scale.

The organisations getting the most from customer-facing AI are the ones treating it as an invisible layer, making experiences better without making AI the experience.

Personalisation without a data science team

One of the more interesting shifts recently has been personalisation at scale. It used to require a data science team, custom models, months of development. Now it's a feature toggle in your existing platform.

Email marketing tools that tailor content based on behaviour. E-commerce platforms that adjust what they show based on browsing patterns. Support systems that adapt based on customer history. These aren't moonshots anymore. They're configuration options, and they're everywhere.

That democratisation matters: small organisations can now offer experiences that were the domain of the big players.

But there's a line, and it moves depending on who you ask. Helpful personalisation ("we noticed you usually order this, would you like it again?") sits on one side. Creepy personalisation ("we noticed you've been looking at competitors, here's a discount to stay") sits on the other. The technology doesn't know the difference. You have to.

When organisations get this wrong, the reaction is visceral. Not "oh that's clever" but "how do they know that about me?" Once someone feels surveilled rather than served, you've lost them. The balance requires hard thinking about what customers find helpful versus invasive. That's not a technical question. It's a human one.

Governance gets real

In The AI Promise, I talked about the tension between moving quickly with AI and doing the hard governance work. ISO 42001, formal AI management systems, the challenge of implementation at the current pace of change.

Internal AI can get away with lighter governance. Your people, your data, your risk tolerance. Customer-facing AI can't.

Customer data flowing through AI is a governance concern regardless of context. Your team pasting client details into ChatGPT carries the same data risks. The difference with outward AI is that the customer is interacting with it, often without knowing where their data goes or that AI is involved. That shifts the responsibility onto you. The questions are the same: where does the data go, is it training models, what happens when someone asks you to delete it. But the accountability is sharper because your customer never opted in.

These aren't hypothetical concerns. Organisations deploy customer-facing AI without understanding the data flows, then discover customer conversations were being sent to third-party APIs with unclear retention policies. That's not a "fix it later" situation. That's a regulatory and reputational problem.

The governance work from The AI Promise (impact assessments, risk management, structured evaluation) becomes non-negotiable when customers are involved. You don't have to boil the ocean, but you do have to understand where customer data goes, what decisions AI makes on their behalf, and what happens when it gets something wrong.

Practically: clear data flow mapping for any AI touching customer information. Human oversight for AI-generated customer communications, at least initially. A straightforward process for customers to escalate past AI to a real person. And honesty about where you're using AI and where you're not.
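Two of those practicalities, human oversight and an escalation path, can be sketched as a simple routing rule: AI may draft, but a person approves before anything is sent, and a customer asking for a human bypasses AI entirely. This is an illustrative sketch under assumed names and keywords, not a prescribed design.

```python
# A hedged sketch of oversight and escalation for AI-drafted replies.
# ESCALATION_PHRASES and the routing labels are invented for the example.

ESCALATION_PHRASES = ("speak to a person", "human", "complaint")

def route_message(customer_text, ai_draft=None):
    text = customer_text.lower()
    if any(phrase in text for phrase in ESCALATION_PHRASES):
        # The customer asked to escalate: skip AI entirely.
        return {"route": "human_agent", "draft": None}
    if ai_draft is not None:
        # AI may draft, but a person approves before anything is sent.
        return {"route": "human_review_queue", "draft": ai_draft}
    return {"route": "human_agent", "draft": None}

print(route_message("Can I speak to a person please?"))
# → {'route': 'human_agent', 'draft': None}
```

Trivial as it looks, this is the shape of "human oversight, at least initially": the review queue is a gate you can widen later, once you've seen how the drafts hold up, rather than a switch you flip to fully automated on day one.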

This applies more broadly than you think

Customer-facing AI applies well beyond SaaS and digital-first businesses.

Professional services firms draft client communications with AI, summarise engagement history, prepare for meetings. Retail uses it for inventory-driven recommendations and support triage. Healthcare explores AI-assisted patient communications and appointment management. Education providers use it for student engagement and admin support.

If you have a service or support layer (and I'd be interested to speak to any medium-sized or larger business that doesn't), this is relevant. The channels differ. A law firm isn't (or at least shouldn't be) deploying a chatbot on their homepage to give advice. But they might use AI to draft initial responses to client enquiries, or surface relevant precedents when preparing advice. The AI still touches the customer experience, just through different touchpoints.

The question isn't whether your industry is "ready" for customer-facing AI. It's whether you've thought about where AI already influences what your customers see, hear, and experience, and whether you're comfortable with the oversight you have.

More care, more reward

Moving AI from inward to outward is natural but non-trivial. The tolerance for error drops, the governance requirements go up, and the ethical questions get harder.

But the upside is proportionally larger. Internal AI makes your people better. Customer-facing AI makes your entire organisation feel better to interact with. Done well (thoughtfully, with oversight, with respect for the people on the receiving end) it builds loyalty and differentiation.

The key word is "feel." Your customers don't care about your AI strategy. They care whether their problem got solved, whether they felt heard, whether the experience was easy. If AI makes those things better without drawing attention to itself, you've done it right.

Next, I'll look at what happens when AI stops being a tool you use and becomes part of what you sell: the forward domain, where your product becomes AI. It's the narrowest of the three, but for the right organisations, it's where the real differentiation lives.

Want to discuss this?

We'd love to hear your thoughts. Drop us a note and we'll get back to you.