THE HUMAN SIDE OF AI (THE BIT EVERYONE SKIPS)
Most AI programmes are obsessed with tools. That’s understandable. Tools are visible. People are messy.
But AI adoption doesn’t live in the tool. It lives in behaviour.
We recently ran an AI training session for Wolseley Central Operations. Afterwards we collected feedback. The sample points to something we see everywhere: when AI is made relevant to real roles, people engage.
WHAT THE FEEDBACK TOLD US
100% rated the presenter very effective
100% said the session was clear
83% said they’re very likely to implement what they learned
100% were satisfied or very satisfied overall
Small sample, but the signal is clear.
THE REAL BARRIERS TO AI ADOPTION
They’re rarely “we don’t have the right tool”. They’re usually:
“I don’t know what to use this for in my job.”
“I’m worried I’ll get it wrong.”
“This feels like another initiative that will disappear.”
“I don’t want to break confidentiality.”
“We don’t have time to learn this properly.”
If you don’t address that, you don’t get adoption. You get polite nodding and zero behaviour change.
WHAT ACTUALLY WORKS IN AI TRAINING
1. Make it relevant to the role
Don’t teach AI in the abstract. Teach it to the marketing person through the lens of what they actually do: emails, reports, planning, SOP summaries, meeting outputs, customer responses, internal comms.
Show the person how AI helps them do their job better. Not how amazing AI is in general.
2. Build confidence by doing
People learn by doing. We build in-session, using examples they recognise. Someone asks: “How do I use this for X?” We show them, live, right then.
This creates confidence. They see it work in real time. They know they can try it.
3. Create psychological safety
If people feel judged, they won’t experiment. If they won’t experiment, they won’t learn. If they don’t learn, they won’t adopt.
We make it clear: there’s no stupid question. Wrong answers are just learning. And we laugh when things go wrong (because they will).
4. Set clear guardrails
We are explicit about what not to do: sensitive data, confidential information, policy limits. Responsible AI isn’t optional. It’s the baseline.
People need to know the guardrails. Not so they’re scared to try. So they’re confident when they do.
5. Plan the follow-up
Training is a spark. Adoption needs fuel: templates, prompts, internal champions, follow-up sessions.
One session isn’t enough. You need sustained attention.
THE ADOPTION CURVE
Most organisations treat AI training like a one-off event. “We did the training in Q2.” But behaviour change takes time.
Week 1: People are curious, maybe excited.
Weeks 2-3: They try to use it and hit friction.
Weeks 4-6: They either give up or find their rhythm.
Month 3+: If you’ve got support and templates, adoption sticks. If not, it dies.
This is why follow-up matters. You need:
Weekly tips and prompts
Champions in each department who are using it
Regular check-ins: “How’s it going? What’s blocking you?”
Recognition when people do it well
THE CONVERSATION NOBODY’S HAVING
AI adoption fails because we talk about the tool, not the fear.
The real fear is usually:
Job security. “Is this replacing me?”
Competence. “I’m going to look stupid if I use this wrong.”
Change. “I finally got good at my current process. Now I have to change again?”
Trust. “Can I trust the AI output? What if it gets something wrong?”
These are human questions. They need human answers.
“No, it’s not replacing you. It’s making your job better by handling the boring stuff.”
“You won’t look stupid. You’ll look smart because you’re using new tools.”
“Yes, change is uncomfortable. But the people who get good at this first will be ahead of the curve.”
“You should always check the output. AI is a thinking partner, not an authority.”
CLOSING
AI is not a magic wand. It’s a mirror.
It reflects the maturity of your processes, culture, and leadership.
If your processes are chaotic, AI will amplify the chaos. If your culture is fearful, people won’t use it. If your leadership isn’t using it, nobody will.
But if you do the human work - clarity, safety, relevance, follow-up - adoption is fast and real.
Real proof: when we ran AI training for Wolseley Central Operations, 100% said the session was clear, 83% said they’re very likely to implement what they learned, and 100% were satisfied overall. That’s what happens when you make AI relevant to real roles and real workflows.

