Before designing any program, it helps to look closely at the work itself. Where is time being spent? Where are decisions being made? Where does effort feel repetitive or unstructured?

In one engagement with a finance team, the initial ask was a broad session on tools like ChatGPT and Microsoft Copilot. But once we mapped their workflow, most of their time was going into drafting reports, summarizing Excel-heavy analyses, and preparing management updates. That reframed the entire approach.
Instead of introducing AI as a concept, the learning was anchored around:

- structuring prompts for financial summaries (a sketch follows below)
- validating outputs
- and using Copilot within Excel and PowerPoint
The tool didn’t change — the context did.
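To make the first item concrete, here is a minimal sketch of what a structured prompt for a financial summary might look like. The section labels, reporting period, and variance threshold are illustrative assumptions, not a template the team actually used.

```python
# A minimal sketch of a structured prompt for a financial summary.
# The section labels, period, metrics, and threshold are illustrative
# assumptions, not a template the team actually used.

period = "Q3 FY25"  # hypothetical reporting period
metrics = "revenue, gross margin, and opex vs. budget"  # hypothetical scope

prompt = f"""Role: You are drafting an internal finance update for management.

Context: The figures below come from the {period} close workbook.
Scope: Summarize {metrics}.

Task:
1. Summarize each metric in one sentence, including variance vs. plan.
2. Flag anything that moved more than 5% against budget.
3. Keep the tone factual; do not speculate on causes.

Format: Three short paragraphs, followed by a bullet list of flags.

Reminder: Every number must be checked against the source workbook
before the summary is shared.
"""

print(prompt)
```

The closing reminder reflects the second item on the list: outputs are drafts to be validated against the source data, not final numbers.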
Treat the First Program as Exposure, Not Transformation
It’s useful to set expectations early. A single session — even a strong one — will build awareness, generate interest, and show what’s possible. What it typically won’t do is create consistent usage or change how teams work immediately.
At one manufacturing client, a leadership group attended an AI session and left with strong interest. But in the following weeks, actual usage remained limited. It wasn’t a failure — it was simply that people hadn’t yet connected AI to their day-to-day tasks. The first step is familiarity. Application comes next.
Move Quickly from Awareness to Application
The shift happens when people start using AI in their own work. This usually requires smaller, focused sessions where teams work on real tasks — not generic examples.
For instance:

- rewriting an actual vendor email using ChatGPT (see the sketch after this list)
- summarizing a real SOP document
- exploring Copilot within an existing Excel workflow
- or using tools like n8n to automate simple internal processes
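As a rough sketch of the first exercise, the snippet below does the same vendor-email rewrite through the OpenAI Python SDK rather than the ChatGPT interface the team used. The model name, email draft, and system instruction are illustrative placeholders.

```python
# Rough sketch: rewriting a vendor email via the API instead of the chat UI.
# Assumes the OpenAI Python SDK; the model, draft, and instruction are
# illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

draft = (
    "Hi, the shipment is late again and we need the revised ETA "
    "plus the penalty clause status. Reply soon."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; use whatever model is approved internally
    messages=[
        {
            "role": "system",
            "content": (
                "Rewrite vendor emails to be firm, specific, and polite. "
                "Do not add facts that are not in the draft."
            ),
        },
        {"role": "user", "content": draft},
    ],
)

print(response.choices[0].message.content)
```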
In one logistics team, introducing n8n to automate repetitive status updates created more interest than any broad AI overview. It was specific, visible, and immediately useful.
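The n8n flow itself is built visually, so the sketch below only shows the equivalent logic in plain Python: gather statuses, format one digest, and post it to a chat webhook. The shipment records and webhook URL are hypothetical placeholders.

```python
# Sketch of the status-update logic behind such an automation. The shipment
# records and webhook URL are hypothetical; in n8n, data pulls, scheduling,
# and retries would be handled by the workflow's nodes.
import json
import urllib.request

shipments = [  # in a real flow this would come from an internal system
    {"order": "PO-1041", "status": "In transit", "eta": "Thu"},
    {"order": "PO-1057", "status": "Delayed", "eta": "Mon"},
]

lines = [f"{s['order']}: {s['status']} (ETA {s['eta']})" for s in shipments]
payload = {"text": "Daily shipment digest:\n" + "\n".join(lines)}

req = urllib.request.Request(
    "https://chat.example.com/webhook/status",  # hypothetical endpoint
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
urllib.request.urlopen(req)  # posts the digest to the team channel
```

In n8n the same flow is roughly a schedule trigger, a data-source node, and an HTTP request node, which is part of why it felt immediately useful: there is very little logic involved.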
That’s when usage starts to feel natural.
Focus on a Few Use Cases First
Trying to cover everything can dilute impact. What tends to work better is identifying a small number of use cases that are:

- frequent
- relevant
- and easy to experiment with
In supply chain teams, this might be vendor communication or exception reporting.
In HR, policy drafts or interview summaries.
In sales, proposal drafting or quick research synthesis.

In one FMCG setup, focusing only on demand review summaries and internal reporting created far more traction than a wider rollout would have. Clarity at this level makes learning more actionable.
Build the Right Learning Around Those Use Cases
Once use cases are clear, the quality of the learning experience becomes critical.
What tends to work well is designing around:

- actual scenarios from the organization
- real-time simulations
- and guided practice using real data
One factor that makes a significant difference is who leads the learning.
When sessions are anchored by someone who has applied AI in similar business contexts, the conversation becomes far more practical.
Instead of staying at the level of features, it moves into how these tools are used in real situations — where they help, where they don’t, and what to watch out for.
In a pharma context, for example, having someone who had worked with regulated documentation made the discussion around AI-assisted drafting far more grounded than a generic session could have been.
At this stage, teams are not looking to understand tools — they are trying to apply them within the realities of their work. That is why the relevance of the expert matters.
Build Comfort Before You Try to Scale
There is often an intent to roll this out widely. In practice, starting smaller tends to be more effective. In one case, a client began with their strategy team — focusing on research synthesis, deck structuring, and idea generation using tools like ChatGPT and Copilot. Within a few weeks, usage patterns became visible, and other teams started exploring similar applications. Scaling becomes easier when there are internal examples to build on.
Expect Iteration, Not Immediate Maturity
AI adoption doesn’t follow a straight path. There will be initial enthusiasm, uneven usage, some drop-offs, and gradual improvement. In one energy sector engagement, early adoption was inconsistent. But over time, as teams identified where AI genuinely helped — particularly in reporting and documentation — usage stabilized and expanded. This is part of the process.
Designing an AI learning journey is less about delivering a program and more about guiding a transition.
From curiosity to familiarity.
From familiarity to application.
From application to habit.
The first session is just the starting point.
What matters is what happens after.