Michelle Lentz didn’t open her session with a demo or a prompt. Instead, she started with a question about emotion.
At The Learning Guild’s 2026 Trends & Strategies Online Conference (Dec. 3–4, 2025), Lentz—founder of Innovate & Elevate Strategies—asked attendees to name the feeling AI most often triggers for them. The responses came fast and varied: “wow,” “hesitant,” “suspicious,” “stress,” “skeptical but enthusiastic,” “inappropriate expectations.” The exercise wasn’t a novelty icebreaker. It was meant to establish what Lentz sees as the real starting point for AI transformation: the human experience of change.
Overwhelmed by the Pace of Change
For learning leaders, she argued, AI isn’t simply a technology rollout. It represents a shift in how people relate to their work, to one another, and to their own professional identity. And it’s arriving after years of disruption—pandemic upheaval, remote and hybrid work shifts, economic uncertainty, restructuring, and layoffs. Many employees, she noted, are already exhausted.
During the session, Lentz referenced figures showing the emotional weight of this moment: 73% of employees report feeling overwhelmed by the pace of workplace change, and 68% worry about being left behind by technological advancement. In that context, resistance to AI often has little to do with the tools themselves. Instead, it reflects deeper fears—of obsolescence, widening skill gaps, unrealistic expectations to “just know AI,” loss of autonomy as systems dictate workflows, and unresolved ethical and cultural concerns.
L&D: Creating Safe Space to Learn About AI
This reality reshapes the role of L&D. Lentz positioned learning leaders as critical translators between technological capability and human impact. Their task is not only to explain what AI can do, but to help employees understand how it may change their day-to-day work and long-term trajectory. That requires two things: designing continuous learning systems that evolve alongside AI without overwhelming people, and cultivating psychological safety—spaces where employees can experiment, ask questions, admit mistakes, and express uncertainty without fear of judgment.
One observation resonated strongly: People often feel safer experimenting with AI because it doesn’t judge them. If organizations want responsible, sustained adoption, Lentz argued, they must create human systems that feel just as safe.
She also urged leaders to broaden how they think about AI itself. Today’s focus may be on tools like ChatGPT, but Lentz expects AI to become embedded in workflows much as the internet did—eventually fading into the background as an assumed part of work. The challenge, then, isn’t learning a single tool, but adapting to a future where AI quietly reshapes how work gets done.
AI Maturity & Changing Workflows
To ground that shift, Lentz referenced a five-stage AI maturity model from the Asana and Anthropic State of Work report. She described many organizations as clustered between experimentation and early integration—past skepticism and pilots but still navigating “potholes” as AI begins to reshape workflows rather than act as a super-powered search engine. For those worried about falling behind, she offered reassurance: This messy middle is where most organizations are right now.
A central theme of the session was reframing AI as a collaborator rather than a competitor. Lentz cited a quote from Dr. Fei-Fei Li, a long-time AI leader: “Artificial intelligence is not a substitute for human intelligence, but a tool that amplifies human creativity and ingenuity.” The strongest implementations, Lentz said, don’t remove human judgment—they enhance it. AI handles routine tasks so people can focus on creativity, empathy, strategic thinking, and complex problem-solving.
She translated this philosophy into practical guidance. Learning leaders should conduct human impact assessments before and after deployments, examining how roles, workflows, and team dynamics are affected. They should build cross-functional change networks that span IT, HR, operations, and business units, ensuring that AI decisions reflect operational realities and human needs—not just technical efficiency.
Continuous learning ecosystems matter more than one-time training events. Lentz emphasized the importance of combining formal learning, peer sharing, experimentation time, and reflection. Feedback loops should be visible and meaningful, showing employees that their input shapes how AI is implemented. Even small wins deserve attention—especially those that remove friction from daily work and return time to employees.
Getting AI Right: 2 Case Studies
Two case examples illustrated this approach.
1. An AI Assistant at Siemens
At Siemens, Lentz described the rollout of an AI-powered IT assistant that ultimately supported more than 300,000 employees. The company began with pilots, identified champions, and was transparent about what the tool could and could not do. Continuous feedback allowed employees to shape its evolution. The result, she said, was an 85% adoption rate within 18 months, alongside reported time savings and improved job satisfaction.
2. Human-Centered AI Adoption at Zapier
She also pointed to Zapier as a standout example of human-centered AI change. Leaders modeled AI use in real work, created internal spaces where employees could experiment safely, supported peer learning cohorts, and allocated dedicated work time for learning. Zapier achieved an 88% adoption rate after roughly a year and a half. Lentz noted that, based on more recent company updates, she believed that number may have increased since—though she framed that as her own interpretation rather than a confirmed metric.
Lentz emphasized: “The tech doesn’t create the culture. The culture creates the adoption.”
AI Adoption Model
To give learning leaders a familiar structure, Lentz adapted Prosci’s ADKAR model for AI adoption.
- Awareness comes from transparency and tangible examples.
- Desire grows when leaders address real concerns honestly and show how AI can enhance work rather than replace it.
- Knowledge requires ongoing AI literacy that includes both technical and human skills.
- Ability is built through safe experimentation and cross-functional application.
- Reinforcement comes from recognition, shared stories, and systems that reward innovation—not just usage.
She closed with a challenge: Start small, scale thoughtfully, recruit champions early, and celebrate progress. Then she asked participants to commit—in writing—to one concrete action they would take in the next 30 days to advance AI readiness in their organization.
The message was consistent throughout the session. AI transformation isn’t primarily a technical problem. It’s a human one—and learning leaders are in a unique position to help organizations navigate it with empathy, clarity, and intention.
