Trust in the AI Rollout

By Megan Torrance and Lauren Milstid

Visit the website of any major technology consultancy and search for “trust in AI.” You’ll find articles full of essential, valid topics: explainability, security, data privacy, and reliability. These are the cornerstones of technical trust, and they’re critically important. We want to know that the AI models are sound, the data is safe, and the output isn’t unintentionally biased.

But there’s a big hole in that conversation—and it’s a human-sized one.

As learning & development (L&D) professionals, we often sit in the space between systems and people. And while IT teams can (and should) build systems that are secure and stable, they can’t do the human work of trust-building for us. That part’s on us and our colleagues in HR, Org Dev, and change management.

So when our organizations start rolling out AI-enabled tools, it’s not enough to check the box on governance and hope the humans will catch up. If we want AI adoption to stick, we must think about how people experience trust and how we help leaders show up in trustworthy ways.

Because if there’s one thing we know from decades in L&D, it’s this: People don’t trust what they don’t understand, don’t feel connected to, or don’t feel safe questioning.

Why trust feels so fragile in the AI era

AI is moving fast, faster than many people are comfortable with. And while the promise of increased efficiency, personalization, and insight is real, the fear is real too: Fear of being replaced. Fear of being surveilled. Fear that decisions are being made behind closed doors without understanding or input.

All of this puts trust on shaky ground. And once trust is lost, it’s tough to get back.

As our organizations launch into AI initiatives, including chatbots, analytics, workflow automation, and skill-mapping tools, we need a human trust strategy alongside the technology rollout.

That starts with understanding how trust works on the people side.

The trust triangle: Three legs we can’t skip

Harvard Business School professor Frances Frei’s Trust Triangle gives us a simple, powerful framework for understanding what makes people trust others, especially leaders in moments of change. It’s like a three-legged stool: strong when all parts are solid, and wobbly when any one of them is weak.

The three components?

  1. Empathy
  2. Logic
  3. Authenticity

Let’s look at how each one plays out in the context of AI—and how we, as L&D professionals, can help leaders and teams strengthen their foundation of trust.

Empathy: Show that you see me

This is the one most leaders do feel but don’t always know how to express. It’s the sense that “you get me.” It’s the feeling that you understand what I’m going through, what I’m afraid of, and what I hope for.

As AI reshapes jobs and workflows, people want to know:

  • Do you care about how this change affects me, not just the bottom line?
  • Will I still have a role here?
  • Am I being asked to use tools I don’t understand, without support?

And this isn’t just about emotions—it’s about perception. Even if leaders deeply care, if they don’t communicate that care clearly and consistently, it doesn’t land.

L&D opportunity:
We can coach leaders on how to communicate with transparency and compassion. We can design training and communicate in ways that make empathy visible. We can ask, out loud, “How will this rollout feel for different teams?” and build experiences that meet people where they are.

Logic: Help me understand the ‘why’

This leg is all about the reasoning behind decisions—and how clearly that reasoning is shared. People don’t just want to know what is changing; they want to understand why this is the right path and how the decision was made.

And with AI? The logic isn’t always intuitive. “We’re using this tool because it streamlines hiring decisions” might sound reasonable to leadership, but front-line managers and recruiters might be wondering, “Based on what data? And what if the algorithm misses something I’d catch?”

L&D opportunity:
This is where AI literacy, proficiency, and fluency come in. We can help the organization understand how these tools work, what they can and can’t do, and where human judgment still matters. We can equip teams to ask smart questions and engage with the tools, not just react to them. And we can help leaders tell a clearer, more grounded story about why the AI is here in the first place.

Authenticity: Be a real person

This leg is about alignment between what you say and who you are. It’s the gut check that tells us, “I believe you.”

And here’s where things can get especially tricky with AI. If people think the company is saying one thing (“We value our people!”) but doing another (quietly implementing AI to cut jobs), the authenticity leg breaks fast.

L&D opportunity:
Authenticity isn’t something we can fake. But we can help create the conditions for it. That means designing learning and communication experiences that feel real, not over-polished. It means inviting leaders to show their thinking, share their learning process, and acknowledge what they don’t know. It means creating safe spaces for questions, resistance, and dialogue.

Because when we see someone navigating change with integrity, it builds trust, even when the change itself is hard.

Bringing it all together: Trust by design

Trust doesn’t happen by accident. It happens on purpose, through the choices we make in how we communicate, how we teach, how we support, and how we listen.

As we prepare our organizations for AI transformation through reskilling, retooling, and rethinking workflows, let’s also prepare our leaders to be trustworthy guides through the change.

And let’s ask ourselves:

  • Are we helping people see where they fit in the future?
  • Are we making space for both excitement and concern?
  • Are we designing trust into our skilling, onboarding, and communication plans?

Because technology will keep evolving. But trust? That’s ours to build.

Keep learning

Want to explore how L&D can drive responsible AI adoption? Check out our recent articles on Skilling Up for AI Transformation and Impact Zones for L&D. Or explore our W.I.S.E. A.T. A.I. framework to learn how to lead with purpose in the age of AI.

Join Megan at the Learning Leadership Conference, October 1–3, in Orlando, Florida, where she will facilitate a full-day seminar on incorporating AI and other emerging technologies into your learning strategy. Pillars of Learning: Technology is a pre-conference learning event on September 30. Register for Pillars of Learning: Technology and the Learning Leadership Conference today!


Image credit: antoniokhr
