Undercurrents surfaces insights from leaders driving capacity-building and performance improvement beyond traditional L&D to explore where learning happens and where it’s headed.
By Mark Britz
When leaders talk about AI adoption, the conversation often drifts quickly toward tools, training programs, or prompt libraries. But in speaking with Alex Kalish, Amazon’s Global Ops GenAI Transformation Lead, it becomes clear that framing adoption this way misses something essential. For Kalish, AI adoption exposes the limits of treating learning as the primary lever. It’s really about work design, and it highlights how organizations have long confused activity with impact.
Kalish’s perspective didn’t emerge from a single career chapter. It’s the product of moving across operations, learning, product, and now GenAI transformation. Early operational experience, including time in Navy operations, shaped a mindset that still governs how he approaches change today: Effort only matters if it produces a measurable improvement in the end user experience.
“In an operational environment, you can’t confuse effort with outcome,” he explains. “If you try something and it doesn’t improve performance or the customer experience, you’ve learned something, but you haven’t created value yet.”
That mindset carried directly into his work in learning and development. Traditional signals, such as attendance, completions, or test scores, can be useful, but they often fail to answer the question that actually matters: Do people work differently as a result? With GenAI, the gap becomes even more visible, as adoption isn’t about whether someone used a tool once; it’s about whether AI meaningfully changes the quality and speed embedded in real work.
His product management background reinforced the same lesson from a different perspective. If something is built and no one uses it, that’s not on the user—it’s a product issue. The familiar trap of “if we build it, they will come” ignores the reality that behavior only changes when a real pain point is solved clearly enough that the new behavior becomes the obvious choice. Kalish carries a phrase with him: Problems are treasures. Recurring frustrations aren’t obstacles; they’re signals pointing directly to where value is hiding.
Disrupting the Mental Model, Not the Org Chart
When Kalish describes himself as a disruptor, he’s quick to clarify that he’s not interested in tearing down org charts or chasing the latest tools. The disruption he’s after runs deeper than process; it targets the mental models leaders operate from.
“The biggest, most durable impact I’ve had comes from helping people look at the same situation through a different frame,” he says. “When you shift how a leader defines the problem and what ‘good’ looks like, what’s possible, what’s worth prioritizing, you unlock benefits that compound over time.”
That reframing shows clearly in his GenAI work. Rather than treating AI as something people need to be trained on, he pushes leaders to see it as something that reshapes how work gets done and how value is created.
The same reframing instinct has also informed how he talks about well-being, approaching mental health in practical, stigma-reducing terms. If something feels off, you get help, just as you would if you were physically ill. In more than a few cases, that reframing has been enough to lower the barrier for someone to reach out when they needed support.
When AI Starts Feeling Easy
One of the most common reactions Kalish encounters is surprise—not at what AI can do, but at how quickly it becomes useful once the barrier to entry drops.
He tells a simple story of showing an older family member how to use voice dictation to ask an AI about a household issue—an appliance error light, a minor malfunction. Instead of worrying, searching for half an hour, and sorting through conflicting advice, she spoke the question aloud and got a clear, step-by-step answer in seconds. For anyone who finds typing tedious, voice becomes the unlock. AI stops feeling like “something you have to learn” and starts feeling like help that’s immediately available.
The same dynamic shows up at work. Many people assume AI will only produce generic output, not realizing how much hinges on a small but critical move: teaching the AI who you are and what you’re trying to do. Even lightweight context—constraints, goals, environment, a definition of what “good” looks like—dramatically improves results.
The “wow” moment, Kalish says, is realizing that this isn’t hard to start, and the value isn’t abstract.
From Tool Training to Work Redesign
Kalish draws a sharp distinction between teaching people how to use AI and helping them rethink what work should look like with AI embedded.
Tool training focuses on interfaces: where to click, how to prompt, what features exist. It can help in the short term, but it ages quickly as tools evolve. More importantly, it keeps people in a manual mindset, waiting to be told the steps to follow.
Rethinking work with AI embedded is a different shift altogether. It’s behavioral. It’s moving from “I use AI occasionally” to “AI is my default first collaborator.”
One of the simplest reframes Kalish uses is deceptively small: Instead of asking an expert how to do something, ask the AI first. That single change compounds as people begin exploring on their own, teaching themselves, and embedding AI into real tasks instead of keeping it limited to a training environment.
If the real value of GenAI lies in synthesis and sense-making (not just speed), then “learning AI” can’t be treated like learning a new software feature set. Leaders get the most leverage when they bring the messy reality: unclear problems, conflicting inputs, half-formed ideas. Used this way, AI becomes a thinking partner, one that surfaces assumptions, connects disparate information, and helps leaders see around corners. The skill to build isn’t prompt memorization; it’s an experimentation habit.
Measuring What Actually Changed
Kalish’s emphasis on measuring cost and time savings rather than completions reflects a deeper belief: Learning alone isn’t the outcome; changed work is.
Completions and assessments can tell you whether content was consumed. They rarely tell you whether performance improved or whether the organization is operating differently.
In an AI-enabled world where content creation is increasingly commoditized, the differentiator shifts to the system designed to support behavior change around it: practice loops, reinforcement, guidance, and metrics tied to outcomes.
If L&D is to play a leading role in AI adoption, success must be defined differently. The core questions wouldn’t be about participation, but about behavior: Did people integrate AI into real workflows? Did that translate into better decisions, higher quality, less rework, faster cycle times, or reduced cost?
Answering those questions requires more discipline: experimentation, quasi-experimental methods, and a willingness to live with some measurement noise.
Leaders as the Catalyst
Across organizations, Kalish sees a recurring pattern. Leaders often become the biggest accelerator of AI adoption or, unintentionally, its biggest bottleneck.
AI adoption isn’t just about building skills; it’s about work design. If leaders don’t change the operating system—how work is prioritized, how work gets done, and what “good” looks like—then individuals may become faster, but the system stays the same. People get better at yesterday’s workflows.
Leaders can become a catalyst for AI adoption by using the technology themselves. That firsthand experience allows them to set expectations, remove friction, and recognize where AI can meaningfully reshape work.
Kalish starts with leaders personally. Once they experience tangible value themselves, they understand both the leverage and the limitations. Only then can they credibly redesign workflows and norms.
Designing for Habits, Not Just Knowledge
The large-scale AI learning experiences Kalish has led are built around a simple principle: Habit beats knowledge. The people who see the biggest returns aren’t necessarily the most trained; they’re the ones who experiment consistently.
That insight drives the design: Less like a traditional course, more like a system designed to support behavior change. Quick wins early. Practice on real work. Progression from basic usage to higher-leverage patterns. Inclusivity across starting points, so both skeptics and advanced users can find value.
The self-paced, work-integrated model isn’t just about convenience; it’s about scale and durability. AI transformation doesn’t stick in a sandbox. It sticks when people use AI on real documents, real decisions, real deliverables. Not everyone needs to become a power user. Some need just enough capability to remove friction and improve outcomes. A flexible model allows both.
A Question Worth Sitting With
There’s no single definition of “good” AI adoption, Kalish says. Context matters. But a few signals hold true: AI is used to redesign work, not just speed it up, and that redesign is clearly tied to business outcomes.
The real value shows up when teams become willing to challenge legacy assumptions and when they stop asking how to do the same process faster and start asking why the process exists at all.
For learning professionals curious about stepping into this space, Kalish’s advice is straightforward: Start using GenAI every day on real questions. Not as a future initiative, but as a personal practice. Let it challenge assumptions. Follow the threads that spark insight.
And keep asking one question that will help build habits: How else could I use AI?
That question is what begins to move AI from a productivity trick to a redesign partner for learning, for performance, and for how work actually gets done.
