By George Hall
Recently, Google disclosed that its AI system had received more than 100,000 structured prompts in what it described as a “model extraction” attempt. The goal wasn’t vandalism or ransomware. It was systematic probing—asking enough questions to approximate how the system reasons.
No breach.
No stolen code.
Just interaction at scale.
Most people read that story as a cybersecurity issue.
Learning professionals should read it as something else.
Because the same dynamic is beginning to appear inside our own organizations.
As we embed institutional knowledge into our AI systems—leadership assistants, diagnostic tools, sales strategy engines—we create consistent patterns of expertise. And consistent patterns eventually become visible—and vulnerable to attack.
That has an important implication: The half-life of learning expertise may be shrinking.
Why the Half-Life of Expertise Is Shrinking
Organizations increasingly encode years of experience into internal AI assistants. These systems help scale knowledge across the enterprise.
But scaling expertise also exposes it.
When AI systems produce responses that are structured, consistent, and coherent, the reasoning behind those responses becomes easier to study.
We can think of the dynamic this way:
Exposure × Volume × Consistency = Extractability
The more consistently expertise appears through AI outputs, the easier it becomes to characterize and map the logic behind it.
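The heuristic can be made concrete with a toy calculation. This is an illustrative sketch only: the 0-to-1 scales, the function name, and the example values are assumptions for demonstration, not measurements of any real system.

```python
def extractability(exposure: float, volume: float, consistency: float) -> float:
    """Toy score for the heuristic Exposure x Volume x Consistency.

    Each factor is a hypothetical 0-to-1 rating. Because the factors
    multiply, driving any one of them down keeps the overall score low.
    """
    for name, value in (("exposure", exposure),
                        ("volume", volume),
                        ("consistency", consistency)):
        if not 0.0 <= value <= 1.0:
            raise ValueError(f"{name} must be between 0 and 1")
    return exposure * volume * consistency

# A widely exposed, heavily used, highly consistent assistant scores high...
print(round(extractability(0.9, 0.8, 0.9), 3))
# ...while deliberately varied or gated outputs pull the score down.
print(round(extractability(0.9, 0.8, 0.3), 3))
```

The multiplicative form captures the article's point: high volume alone is not the risk; it is high volume combined with high exposure and high consistency.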
Although this does not eliminate your expertise, it shortens the window in which that expertise remains uniquely “yours.”
A Familiar Scenario: Leadership Coach AI
Imagine a global financial organization has run a flagship leadership development program for the past 15 years. Over time, it has accumulated a large body of institutional knowledge:
- Facilitator guides refined through experience
- Executive coaching transcripts
- Case studies rooted in organizational culture
- Debrief notes from difficult sessions
- Internal “How to Handle…” playbooks
- Change management language unique to the organization
Over time, experienced facilitators develop strong judgment and embed it into their practice. They know, for example, how to teach leaders to reframe defensiveness, how to sequence tension in difficult conversations, and how to push when necessary, as well as when to pause.
Now imagine this organization builds an internal AI tool: Leadership Coach AI
The system is trained on years of proprietary program materials, coaching transcripts, and facilitation notes.
Managers can now ask questions like:
- How do I handle a resistant senior engineer?
- How do I give feedback without triggering defensiveness?
- How do I lead through ambiguity?
The responses are structured, culturally aligned, and remarkably consistent. It feels like institutional wisdom at scale—and in many ways, it is.
But consistency has consequences.
Over time, patterns begin to emerge:
- How the organization reframes resistance
- What signals trigger escalation
- How difficult conversations are sequenced
- What language sustains accountability
The system does more than provide leadership advice. It expresses the organization's coaching philosophy. And when that philosophy appears through consistent AI outputs, it becomes observable. Commercially motivated actors might then try to clone it.
A Second Layer: Sales Enablement Strategy AI
Now consider another system.
A global enterprise sales organization builds an internal AI assistant to help account teams prepare for complex deals.
The model is trained on:
- Years of successful enterprise sales plays
- Discovery call recordings and proposal narratives
- Competitive intelligence reports
- Internal deal retrospectives
- Pricing negotiation strategies
- Established sales frameworks such as the Challenger sales method
Challenger, a widely used enterprise sales approach, encourages sales teams to challenge customer assumptions and reframe how buyers think about their problems. Using this AI assistant, sales teams can ask questions like:
- How should we position against this competitor?
- Where should we challenge the customer’s assumptions?
- What signals suggest a deal may stall?
The system dazzles by analyzing competitor strategies, synthesizing prior deals, and linking recommendations to preferred sales approaches. The output is clear, structured, and highly actionable. It can even model the financial impact of moving legacy systems to the cloud or quantify the productivity gains from deploying mobile applications across the workforce. It feels like having the organization's best strategists guiding every sales conversation.
But something else is happening as well. Over time, the system’s responses reveal valuable patterns:
- How the company interprets competitor positioning
- What assumptions it challenges in buyer conversations
- How it diagnoses deal risk
- What narratives it uses to shift customer thinking
The system is not simply offering sales advice. It is expressing the organization’s strategic selling philosophy.
And when that philosophy becomes visible through consistent AI outputs, it becomes characterizable. Once characterizable, it can be approximated and copied.
Again, no breach required.
Just interaction at scale.
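The mechanism behind "approximation through interaction" can be sketched in a few lines. This is a deliberately simplified caricature, not a description of a real attack on a real AI system: the hidden scoring rule, its weights, and its inputs are hypothetical stand-ins for any internal system whose consistent outputs can be observed at scale.

```python
def hidden_scorer(discount: float, has_champion: float, competitors: float) -> float:
    """Stand-in for proprietary logic an outsider can only query, never read."""
    return 0.6 - 0.8 * discount + 0.25 * has_champion - 0.1 * competitors

# Structured probing: vary one input at a time and watch how the answer moves.
base         = hidden_scorer(0.0, 0.0, 0.0)          # baseline answer
w_discount   = hidden_scorer(1.0, 0.0, 0.0) - base   # inferred effect of discount
w_champion   = hidden_scorer(0.0, 1.0, 0.0) - base   # inferred effect of a champion
w_competitor = hidden_scorer(0.0, 0.0, 1.0) - base   # inferred effect per competitor

def surrogate(discount: float, has_champion: float, competitors: float) -> float:
    """A clone built entirely from observed question/answer pairs."""
    return (base + w_discount * discount
                 + w_champion * has_champion
                 + w_competitor * competitors)

# The copy now agrees with the original on inputs it never probed directly.
print(round(hidden_scorer(0.3, 1.0, 2.0), 4),
      round(surrogate(0.3, 1.0, 2.0), 4))
```

Real systems are far more complex than a linear rule, so real extraction takes far more queries and only ever yields an approximation. But the structure of the risk is the same: no credentials are stolen and no code is copied; the logic is inferred from consistent answers alone.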
When Tacit Knowledge Becomes Observable
These examples reveal a deeper shift.
AI does not just scale expertise. It makes the logic of expertise visible.
For decades, much of an organization’s advantage lived in tacit judgment—deeply embedded in conversations, communities of practice, and accumulated intuition about what works. It lived only “in motion” and “within” the culture.
When that judgment becomes consistent AI output, tacit capability begins to appear as observable logic.
Observable logic is incredibly valuable—and it will be studied by competitors in as much depth as they can manage.
Sometimes the exposure is less dramatic than we imagine. Like a museum convinced its crown jewels are protected behind layers of security—only to discover later that a simple ladder was enough to reach an overlooked second-story window.
The system was strong.
But the designers misunderstood where the architecture's real vulnerability lay.
Frozen Capability vs. Living Capability
Is there a difference between scaling knowledge and scaling judgment?
Yes—without any doubt there is:
- Knowledge can be documented.
- Judgment must be exercised.
When organizations encode institutional wisdom into AI systems, they create frozen patterns—highly scalable, efficient, and accessible.
But the most durable advantage in learning systems has rarely been frozen content.
It has been living capability.
Living capability includes practitioners comparing notes after sessions, revisiting what fell flat and why, and adapting their approach based on real-world experience.
AI should amplify those systems—not replace them.
Urgent Patience
This is where the concept of Urgent Patience becomes important.
Urgency pushes organizations to deploy AI tools quickly—to standardize feedback, automate diagnostics, and scale expertise across the enterprise.
Those are reasonable goals.
But patience asks a different set of questions:
- Where does our real advantage actually come from?
- What happens when the thinking behind our systems becomes visible?
- What parts of our expertise should stay flexible instead of frozen in a model?
- How will we keep evolving what we know?
Urgent patience is not hesitation.
It is architectural discipline under acceleration.
The Tension We Must Hold
The future of learning and development will not be defined by how quickly organizations deploy AI tools. It will be defined by how well their learning systems continue to evolve once their expertise becomes observable.
- Urgency builds reach.
- Patience builds endurance.
In a world where institutional judgment can be studied, modeled, and approximated, the organizations that thrive will be those that continually regenerate the capability behind their systems.
The advantage will no longer belong to the organization that encodes its expertise first.
It will belong to the organization that learns how to renew it continuously.
Image credit: VectorHot

