As I look back on the vision I shared earlier this year, it's clear that AI's potential in Learning & Development (L&D) remains transformative. In January and March of 2024, I wrote about how conversational AI and AI-powered experiential learning environments would fundamentally reshape the way we train employees, develop talent, and foster leadership in organizations. These predictions still hold. But if 2024 has taught us anything, it's that progress has been slower than anticipated.

In their October 2024 report, Donald Taylor and Egle Vinauskaite provide a thorough analysis of where organizations stand when it comes to AI in L&D. The report reaffirms much of what I've been hearing from leaders across our industry, from vendors pushing the boundaries of AI solutions, and from nearly every serious conversation in L&D and talent development I'm familiar with. The findings are both illuminating and, for those of us close to the field, unsurprising.

Where we are in 2024: Vision vs. reality

Back in January, I painted a vision of a future where AI-driven, conversational performance support would empower employees and leaders to get help, and even develop skills, on demand. Similarly, I spoke about AI simulations providing immersive, experiential learning environments, for example, safe practice spaces for building critical thinking and leadership capabilities. But as 2024 progressed, it became clear that most organizations remain hesitant to move beyond tactical applications.

Don and Egle's work highlights that 17% of organizations have not even begun exploring AI in L&D, and those that have are largely stuck in the experimentation phase, as opposed to executing fully scalable solutions.

In practice, that means that instead of strategic AI deployments, many businesses are still using AI primarily for content generation. This most basic, operational use case solves immediate problems but falls short of delivering long-term value.

This distinction between AI use cases and a broader AI strategy is critical. While use cases can provide immediate solutions, they won't drive sustainable transformation without integration into a long-term, holistic strategy. It's this shift, from experimentation to strategy, that may prove crucial as we move into 2025.

The barriers to AI adoption

The reasons for this slow progress are clear. From the C-suite to frontline talent developers, a mix of uncertainty, risk aversion, and organizational inertia continues to impede the adoption of AI at scale. My conversations, as well as the report's findings, all point to the same key barriers:

Privacy and data concerns

According to the report, the biggest barrier to AI adoption in L&D, cited by 19.5% of respondents, is rooted in privacy and data security concerns. This is hardly surprising. As organizations collect and process increasingly sensitive employee data, from performance metrics to learning behaviors, there is a pervasive fear of data breaches and of non-compliance with regulations such as GDPR. Conversations with leaders in highly regulated industries reflect these concerns, particularly around the potential misuse of AI to infer sensitive characteristics that could lead to bias or discrimination.

In many ways, this caution around privacy reflects a recurring theme: While leaders understand AI's capacity to unlock incredible insights for the business and for employee development, they are equally cautious about trusting it with sensitive data, especially without clear governance frameworks. If AI is to be more widely adopted in 2025, organizations must follow the lead of those charging ahead: prioritize data privacy, build robust governance structures, and ensure their AI initiatives are also aligned with their ethical vision and standards.

Lack of trust in AI outputs

The second-largest barrier, identified by 13.4% of respondents, is a lack of trust in AI outputs. This concern is prevalent across the conversations I have had with decision-makers, whether in L&D and HR or in the wider business. Organizations are naturally hesitant to rely on AI-driven recommendations, due to fears of hallucinated data, biased results, or an inability to account for the nuances of human performance.

This is where the reliability gap becomes critical. AI systems are only as reliable as the data they're trained on, and bias in the data can lead to biased outcomes. Leaders tell me they need to see transparency—how AI systems make decisions, how data is used, and how risks are mitigated—before they can fully integrate AI into their learning and talent strategies.

Lagging AI policies and governance

A trend many of us have observed is the lagging development of AI policies. Governance frameworks have not kept pace with the technology itself, and, even this late in 2024, many businesses are still in the process of forming AI governance committees and determining who owns AI initiatives internally. This is not surprising: AI raises complex issues of data privacy, ethics, and risk management, and organizations are still trying to figure out who should lead these efforts.

Who should lead AI? The debate continues

This leads to one of the most persistent challenges: determining who should own AI within an organization.

I've seen an ongoing tension. Some believe AI should be led by IT or data teams, given its technical complexity; others argue that AI is a strategic asset affecting the entire organization, and everyone in it, and thus should be driven by the people function. And then there's everyone in between. After all, no one has figured this out yet. Or, to put it more accurately, they are still in the process of figuring it out.

This lack of consensus on AI leadership is slowing down progress. Without a clear ownership model or a defined governance structure, many organizations prefer to wait it out, trying to resolve these internal dynamics before making bold moves.

Success stories: The exceptions that prove the rule

Despite broad hesitancy, there are pockets of progress where AI is making a tangible impact. The report highlights some great examples, such as Ericsson, which has implemented an AI-driven skills intelligence system to dynamically update job profiles and map out skills development in real time. Similarly, NHS Elect has used AI to create scalable coaching solutions, addressing the logistical challenges of traditional coaching models.

These examples aren't isolated. I've seen firsthand how forward-thinking organizations that align AI initiatives with their strategic goals, versus merely viewing AI as an experiment, are the ones making real progress. What sets these organizations apart is their ability to move beyond content automation and embrace AI as a strategic driver linked directly to business outcomes such as talent management, leadership development, and operational efficiency.

The lesson is clear: Organizations that elevate AI from use-case-driven experiments to strategically integrated initiatives are those seeing the most success.

Looking ahead to 2025: The year for strategic AI adoption

As we look ahead to 2025, it's clear that this is the year for L&D and talent development to shift from experimentation to strategy. While 2024 is seeing organizations invest in data management, AI governance, and early literacy programs, the time has come to move beyond isolated AI use cases and build scalable, long-term strategies.

The way forward requires organizations to focus on three key areas:

  • Establish Clear Governance for AI: Successful AI adoption starts with robust governance. Organizations need to create governance frameworks that not only address privacy and ethical concerns but also ensure alignment with strategic business goals. This requires clear leadership and decision-making structures to ensure AI initiatives are integrated effectively across the enterprise.
  • Scale and Widen Your Approach: While pilots are crucial to exploring AI's potential, 2025 must be the year of scaling: more pilots, more iterations, and broader deployment. Phased approaches have been shown to work, and they allow for agility as the organization changes and as the tool landscape evolves. Organizations need to take the lessons learned and continue to develop and deploy use cases and solutions, in line with their wider, overarching strategy.
  • Invest in Cross-Functional Upskilling: AI isn't just a technology to be implemented, it's a cultural shift. For AI to be fully embedded in organizations, it's critical to build AI literacy across all departments. This involves upskilling teams in both technical and non-technical roles, ensuring that employees across functions can leverage AI to enhance decision-making and drive value.

I often remind leaders: A series of use cases does not, on its own, constitute an AI strategy.

Conclusion: A call to action

For the L&D community, 2025 is the year to stop waiting and start building. The barriers to AI adoption, while significant, are solvable. Organizations must commit to scaling AI strategically, leveraging its power not just for content efficiency but for transformative change in leadership, decision-making, and talent development.

The research from Don and Egle is invaluable because it brings to light what those of us embedded in the field have been seeing firsthand: The groundwork for AI-driven transformation is here, but it requires commitment, investment, and strategic alignment to realize its full potential.

Those who embrace AI in 2025 will not only gain a competitive advantage but will also lead the way in creating future-ready learning ecosystems.

Explore AI at Learning Guild events!

Join us at DevLearn 2024 and Learning 2024—two opportunities to explore AI technology, strategy, and potential. Both events offer full-day seminars on AI in L&D, as well as a wealth of sessions and activities.

At DevLearn, November 6–8, 2024, in Las Vegas, and at Learning 2024, December 4–6, 2024, in Orlando, you can dive into these topics through full-day seminars, conference sessions, and hands-on activities.