Julian De Freitas and Elie Ofek remind us in their article "How AI Can Power Brand Management" (Harvard Business Review, September-October 2024) that a brand is a promise about quality, style, reliability, and aspirations. In eLearning, AI plays a crucial role in shaping this promise: it facilitates learning by determining what the learner already knows and then leading the learner to mastery of a new set of skills and knowledge. While AI cannot make or keep a promise in the human sense, its voice shapes a learner's impressions, including trust.

This article applies some lessons from brand management to learning design at a high level. Throughout, it is essential to remember that the context is an autonomous, interactive learning experience. The experience is created adaptively by generative AI or (soon enough) by agents, not by rigid scripts. An AI may choose to present prewritten texts or prerecorded videos from a content library according to a user's responses or questions, but the overall experience will still be different for each user. It will be more like a conversation than a book.

Trust issues and AI

Pew Research recently found that three out of five people do not trust AI. The mistrust may be rooted in fear of losing their jobs, and in any case, dislike can be reinforced by the cold voice and impersonal style of some LLMs. For creative professionals, eliminating that chill is an important part of keeping your brand's promise when delivery is carried out by what many users will likely perceive as a “robot.”

When we talk about AI's "voice," we're referring to the distinct personality and style that the AI uses to communicate with its audience. This voice helps to create a consistent and recognizable brand identity, making it easier for users to connect with and trust the brand. 

Here are some aspects of what it takes for an AI to have a voice in autonomous, interactive learning experiences.

Consistency: The voice should be consistent across all design and communication modalities. This consistency helps build trust and recognition. For example, a friendly and conversational voice might use casual language and contractions, while a more formal voice might use precise language and avoid slang.

Tone: While the voice remains consistent, the tone can change depending on the context. For instance, the tone might be more serious when delivering corrective feedback than when celebrating a learner's progress.

Personality: The voice should reflect an authentic personality and values. This could be professional, authoritative, friendly, supportive, or any other trait that aligns with your brand's identity.

Engagement: A well-defined voice helps engage learners by making the content more relatable and enjoyable to experience. One way to capture these choices as a reusable design artifact is sketched below.
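One practical way to put these ideas to work is to record the voice decisions in a single, reusable design artifact that every modality (on-screen text, narration scripts, chatbot prompts) draws from. The Python sketch below is a minimal illustration only; the VoiceSpec class, its fields, and the example values are assumptions for demonstration, not part of any established tool or framework.

```python
# A minimal sketch of capturing voice decisions as a reusable design artifact.
# All names and example values are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class VoiceSpec:
    personality: str                                              # who the voice "is"
    style_rules: list[str] = field(default_factory=list)          # consistency rules
    tone_by_context: dict[str, str] = field(default_factory=dict) # tone varies, voice does not

    def style_guide(self, context: str) -> str:
        """Render the spec as a short style guide for a given context."""
        tone = self.tone_by_context.get(context, "neutral")
        rules = "; ".join(self.style_rules)
        return (f"Personality: {self.personality}. "
                f"Tone for '{context}': {tone}. Style rules: {rules}.")


course_voice = VoiceSpec(
    personality="friendly, supportive coach",
    style_rules=["use contractions", "avoid jargon", "keep sentences short"],
    tone_by_context={"feedback": "encouraging but direct",
                     "assessment": "calm and precise"},
)

print(course_voice.style_guide("feedback"))
```

Because the spec is defined once and rendered per context, the voice stays consistent while the tone flexes, which is exactly the balance described above.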

It may seem odd to think of eLearning as having a voice.

So, a "voice" is needed for successful communication with users, but what does that mean when the  "instructor" is a program instead of a person? How can a designer apply this "voice" concept to asynchronous learning? How could it apply to dialog with a more autonomous chatbot that does not respond from a rigid script or FAQ file? Imagine setting up a debate between a human and an AI. How can that work?

The concept of a "voice" is essential in eLearning, especially when the "instructor" is a program. This concept can be applied to asynchronous learning and autonomous chatbots. Chatbots emulate human voices, and there are some additional requirements for their use.

Natural Language Processing (NLP): Use advanced NLP techniques to ensure the chatbot understands and responds in a way that aligns with the desired voice. 

Context Awareness: The chatbot should be able to adjust its tone based on the context of the conversation. 

Personality: Give the chatbot a distinct personality that aligns with the brand or course. 

Empathy and Support: The chatbot should be able to provide emotional support and encouragement, especially in educational settings.

Adaptability: The chatbot should be able to adapt its responses based on the user's input and emotional state. 

By applying these principles, you can harness the power of AI to create a more engaging and effective eLearning experience, whether through asynchronous content or interactive chatbots. One way these requirements might come together in a single chatbot turn is sketched below.
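The following is a minimal sketch of how those requirements might shape one tutoring-chatbot response. It assumes the openai Python client and an API key in the environment; the model name, persona text, and tone mapping are illustrative assumptions rather than a prescribed implementation.

```python
# A minimal sketch of a single chatbot turn that keeps a consistent persona,
# adjusts tone by context, and adapts to conversation history.
# Assumes the `openai` Python client and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PERSONA = (
    "You are a friendly, supportive tutor. Use plain language and contractions. "
    "Encourage the learner, acknowledge frustration, and never mock mistakes."
)  # personality, empathy, consistency


def tutor_reply(history: list[dict], learner_message: str, context: str) -> str:
    """Generate one chatbot turn, adjusting tone to the current context."""
    tone_hint = {
        "feedback": "Be encouraging but direct about what to fix.",
        "assessment": "Be calm, precise, and neutral.",
    }.get(context, "Keep a warm, conversational tone.")  # context awareness

    messages = (
        [{"role": "system", "content": PERSONA + " " + tone_hint}]
        + history                                    # adaptability: prior turns inform the reply
        + [{"role": "user", "content": learner_message}]
    )
    response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    return response.choices[0].message.content


# Example turn (assumes an earlier exchange is stored in `history`):
# history = [{"role": "user", "content": "I keep failing the quiz."},
#            {"role": "assistant", "content": "That's frustrating, but we can fix it."}]
# print(tutor_reply(history, "Can you explain question 3 again?", "feedback"))
```

The persona carries the consistent voice and personality, the tone hint supplies context awareness, and passing the conversation history lets each reply adapt to what the learner has already said.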

User Resistance and Acceptance

Research indicates that some users resist learning experiences in which AI or chatbots actively participate autonomously. This resistance can affect user acceptance, participation, trust, and learning outcomes. Here are some key findings from published research.

Perceived Usefulness and Ease of Use: If users find the chatbot helpful and easy to interact with, they are more likely to accept it.

Trust and Relational Factors: Trust is critical in accepting AI chatbots. Relational factors, such as the chatbot's ability to build rapport with users, are also important.

Perceived Risk and Enjoyment: Perceived risk and enjoyment also play significant roles in user acceptance. Users who perceive a high risk of data privacy issues or find the chatbot interactions unenjoyable are less likely to engage with the technology. 

Summary

While AI chatbots have the potential to enhance learning experiences, their acceptance and effectiveness depend on several factors: perceived usefulness, ease of use, trust, relational factors, perceived risk, and enjoyment.

Personalization and building trust are essential for maintaining user engagement and achieving positive learning outcomes. The right "voice" for autonomous AI or a chatbot can enhance trust by making interactions more personal, consistent, and empathetic. Using human-like approaches such as providing examples, logical arguments, personalization, transparency, and adapting content based on feedback can further build trust and improve user engagement.

Because AI technology is moving so quickly, this article has not explored the "how." As more agents appear (probably in the next three to four months), I will return to provide details. In the meantime, you may have enough information to try some experiments on your own with these AI voice and speech tools (a small example using Amazon Polly follows the list):

OpenAI's Voice Engine

Google's Tacotron 2

Amazon Polly

Microsoft Azure Cognitive Services
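For example, a first experiment with Amazon Polly might look like the following sketch. It assumes the boto3 library and configured AWS credentials; the voice name, engine, and greeting text are placeholders to adapt to your own course voice.

```python
# A minimal text-to-speech experiment with Amazon Polly via boto3.
# Assumes AWS credentials are configured; voice and engine choices are illustrative.
import boto3

polly = boto3.client("polly")

response = polly.synthesize_speech(
    Text="Welcome back! Ready to pick up where you left off?",
    VoiceId="Joanna",       # one of Polly's built-in voices
    Engine="neural",        # neural voices sound less "robotic" than standard ones
    OutputFormat="mp3",
)

# The audio arrives as a streaming body; save it for use in a lesson.
with open("welcome.mp3", "wb") as audio_file:
    audio_file.write(response["AudioStream"].read())
```

Swapping the VoiceId or Engine values is a quick way to hear how much the choice of synthetic voice changes the warmth of the same script.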