Before digital assistants can legitimately claim the title of the next hot thing in eLearning, they need to become more worldly, diverse, and inclusive. Technologies that reflect, perpetuate, or exacerbate stereotypes and biases—as many AI algorithms do—are not ready for eLearning prime time. L&D teams incorporating AI (artificial intelligence) engines and algorithms into eLearning and performance support need to consider the user experience and ensure that they are not creating or amplifying disparities, discrimination, or bias through the technologies they adopt.
This article describes several well-known gaps in AI algorithms and issues with AI-powered technologies that could impact eLearning, performance support, and learner experience and engagement.
Facial recognition technology
Facial and posture recognition technologies are notoriously poor at recognizing non-white faces, particularly in conditions that fall short of the ideal: good lighting and a full frontal view of the person’s face.
When Google’s facial recognition technology categorized photos of black people as gorillas, the company’s solution was to remove the “gorilla” tag from the software—along with “chimp,” “chimpanzee,” and “monkey.” According to Wired magazine, more than two years later Google still had not corrected the underlying technology; its best fix was erasing apes from the algorithm.
This could affect eLearning and performance support in a very basic way: As more devices rely on biometric markers for authentication, non-white learners could face a frustrating experience simply trying to gain access to their training or performance support tools. If merely accessing eLearning is frustrating and time-consuming, the entire user experience becomes unpleasant—and engagement will suffer.
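For L&D teams weighing face-based sign-in for learning platforms, one practical safeguard is to compare authentication failure rates across demographic groups before rollout. Here is a minimal sketch in Python, assuming a hypothetical log of login attempts tagged with a self-reported group label (the field names and group labels are illustrative, not any vendor's API):

    # Hypothetical sketch: compare biometric-login failure rates across
    # self-reported demographic groups before adopting face-based sign-in.
    # The field names and group labels are illustrative assumptions.
    from collections import defaultdict

    def failure_rates_by_group(attempts):
        # attempts: dicts like {"group": "Group A", "authenticated": True}
        totals = defaultdict(int)
        failures = defaultdict(int)
        for a in attempts:
            totals[a["group"]] += 1
            if not a["authenticated"]:
                failures[a["group"]] += 1
        return {g: failures[g] / totals[g] for g in totals}

    sample_log = [
        {"group": "Group A", "authenticated": True},
        {"group": "Group A", "authenticated": True},
        {"group": "Group B", "authenticated": False},
        {"group": "Group B", "authenticated": True},
    ]

    for group, rate in failure_rates_by_group(sample_log).items():
        print(f"{group}: {rate:.0%} of login attempts failed")

If one group fails at a markedly higher rate, that is a signal to revisit the vendor or to offer an alternative sign-in path before learners ever hit the barrier.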
Voice assistants
Ever struggle to get your Amazon Alexa or Google Home to understand your request? If you’re not a white male from the West Coast of the United States, garbled responses or a virtual shrug from your “assistant” are probably a familiar occurrence.
Voice technology has made enormous strides in the past few years, and voice-controlled digital assistants are becoming as indispensable as smartphones to many users. But if you’re one of the billions of humans who speak with anything the assistant perceives as an accent, you’re out of luck.
“Amazon’s Alexa and Google’s Assistant are spearheading a voice-activated revolution, rapidly changing the way millions of people around the world learn new things and plan their lives,” Drew Harwell wrote in the Washington Post. “But for people with accents—even the regional lilts, dialects and drawls native to various parts of the United States—the artificially intelligent speakers can seem very different: inattentive, unresponsive, even isolating. For many across the country, the wave of the future has a bias problem, and it’s leaving them behind.”
The Washington Post’s research found that people with regional US accents, such as Southern or Midwestern, experienced less-accurate responses from Google Home and Alexa devices; people with non-US accents fared even worse. That’s likely a result of how the algorithms were trained; developing them with a more diverse set of voices could improve the devices’ performance.
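L&D teams evaluating voice assistants can run a comparable check themselves: have speakers with a range of accents issue the same commands and compare how faithfully each is transcribed. Below is a minimal sketch, assuming hypothetical test data; the accent labels, phrases, and field names are illustrative only:

    # Hypothetical sketch: compare how faithfully an assistant transcribes
    # the same commands for speakers with different accents. All labels,
    # phrases, and field names below are illustrative assumptions.
    import difflib
    from collections import defaultdict

    def word_similarity(expected, heard):
        # Rough word-level similarity between the intended command and the
        # assistant's transcript (1.0 means identical).
        return difflib.SequenceMatcher(
            None, expected.lower().split(), heard.lower().split()
        ).ratio()

    def mean_similarity_by_accent(results):
        scores = defaultdict(list)
        for r in results:
            scores[r["accent"]].append(word_similarity(r["expected"], r["heard"]))
        return {accent: sum(s) / len(s) for accent, s in scores.items()}

    test_results = [
        {"accent": "US West Coast", "expected": "open the safety course",
         "heard": "open the safety course"},
        {"accent": "US Southern", "expected": "open the safety course",
         "heard": "open the safety chorus"},
    ]

    for accent, score in mean_similarity_by_accent(test_results).items():
        print(f"{accent}: average transcription similarity {score:.0%}")

Consistently lower scores for particular accents are a warning sign that the tool will frustrate some learners far more than others.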
As voice assistants increasingly appear in the workplace and become integrated into eLearning and, especially, performance support, discrepancies in how they respond to employees’ voices and accents will create barriers, starting with a frustrating user experience for many learners. The result is disparities of access—access to training and to the essential tools that help some employees improve their efficiency and advance in their careers. As companies strive to improve diversity within the ranks, especially in management, L&D teams should be wary of inadvertently increasing bias or excluding learners.
Gendered technology
Beyond their parochial understanding of language, voice assistants exemplify baked-in gender-based assumptions, biases, and stereotypes.
“If you survey the major voice assistants on the market—Alexa, Apple’s Siri, Microsoft’s Cortana, and Google Home’s unnamed character—three out of four have female-sounding names by default, and their voices sound female, too. Even before the user addresses Alexa, the robot has already established itself as an obedient female presence, eager to carry out tasks and requests on its user’s behalf,” Ian Bogost wrote in The Atlantic in January 2018.
By gendering their assistants as female, technology companies play into broad stereotypes about whose role it is to assist, to be compliant and ever helpful, and, yes, to take sexist abuse without flinching.
Bogost compares the reactions that users have when their voice assistants’ efforts come up short with their reactions to a Google search gone awry. “If you Googled for some popcorn instructions or a Mozart biography, the textual results might also disappoint. But you’d just read over that noise, scanning the page for useful information. You’d assume a certain amount of irrelevant material and adjust accordingly. At no point would you be tempted to call Google Search a ‘bitch’ for failing to serve up exactly the right knowledge at whim.”
Building gendered technology into eLearning or performance support perpetuates stereotypes, undermining efforts to increase diversity and reduce bias in the workplace.
Hidden biases
Many biases built into AI algorithms—intentionally or not—are hidden from users’ view. These include content bias, in which algorithms detect patterns that reflect past discrimination, and the use of proxies that indirectly factor information such as race into predictions of employee or applicant success. Either could result in steering employees toward training or promotions based on incomplete or inaccurate information and assumptions.
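One way to surface this kind of hidden bias is a simple disparity check on a model's recommendations, loosely patterned on the four-fifths rule used in employment-selection analysis. A minimal sketch, assuming a hypothetical list of recommendations tagged with a group label (the field names, groups, and 0.8 threshold are illustrative, not legal guidance):

    # Hypothetical sketch: flag groups whose rate of receiving a training or
    # promotion recommendation falls well below the highest group's rate.
    # Field names, group labels, and the threshold are illustrative.
    from collections import defaultdict

    def selection_rates(recommendations):
        # recommendations: dicts like {"group": "Group A", "recommended": True}
        totals = defaultdict(int)
        selected = defaultdict(int)
        for r in recommendations:
            totals[r["group"]] += 1
            if r["recommended"]:
                selected[r["group"]] += 1
        return {g: selected[g] / totals[g] for g in totals}

    def flag_disparities(recommendations, threshold=0.8):
        rates = selection_rates(recommendations)
        highest = max(rates.values())
        if highest == 0:
            return {g: False for g in rates}
        # True means the group's rate is below the threshold share of the
        # highest group's rate and deserves a closer look.
        return {g: rate / highest < threshold for g, rate in rates.items()}

    sample = [
        {"group": "Group A", "recommended": True},
        {"group": "Group A", "recommended": True},
        {"group": "Group B", "recommended": True},
        {"group": "Group B", "recommended": False},
    ]
    print(flag_disparities(sample))  # {'Group A': False, 'Group B': True}

A flagged group is not proof of discrimination, but it is a cue to examine what data and proxies the algorithm is relying on.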
Not all hidden biases are harmful; an article on algorithmically generated “smart replies” in email suggests that frequent prompting to express thanks could nudge users to be more polite in their interactions. But whether beneficial or harmful, the biases are there, and L&D teams should weigh these gaps in AI algorithms when exploring AI-based technologies to enhance their eLearning and transform their performance support tools.