Before digital assistants can legitimately claim the title of the next hot thing in eLearning, they need to become more worldly, diverse, and inclusive. Technologies that reflect, perpetuate, or exacerbate stereotypes and biases—as many AI algorithms do—are not ready for eLearning prime time. L&D teams incorporating AI (artificial intelligence) engines and algorithms into eLearning and performance support need to consider the user experience and ensure that they are not creating or amplifying disparities, discrimination, or bias through the technologies they adopt.
This article describes several well-known gaps in AI algorithms and issues with AI-powered technologies that could impact eLearning, performance support, and learner experience and engagement.
Facial recognition technology
Facial and posture recognition technologies are notoriously poor at recognizing non-white faces, particularly in circumstances that don’t meet the ideals of good lighting and a full frontal view of the person’s face.
When Google’s facial recognition technology categorized photos of black people as gorillas, the company’s solution was to remove the “gorilla” tag from the software—along with “chimp,” “chimpanzee,” and “monkey.” According to Wired magazine, in the intervening two-plus years, Google has failed to correct the technology; the best fix was erasing apes from the algorithm.
This could affect eLearning and performance support in a very basic way: As more devices rely on biometric markers for authentication, non-white learners could face a frustrating experience simply trying to gain access to their training or performance support tools. If simply accessing eLearning is frustrating and time-consuming, the entire user experience becomes unpleasant—and engagement will suffer.
Voice assistants
Ever struggle to have your Amazon Alexa or Google Home understand your request? If you’re not a white male from the West Coast of the United States, garbled responses or a virtual shrug from your “assistant” are likely a common occurrence.
Voice technology has made enormous strides in the past few years, and voice-controlled digital assistants are becoming as indispensable as smartphones to many users. But if you’re one of the billions of humans who speak with anything the assistant perceives as an accent, you’re out of luck.
“Amazon’s Alexa and Google’s Assistant are spearheading a voice-activated revolution, rapidly changing the way millions of people around the world learn new things and plan their lives,” Drew Harwell wrote in the Washington Post. “But for people with accents—even the regional lilts, dialects and drawls native to various parts of the United States—the artificially intelligent speakers can seem very different: inattentive, unresponsive, even isolating. For many across the country, the wave of the future has a bias problem, and it’s leaving them behind.”
The Washington Post’s research found that people with regional US accents, such as Southern or Midwestern, experienced less-accurate responses from Google Home and Alexa devices; people with non-US accents fared even worse. That’s likely the result of the “training” of the algorithm; a more diverse set of voices for algorithm development could improve the devices’ performance.
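Teams evaluating a voice interface can look for this kind of gap directly before rollout. The sketch below is illustrative only: the accent labels, clips, and transcripts are hypothetical, and the recognize() call simulates whatever speech-to-text engine is actually being assessed. It simply computes word error rate per accent group so that accuracy disparities become visible.

```python
from collections import defaultdict


def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level edit distance divided by the reference length."""
    ref, hyp = reference.lower().split(), hypothesis.lower().split()
    # Standard dynamic-programming Levenshtein distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[-1][-1] / max(len(ref), 1)


# Simulated recognizer output; in practice this would call the speech-to-text
# engine under evaluation. All clips and transcripts here are made up.
simulated_output = {
    "clip_001.wav": "open the compliance training module",
    "clip_002.wav": "show me the safety check list",
    "clip_003.wav": "restart the boarding course",
}

def recognize(audio_clip: str) -> str:
    return simulated_output[audio_clip]

# Hypothetical labeled test set: (accent group, audio clip, reference transcript).
test_set = [
    ("southern_us", "clip_001.wav", "open the compliance training module"),
    ("midwestern_us", "clip_002.wav", "show me the safety checklist"),
    ("indian_english", "clip_003.wav", "restart the onboarding course"),
]

wer_by_group = defaultdict(list)
for group, clip, reference in test_set:
    wer_by_group[group].append(word_error_rate(reference, recognize(clip)))

for group, scores in wer_by_group.items():
    mean_wer = sum(scores) / len(scores)
    print(f"{group}: mean word error rate {mean_wer:.0%} over {len(scores)} clip(s)")
```

A real evaluation would use many clips per group and the vendor’s actual recognizer, but even a rough per-group breakdown like this makes it harder for accent-based gaps to hide behind a single overall accuracy number.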
As voice assistants increasingly appear in the workplace and become integrated into eLearning and, especially, performance support, discrepancies in how they respond to employees’ voices and accents will create barriers, beginning with a frustrating user experience for many learners. The result is disparities of access—access to training and to essential tools that help some employees improve their efficiency and advance in their careers. As companies strive to improve diversity within the ranks, and especially in management, L&D teams should be wary of inadvertently increasing bias or excluding learners.
Gendered technology
Beyond their parochial understanding of language, voice assistants exemplify baked-in gender-based assumptions, biases, and stereotypes.
“If you survey the major voice assistants on the market—Alexa, Apple’s Siri, Microsoft’s Cortana, and Google Home’s unnamed character—three out of four have female-sounding names by default, and their voices sound female, too. Even before the user addresses Alexa, the robot has already established itself as an obedient female presence, eager to carry out tasks and requests on its user’s behalf,” Ian Bogost wrote in The Atlantic in January 2018.
By creating assistants that are gendered “female,” technology companies play into broad stereotypes about whose role it is to assist, be compliant and ever-helpful, and, yes, take sexist abuse without flinching.
Bogost compares the reactions that users have when their voice assistants’ efforts come up short with their reactions to a Google search gone awry. “If you Googled for some popcorn instructions or a Mozart biography, the textual results might also disappoint. But you’d just read over that noise, scanning the page for useful information. You’d assume a certain amount of irrelevant material and adjust accordingly. At no point would you be tempted to call Google Search a ‘bitch’ for failing to serve up exactly the right knowledge at whim.”
Building gendered technology into eLearning or performance support perpetuates stereotypes, undermining efforts to increase diversity and reduce bias in the workplace.
Hidden biases
Many biases built into AI algorithms—intentionally or not—are hidden from users’ view. These include content bias, in which algorithms detect patterns that reflect past discrimination, and the use of proxies, seemingly neutral variables that can stand in for information like race, in predicting the success of employees or applicants. This could result in steering employees toward training or promotions based on incomplete or inaccurate information and assumptions.
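One practical check an L&D team can ask for before adopting such a predictive tool is a comparison of how often it produces favorable recommendations for different groups of employees. The sketch below is a simplified illustration, not any vendor’s actual method: the records and group labels are hypothetical, and the 80 percent threshold reflects the common “four-fifths” rule of thumb for flagging possible adverse impact.

```python
from collections import defaultdict

# Hypothetical model output: (employee group, was the tool's recommendation favorable?).
predictions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

counts = defaultdict(lambda: [0, 0])  # group -> [favorable count, total count]
for group, favorable in predictions:
    counts[group][0] += int(favorable)
    counts[group][1] += 1

rates = {group: fav / total for group, (fav, total) in counts.items()}
baseline = max(rates.values())

for group, rate in sorted(rates.items()):
    # The "four-fifths" rule of thumb flags a group whose favorable rate falls
    # below 80% of the highest group's rate as a potential adverse-impact signal.
    flag = "  <- review for possible adverse impact" if rate < 0.8 * baseline else ""
    print(f"{group}: favorable prediction rate {rate:.0%}{flag}")
```

A disparity like this does not prove the tool is biased, but it is exactly the kind of pattern, driven by proxies or historical data, that stays invisible unless someone looks for it.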
Not all hidden biases are injurious; an article on algorithmically generated “smart replies” in email suggests that frequent prompting to express thanks in replies could nudge users to be more polite in their interactions. But whether beneficial or harmful, the biases are there, and L&D teams should consider gaps in AI algorithms when exploring AI-based technologies to enhance their eLearning and transform performance support tools.