Surprising Ways AI Can Improve eLearning Accessibility

Advances in artificial intelligence technologies have a certain “cool” factor. But their import goes far beyond automation, transforming the future of manufacturing and offering engaging new ways to present eLearning and performance support. The impact of AI on accessibility is profound, with implications for L&D and far beyond. AI can improve eLearning accessibility in ways unimaginable just a few years ago; here are a few of them.
Making language more accessible
Grammar tools can help most business communicators; who among us doesn’t misplace punctuation or abuse pronouns now and again? But Grammarly, a free tool that checks written content for grammatical, spelling, and punctuation errors, has gone a step further, carving out a niche helping people with dyslexia communicate in a polished and professional way. Since dyslexia is one of the most common language-based disabilities, Grammarly and its ilk could help thousands of people land and keep jobs where written communication plays a key role.
The secret to Grammarly’s almost uncanny ability to intuit what word a writer is trying to spell is the broad education of its AI algorithms. According to Lisa Wood Shapiro, writing in Wired, “AI learns from studying millions of documents and other language-based data sets, along with computer generated misspelled words, and, yes, user feedback. For example, with any given spelling error, Grammarly presents one or a few possible corrections. As Grammarly studies the behavior of a subset of users, it sees which replacement spellings users accept and ignore. That information is incorporated into the options offered up to users in the future.” This is machine learning at work.
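To make that feedback loop concrete, here is a minimal sketch, in Python, of re-ranking candidate corrections by observed acceptance rates. It illustrates learning from user feedback in general, not Grammarly’s actual algorithm; all names and data are hypothetical.

```python
from collections import defaultdict

class SuggestionRanker:
    """Re-rank candidate corrections using observed user behavior."""

    def __init__(self):
        # (misspelling, candidate) -> counts of times shown / accepted
        self.shown = defaultdict(int)
        self.accepted = defaultdict(int)

    def record(self, misspelling, chosen, offered):
        # Called after a user picks one of the offered suggestions.
        for candidate in offered:
            self.shown[(misspelling, candidate)] += 1
        self.accepted[(misspelling, chosen)] += 1

    def rank(self, misspelling, candidates):
        # Order candidates by acceptance rate, highest first.
        def acceptance_rate(candidate):
            key = (misspelling, candidate)
            return self.accepted[key] / self.shown[key] if self.shown[key] else 0.0
        return sorted(candidates, key=acceptance_rate, reverse=True)

ranker = SuggestionRanker()
ranker.record("definately", chosen="definitely", offered=["defiantly", "definitely"])
print(ranker.rank("definately", ["defiantly", "definitely"]))
# -> ['definitely', 'defiantly']
```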
Automating access to text and audio
AI-powered tools offer additional language-based assistance. For instance, VocaSee uses natural language processing and machine learning to automate closed captioning and transcription services. Their initial release focuses on “lecture-type” videos and single-speaker audio, but the company plans to develop the technology to be able to distinguish multiple speakers. They’re also training their algorithms on a variety of regional US accents so transcripts and captions will be more accurate. While the results of the current product require careful editing, the process is fast and easy, streamlining captioning and transcript creation for large amounts of video content.
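As a minimal sketch of the output side of such a pipeline: once speech recognition has produced timestamped transcript segments, writing a standard WebVTT caption file is straightforward. The segments below are invented, and this is not VocaSee’s pipeline, only an illustration of the caption format.

```python
def to_timestamp(seconds):
    """Format seconds as a WebVTT timestamp, HH:MM:SS.mmm."""
    hours, remainder = divmod(seconds, 3600)
    minutes, secs = divmod(remainder, 60)
    return f"{int(hours):02d}:{int(minutes):02d}:{secs:06.3f}"

def write_vtt(segments, path):
    """Write (start, end, text) segments as a WebVTT caption file."""
    with open(path, "w", encoding="utf-8") as f:
        f.write("WEBVTT\n\n")
        for start, end, text in segments:
            f.write(f"{to_timestamp(start)} --> {to_timestamp(end)}\n{text}\n\n")

# Invented sample segments, as a recognizer might emit them:
segments = [
    (0.0, 3.2, "Welcome to today's lecture on accessibility."),
    (3.2, 7.8, "We'll begin with automated captioning."),
]
write_vtt(segments, "lecture.vtt")
```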
While many other captioning services exist, VocaSee’s speedy turnaround and automated service have advantages. A user can “train” the service with a set of keywords and concepts to increase accuracy. This profile, called a “knowledge domain,” can be industry-specific or department-specific, which is useful for universities creating bodies of online learning content. The AI-powered software creates captions, a transcript, and a summary.
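One way such a knowledge domain could work (a hypothetical simplification, not VocaSee’s documented method) is to boost recognition candidates that appear in the user-supplied vocabulary:

```python
def pick_candidate(candidates, domain_vocabulary, boost=0.2):
    """Choose among recognizer candidates, favoring in-domain words.

    candidates: list of (phrase, acoustic_confidence) pairs.
    """
    def score(pair):
        phrase, confidence = pair
        in_domain = phrase.lower() in domain_vocabulary
        return confidence + (boost if in_domain else 0.0)
    return max(candidates, key=score)[0]

# A biology department's hypothetical keyword list:
biology_domain = {"mitosis", "meiosis", "cytokinesis"}
candidates = [("my toes is", 0.55), ("mitosis", 0.50)]
print(pick_candidate(candidates, biology_domain))  # -> mitosis
```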
Other companies, such as Salesforce, have also created tools that summarize long texts. These algorithms either extract key phrases from the text or use a higher-level AI-powered process, called abstractive summarization, that can generate coherent paraphrases of the text, potentially using words that don’t actually appear in the original.
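The extractive approach is simple enough to sketch: score each sentence by the frequency of its content words and keep the top scorers. Abstractive summarization, by contrast, requires a generative language model. The snippet below is illustrative only, not any vendor’s implementation.

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "to", "and", "in", "is", "it", "that"}

def extractive_summary(text, n_sentences=1):
    """Return the n highest-scoring sentences, in their original order."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS]
    freq = Counter(words)

    def score(sentence):
        tokens = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[t] for t in tokens if t not in STOPWORDS)

    chosen = set(sorted(sentences, key=score, reverse=True)[:n_sentences])
    return " ".join(s for s in sentences if s in chosen)

text = ("Captioning makes video accessible. Automated captioning uses "
        "machine learning. Editing the captions is still recommended.")
print(extractive_summary(text))  # -> Automated captioning uses machine learning.
```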
AI-powered translation apps, similar to captioning apps, convert spoken words into another format, providing access to non-native speakers (or non-speakers) of the language of a presentation or conversation.
Conversion of spoken language into captions and transcripts is of obvious benefit to people who are deaf or hard of hearing; it can also improve access for language learners and anyone who struggles to process speech quickly. Microsoft Translator even has a “live” feature that allows multiple people to simultaneously receive real-time translations into different languages. And the ability to summarize long texts quickly can appeal to busy executives, as well as to individuals with difficulties reading or processing language.
A Seeing AI … app
Accessibility is about more than language, though. Microsoft’s Seeing AI app is one of many tools that combine AI technologies (natural language processing, visual processing, image recognition, and optical character recognition) to assist visually impaired users in interpreting their environments; a minimal sketch of the OCR piece follows the list below. Seeing AI is a fairly robust tool, with the ability to:
- Read short texts, such as the name on an envelope or a grocery list
- Read longer texts while also providing cues to the text hierarchy by noting headings and titles
- Recognize US currency bills, to assist a blind person in accurately handling money
- Guide a user to hold the camera over a bar code on a product so that the app can identify the product
- Assist users in taking photos, then describe the photo
- Cover for people who didn’t add alt text to photos they’ve emailed or tweeted by describing photos in several apps, or in the user’s own photo gallery
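Seeing AI’s internals aren’t public, but the optical character recognition piece of such a tool can be illustrated with the open-source Tesseract engine via the pytesseract library (which requires a local Tesseract installation); the image file name here is a placeholder.

```python
from PIL import Image  # pip install pillow
import pytesseract     # pip install pytesseract; requires the Tesseract engine

def read_short_text(image_path):
    """Extract printed text from an image, e.g. an envelope or a label."""
    return pytesseract.image_to_string(Image.open(image_path)).strip()

print(read_short_text("envelope.png"))  # placeholder file name
```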
A host of more specialized apps perform the same functions. KNFB is a well-regarded text reader, for example, and the LookTel money reader can help users recognize bills in 21 currencies. A related class of apps helps those same users navigate unfamiliar environments. Blindsquare and the Sendero Group’s wayfinding apps provide walking instructions, including describing points of interest and informing users of street intersections as they approach a corner.
Increasing accessibility of text and providing information about the visual environment make eLearning tools accessible to learners who have limited vision. The AI powering these tools can also be incorporated into eLearning to offer descriptions of virtual environments or help learners recognize colleagues in person or on screen, improving collaboration and increasing social learning opportunities.
Voice commands overcome mobility barriers
Virtual assistants that respond to voice commands provide convenience to all users, but for people with limited mobility, they can be a game changer. Voice-activated assistants, such as smart speakers, allow people to search the internet, access information, play music or podcasts, or otherwise interact without using their hands. For individuals whose mobility impairments make typing challenging or impossible, this vastly expands their potential to engage with and produce content. As these assistants “learn” more tasks, the ways they can increase access to eLearning and provide performance support will continue to grow. Smart speakers’ growing repertoire of skills is based in part on AI technologies like natural language processing and machine learning.
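Under the hood, an assistant’s skill dispatch can be thought of as intent matching: recognized speech is mapped to a handler. Real assistants use trained NLP models; the keyword rules and handlers below are a hypothetical toy sketch of the idea.

```python
def play_podcast(utterance):
    return "Playing your latest podcast."

def web_search(utterance):
    return f"Searching the web for: {utterance}"

# Keyword-to-handler rules; a real assistant would use a trained model.
INTENTS = [
    (("play", "podcast"), play_podcast),
    (("search", "look up"), web_search),
]

def route(utterance):
    """Dispatch a recognized utterance to the first matching handler."""
    text = utterance.lower()
    for keywords, handler in INTENTS:
        if any(keyword in text for keyword in keywords):
            return handler(utterance)
    return "Sorry, I didn't catch that."

print(route("Play my favorite podcast"))     # -> Playing your latest podcast.
print(route("Look up eLearning standards"))  # -> Searching the web for: ...
```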
Eye tracking improves VR accessibility
Eye tracking is getting more traction in VR as headset developers like FOVE and HTC Vive work to integrate it into their consumer products.
Studying a person’s gaze tells you what she is paying attention to, which is valuable information for marketers. In a virtual environment, eye tracking can increase the feeling of immersion or “presence”; used in multiplayer VR games, it can improve the social communication among the players’ avatars. But the utility of eye tracking extends beyond the potential to improve games or even gather data about learners or players. Knowing where a learner is looking in a VR-based training offers many opportunities to target content and questions, for example.
It also improves accessibility. If, for instance, some effects or text instructions are cued by a user’s gaze, eye tracking can likewise trigger subtitles that explain sounds to learners who cannot hear them. Without knowing where that learner is looking, there’s no way to know that she will see the subtitles.
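A minimal sketch of that trigger, assuming normalized screen coordinates and a hypothetical display callback: show a sound-describing caption only when the learner’s gaze falls inside the region where the caption appears.

```python
from dataclasses import dataclass

@dataclass
class Region:
    """A rectangle in normalized (0-1) screen coordinates."""
    x: float
    y: float
    width: float
    height: float

    def contains(self, gaze_x, gaze_y):
        return (self.x <= gaze_x <= self.x + self.width
                and self.y <= gaze_y <= self.y + self.height)

def maybe_show_caption(gaze, region, caption, show):
    # Fire the caption only when the learner is actually looking at it.
    if region.contains(*gaze):
        show(caption)

subtitle_area = Region(x=0.2, y=0.8, width=0.6, height=0.15)
maybe_show_caption((0.5, 0.85), subtitle_area, "[door creaks open]", show=print)
```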
Medium blogger Lucas Rizzotto says that eye tracking in VR is a big deal because it allows users to interact with content directly, without a controller. For learners who cannot use a mouse or pointer or easily touch specific areas of the screen, eye tracking allows them to interact, period.