Working with data to better understand learners, determine (or demonstrate) the effectiveness of training and performance support tools, or figure out which applicant to interview, hire, or promote can be efficient and effective—if you have good data. Unfortunately, much of what is regarded as “big data”—or any size data—consists of proxies. The problem with proxies in eLearning data is that they often miss the mark, omitting crucial information or focusing attention on the wrong details.

Codified opinions

Proxies are models. They are stand-ins for facts. A proxy might consist of data about actual people who are similar to the person whose learning needs you’re trying to anticipate. Or it could be information that is easily available or legal to gather, like a person’s ZIP code, that stands in—intentionally or not—for information that cannot easily or ethically be considered, like race. Most insidiously, proxies can include patterns that an AI (artificial intelligence) algorithm has detected and is using without the knowledge or intention of the humans using the AI-powered application. AI refers to computers performing in a way that “mimics some operations of the human mind, such as making decisions based on data, recognizing objects and speech, and translating languages,” according to Jane Bozarth, the Guild’s research director and author of Artificial Intelligence Across Industries: Where Does L&D Fit?

Proxies are ubiquitous because people building data-based, automated tools “routinely lack data for the behaviors they’re most interested in,” Cathy O’Neil wrote in Weapons of Math Destruction. “So they substitute stand-in data, or proxies. They draw statistical correlations between a person’s ZIP code or language patterns and her potential to pay back a loan or handle a job.” An obvious—but increasingly common—problem is that many of the correlations lead to discrimination.
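
As a minimal sketch of that substitution (all names and numbers below are invented), the following Python snippet scores loan applicants purely on the repayment history of their ZIP code. Average scores end up differing by group even though both groups repay at the same rate and group membership is never an input to the score.

    from collections import Counter

    # Hypothetical applicants: (zip_code, group, repaid_loan). Both groups repay
    # two of their three loans, but they live in different ZIP codes.
    applicants = [
        ("10001", "A", True), ("10001", "A", True), ("10001", "B", True),
        ("60619", "B", False), ("60619", "B", True), ("60619", "A", False),
    ]

    # A "model" that scores each applicant by the historical repayment rate in
    # their ZIP code -- a proxy, because ZIP code here tracks group membership.
    repaid_by_zip, total_by_zip = Counter(), Counter()
    for zip_code, _, repaid in applicants:
        total_by_zip[zip_code] += 1
        repaid_by_zip[zip_code] += repaid

    def proxy_score(zip_code):
        """Approval score based only on the ZIP-code proxy."""
        return repaid_by_zip[zip_code] / total_by_zip[zip_code]

    # Average scores differ by group even though group was never an input.
    for group in ("A", "B"):
        scores = [proxy_score(z) for z, g, _ in applicants if g == group]
        print(group, round(sum(scores) / len(scores), 2))  # A 0.78, B 0.56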

A solid data model, which should be the foundation of any AI-based eLearning or performance support tool, requires a large amount of good, reliable data. The developers behind the model, and the L&D teams and managers using its outputs, need a steady stream of real data. Adding new data allows them to constantly test the model, get feedback on how it’s working, analyze that feedback, and refine the model. Off-the-shelf, “black-box” AI algorithms lack these features.
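
The following Python sketch is one way to picture that cycle, using invented learner outcomes and a deliberately simplistic model: new data arrives in batches, each batch tests the current model, and the results feed a refit.

    # Invented learner outcomes and a trivial "model" that predicts the most
    # common outcome seen so far. The point is the cycle, not the model.
    def train(examples):
        outcomes = [outcome for _, outcome in examples]
        return max(set(outcomes), key=outcomes.count)

    def accuracy(prediction, batch):
        return sum(outcome == prediction for _, outcome in batch) / len(batch)

    history = [("learner_1", "completed"), ("learner_2", "completed")]
    model = train(history)

    # A steady stream of new, real data arrives in batches.
    incoming_batches = [
        [("learner_3", "completed"), ("learner_4", "dropped")],
        [("learner_5", "dropped"), ("learner_6", "dropped")],
    ]

    for batch in incoming_batches:
        print(f"accuracy on new batch: {accuracy(model, batch):.2f}")  # test against reality
        history.extend(batch)   # fold the new outcomes into the data
        model = train(history)  # refine the model with what was learned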

“Models are, by their very nature, simplifications,” O’Neil wrote. “Models are opinions embedded in mathematics.” The programmer who creates the model decides what to include and what to leave out. Sometimes the omissions are not harmful and are even necessary and helpful: O’Neil provides the example of Google Maps, which “models the world as a series of roads, tunnels, and bridges.” The details that don’t appear—landmarks, buildings, traffic lights—don’t make the model any less useful for its intended purpose.

Not all omissions are benign, though; many models sacrifice considerable accuracy in their quest for efficiency. An algorithm that filters applicants for a job might improve efficiency by quickly narrowing the applicant pool to a manageable number of individuals to interview, but if it filters out all individuals who live outside a specific area or whose resumes don’t specifically mention a software package, the best candidate might be passed over.
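
A hypothetical filter of that kind, sketched below in Python with made-up candidates and screening rules, shows how a rigid keyword-and-location screen can discard strong candidates for reasons unrelated to their ability to do the job.

    # Hypothetical candidates and rules, for illustration only.
    REQUIRED_KEYWORD = "Excel"   # assumed "must mention this software" rule
    ALLOWED_ZIP_PREFIX = "981"   # assumed "local candidates only" rule

    def passes_screen(candidate):
        """Keep only candidates matching the keyword and location rules."""
        return (REQUIRED_KEYWORD.lower() in candidate["resume"].lower()
                and candidate["zip"].startswith(ALLOWED_ZIP_PREFIX))

    candidates = [
        {"name": "A", "zip": "98101", "resume": "Excel, reporting"},
        # Strong candidate who writes "spreadsheets" instead of the exact keyword:
        {"name": "B", "zip": "98102", "resume": "Advanced spreadsheets, VBA, training"},
        # Strong candidate who lives outside the allowed area:
        {"name": "C", "zip": "10001", "resume": "Excel, analytics, instructional design"},
    ]

    print([c["name"] for c in candidates if passes_screen(c)])  # ['A']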

In some cases, the difference between whether a model is helpful or harmful depends on what it’s used for.

Consider how proxy data will be used

Many large corporations use automated testing to screen job applicants. Some of these tests, which resemble well-known “five-factor” personality tests, purport to predict which applicants will perform better on the job or are less likely to “churn”—that is, which applicants might remain with the company longer. Obviously, actual data about how the individual applicants would perform in this job, for which they haven’t yet been hired, is not available. Neither is actual data on these individuals’ historical work performance. So the automated hiring and onboarding programs use proxies to predict the behavior of applicants and rate them relative to others in the applicant pool.
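
One way such a proxy-based rating could work, sketched here in Python with invented test answers and retention records, is to score each applicant by how the most similar past hires behaved; the applicant’s own future performance never enters the calculation.

    # Invented test answers (coded 0/1) and retention outcomes for past hires.
    past_hires = [
        ((1, 0, 1), True),   # stayed more than a year
        ((1, 1, 1), True),
        ((0, 0, 1), False),  # left within a year
        ((0, 1, 0), False),
    ]

    def similarity(a, b):
        """Count of matching answers: the similarity proxy."""
        return sum(x == y for x, y in zip(a, b))

    def predicted_retention(answers, k=3):
        """Score an applicant by the retention of the k most similar past hires."""
        ranked = sorted(past_hires, key=lambda hire: similarity(answers, hire[0]),
                        reverse=True)
        return sum(stayed for _, stayed in ranked[:k]) / k

    # Applicants are rated relative to one another on this proxy score alone.
    for name, answers in {"applicant_A": (1, 0, 0), "applicant_B": (0, 1, 1)}.items():
        print(name, round(predicted_retention(answers), 2))  # 0.67 vs. 0.33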

While there is a role for predictive analytics in eLearning and in personnel decision-making, those analytics should be built on solid data. Instead, they might rely on answers to unscientific questions that often force candidates to choose between two illogical and unsuitable responses. One example O’Neil provides asks applicants which of these statements “best described” them:

  • “It is difficult to be cheerful when there are many problems to take care of.”
  • “Sometimes, I need a push to get started on my work.”

What if neither describes the applicant? Or both? The algorithm-based tests are rigid and offer no appeals process or “sanity check,” such as a human with whom to discuss responses and results.

A critical problem with automated screen-out algorithms is that there’s no way to tell how many rejected candidates would have been stellar employees, information that would provide valuable feedback for improving the tests. This lack of feedback is problematic for both the applicant and the hiring organization: Rejected candidates have no idea why they were rejected and cannot appeal. The hiring team has no insight into why the candidates who “passed” were selected.

But the tests are not inherently problematic; it’s their use to screen job applicants that is frustrating for applicants, potentially discriminatory—and extremely ineffective, according to the Harvard Business Review. If the tests were used with existing employees, however, they could enhance team building and communication. “They create a situation in which people think explicitly about how to work together. That intention alone might create a better work environment,” O’Neil wrote.

She suggests using automated testing to gain insights, not to screen out applicants. Rather than analyzing the skills that each applicant brings, O’Neil wrote, these tests judge applicants by comparing them with “people who seem similar.” They tend to identify people who “are likely to face great challenges”—in school, in a job—and reject them, she wrote.

Instead, the information gleaned from applications could be used to identify the services and programs applicants need to meet their challenges and succeed. If managers used automated questionnaires during onboarding to learn about new hires, say, rather than to filter out nonconforming applicants, they would have actual data about the people they’d hired, and that data could help them improve workplace practices. Managers could use this data to tailor training, create performance support tools, and shift workplace culture and policies to ensure the success of a broader pool of applicants and new hires.

No proxy for soft skills

A final problem O’Neil identified with using automated testing and proxy models to screen employees and applicants is their inability to capture essential skills. Employees have value far beyond their years of schooling or the number of phone calls they complete in an hour, but the models have no digital proxies for soft skills. The inherent value in being the team member who coaches the others or defuses tension in group meetings is immense—but impossible to quantify. Teams need all kinds of people, and not all skills are countable or digitally measurable.

As with predictive analytics, AI-powered screening tests can be useful tools, but only when used as part of a system that includes feedback, analysis, and collaborative decision-making by managers who know the individuals they are evaluating. When used for suitable purposes and with appropriate human intervention, proxies in eLearning data collection and analysis can enhance efficiency and effectiveness.