The quest to categorize and define people—in hopes of making better hiring and promotion decisions, for example—is ages old. New tools that purport to do this raise issues that first arose a half-century ago. The parallels between corporate use of personality inventories and some algorithm-driven assessments are striking; the ethical and privacy concerns they raise are similar and serious. But the bottom line is that these tools are not an accurate way to predict learner success.
Flawed personality inventories
In a recent eLearning Guild research report, Jane Bozarth examined the use of personality assessment tools, which, she wrote, “were used by organizations for decades, until the Civil Rights Act of 1964 shone new light on problems with using personality tests for selection and promotion.”
Problems identified include:
- Validity—whether the instruments measure what they claim to measure and the results predict what they purport to predict
- Reliability—whether a personality inventory returns the same result for the same individual if that individual is retested
Many personality assessments consist of multiple-choice questions. These self-reported data are not verifiable; people taking the “tests” at work or as part of a job application can often figure out what the “right” or most desirable responses are, making it easy to game the test.
The format is problematic for other reasons as well. In attempting to boil a complex individual down to a handful of traits, these instruments lose all nuance. Test results are often reported as an either/or choice—the test-taker is either an introvert or an extrovert; either open or traditional. In reality, most personality traits occur on a continuum, and people tend to have elements of differing, even opposing, traits.
More critical, from an employer’s or L&D manager’s perspective, is that the tests fail to measure or predict individuals’ abilities or skills. And even if a personality inventory could predict job success, at what cost? Researchers and legal experts caution that the tests may:
- Fail to adequately consider socioeconomic or cultural differences, thus skewing scoring
- Be unclear or use wording that is subject to differing interpretations, also skewing scoring
- Ask about or rate applicants based on characteristics that relate to protected classes (e.g., gender-linked traits or age-related behaviors or traits)
- Raise privacy issues
Algorithmic assessments
Algorithm-based assessments are, in some ways, the twenty-first century’s version of personality tests. Promoted on the claim that removing human evaluators removes bias, these tools are commonly used to filter job applicants, identify candidates for promotion, and more, including tasks where personality tests have been used in the past.
But these tools, powered by inscrutable artificial intelligence algorithms, raise many of the same privacy and fairness concerns. Algorithms are far from objective, and algorithm-based tools are rife with hidden bias, unstated assumptions, and potential for misinterpretation and error. The lack of human involvement and oversight could exacerbate these problems.
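The mechanics of hidden bias are worth spelling out: when a model is trained on biased historical decisions, any feature that correlates with a protected attribute lets that bias pass through, even if the protected attribute itself is withheld from the model. The toy Python sketch below is a minimal illustration under those assumptions; the data, feature names, and model are entirely hypothetical and do not represent any vendor’s actual product.

```python
# Hypothetical sketch: a screening model trained on biased historical hiring
# decisions reproduces the bias via a proxy feature, even though the protected
# attribute is never given to the model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

group = rng.integers(0, 2, n)            # protected attribute (0 or 1) -- hypothetical
skill = rng.normal(0, 1, n)              # job-relevant ability; same distribution in both groups
proxy = group + rng.normal(0, 0.3, n)    # stand-in for zip code, school name, etc.

# Historical "hired" labels were biased against group 1, independent of skill.
hired = ((skill - 1.0 * group + rng.normal(0, 0.5, n)) > 0).astype(int)

# The screening model never sees `group` -- only skill and the proxy.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)
predicted = model.predict(X)

for g in (0, 1):
    print(f"group {g}: predicted acceptance rate = {predicted[group == g].mean():.2f}")
# The two groups receive very different predicted acceptance rates despite
# identical skill distributions: the historical bias re-enters through the proxy.
```

In this sketch, dropping the protected attribute from the training data does not make the model fair, which is why claims that automation alone removes bias deserve scrutiny.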
Over-reliance on crude instruments
What personality tests and algorithm-based filtering tools share is that they are at best crude instruments. They look at a small, superficial set of responses and attempt to predict a person’s potential. They see the vast variety of human traits and abilities—and purport to sort them into four or five boxes.
Both personality inventories and AI-based tools are often used for purposes beyond what they were designed to do. “Over time, the instruments were administered instead to other ends [beyond selection and promotion], such as leadership development endeavors, understanding group dynamics or decision making, and career counseling,” Bozarth wrote of personality testing tools.
Similarly, AI-powered tools with limited abilities, lacking nuance, insight, and flexibility, rely on patterns in flawed historical data to guess which resumes to accept or reject, or which employees should be promoted.
Rather than helping managers identify promising candidates, the overly broad categorization of employees based on limited data could reduce opportunities, either by removing people from the candidate pool or by convincing them that they are a poor fit or lack the aptitude to succeed. Rather than easing collegial relationships, these tools could divide coworkers into groups who believe their differences to be insurmountable.
Instead of turning to imperfect and risky tools, organizations can heed the researchers Bozarth cites, who advocate using tests of ability that “are easier to construct, are more valid and reliable, and are more likely to predict job performance.” Now that is the sort of instrument an L&D team can support—and create.
Curious about personality tests?
Read Bozarth’s full report for a deeper dive into the use and misuse of personality inventories. The report, Personality Inventories: Fiction, Fact, Future, is available for free download to all eLearning Guild members.