Can We Really Test People for Potential?
We need a more nuanced approach to predicting job performance.
Editor’s note: This article is part of a new MIT SMR series about people analytics.
Have you ever taken an aptitude or work personality test? Maybe it was part of a job application, one of the many ways your prospective employer tried to figure out whether you were the right fit. Or perhaps you took it for a leadership development program, at an offsite team-building retreat, or as a quiz in a best-selling business book. Regardless of the circumstances, the hope was probably more or less the same: that a brief test would unlock deep insight into who you are and how you work, which in turn would lead you to a perfect-match job and heretofore unseen leaps in your productivity, people skills, and all-around potential.
How’s that working out for you and your organization?
My guess is that results have been mixed at best. On the one hand, a good psychometric test can easily outperform a résumé scan and interview at predicting job performance and retention. The most recent review of a century’s worth of research on selection methods, for example, found that tests of general mental ability (intelligence) are the best available predictors of job performance, especially when paired with an integrity test. Yet, assessing candidates’ and employees’ potential presents significant challenges. We’ll look at some of them here.
People Metrics Are Hard to Get Right
For all the promise these techniques hold, it’s difficult to measure something as complex as a person for several reasons:
Not all assessments pass the sniff test. Multiple valid and reliable personality tests have been carefully calibrated to measure one or more character traits that predict important work and life outcomes. But countless other tests offer little more than what some scholars call “pseudo-profound bullshit” — the results sound inspiring and meaningful, but they bear little resemblance to any objective truth.
People often differ more from themselves than they do from one another. Traditional psychological assessments are usually designed to figure out whether people who are more or less something (fill in the blank: intelligent, extraverted, gritty, what have you), on average, do better on whatever outcomes the organization or researcher is most interested in. In other words, they’re meant to capture differences among people. But several studies have found that, over a two-week period, there can be even more variation within one individual’s personality than there is from person to person. As one study put it, “The typical individual regularly and routinely manifested nearly all levels of nearly all traits in his or her everyday behavior.” Between-person differences can be significant and meaningful, but within-person variation is underappreciated (a simple simulation, sketched after this list, makes the distinction concrete).
People change — and not always when you expect them to. The allure of aptitude, intelligence, and personality tests is that they purport to tell us something stable and enduring about who people are and what they are capable of. Test makers (usually) go to great lengths to make sure people who take the test more than once get about the same score the second time around. Yet compelling evidence suggests that we can learn how to learn, sometimes in ways we didn’t anticipate. We can also shift our personalities in one direction or another (at least to some degree, though not always without cost) for both near-term benefits and longer-term goals. Interestingly, one recent study with more than 13,000 participants found that people tend to become more conscientious right before getting a new job, which is conveniently around the time a hiring manager would be trying to figure out how hard they would work if they landed the role.
The nature of the task can matter more than the nature of the person. Most of us have heard the theory that we each have a preferred learning style, and the more we can use the one that fits, the more we’ll remember. Unfortunately, virtually no evidence supports that theory. That doesn’t mean that all approaches to studying are equally effective — it’s just that the strategy that works best often depends more on the task than on the person. Similarly, different parts of our personalities can serve different types of goals. We act extraverted when we want to connect with others or seize an opportunity, and we become disciplined when we want to get something done or avoid mistakes. In one study, conscientious behavior emerged most strongly when the things that needed to get done were difficult and urgent — even among people who were not especially organized and hardworking in general.
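To make the second challenge above concrete, here is a minimal sketch that splits repeated trait ratings into a between-person component (differences among people’s averages) and a within-person component (each person’s day-to-day swings). The data are simulated, and the numbers are invented for illustration; this is not the method or data of the studies cited above.

```python
# Toy illustration: decomposing repeated personality ratings into
# between-person and within-person variance. All data are simulated.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

# 50 people, each rated on extraversion (1-7 scale) at 14 daily check-ins.
people = np.repeat(np.arange(50), 14)
trait_level = np.repeat(rng.normal(4, 0.7, 50), 14)   # stable person-level differences
daily_noise = rng.normal(0, 1.2, 50 * 14)             # situation-to-situation swings
ratings = pd.DataFrame({
    "person": people,
    "extraversion": np.clip(trait_level + daily_noise, 1, 7),
})

person_means = ratings.groupby("person")["extraversion"].mean()
between_var = person_means.var(ddof=1)                                      # differences among people
within_var = ratings.groupby("person")["extraversion"].var(ddof=1).mean()   # swings within each person

print(f"Between-person variance: {between_var:.2f}")
print(f"Within-person variance:  {within_var:.2f}")
print(f"Share of variance within people: {within_var / (between_var + within_var):.0%}")
```

With these invented numbers, most of the variance sits within people rather than between them, which is the pattern the studies above describe.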
One way to read this list of challenges is to come away convinced that people analytics is a fool’s errand. But that would ignore the fact that each of these caveats has been uncovered through rigorous analysis of people data.
Instead, it’s probably more constructive to remember what personality psychologist Brian Little, while channeling psychologist Henry Murray and anthropologist Clyde Kluckhohn, says in his popular TED Talk: “Each of us is … in certain respects, like all other people, like some other people, and like no other person.” People analytics, in other words, needs to include better person analytics.
What It Would Take to Go Granular
What would better person analytics look like?
For starters, we would consider the context. Companies selling off-the-shelf assessments often tout the many thousands of diverse professionals who have already taken their survey to prove that it can work for all kinds of people and circumstances. A large validation effort can be a sign of an invaluable general-purpose tool, but that doesn’t mean it’s right for every job. Sometimes the situation calls for customization.
Consider a project that the Wharton People Analytics research team did with Global Health Corps (GHC), a leadership development organization aimed at improving health equity. Each year, GHC screens thousands of applications to find the most promising candidates for yearlong fellowships, and the management team had developed a hunch that a certain personality trait might predict a fellow’s job performance. So we devised multiple ways to measure it. The first was a general measure of the trait that had previously been developed, validated, and published in a peer-reviewed journal; the second was a new situational judgment test (SJT) we built with GHC to look for evidence of the trait in how people responded to a number of job-relevant scenarios. We also tried a more advanced linguistic analysis to flag indicators of the trait in candidates’ application essays. Although the established measure had the strongest evidence behind it and the linguistic analysis was the most technically sophisticated, in the end the SJT was the only significant predictor of candidates’ job performance.
Why did the SJT work best? We think it’s not just because it took the organization’s unique context into account but also because it captured the extent to which the trait showed up across many different situations, not just on average. Custom measures are not always the answer, but sometimes the context really is important.
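For readers curious what such a head-to-head comparison looks like in practice, here is a hypothetical sketch. The data, column names, and effect sizes are invented for illustration and do not reflect GHC’s data or our actual analysis; the point is simply that regressing later performance on all of the candidate measures shows which ones hold up.

```python
# Hypothetical sketch: comparing three candidate measures as predictors of
# later job performance. All data below are simulated for illustration.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 300  # applicants with both assessment scores and later performance ratings

scores = pd.DataFrame({
    "general_measure": rng.normal(0, 1, n),   # published, validated trait scale
    "sjt_score": rng.normal(0, 1, n),         # custom situational judgment test
    "essay_language": rng.normal(0, 1, n),    # linguistic indicator from essays
})
# Simulate performance that tracks the SJT more than the other two measures.
performance = (0.05 * scores["general_measure"]
               + 0.40 * scores["sjt_score"]
               + 0.02 * scores["essay_language"]
               + rng.normal(0, 1, n))

model = sm.OLS(performance, sm.add_constant(scores)).fit()
print(model.summary())  # inspect which coefficients are statistically significant
```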
Next, we would design new measures with variability in mind. Given the findings mentioned above about how much people’s behavior can change from one situation to the next, it might seem paradoxical to even try to find something enduring about a person’s character. But just because personality is dynamic does not mean it is undiscoverable. Some researchers have proposed using if-then questionnaires to detect nuanced patterns in each person’s personality profile, although such techniques have yet to be well-tested in the workplace. A better approach might be to take repeated measures from the same employees over time. That is often easier said than done, given the challenges many organizations face in getting employees to fill out even a single survey. If the participation problem can be overcome, however, repeated measures can lead to insights — about what people are like in general and the ways in which they vary — that onetime surveys simply can’t generate.
Earlier this year, for example, George Mason University researcher Jennifer Green and her colleagues took a novel approach to understanding the relationship between employees’ personalities and their organizational citizenship behaviors (those often-underappreciated extra ways that employees support their colleagues and organizations over and above their job duties). By using an experience sampling methodology in which they collected multiple reports from more than 150 employees over the course of 10 workdays, they were able to show that employees with more consistent personalities were, in turn, more consistent in going beyond the call of duty — even after controlling for their general dispositions. For jobs where consistency is key to success, these researchers argued, repeated measures offer a chance to find stability in the variability of employees’ personalities.
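As a rough illustration of the kind of index such a design makes possible, the sketch below (with invented data, not the study’s) computes each employee’s average trait level alongside their day-to-day variability; lower variability means a more consistent personality.

```python
# Rough sketch with simulated data: turning experience-sampling reports into
# a per-person consistency index (day-to-day standard deviation) that can sit
# alongside each person's average trait level.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

n_employees, n_days = 150, 10
records = pd.DataFrame({
    "employee": np.repeat(np.arange(n_employees), n_days),
    "day": np.tile(np.arange(n_days), n_employees),
    "conscientiousness": rng.normal(4, 1, n_employees * n_days).clip(1, 7),
})

summary = records.groupby("employee")["conscientiousness"].agg(
    mean_level="mean",   # the person's general disposition
    variability="std",   # how much they fluctuate from day to day (lower = more consistent)
)

print(summary.head())
# A follow-up model could relate `variability` to citizenship behavior while
# controlling for `mean_level`, in the spirit of the study described above.
```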
Finally, we would give people their own data in ways that would help them develop. Although most employees won’t have the skills or even the interest to track and analyze their own data, that doesn’t mean they wouldn’t be able to use it if it were summarized well and presented clearly.
Some tools have been designed expressly for that purpose. Microsoft MyAnalytics, for example, is an add-on to Office 365 that aims to reduce the pain of collaborative overload by sending you reports about your schedule and communication patterns. While there are nudges and recommendations built in, the basic premise behind the service is that a clear summary of your own data will help you identify your own strategies for making your work life better. In a similar vein, Ambit Analytics received a preseed round of funding in early 2018 for technology that uses real-time voice analysis to coach managers, in the moment, on their communication skills. While the long-term viability and utility of these tools remain to be seen, both point to the potential of giving individuals more opportunities to learn from their own data.
Taking a more granular approach to people analytics does have its risks, of course. For one thing, because individuals can be more easily identified by their data, privacy may be an even larger worry than it normally is. It’s a valid concern — one that underscores the need for vigilance. Organizations must develop robust policies and practices to govern ethical data collection, access, and use. They must also be transparent with employees not only about what kinds of data are being collected but also about what the data says about them. Open and ongoing dialogues about the costs and benefits of more personalized analytics should be as common as the legalese-filled privacy statements people too often just click through.
A bigger-picture concern is the risk of hypercustomization. Organizations are prone to an often inaccurate uniqueness bias: they assume that no other group of workers has ever been quite like them. That can lead to situations like the one I found myself in a few years ago, when senior leaders from two organizations independently — in the same week — asked me about measuring their employees’ levels of grit. When I explained that grit is usually measured as passion and perseverance for long-term goals, both were quick to say that they were defining it differently for their context. One said it was really about resilience and tenacity at her nonprofit; the other insisted that ambition was at its core. Each may have identified an important trait for their respective organization, but both still wanted to call what they were measuring “grit.” If bespoke measures with identical names start to proliferate, it will become much harder for all of us to talk with and learn from one another.
And learning, after all, is the raison d’être for people analytics. Organizations invest in it because they hope it will tell them something about their current or future employees that will increase the odds of forming productive long-term relationships with them. Employees engage with it when they have a reasonable expectation that it might reveal something about themselves and who they might yet become in their careers. Both of these goals will be better served if we pursue a finer-grained understanding of human potential.