When You Reject People, Tell Them Why

Explainable AI and ethical human judgment both play important roles in fair, accurate talent assessment.

Reading Time: 5 min 

Humans are meaning-making machines, continually searching for patterns and creating stories to make sense of them. This intense desire to understand the world goes hand in hand with the rise of artificial intelligence in organizations. We expect AI to advance our quest for meaning — not just to predict that X leads to Y in the workplace but to shed light on the reason. In short, we expect it to be explainable.

Definitions vary, but in a recent academic paper, my colleagues and I described explainable AI as “the quality of a system to provide decisions or suggestions that can be understood by their users and developers.” That’s important for applications designed to evaluate people.

For example, most hiring managers are not content knowing that an algorithm selected a certain person for a job or that someone “did well” on a video interview where AI was used as the scoring engine. They also want to know in what ways people performed well: Did they make more eye contact than others? Were they less sweaty and fidgety? Did they use more words with emotional impact? Of course, the candidates want to know those things too. Otherwise, the results feel arbitrary and nothing can be learned and applied to the next job application or interview.

In the early days of the AI revolution, companies were excited about their new window into employee behavior: If someone, say, went to the bathroom more than three times a day (at work, that is — back when more of us worked in an office), they were deemed X% more likely to leave their job. But such insights about people can only be described as pointless — unless we can qualify them by saying that those who left were (a) stressed, (b) bored, or (c) fired for doing drugs in the bathroom. That’s a hypothetical range of options, but the point is that any explanation is better than no explanation. This is something scientists have known for ages: To go from data to insights, you need context or, even better, a model. Science is data plus theory, and that’s just as true when we’re assessing people as when we’re assessing ideas. Why matters.

Transparent Tools

Explainable AI sounds like a new concept, but in essence it has been around for decades. For example, in the U.S., credit scoring has a well-established “right to explanation” via the Equal Credit Opportunity Act, enacted in 1974. Algorithms rather than humans determine whether your credit score is high or low, but you should be able to find out why. Is it low because you forgot to pay your tolls and were late paying your internet bill, or because you fit a certain demographic profile that is associated with higher risk? Credit agencies comply with this regulation by providing a list of reasons so you can both understand and improve your score. This transparency benefits borrowers and lenders alike.
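To make that concrete, here is a minimal sketch of how an automated score can be returned together with the specific factors that lowered it, which is the spirit of the reason codes credit agencies provide. The factor names, weights, and baselines below are entirely hypothetical and are not drawn from any real credit-scoring model:

```python
# A minimal sketch of "reason codes" accompanying an automated score.
# Factor names, weights, and baselines are hypothetical illustration values.

FACTORS = {
    # factor: (weight, baseline value, plain-language reason)
    "late_payments":   (-25, 0, "Recent late or missed payments"),
    "utilization_pct": (-0.5, 30, "High utilization of available credit"),
    "account_age_yrs": (4, 8, "Short credit history"),
}

def score_with_reasons(applicant: dict, base_score: int = 700, top_n: int = 2):
    """Return a score plus the factors that lowered it the most."""
    contributions = {}
    score = base_score
    for factor, (weight, baseline, reason) in FACTORS.items():
        contribution = weight * (applicant[factor] - baseline)
        contributions[reason] = contribution
        score += contribution
    # The negative contributions are the explanations owed to the applicant.
    reasons = sorted(
        (r for r, c in contributions.items() if c < 0),
        key=lambda r: contributions[r],
    )[:top_n]
    return round(score), reasons

score, reasons = score_with_reasons(
    {"late_payments": 2, "utilization_pct": 80, "account_age_yrs": 3}
)
print(score, reasons)
# 605 ['Recent late or missed payments', 'High utilization of available credit']
```

The point of the sketch is simply that the same calculation that produces the decision can also produce the explanation, so transparency costs the decision maker very little.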

By the same token, in the world of recruitment and HR, ethical employers owe job candidates and employees a similar right to explanation. It has never been easy to tell people why they didn’t get a job or a promotion. That’s why so many organizations send off-the-shelf rejection letters simply stating that there were many qualified candidates — the equivalent of an “it’s not you, it’s me” breakup line. But to the degree that AI can reveal specific reasons for not selecting candidates, this information should be shared with them and may actually make the news easier for them to process.

For instance, if a resume-scraping tool did not short-list someone, they would benefit from knowing it was because they hadn’t listed certain hard skills. They might revise their resume to highlight existing skills more explicitly or acquire new ones to boost their employability. If a video-interview scoring algorithm determined that someone lacked the warmth or confidence to be a good fit for a sales role, that feedback could help them prepare for their next interview; they could practice making more eye contact, talking more, pausing less. Or, if natural language processing algorithms detected evidence of a “dark side” or toxic tendencies in someone’s social media behavior, perhaps this should be flagged, not just for the recruiter but also for the candidate so they can become more aware, adjust their behavior, and keep undesirable tendencies in check.
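As an illustration of the first case, here is a minimal sketch of a skills-based resume screen that returns the reasons for a rejection alongside the decision, so the feedback described above can actually be shared with the candidate. The required skills and short-listing threshold are hypothetical, and this is not any vendor's actual tool:

```python
# A minimal sketch of an explainable resume screen.
# Role requirements and the short-listing threshold are hypothetical.

REQUIRED_SKILLS = {"sql", "python", "data visualization", "stakeholder management"}

def screen_resume(listed_skills: set[str], min_matches: int = 3) -> dict:
    """Short-list if enough required skills are present; always return reasons."""
    matched = REQUIRED_SKILLS & {s.lower() for s in listed_skills}
    missing = REQUIRED_SKILLS - matched
    return {
        "short_listed": len(matched) >= min_matches,
        "matched_skills": sorted(matched),
        # This is the explanation the candidate can act on.
        "missing_skills": sorted(missing),
    }

result = screen_resume({"Python", "SQL", "Excel"})
print(result)
# {'short_listed': False, 'matched_skills': ['python', 'sql'],
#  'missing_skills': ['data visualization', 'stakeholder management']}
```

A rejection delivered with the "missing_skills" list tells the candidate exactly what to add or highlight next time, rather than leaving them to guess.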

Ethical Tensions

To be sure, there will always be controversy around probabilistic judgments about future performance. But forgoing AI does not stop humans from making those judgments, and humans are much more biased than AI.

Managers who worry about “black box” strategy models should have similar concerns about a lack of transparency in vetting people. We can never be sure why someone selected or rejected a candidate if the judgment call was purely human; all we can do is trust the decision maker’s biased explanation. While AI tools aren’t free of bias, they can help mitigate it, and employers can improve decision accuracy and fairness by raising the ethical standards for designing algorithms and for sharing results.

That said, just because AI is explainable doesn't make it ethical. It can still promote unethical decisions if it's not used in tandem with principled human judgment. Imagine, for example, a tool that helps managers assess people for cultural fit based on political ideology, religious beliefs, or social class. No amount of transparency about the decision criteria will make that tool fair. What's more, many of the psychological traits that explain workplace behavior also shape whether we personally enjoy working with someone. So the irony is that when we increase explainability, we also, in some ways, make it easier for humans to decide on the basis of their own biases and preferences.

Still, on balance, we’re better off with explainability than without it. Algorithmic auditing, a process that screens for baked-in biases, is a technological step toward making explainable AI ethical, but it’s far from perfect. Biases can elude detection even when we’re actively looking for them. As we make progress on that front, we must also keep the impact of algorithms top of mind — continually asking ourselves how they may affect people when used to evaluate them, whether they will increase or harm fairness, and what the consequences might be for society more broadly. By consistently trying to answer those types of questions, we’ll be in a better position to simultaneously make less biased decisions about people and help them understand and develop their own potential.
