Best Practices in HR

Elyse Schmidt
  October 26, 2017

Your AI Might Be Biased: Here’s How to Build Tech for a Better World


Tech companies have recently come under scrutiny for a lack of diversity in their recruiting processes. According to a recent report from Recode, women hold just 30 percent of leadership roles and less than 27 percent of technical roles—an average based on data from Google, Facebook, Twitter and other top tech companies.

As human resources departments work to address hiring bias, which often stems from unconscious assumptions, artificial intelligence recruiting tools seem like a perfect solution. AI for HR promises to eliminate bias in the hiring process by making decisions based on data, not personal opinions or predilections.

Of course, overcoming bias is not that easy. While AI recruiting tools can be asked to turn a “blind eye” to things like race, gender and socioeconomic background when considering new candidates, they cannot eliminate bias that’s already present in the data. “Since AI builds on history, it’s pretty difficult to ask it to tell you about the space of no-history. That’s where much of bias lives. It exists in the people who didn’t even bother to apply and the people who weren’t chosen,” says John Sumser, principal analyst at HRExaminer.

The good news, though, is that AI can reveal that bias and help companies make real progress toward fair hiring and equality.


Understanding Data Bias

According to a study out of Northeastern University, employers are more likely to hire candidates they’d ultimately be friends with. Did the hiring manager attend the alma mater of a candidate? Grow up in the same town? Know one of the candidate’s references personally? If the answer is yes to these questions and others, a manager is more likely to hire the candidate.
While AI doesn’t look for friends when it’s crawling resumes, that doesn’t mean it’s not set up for bias.

“There’s a great question there about how much bias is built into the data,” says Jana Eggers, founder and CEO of Boston-based AI company Nara Logics. “These people were successful because they were hired. Well, how many people would have been successful that weren’t hired?”

According to a study from researchers at Princeton University and the University of Bath, in crawling human data, AI learns human bias—the same way a child might. The study looked at an artificial intelligence technique called word embeddings, which, put simply, allows AI to correlate commonly associated words so it can build a definition with more context and accuracy.

The problem is, social constructions of words typically contain bias. The study found that “female” and “woman” were “closely associated with arts and humanities occupations and with the home,” while “male” and “man” were “closer to maths and engineering professions.” What’s more, European names were closely associated with positive words, while African American names mapped to “unpleasant” words.
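The associations the study describes are geometric: each word becomes a vector, and words that appear in similar contexts end up pointing in similar directions. A minimal sketch, using tiny made-up vectors rather than real embeddings (which have hundreds of dimensions learned from large text corpora), shows how a biased corpus can leave "woman" sitting closer to "arts" than to "engineer":

```python
import math

# Toy 3-dimensional "embeddings" invented purely for illustration.
# In a real model (word2vec, GloVe, etc.) these values would be
# learned from text, and would absorb whatever biases the text holds.
vectors = {
    "woman":    [0.9, 0.1, 0.3],
    "man":      [0.1, 0.9, 0.3],
    "arts":     [0.8, 0.2, 0.1],
    "engineer": [0.2, 0.8, 0.1],
}

def cosine(a, b):
    """Cosine similarity: close to 1.0 means the words are strongly associated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# The bias lives in the geometry: "woman" is measurably closer to "arts"
# than to "engineer" in this toy space, mirroring the study's finding.
print(cosine(vectors["woman"], vectors["arts"]))      # high similarity
print(cosine(vectors["woman"], vectors["engineer"]))  # noticeably lower
```

Nothing in the similarity function is biased; the skew comes entirely from the vectors, which is exactly why scrubbing a field like gender from a resume doesn't remove associations the model has already learned.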

AI needs to understand what people mean when they write or say things to be effective. How do we help AI do as we say, not as we do?


Catching and Correcting

The first step, according to Eggers, is using AI recruiting tools to identify where things like race, gender or other inherent biases influence hiring decisions.
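One concrete way to run that kind of identification step is a simple selection-rate audit. The sketch below uses the "four-fifths rule" heuristic from US EEOC guidance, which treats a group's hire rate falling below 80 percent of the highest group's rate as a red flag worth investigating; the applicant and hire counts are made up for illustration:

```python
# Illustrative hiring audit. The group labels and counts are invented;
# a real audit would pull them from an applicant tracking system.
applicants = {"group_a": 200, "group_b": 150}
hires      = {"group_a": 40,  "group_b": 15}

def selection_rates(applicants, hires):
    """Hire rate per group: hires divided by applicants."""
    return {g: hires[g] / applicants[g] for g in applicants}

def flag_disparities(rates, threshold=0.8):
    """Flag any group whose rate is under `threshold` times the top rate
    (the four-fifths rule heuristic)."""
    top = max(rates.values())
    return {g: (r / top) < threshold for g, r in rates.items()}

rates = selection_rates(applicants, hires)   # group_a: 0.20, group_b: 0.10
flags = flag_disparities(rates)              # group_b flagged: 0.10/0.20 = 0.5 < 0.8
```

A flag here is not proof of discrimination, only a signal that the decisions feeding the data deserve a closer look, which is precisely the role Eggers describes for these tools.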

Just because we remove race from current data, for example, doesn’t mean that previous data didn’t shape the way the AI learned to make decisions. Rather than scrubbing this data from the system entirely, Eggers advises learning from it. “When did you do better with diversity, and how can you highlight those decisions and promote them more in your network?” Eggers says.

Data ethics researcher Sandra Wachter says once we discover bias in algorithms, it’s also easier to counteract than bias in humans. “At least with algorithms, we can potentially know when the algorithm is biased,” she told The Guardian. “Humans, for example, could lie about the reasons they did not hire someone. In contrast, we do not expect algorithms to lie or deceive us.”


Using AI as a Tool

AI presents both opportunities and challenges, so companies should think of the technology as a tool for HR departments—not a replacement for HR managers. “AI is not magic,” Eggers says.

For example, current AI tools can’t comprehend complex concepts, such as personality traits, that are essential to the hiring process. What makes someone a happy person, for example, may fall on a very wide spectrum. Companies should be careful to avoid relying too much on AI to hire for qualities that, at least for now, are hard to quantify.

“AIs are really just another kind of employee. Learning to manage, direct, train, supervise and maintain them is something we are going to learn a lot about,” says Sumser.

Rather than being a decision-making engine, AI can help companies understand their biases and work to counteract them, while also discovering promising indicators in potential candidates. Used effectively, it’s a tool that promises to help HR professionals do their jobs faster and more effectively—and over time, more objectively, too.
