AI, Getting Ahead of Us Again

Well here’s a creepy example of artificial intelligence racing ahead of our ability to govern it wisely: a tech startup that uses AI to study job applicants’ facial expressions, speaking style, and word choice to generate an “employability score” for people and then filter out low-ranking candidates. 

The startup in question is HireVue, and you can read a detailed account of its doings in the Washington Post. The short version is that job applicants conduct their preliminary interviews by video, where all applicants answer a fixed set of questions provided by the hiring company. HireVue’s AI then analyzes each applicant’s answers to predict factors like a candidate’s enthusiasm for the work, or how the applicant might respond to surly customers. 

That leads to an employability score for each candidate; candidates are then ranked against each other, and the hiring company can slice off the bottom tiers to focus on those with the highest scores.
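To make that score-rank-cut pattern concrete, here is a minimal sketch. Everything in it (the feature names, the weights, the cutoff fraction) is invented for illustration; HireVue's actual model is proprietary and not described in the Post story.

```python
# Hypothetical illustration only: made-up features and weights, not HireVue's.
# It just shows the pattern the Post story describes: score each candidate,
# rank them, and drop the bottom tier.

from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    # Made-up per-interview feature scores (0.0 to 1.0) that a video-analysis
    # pipeline might emit, e.g. predicted enthusiasm or composure.
    enthusiasm: float
    composure: float
    word_choice: float

# Arbitrary weights chosen for the example.
WEIGHTS = {"enthusiasm": 0.4, "composure": 0.3, "word_choice": 0.3}

def employability_score(c: Candidate) -> float:
    """Collapse the feature scores into a single 0-100 'employability score'."""
    raw = (WEIGHTS["enthusiasm"] * c.enthusiasm
           + WEIGHTS["composure"] * c.composure
           + WEIGHTS["word_choice"] * c.word_choice)
    return round(100 * raw, 1)

def rank_and_cut(candidates: list[Candidate], keep_fraction: float = 0.75) -> list[Candidate]:
    """Rank candidates by score and drop the bottom tier."""
    ranked = sorted(candidates, key=employability_score, reverse=True)
    keep = max(1, int(len(ranked) * keep_fraction))
    return ranked[:keep]

if __name__ == "__main__":
    pool = [
        Candidate("A", 0.9, 0.8, 0.7),
        Candidate("B", 0.4, 0.9, 0.6),
        Candidate("C", 0.2, 0.3, 0.5),
        Candidate("D", 0.7, 0.6, 0.9),
    ]
    for c in rank_and_cut(pool):
        print(c.name, employability_score(c))
```

The math itself is trivial; the point is that every choice in it (which behaviors become features, how they are weighted, where the cut line sits) is invisible to the applicant.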

Now, confession: HireVue is not a new startup. It’s been in business since 2004, and has analyzed more than 1 million job candidates over the years. Hundreds of businesses use its services, including Hilton, Unilever and Goldman Sachs. The Washington Post is simply the latest large media outlet to discover AI-driven recruiting.

What’s interesting is that a cottage industry has emerged to help applicants succeed at HireVue job interviews — that is, to game the AI system as best we mere humans can. Search “HireVue interview tips” on Google, and you’ll get more than 100,000 results urging you to look directly into the camera, smile often, relax, not talk too long, and so forth. Colleges even train their students on how to handle HireVue interviews.

The frustrating part, however, is that applicants don’t have any real recourse here. You can’t ask the AI what you did wrong, or how you might improve. 

Now, I understand that even in the old days when humans still screened other humans, most hiring managers wouldn’t bother to tell you how to improve either. But consider this from the Post story:

HireVue offers only the most limited peek into its interview algorithms, both to protect its trade secrets and because the company doesn’t always know how the system decides on who gets labeled a “future top performer.”

The company has given only vague explanations when defining which words or behaviors offer the best results. For a call center job, the company says, “supportive” words might be encouraged, while “aggressive” ones might sink one’s score. 

The company doesn’t always know how the system decides — ah, there it is. Now we’re getting to the ethics and compliance part. 

Audits, Ethics, and AI

When I read the Post story, my first question was whether these AI-driven employability scores might somehow discriminate against people with autism or Asperger’s syndrome. 

After all, people on that spectrum can have difficulty making eye contact, or might use odd facial expressions when speaking, or use a flat tone of voice. That doesn’t mean they’re incapable of work or interacting with other people — but those are behaviors that AI algorithms might score negatively. (It’s a bit poetic that Alan Turing, the father of modern computing, might have been on the spectrum himself. One wonders how he’d score in a HireVue interview.)

Or consider applicants from India, where it’s customary to nod during conversation as a form of respect, a gesture that can be confusing to foreigners. Or consider people who stutter, a group that includes Winston Churchill, Samuel L. Jackson, and Jack Welch. How would they fare in an AI-evaluated interview?

You see where I’m going with this: poorly designed AI could inadvertently disadvantage a whole class of people. Once we start talking about a disadvantaged class, litigation risk follows, along with bad publicity and lord knows what else. 
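One way a compliance team might put numbers on that litigation risk is the familiar four-fifths (adverse impact) heuristic: compare each group’s selection rate to the highest group’s rate and flag anything below 80 percent for review. The sketch below is illustrative only, with made-up pass counts; nothing in it comes from HireVue or the Post story.

```python
# Illustrative only: hypothetical numbers, not data from HireVue or the Post.
# The four-fifths rule is a common screening heuristic for disparate impact:
# if one group's selection rate falls below 80% of the highest group's rate,
# the screening step deserves a closer look.

def selection_rate(selected: int, applied: int) -> float:
    return selected / applied

def adverse_impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group name -> (selected, applied). Returns each group's
    selection rate divided by the highest group's selection rate."""
    rates = {g: selection_rate(s, a) for g, (s, a) in outcomes.items()}
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

if __name__ == "__main__":
    # Hypothetical interview outcomes after an automated screen.
    outcomes = {
        "group_a": (60, 100),  # 60% pass the screen
        "group_b": (40, 100),  # 40% pass the screen
    }
    for group, ratio in adverse_impact_ratios(outcomes).items():
        flag = "review" if ratio < 0.8 else "ok"
        print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

Running a test like that, of course, requires visibility into the screening step’s outcomes — which brings us back to the opacity problem.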

We can assume that HireVue has rebuttals for those concerns, like any competent AI company would. But here’s the thing — I don’t trust the company. Why should I? Like the vast majority of people out there, I don’t know HireVue from a hole in the ground.

That’s the fundamental tension with AI taking over processes previously run by people. Unless we have the power to audit or inspect the AI, we can never fully trust the AI. But as soon as we can inspect those algorithms, they lose their value — because outsiders can see the code and exploit its weaknesses more effectively. (This also assumes we can even find auditors able to examine the AI, which is a whole other subject.)

I’d like to call this a stand-off, except it’s not. The capabilities of AI are racing ahead, while humans are still bickering about — or not even considering — the social constructs we should use to handle AI. Hence we wind up with unease and resentment among the public, while corporate use of AI marches onward.

Corporate America will need to confront that unease and resentment sooner or later, and I’d recommend sooner. Otherwise, nobody should be surprised that public trust in companies is generally low, and enthusiasm for drastic political solutions is high. That’s what happens when people feel powerless, and powerless is exactly how people feel when they encounter mysterious automated things like a HireVue employability score. 

We can’t have AI exist beyond inspection and reproach. We can’t have AI spread into business processes like plaque in the arteries, until one day you discover that you can barely move. It needs governance, and not just governance imposed one company at a time by the companies that use it. It needs clarity and consensus from all. 

And we are nowhere near that clarity right now.
