It’s here! An AI Enforcement Action!

Well, we have our first-ever enforcement action over bias from an artificial intelligence system: an age discrimination complaint from the Equal Employment Opportunity Commission that seems rather boring and obvious, but still offers compliance officers a glimpse into what AI enforcement might look like in the future. 

The company in question is iTutor Group, a China-based company that hires people in the United States to help Chinese nationals living here develop their English-speaking skills. The EEOC sued iTutor last year for age discrimination. Last week the two sides reached a settlement, under which iTutor agreed to pay $365,000 and implement numerous reforms to its policies, procedures, and training.

The allegations of the case, as laid out in the EEOC’s original complaint, are as follows. In early 2020, iTutor was hiring English tutors. Would-be tutors had to apply online. The company, however, had programmed its HR software to automatically reject female applicants over the age of 55 and male applicants over the age of 60.

We know this because one woman over the age of 55 applied using her real birthdate, and was immediately rejected. She then applied the next day using the same resumé but a younger date of birth; the software promptly offered her an interview. 

Ultimately, the EEOC says, iTutor’s software discriminated against more than 200 applicants solely because of their age. That’s a violation of Section 7(b) of the Age Discrimination in Employment Act, and here we are. 

For its part, iTutor neither admits nor denies any of the EEOC’s allegations. In addition to the $365,000 payment (which will go to the aggrieved applicants), iTutor agreed to numerous compliance reforms, including new anti-discrimination policies, better training for management employees, and better record-keeping. It will operate under a consent decree with the EEOC for the next several years.

AI Enforcement in the Future

I know that most compliance officers will look at this case and mutter, “My company would never do something so deliberately stupid.” That’s probably true. But the iTutor case is useful to study nevertheless, because it lets us ponder an important question. 

How might a company accidentally do something that stupid? 

That is, iTutor deliberately programmed its screening tool to discriminate against older job applicants. But AI tools can discriminate against job applicants, and already have, without any deliberate intent from the companies using them. How might that happen, and how could you work to prevent it? Those are the questions compliance and internal audit executives need to ask themselves. 

For example, you might use an AI tool to screen applicants for software engineering jobs. The tool has taught itself what experience a high-quality applicant should have by analyzing the employment files of previously hired engineers. But historically, most software engineers have been men, so the AI tool might then screen out applicants from women’s colleges, because those colleges don’t correlate with graduates who fit the ideal engineer profile. 

Result: your AI tool is discriminating against women, even if you programmed it to exclude gender as a screening factor.

The above scenario is not a hypothetical, by the way. It actually happened at Amazon.com in the late 2010s.
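
To make the mechanics concrete, here is a minimal sketch in Python of how that kind of proxy bias can creep in. Everything in it is invented: synthetic applicant data, a generic logistic-regression screener, and a hypothetical “attended a women’s college” flag. It is not anyone’s real hiring system; it simply shows that when the historical hiring labels are skewed, the model finds a stand-in for gender even though gender is never one of its inputs.

```python
# Hypothetical illustration (synthetic data, not any real company's system): a
# resume-screening model trained on historically male-skewed hiring decisions
# learns to penalize a proxy feature, even though gender is never an input.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic applicant pool: roughly 80% men, mirroring a skewed hiring history.
is_female = rng.random(n) < 0.2
womens_college = is_female & (rng.random(n) < 0.4)   # a proxy tied to gender
years_exp = rng.normal(6, 2, n).clip(min=0)
test_score = rng.normal(70, 10, n)

# Past hiring decisions were tilted against women, independent of qualifications.
hired = (test_score + 2 * years_exp - 10 * is_female + rng.normal(0, 5, n)) > 80

# Train the screener WITHOUT the gender column -- only "neutral" features.
X = np.column_stack([years_exp, test_score, womens_college.astype(float)])
model = LogisticRegression(max_iter=1000).fit(X, hired)

print(dict(zip(["years_exp", "test_score", "womens_college"],
               model.coef_[0].round(2))))
# The womens_college weight comes out negative: the model has rediscovered the
# old bias through a proxy, without anyone ever telling it to.
```

Nobody wrote a discriminatory rule here. The bias rode in with the training data, which is exactly the problem compliance teams need to anticipate.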

That threat of implicit, automated bias is the real risk that compliance and internal audit teams should worry about. You could have AI discriminating against a protected class without anyone ever instructing it to do so, and for reasons we mere humans might not be able to decipher. Enforcement agencies, however, aren’t likely to care much about that distinction. They’ll see harmed individuals and want to take action. 

So how do we prevent that sort of threat? 

Start With the Data, End With the Data

The precise question here is how to identify the potential for unconscious bias in the AI tools we use to run our business processes. 

The first place to start is with the data we feed into the AI tool so that it can learn. That’s how AI works: by crunching ever more data, so it can draw ever more precise and sophisticated conclusions. If the underlying data is biased, then the AI will learn to be biased too (just like humans). 

We can partly tackle the problem by auditing the data fed into AI programs, although that can require some rigorous analysis by audit teams. For example, you’ll need to ask HR teams: Where did this candidate data come from? How did you define the criteria for inclusion or exclusion? Do those criteria have any adverse impact on protected classes? If they do, how can we rectify that by adding other data or amending the criteria themselves?
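
As a rough illustration of how an audit team might turn those questions into checks, here is a short sketch assuming HR can export the training records as a simple table. The column names and values are hypothetical; the point is to profile where the data came from, whether a protected class is underrepresented, and whether the historical outcomes the model will imitate already skew by group.

```python
# A rough sketch of those audit questions turned into checks, assuming HR can
# export the training records as a table. Column names and values are made up.
import pandas as pd

training = pd.DataFrame({
    "source":   ["referral", "job_board", "referral", "campus", "job_board", "referral"],
    "age_band": ["under_40", "under_40", "under_40", "under_40", "40_plus",  "40_plus"],
    "hired":    [1,          0,          1,          1,          0,          1],
})

# Where did the candidate data come from?
print(training["source"].value_counts(normalize=True))

# Is a protected class underrepresented in what the model will learn from?
print(training["age_band"].value_counts(normalize=True))

# Do the historical outcomes (the labels the model imitates) already skew by group?
print(training.groupby("age_band")["hired"].mean())
```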

Still, auditing the underlying data will only get you so far. Then we’re likely to come to a gap, because in many cases, we won’t be able to audit the AI code itself — it will operate in ways that are simply too sophisticated or too obscure for humans to understand. Instead, we’ll need to jump directly to auditing the results of the AI’s performance. 

Indeed, that’s already emerging as a requirement from some regulatory agencies. For example, New York City now requires all employers using AI in the hiring process to perform a “bias audit” every year, and to publish the findings of that audit online. The city’s guidance for those audits is that companies must examine the “selection rate” (how many people are moved forward in the hiring process) of people in protected classes, and compare those rates against the rate for the most commonly selected group. 
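
To show what that comparison actually involves, here is a simplified sketch run on a hypothetical log of an AI tool’s screening decisions. The real New York City audit carries more detailed requirements than this, so treat it as an illustration of the arithmetic rather than a compliance template.

```python
# Simplified sketch of the selection-rate comparison, run on a hypothetical log
# of the tool's screening decisions. The real audit has more categories and
# formal requirements; this only shows the basic arithmetic.
import pandas as pd

decisions = pd.DataFrame({
    "sex":      ["F", "F", "F", "M", "M", "M", "M", "M"],
    "advanced": [0,   1,   0,   1,   1,   0,   1,   1],   # moved forward in hiring?
})

selection_rate = decisions.groupby("sex")["advanced"].mean()
impact_ratio = selection_rate / selection_rate.max()      # vs. the most selected group

print(pd.DataFrame({"selection_rate": selection_rate, "impact_ratio": impact_ratio}))
# F's selection rate is about 0.33 against M's 0.80, an impact ratio near 0.42 --
# the kind of gap an auditor would flag for a closer look.
```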

We can take a deeper dive into that guidance some other time. My point is that we can audit the data before it goes into the AI, and audit the results after the AI does its thing — but we might never be able to audit the AI itself, since its artificial thinking will increasingly depart from what we can comprehend. 

That’s a rather mind-bending reality for us to accept — but the sooner audit and compliance teams can accept it, and plan accordingly, the better. 
