Great Example of Ethics, Compliance, and AI

Anyone looking for another example of how artificial intelligence is going to raise a host of ethics and compliance issues for corporations need look no further than today’s New York Times, which carries an article about how British retailers are using facial recognition to crack down on shoplifters.

The full article is well worth your time if you care about AI; the abridged version is as follows. For several years now, retailers in Britain have been using facial recognition technology from a company called Facewatch to identify shoplifters. When a retailer suffers a shoplifting incident, its security team uploads images of the thieves to Facewatch. Facewatch then shares those images with all its retail customers, so that when the shoplifter strikes another store, cameras in that store recognize the person and send alerts to on-site security guards.

We should note that Facewatch doesn’t run entirely on auto-pilot. When its algorithms do identify a potential shoplifter, that match first goes to a human “super-recognizer” who has been trained to remember and recognize faces. That person confirms the match against Facewatch’s database of shoplifters before the system sends an alert to its retail customers.
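To make that workflow concrete, here is a minimal sketch of the alert flow in Python. Everything in it is hypothetical (the names, the threshold, the structure); Facewatch has not published its internals, so treat this as an illustration of the pattern, not the company’s actual system.

```python
# A minimal sketch of the alert flow described above, assuming a
# similarity score from a face-matching model plus a human confirmation
# step. All names and the threshold are hypothetical; Facewatch's
# actual internals are not public.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Match:
    subject_id: str    # entry in the shared watchlist
    confidence: float  # similarity score from the face-matching model

def should_alert(match: Match,
                 super_recognizer_confirms: Callable[[Match], bool],
                 threshold: float = 0.9) -> bool:
    """Send an alert only if the model is confident AND a human agrees."""
    if match.confidence < threshold:
        return False  # weak matches never reach a human reviewer
    # The key design point: no alert goes out on the algorithm's
    # say-so alone; a trained "super-recognizer" confirms it first.
    return super_recognizer_confirms(match)

# Example: a strong match, approved by the reviewer
print(should_alert(Match("subject-42", 0.95), lambda m: True))  # True
```

The design choice worth noticing is that the human sits between the algorithm and the action, not after it.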

Retailers pay roughly £250 to use Facewatch’s service. I’m not sure how much money that saves British retailers specifically, but for comparison purposes, the National Retail Federation in the United States estimates that “shrinkage” costs retailers here roughly $100 billion a year, equivalent to about 1.4 percent of a retailer’s annual inventory walking out the door. So clearly this is a cost-effective use of artificial intelligence to address a severe and widespread business problem.
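For context, the back-of-the-envelope arithmetic implied by those two NRF numbers looks like this (the sales base is my own inference from the two published figures, not a number taken from the NRF report):

```python
# Back-of-the-envelope check on the NRF figures quoted above.
# The sales base is inferred, not taken from the NRF report itself.

shrinkage_cost = 100e9   # ~$100 billion per year (NRF estimate)
shrink_rate = 0.014      # ~1.4 percent of inventory/sales

implied_sales_base = shrinkage_cost / shrink_rate
print(f"Implied annual retail sales base: ${implied_sales_base / 1e12:.1f} trillion")
# -> Implied annual retail sales base: $7.1 trillion
```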

The real question is whether this is a proper use of artificial intelligence to address a severe and widespread business problem. That’s what makes this such a fascinating case study for ethics and compliance professionals.

Issue 1: AI and Ethics Concerns

Retailers’ use of Facewatch might seem creepy and intrusive at first glance — but upon further reflection, I’m not so sure it is. 

After all, security guards have been trying to remember shoplifters’ faces since time immemorial. It’s common practice for stores to keep photo albums of shoplifters they’ve encountered before, so that cashiers or security teams at the door can spot those shoplifters when they return. 

Well, isn’t that a form of artificial intelligence? Photography is nothing more than a technology that helps humans to remember something they’ve seen before. How is Facewatch any different? It’s just executing the same process — using photographs to help security guards identify shoplifters — more quickly, and on a larger scale. 

That strikes me as an important ethical question for artificial intelligence: Is the AI enhancing your company’s ability to do something it already does? Or is the AI allowing you to do something wholly new, where we should pause and consider the ethical implications first?

I can appreciate that facial recognition might be dangerous or improper in other situations. For example, Madison Square Garden uses facial recognition to identify lawyers who are involved in lawsuits against it, and then bars them from attending events there. That’s wrong, because those lawyers are not committing any crime or causing any harm at the events held there; MSG management has no legitimate interest in keeping them away from the premises.

That’s not what the retailers are doing with Facewatch. They are using facial recognition technology to keep away people who arguably should be kept away, because they’re shoplifters who might cause the retailer harm. If those retailers were instead using security guards who had photographic memory, or could whip through photo albums of known shoplifters at blazing speed, would we even be having a conversation about whether that’s unethical or intrusive? Probably not. 

In other words, Facewatch isn’t a clear-cut case of AI creepiness. It’s a clear-cut case of the ethical issues that companies will need to consider as they find ways to use AI. 

Issue 2: The Compliance Concerns

The Facewatch story is also fascinating because it demonstrates the compliance issues that companies are likely to encounter, too. 

For example, the Biden Administration unveiled a proposed “AI Bill of Rights” last fall that defined five principles for how AI should be used. One of those principles was that people should always have some right to appeal to a human when an AI system gives them a result they don’t agree with.

So would retailers’ use of Facewatch meet that criterion or not? The security guard who receives a Facewatch alert and then intercepts the supposed shoplifter: does he or she qualify as your right to appeal to a human? I’m not sure, but clearly retailers would be wise to develop policies and procedures for how such interactions should proceed. And if the policy is something draconian, such as requiring security guards to remove every suspected shoplifter, I could see lawsuits quickly following that decision.

Another principle for using AI would be disclosure. Again, how should that work in practice? Would it be enough simply to post a sign on the front door declaring, “This store uses facial recognition to help security teams identify shoplifters”? 

The compliance concerns keep on coming! The EU General Data Protection Regulation, for example, has tight rules (Article 22) about subjecting people to decisions based solely on automated processing. So how would you insert a human into the AI processes you’re considering, to stay on the right side of those rules?

Facewatch seems to answer that question by using those human “super-recognizers” before issuing alerts. What might that human involvement look like in other AI-driven processes? In previous posts I’ve called this issue identifying the human point: that place in a business process where AI decision-making ends and human involvement begins. You’ll need to be crystal clear on where the human point is in your AI adventures. 
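As a thought experiment, here is one way to make that human point explicit and auditable in code. This is a design sketch under my own assumptions, not any vendor’s actual system:

```python
# A generic sketch of the "human point" pattern: the model can only
# escalate, never act on its own, and the human decision is logged so
# you can show a regulator where the handoff happened. Hypothetical
# illustration only.

from datetime import datetime, timezone

audit_log: list[dict] = []

def human_point(ai_recommendation: str, reviewer_id: str,
                reviewer_approves: bool) -> bool:
    """Record where AI decision-making ended and human judgment began."""
    audit_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "ai_recommendation": ai_recommendation,
        "reviewer": reviewer_id,
        "final_decision": reviewer_approves,  # the human's call governs
    })
    return reviewer_approves

# Example: the model flags a face; the reviewer declines the match
human_point("possible watchlist match", "reviewer-007", False)
```

The audit trail is the point: when a regulator asks where automated decision-making ended and human judgment began, you want a record, not a shrug.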

For the record, the U.K. Information Commissioner’s Office did conduct a year-long investigation into Facewatch, and required the company to make some changes to its operations — but after that, the regulator allowed Facewatch and its customers to proceed. Under British privacy law, companies can use biometric technologies if those uses have “a substantial public interest.” The ICO decided that this use of facial recognition to prevent shoplifting qualifies. 

Of course, not every jurisdiction will follow that same standard for biometric data and AI. Companies might face a blizzard of state and national standards governing how they can use biometrics and AI. That means you’ll need strong capabilities in regulatory change management and a disciplined process for evaluating those regulations.

And all of these issues arise from one example of how one AI technology might be used. More uses are coming over the horizon every day. Corporate ethics and compliance teams are going to be busy. 
