FTC Hits Rite Aid on AI Usage

The Federal Trade Commission has ordered Rite Aid not to use an AI-driven facial recognition system to identify potential shoplifters, in the latest and most vivid example yet of regulators cracking down on how companies might put artificial intelligence to use.

The settlement was announced on Tuesday. According to the FTC, Rite Aid deployed AI-based facial recognition from 2012 to 2020 to identify customers who may have been engaged in shoplifting or other problematic behavior. Except, the FTC said, Rite Aid failed to take reasonable measures to prevent harm to consumers, and sometimes the technology flagged customers as potential shoplifters by mistake.

“Employees, acting on false positive alerts, followed consumers around its stores, searched them, ordered them to leave, called the police to confront or remove consumers, and publicly accused them… of shoplifting or other wrongdoing,” the FTC said. 

Rite Aid said it was pleased to settle the FTC’s complaint, although it did paint a rather different picture of what happened. Rite Aid said the facial recognition system was “a pilot program deployed in a limited number of stores” and discontinued more than three years ago, before the FTC’s investigation even began.

The larger compliance and privacy community can ignore those quibbles about the scope of the program. For us, the important point is what the FTC found objectionable in the program, regardless of how many stores were involved. That tells us about the mistakes you could make in your own AI experiments, and the precautions regulators will expect you to have in place as those experiments unfold. So let’s take a look.

The Rite Aid AI Program

The actual facial recognition technology in question here isn’t terribly new. As described in the FTC complaint, Rite Aid worked with two technology vendors to compile a database of suspected shoplifters: people seen shoplifting on Rite Aid properties via closed-circuit TV, or photos snapped by employees, or images obtained from law enforcement databases. Soon enough, tens of thousands of people were in the database.

Rite Aid then used a blend of AI, facial recognition technology, and in-store cameras to identify potential shoplifters. If a customer entered a store and his or her face matched against the database, an alert automatically went to store employees’ mobile phones. The alert included side-by-side images of the customer and the database photo, so employees could make an on-the-spot comparison. The alert typically did not, however, include any confidence score from the AI itself — that is, nothing saying, “The AI is 98 percent sure this is the same person.” 
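For illustration only, here is a minimal sketch of how a match-alert pipeline could carry a similarity score and suppress low-confidence matches. Everything in it (the MatchAlert structure, the 0.90 threshold, the field names) is a hypothetical assumption for the sketch, not a description of Rite Aid's actual system.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative cutoff; any real threshold would need validation against test data.
ALERT_THRESHOLD = 0.90

@dataclass
class MatchAlert:
    store_id: str
    camera_image_id: str
    database_image_id: str
    similarity: float  # model's similarity score between the two faces, 0.0 to 1.0

def build_alert(store_id: str, camera_image_id: str,
                database_image_id: str, similarity: float) -> Optional[MatchAlert]:
    """Push an alert only for high-confidence matches, and always include the
    similarity score so employees can see how sure the model actually is."""
    if similarity < ALERT_THRESHOLD:
        return None  # low-confidence matches are not pushed to employees' phones
    return MatchAlert(store_id, camera_image_id, database_image_id, similarity)
```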

If all this sounds familiar, that’s because other retailers have been experimenting with AI-driven facial recognition too, with the same uneasy results. Earlier this summer the New York Times ran a feature on how British supermarkets have been using the technology against shoplifters, with plenty of thorny compliance issues in tow.

The obvious risk here is false positives: the AI tells employees it has found a shoplifter match when the customer is not actually the person in the database.

For example, sometimes the system flagged a customer as a would-be shoplifter even though the original shoplifter had been spotted in another Rite Aid store hundreds of miles away. Sometimes the system generated hundreds of matches for the same one or two database photos in just a few days. Sometimes the system would identify one customer as a would-be shoplifter in one state, and then flag another customer as that same shoplifter the next day in a store several states over. 

The false positives led store employees to confront customers under false pretenses. In one egregious example, employees searched an 11-year-old girl based on a false positive match, and her mother had to miss work to get her daughter out of that mess. 

Overall, the FTC said, those false-positive confrontations “potentially exposed consumers to risks including the restriction of consumers’ ability to make needed purchases, severe emotional distress, reputational harm, or even wrongful arrest.” Hence the complaint and the settlement.

So what are the issues here? What precautions did Rite Aid not have in place, which others will need to implement to avoid similar experiences? 

FTC Expectations on AI Usage

As we often see in enforcement actions around consumer privacy, the issues flagged by the FTC were all “failure to…” shortcomings.

Failure to enforce image quality controls. Start with that database of shoplifters that Rite Aid compiled. Rite Aid did draft several policies that were supposed to guide the quality of those images — “should have equal lighting on the entire face, no hotspots or shading,” for example, or “person’s eyes should be aligned with the top of their ears.” Except, Rite Aid employees compiling the database deviated from those policies on a regular basis, and the company had too few controls to assure that the images collected were of sufficiently high quality. 

So that’s Lesson 1: you need processes to govern the source data that your AI program uses to learn. 
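As a rough illustration of what such a process could look like, here is a minimal quality gate that rejects images before they are enrolled in the match database. The thresholds and the Pillow-based brightness check are assumptions chosen for the sketch; a real control would also cover face detection, alignment, and image provenance.

```python
from PIL import Image, ImageStat  # Pillow; thresholds below are illustrative, not Rite Aid's

MIN_WIDTH, MIN_HEIGHT = 240, 240          # reject tiny, unusable crops
MIN_BRIGHTNESS, MAX_BRIGHTNESS = 60, 200  # rough proxy for "no hotspots or heavy shading"

def passes_quality_gate(path: str) -> tuple[bool, str]:
    """Return (ok, reason). Only images that pass get enrolled in the match database."""
    img = Image.open(path)
    width, height = img.size
    if width < MIN_WIDTH or height < MIN_HEIGHT:
        return False, f"image too small: {width}x{height}"
    brightness = ImageStat.Stat(img.convert("L")).mean[0]  # mean grayscale value, 0-255
    if not (MIN_BRIGHTNESS <= brightness <= MAX_BRIGHTNESS):
        return False, f"poor lighting: mean brightness {brightness:.0f}"
    return True, "ok"
```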

Failure to monitor or test accuracy of results. If you’re going to entrust a business process (such as identifying shoplifters) to technology, you need rigorous testing and monitoring to confirm that the system works. Rite Aid didn’t do that. For example, it didn’t track the rate of false-positive matches, or test the accuracy of its matches and alerts. 

And those are only the technical procedures Rite Aid could’ve used to monitor system accuracy. The company also didn’t have sufficient procedures in place to test system accuracy via its human workforce, such as requiring them to check a suspected shoplifter’s ID before asking that person to leave the store.

Lesson 2: consider how you would test the accuracy of your AI system, and how you would also monitor and enforce that accuracy with policies and procedures for employees using the system “in the field.” 
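One sketch of what that monitoring could look like: if employees record whether each alert was confirmed or dismissed after an ID check, a simple script can track the false-positive rate over time. The outcome labels and the 10 percent tolerance below are illustrative assumptions, not figures from the FTC order.

```python
from collections import Counter

def false_positive_rate(outcomes: list[str]) -> float:
    """outcomes: one entry per alert that employees resolved in the field,
    either 'confirmed' (ID checked, same person) or 'false_positive'."""
    counts = Counter(outcomes)
    total = counts["confirmed"] + counts["false_positive"]
    if total == 0:
        return 0.0
    return counts["false_positive"] / total

# Hypothetical week of resolved alerts from one store.
weekly_outcomes = ["confirmed", "false_positive", "false_positive", "confirmed"]
rate = false_positive_rate(weekly_outcomes)
if rate > 0.10:  # illustrative tolerance; the real number belongs in policy
    print(f"False-positive rate {rate:.0%} exceeds tolerance, escalate for review")
```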

Failure to train and oversee employees. Rite Aid policy was to provide store employees with one to two hours of training on its facial recognition system. In practice, however, the company had no effective system to confirm that employees had taken the training. At least some employees using the system weren’t authorized to do so, and didn’t undergo that training at all.

Moreover, the training itself focused on how to use the system, how to enroll newly identified shoplifters, and so forth; but nothing on the limits of facial recognition, how to handle the risk of false positives, or the potential for bias against certain groups.

Therefore, Lesson 3: develop processes to manage the humans using the AI tool. 
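A minimal sketch of one such process, assuming hypothetical HR and training records: gate access to the tool on both explicit authorization and current training, rather than trusting that employees took a course at some point.

```python
from datetime import date, timedelta

# Hypothetical records; in practice these would come from an HR or LMS system.
TRAINING_VALID_FOR = timedelta(days=365)
training_completed = {"emp-1001": date(2023, 3, 15)}   # employee id -> last training date
authorized_users = {"emp-1001"}                        # explicitly approved to use the tool

def may_use_system(employee_id: str, today: date) -> bool:
    """Allow access only to authorized employees whose training is current."""
    if employee_id not in authorized_users:
        return False
    completed = training_completed.get(employee_id)
    return completed is not None and today - completed <= TRAINING_VALID_FOR
```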

That seems self-evident, but it brings us to another important point: when you develop a new technology, you also develop new ways that technology might be mishandled — and you need systems to manage that too. For example, when people invented airplanes, we also invented the plane crash. When we invented email, we invented phishing attacks. 

Now that we’ve invented AI, we’re discovering all the accidents that come with it too, and your training and controls will need to anticipate those misuses. 

Indeed, it’s worth noting that the FTC took this enforcement action without any AI-specific regulations driving the conversation. Such regulations are supposedly coming in 2024, thanks to the executive order about AI that President Biden released earlier this fall — but they’re not necessary for enforcement action over AI today. All the FTC’s issues cited above are about poor governance of technology, poor training on risks and misuse, poor monitoring. Those are mistakes you could make with any technology.

So as cutting-edge as AI is, the old principles of governance still shine through.
