FTC Strikes Again on AI Controls
The Federal Trade Commission has dinged a California business for making misleading statements about the accuracy of facial-recognition technology that the company sells — yet another enforcement action over artificial intelligence that offers numerous lessons for corporate compliance and audit professionals.
The company in question is IntelliVision Technologies, a subsidiary of Nice North America, which makes automated home security systems. IntelliVision develops the AI-driven facial recognition technology that goes into those security systems, so that users can, for example, stand in front of a home security camera that immediately recognizes their faces and lets them into the home or grants access to the system’s control panel.
So what was the FTC’s beef with IntelliVision? As described in the FTC complaint released Tuesday, IntelliVision had claimed in marketing materials that its AI software was a “fast, accurate, deep learning-based facial recognition solution … that can detect faces of all ethnicities without racial bias,” and that it achieved those stellar results “through model training with millions of faces from datasets from around the world.” The company also claimed that its AI used anti-spoofing technology so that the system wouldn’t be fooled by someone holding up a photo of an authorized user.
Except, according to the FTC, none of those claims were true. When IntelliVision submitted its AI algorithms to the National Institute of Standards and Technology for validation, NIST found that the algorithms weren’t even among the 100 best-performing AI systems that NIST technicians had tested. Nor was the company’s model training based on “millions” of faces; it was based on images of roughly 100,000 real people — which were then digitally altered to create millions of variants of those same images.
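For a sense of how roughly 100,000 real photos can become “millions” of images, here is a minimal sketch of the kind of digital alteration involved, assuming an ordinary image-augmentation pipeline. The directory names, library choice (Pillow), and specific transforms below are my own illustration, not details from the FTC complaint.

```python
# A minimal sketch of image augmentation: turning one real photo into many
# synthetic variants. Directory names and transform choices are hypothetical;
# the FTC complaint does not describe IntelliVision's actual pipeline.
from pathlib import Path

from PIL import Image, ImageEnhance, ImageOps

SOURCE_DIR = Path("real_faces")       # ~100,000 original photos (assumed layout)
OUTPUT_DIR = Path("augmented_faces")  # many times that number of derived variants
OUTPUT_DIR.mkdir(exist_ok=True)


def variants(img: Image.Image):
    """Yield altered copies of a single face image."""
    for angle in (-10, -5, 5, 10):                 # small rotations
        yield f"rot{angle}", img.rotate(angle)
    yield "mirror", ImageOps.mirror(img)           # horizontal flip
    for factor in (0.7, 1.3):                      # darker / brighter copies
        yield f"bright{factor}", ImageEnhance.Brightness(img).enhance(factor)


for path in SOURCE_DIR.glob("*.jpg"):
    original = Image.open(path)
    for name, altered in variants(original):
        altered.save(OUTPUT_DIR / f"{path.stem}_{name}.jpg")
```

A handful of transforms per photo multiplies the dataset many times over, but every variant still traces back to the same underlying person, which is the FTC’s point.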
As for the anti-spoofing technology, IntelliVision “does not possess testing of its anti-spoofing technology sufficient to support its unqualified claim that the technology ensures the system cannot be fooled by a photo or video image,” the FTC said. Plus, the testing that IntelliVision did undertake didn’t assess how the anti-spoofing measures performed across different demographic groups.
At first glance, then, this looks like an enforcement action over false advertising. So what lessons about AI can the compliance and audit communities draw here? Several, actually.
Yet Again, It’s About Governance
What strikes me most about this case is that it’s really about IntelliVision’s failure to anticipate the risks its AI product might spawn, and that is the same failure every other business slouching toward AI needs to guard against. You need mechanisms in place to govern the risks that might arise from your company’s embrace of AI. IntelliVision didn’t have those mechanisms, which is how it ended up facing an enforcement action over misleading claims to the public.
Your company might face very different enforcement consequences, but those consequences all flow from the same fundamental error: a lack of AI governance. For example, consider the “AI-washing” cases we’ve seen from the Securities and Exchange Commission this year, where the SEC sanctioned firms for making misleading statements to investors about how the firms used AI. Those cases are essentially the SEC’s version of what the FTC just did to IntelliVision (which is not publicly traded).
Or maybe you’re a healthcare company that uses AI to assess patient health; you could face enforcement from healthcare compliance regulators at the federal or state level. Maybe you’re an insurance company that uses AI to quote premium prices to consumers; you could face enforcement from state regulators such as the New York Department of Financial Services (which already spelled out rules for how insurers should govern their AI usage).
This isn’t even the first case from the FTC over a company’s sloppy use of artificial intelligence. Last year the agency sanctioned Rite Aid for mismanagement of facial recognition technology the company was using to identify potential shoplifters. That complaint identified all sorts of governance and internal control shortcomings: poor testing, poor employee training, poor data quality controls. Read that complaint and you’re left thinking: with governance as weak as that, what else did Rite Aid expect to happen?
So yes, on one level, this case against IntelliVision is about false statements, and that’s not an enforcement risk every company faces. But the root cause here is a governance breakdown, and that is something that can affect every company; the specific enforcement consequences that land on your head are almost beside the point.
AI Governance Mechanisms
OK, back to the FTC and IntelliVision. The two also reached a proposed settlement, which bars IntelliVision from making any statement about the AI system’s overall effectiveness, its lack of bias in facial recognition, or its effectiveness at detecting spoofing unless the company “possesses and relies on competent and reliable testing of the technology.”
What does that mean in practice? The FTC settlement defined “competent and reliable testing” as testing that…
- is based on the expertise of professionals in the relevant area; and
- has been conducted and evaluated in an objective manner by qualified persons; and
- is generally accepted by experts in the profession to yield accurate and reliable results.
IntelliVision must also document its testing extensively, with details such as the source and number of all images used, whether the company digitally modified any of those images, demographic data on the people depicted, and the dates and results of all testing.
I dwell on all this only to give you a sense of what effective testing of AI might look like in the eyes of regulators. After all, testing your AI systems will be a critical part of your overall approach to governing AI risks. So as your tech teams develop testing procedures and your internal audit team considers whether that testing is sufficient, this IntelliVision case is one example of what your testing procedures and documentation might need to look like.
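To make that concrete, here is one way such test documentation could be captured as a structured record. This is only a sketch: the field names and types are my own illustration of the items the settlement calls out (image sources and counts, digital modification, demographic data, test dates and results), not a format the FTC prescribes.

```python
# A hypothetical record structure for documenting facial-recognition testing,
# mirroring the documentation items listed in the proposed settlement.
# Field names, types, and all example values are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import date


@dataclass
class ImageDataset:
    source: str                            # where the images came from
    image_count: int                       # how many images were used
    digitally_modified: bool               # were any images altered or synthesized?
    demographic_breakdown: dict[str, int]  # counts of people depicted, by group


@dataclass
class TestRun:
    test_date: date                        # when the test was run
    dataset: ImageDataset                  # what the model was tested against
    methodology: str                       # who conducted and evaluated it, and how
    false_match_rate: float                # headline accuracy results
    false_non_match_rate: float
    results_by_group: dict[str, float] = field(default_factory=dict)  # accuracy per group


# Example: one documented test run (values made up for illustration).
run = TestRun(
    test_date=date(2024, 6, 1),
    dataset=ImageDataset(
        source="licensed dataset from a hypothetical vendor",
        image_count=100_000,
        digitally_modified=True,
        demographic_breakdown={"group_a": 40_000, "group_b": 35_000, "group_c": 25_000},
    ),
    methodology="evaluated in an objective manner by an independent, qualified lab",
    false_match_rate=0.001,
    false_non_match_rate=0.02,
    results_by_group={"group_a": 0.99, "group_b": 0.97, "group_c": 0.95},
)
```

Even a simple record like this forces the questions regulators care about: where the images came from, whether they were altered, and how the results break down by demographic group.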
A few other questions about AI governance and control come up frequently, too.
- What technical controls do you have to ensure that the data fed into your AI system is complete, accurate, and consistent with your data quality policies?
- What training do you have for employees who use the AI system?
- What access controls do you place on employees, so only authorized people can use the system?
- How do you test the AI system to be sure it delivers the expected results? And how will you keep testing over time, to confirm that the AI doesn’t pick up bad habits (and start making worse decisions) as it evolves? (A sketch of such checks follows this list.)
- How are you disclosing AI usage to consumers, customers, or anyone else who might interact with the AI system?
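To show what a couple of those questions could translate into in practice, here is a hedged sketch of two automated checks: one for data quality at the point of intake, and one for re-testing accuracy across demographic groups over time. The thresholds, field names, and group labels are assumptions for illustration, not anything drawn from the FTC case or a specific framework.

```python
# A minimal sketch of two governance checks implied by the questions above:
# (1) validating data quality before a record reaches the model, and
# (2) re-testing accuracy per demographic group to catch drift or bias.
# All thresholds, field names, and group labels are illustrative assumptions.

REQUIRED_FIELDS = {"image_id", "capture_date", "consent_obtained"}
MIN_GROUP_ACCURACY = 0.95      # hypothetical policy floor for any group
MAX_GROUP_GAP = 0.02           # hypothetical maximum spread across groups


def validate_record(record: dict) -> list[str]:
    """Return a list of data-quality problems for one input record."""
    problems = [f"missing field: {f}" for f in REQUIRED_FIELDS - record.keys()]
    if record.get("consent_obtained") is False:
        problems.append("image collected without documented consent")
    return problems


def check_group_accuracy(results: dict[str, float]) -> list[str]:
    """Flag groups whose accuracy falls below the policy floor, or where the
    gap between best- and worst-served groups is wider than policy allows."""
    alerts = [f"{group}: accuracy {acc:.3f} below policy floor"
              for group, acc in results.items() if acc < MIN_GROUP_ACCURACY]
    if results and max(results.values()) - min(results.values()) > MAX_GROUP_GAP:
        alerts.append("accuracy gap across groups exceeds policy threshold")
    return alerts


# Example: a periodic re-test might produce per-group accuracy like this.
if __name__ == "__main__":
    latest = {"group_a": 0.991, "group_b": 0.972, "group_c": 0.948}
    for alert in check_group_accuracy(latest):
        print("ALERT:", alert)
```

Checks like these don’t answer the governance questions by themselves, but they turn vague policy statements into something an internal audit team can actually test.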
The good news is that we already have plenty of governance frameworks to help you answer those questions, from NIST, ISO, and other groups. MIT has also published a compendium of AI risks that your system developers and IT auditors will probably love.
Now companies need to go out and implement those governance systems, since AI isn’t going away — and neither are its risks.