SEC Talks AI Enforcement Risk
We have more advice this week on artificial intelligence, this time from a top voice at the Securities and Exchange Commission who urged companies to do better at crafting — and implementing — thoughtful policies to govern AI risks.
The speech came from Gurbir Grewal, head of the SEC’s Enforcement Division. Speaking Monday, he touched on several trends in SEC enforcement and then took a deeper dive into AI and the risks it poses to investors. His primary message for corporate compliance officers: If your company is making statements about artificial intelligence to the investing public, “you must ensure that you do so in a manner that is not materially false or misleading. This becomes ever more significant as AI-related disclosures by SEC registrants are increasing.”
Grewal’s remarks come several weeks after the SEC launched its first two enforcement actions over “AI-washing” — that is, companies making misleading statements to investors about their use of artificial intelligence. Both actions were civil fines imposed on online investment advisory firms that bragged about using AI to help their customers find investment opportunities, when in fact neither firm used AI at all.
OK, fair enough; but most businesses aren’t sketchy investment advisory firms trying to woo the day-trader crowd. Most businesses are stable, publicly traded operating companies across a wide range of industries. That means what you disclose about artificial intelligence will be quite different, and so your risk of misleading statements about AI will be quite different too — but those risks will still need attention.
“There are any number of reasons that a public company may disclose AI-related information,” Grewal said. “It may be in the business of developing AI applications. It may use AI capabilities in its own operations to increase efficiency and value for shareholders. Or it may discuss security risks or competitive risks from AI. But irrespective of the context, if you’re speaking on AI, you too must ensure that you do so in a manner that is not materially false or misleading.”
Three Principles for AI Risk
OK, so your company might make statements about its use of AI. How can you stay on the right side of SEC disclosure rules? Grewal encouraged companies to embrace “proactive compliance” (a phrase that desperately needs to be retired) by following three principles.
First, he said, compliance officers and in-house counsel should educate themselves on the AI risks that relate to their business.
“That means reading the AI-related enforcement actions,” he said. “It means reviewing any future enforcement actions that may follow in this space… And it means staying abreast of how potential AI-related issues are actually impacting companies in the real world.” (Clearly it also means you should subscribe to Radical Compliance since we write about this stuff all the time.)
Second, take what you’ve learned about AI risks “and engage with personnel inside your company’s different business units to learn how AI intersects with their activities, strategies, risks, financial incentives, and so on,” Grewal continued. “Ask: what public statements are we making about our incorporation of AI into our business operations? Are they accurate, or are they aspirational? Does AI present a material risk to our business operations in some way?”
Third, take specific, actionable steps to improve your disclosure controls and procedures.
For example, Grewal said, “Does your use of AI require updating policies and procedures and internal controls? If so, are those policies and procedures bespoke to your company… and then, have you taken the steps necessary to implement those policies and procedures? As we have seen time and again, adoption is only part of the battle; effective execution is equally important and that’s where many firms fall short.”
He also offered one other bit of advice: “It’s not enough to go to ChatGPT or a similar tool and ask it to produce an AI policy for you” — and no, Grewal wasn’t just trying to be witty there.
Many times we’ve seen the SEC and other regulators warn that policies and procedures cannot be generic documents that simply parrot the regulation in question. You need policies that reflect actual operations at your business, so employees understand clearly and specifically what they are supposed to do. Hence Grewal’s reference to “bespoke” policies.
A Word on Liability
We should also reflect for a moment on how a company might stumble into these risks, and under what circumstances an executive might face personal liability for misleading disclosures.
Probably the best example to cite here is the SEC’s lawsuit filed last year against SolarWinds and its CISO over poor disclosure of cybersecurity risks. In that case, the SEC cited a “cybersecurity statement” that SolarWinds had published for years, promising investors and the public that the company embraced the highest standards of security.
According to the SEC, the reality at SolarWinds was anything but, and because the CISO knew of those shortcomings yet never protested the rosy disclosures, he too is now on the liability hot seat. (SolarWinds denies the allegations and has vowed to fight the SEC in court.)
One can easily see how poor AI disclosures might follow a similar path: your company promises nothing but the best, most thoughtful safeguards against AI risks in the 10-K or other public statements, while on the inside its AI controls are a mess that never gets any better. Presto! There’s your SEC lawsuit.
Grewal didn’t expressly invoke SolarWinds to make that point, but it’s the obvious example, given what he did say:
I would look to our approach to cybersecurity disclosure failures generally: we look at what a person actually knew or should have known; what the person actually did or did not do; and how that measures up to the standards of our statutes, rules, and regulations. And as I’ve said before in the context of CCO and CISO liability, and I will say it again in the context of AI-related risk disclosures: folks who operate in good faith and take reasonable steps are unlikely to hear from us.
The question for compliance officers is whether your company has the right oversight structures in place to prevent AI from following that path.