Justice Dept. Talks AI Concerns

The Justice Department wants companies tinkering with artificial intelligence to be more open-minded about testing their AI systems and products for vulnerabilities, and specifically wants them to adopt a “vulnerability disclosure program” much the same way tech companies already disclose software bugs.

So says Nicole Argentieri, head of the Criminal Division, who delivered a speech today on how law enforcement and the private sector should work together to address the ways that artificial intelligence could drive new types of criminal misconduct. Corporate compliance officers and internal auditors should take note here, since Argentieri spelled out several steps the department would like corporations to take. Plan accordingly.

Foremost, the department wants more companies dabbling in AI to adopt vulnerability disclosure programs, where the companies would essentially pledge to allow third parties (say, security researchers) to test the code of their AI programs and then to disclose any issues that those testers find, so that people using those flawed AI systems can correct them. 

Argentieri

Tech giants including Amazon, Microsoft, Google, and Facebook already made such a pledge last year as part of the Biden Administration’s executive order to address AI risks as the technology becomes more mainstream. Argentieri said other companies should now make the same promise as a matter of good corporate citizenship.

“Companies that have not signed onto the White House’s voluntary commitments for leading AI companies, which include facilitating third-party discovery and vulnerability-reporting, should consider implementing a vulnerability disclosure program or extending existing programs to cover their new AI products,” she said. “In fact, independent research on the functioning and security of AI systems — often referred to as ‘AI red-teaming’ — will be essential to ensuring the integrity and safety of AI systems.”

Where Testing and AI Collide

Conceptually, Argentieri is not proposing anything new; companies using or developing enterprise-grade IT have been testing their software for bad code for decades. The question is how we apply those historical ideas of software testing to the new and more complicated field of artificial intelligence.

For example, software testing has traditionally aimed to determine whether data is secure and IT processes are free from tampering. AI goes further than that; you need to test whether AI algorithms behave in ways that fall within the law and expected social  norms

Argentieri even alluded to that point in her speech. “AI red-teaming has an additional important role to play beyond ensuring the security of AI systems,” she said. “It can also help protect against discrimination, bias, and other harmful outcomes.” 

OK, that’s a valid point — but how, exactly, will security teams develop testing methods to address discrimination, bias, and other harmful outcomes? At what point in the software development cycle would IT teams introduce that testing? Better yet, at what point in the software development cycle would compliance teams enter the conversation, saying, “Here are the legal violations that we require our humans to avoid. Be sure that the AI systems you’re developing don’t make those mistakes either”?

We can keep going. What if you release an AI product into the wild, and outsiders test it without your knowledge or permission? What will you do if they bring security glitches to your attention? What about behavioral glitches, perhaps if the researchers publish a paper showing that your AI product discriminates against certain populations or has a bad habit of making new promises to customers that your product development team never intended?

Those are the questions companies need to start contemplating, if they haven’t already. They’re especially tricky because they cut across multiple corporate functions, so as we’ve said many times before here, companies will need to establish AI steering committees that bring all those voices into the conversation. 

Only then will companies be able to govern their AI development and usage with security, lawful conduct, and ethics all receiving proper attention — which, let’s remember, is what the Justice Department wants to see. 

From a Speech to Real Action

We’d be remiss if we didn’t note that Argentieri also talked about artificial intelligence just one week ago, when she unveiled updated guidelines for corporate compliance programs. Those updates included a heap of new material about AI, such as how a company tries to reduce “unintended consequences” resulting from AI and how it manages AI-driven fraud risks.

Well, how do those guidelines released last week relate to Argentieri’s call for more testing and vulnerability disclosure today? 

For example, if your company doesn’t encourage vulnerability testing and disclosure, and then you have some sort of incident with an AI product — maybe you release a consumer-facing app that has a security flaw; or you use an AI tool for pricing or hiring decisions that turns out to be discriminatory. How will your decision to soft-pedal testing and monitoring look to regulators investigating such issues? 

It seems like Argentieri is encouraging companies to be more thorough and forthcoming in their AI testing now, with the subtle threat of more aggressive treatment from prosecutors later if you don’t take her words to heart. 

Let’s remember what Argentieri was really trying to get at with her speech today: that AI is a tremendously powerful technology, which could cause all sorts of problems if it’s not handled with care. The Justice Department wants companies to apply that care early and diligently, so that criminals won’t be able to race ahead with AI. To quote Argentieri:

Criminals have always sought to exploit new technology. While their tools may change, our mission does not. The Criminal Division will continue to work closely with its law enforcement partners to aggressively pursue criminals — including cybercriminals — who exploit AI and other emerging technologies and hold them accountable for their misconduct. 

Companies are those “law enforcement partners” that Argentieri is talking about.

Well, by the same token, companies with an effective ethics and compliance program are also supposed to be partners with the Justice Department, striving to hold criminal offenders accountable for their deeds. That’s what embracing the spirit of ethics and compliance is all about. You’re supposed to want to do all the voluntary self-disclosure, cooperation, and remediation. 

So I see Argentieri’s exhortations today about AI and the updated guidelines for compliance programs from last week as very much linked. Argentieri is spelling out the department’s expectations for how companies can be good corporate citizens regarding AI. If companies ignore those expectations and something bad then happens, what does that say about your commitment to ethics, compliance, and good corporate governance? 

So read Argentieri’s speech thoughtfully. Read the updated compliance program guidelines closely. Then go tell your board, management, IT team, business units, and anyone else in your enterprise tinkering with artificial intelligence that they need to tinker quite carefully.

Leave a Comment

You must be logged in to post a comment.