Justice Department Eyes AI Risks

Last week the Justice Department announced that it will cast a more critical eye on abuses of artificial intelligence. Today let’s unpack what that news means for compliance officers in practice, and how you might need to adjust your compliance program to accommodate this brave new AI world.

The pledge of increased scrutiny of AI risks came from Deputy Attorney General Lisa Monaco, speaking at the American Bar Association’s annual conference on white-collar crime. Monaco gave two specific warnings.


First, she said, prosecutors will seek stiffer penalties against individuals and companies alike for fraud and other corporate crimes “where AI is deliberately misused to make a white-collar crime significantly more serious.” Second, when prosecutors are evaluating the effectiveness of a company’s compliance program, “prosecutors will assess a company’s ability to manage AI-related risks as part of its overall compliance efforts.”

You can’t fault the department for doing this. New technology has brought forth new ways of committing crime since time immemorial, and companies do have a responsibility to address those risks. That said, these perfectly reasonable steps from the Justice Department will challenge corporate compliance and anti-fraud functions. 

Let’s start with Monaco’s warning about AI-driven fraud. My first reaction was puzzlement: Why would AI-enhanced fraud be different from other tech-enhanced fraud? We don’t have special penalties for, say, spreadsheet-enhanced fraud or LinkedIn-based fraud. What makes AI so different? 

After a moment, however, I realized that certain types of fraud do depend specifically upon artificial intelligence to work. For example…

  • Voice-cloning technology that fraudsters might use to scam money from a person or business;
  • Deepfake technology used to smear the reputation of a person or business competitor;
  • Generative AI that fabricates passports or other identity documents to evade identity verification processes;
  • Generative AI that floods the internet with fake product reviews, fake media reports, or other content to overwhelm due diligence checks.

Those few ideas probably only scratch the surface of what’s possible, but they’re enough to get us thinking about the real issues here. First, who would use AI to commit fraud, and how? And then, what does that mean for companies’ compliance and anti-fraud efforts? 

First, New Realms of Fraud Risk

What strikes me about those AI-driven fraud examples above is that they are almost exclusively frauds that individuals would commit against companies, rather than misconduct that companies (or some small group of rogue employees within the company) would undertake themselves. 

For example, deepfakes or voice-cloning: how would they ever help something like financial statement fraud or FCPA violations? Executives determined to commit crimes like that can already do so perfectly well with existing technology. How would generative AI be so much more helpful that it would merit additional penalties from prosecutors? (If any anti-fraud thinkers out there have views on this subject, I’d be eager to hear them. Email me at [email protected] to discuss.)

On the other hand, one can see all sorts of ways that individuals might use AI to push the boundaries of crime. We have those four examples outlined above, and there are more. Maybe a rogue employee uses AI on your own corporate data to foster some elaborate insider trading or market manipulation scheme. Maybe the rogue employee uses generative AI to falsify business records. 

In all those cases, it is an individual using AI against companies — so how do you build anti-fraud controls to defend against those attacks? 

That seems to be the most immediate lesson from Monaco’s remarks. The Justice Department can talk all it wants about stronger penalties for AI-enhanced misconduct, but the arrival of those enforcement actions is years away, if ever. The need for stronger internal controls to address AI-enhanced misconduct is clear and urgent. 

So perhaps compliance officers don’t need to run to the general counsel or outside counsel saying, “OMG, Monaco said this, what do we do?” The wiser course might be to run to internal audit and say, “OMG, all our fraud risks are going to explode thanks to AI, what do we do?” 

Oversight of AI as Part of Compliance

More relevant to compliance officers was Monaco’s second point: that prosecutors will now start asking about how your company manages artificial intelligence and other “disruptive technology risks” when evaluating your corporate compliance program.

Specifically, Monaco said, “Our prosecutors will assess a company’s ability to manage AI-related risks as part of its overall compliance efforts. To that end, I have directed the Criminal Division to incorporate assessment of disruptive technology risks — including risks associated with AI — into its guidance on Evaluation of Corporate Compliance Programs.”

Notice the jujitsu move there. Monaco took a risk management priority (your ability to manage AI-related risks) and turned it into a corporate compliance duty. How will compliance officers do that in practice? 

For example, artificial intelligence is likely to permeate the entire corporate enterprise, driving new opportunities and efficiencies in one business function after another. So what should the compliance officer’s role be during that AI-driven transformation? Do you join senior-level executives in conversations about AI strategy, to help steer that strategy in a compliance-aware direction? Or will you be stuck playing a perpetual game of catch-up, as various parts of the enterprise implement some new AI-inspired idea and then compliance gets brought into the picture? 

I fear it will be the latter. 

Let’s go back to that part about the Justice Department incorporating AI concerns into its guidelines for effective compliance programs. This wouldn’t be the first time the department has done something like that. In 2023, it updated its guidelines to reflect new concerns about ephemeral messaging apps. So is there anything in that section that might suggest how the department will treat AI concerns?

I think so. That section on ephemeral messaging (Page 17 of the guidelines, if you want to check) includes a long list of questions to ponder, such as:

  • What are the relevant code of conduct, privacy, security, and employment laws or policies that govern the organization’s ability to ensure security or monitor/access business-related communications?
  • How does the organization manage security and exercise control over the communication channels used to conduct the organization’s affairs? 
  • Is the organization’s approach to permitting and managing communication channels reasonable in the context of the company’s business needs and risk profile?

You could quite easily substitute “artificial intelligence” for “business communications” in the above three questions, and they’d read just as well. We’re talking about different technologies, but the policy and risk management issues are the same. 

Then again, that still leaves the compliance officer playing catch-up, lobbing policies and risk management controls at every new AI use case that comes along. Wouldn’t it be better to pay more attention to questions about the compliance officer’s autonomy, resources, and stature within the organization, so that he or she can participate in those deeper, more strategic conversations about AI usage?
