AI Risks Coming Fast; Controls Lag Behind

Gird your loins, compliance officers. In the last several weeks, numerous examples of AI-enhanced risks have streaked across the headlines, all of them reminding us of just how woefully unprepared most companies are to address the threat of artificial intelligence in unethical hands. I fear this will become a recurring theme in 2026.

You might have seen some of these stories already. Let’s string them all together. 

First is a new type of fraud risk affecting the food industry. Scammers are ordering meals to be delivered, taking pictures of the meals in hand, and then using AI to alter those photos to make the meals appear undercooked, rotten, or otherwise poorly prepared. The scammers then use the doctored photos to demand refunds for their meals (sometimes posting the images on social media for additional pressure).

Our second example comes from the accounting world. The Association of Chartered Certified Accountants has decided it will no longer allow members to take licensing exams remotely, because too many people are using AI to cheat on the exams. Starting in March, ACCA members will need to take the tests in person at supervised testing centers.

Third, and most disturbing, is what’s happening on Twitter. Users there are now asking its AI-powered chatbot, Grok, to “nudify” photos of women and girls: you provide Grok with a photo of some woman or girl you know, and Grok returns a pornographic image of her, naked and in some sexual pose, which the user might then circulate online. Regulators around the world are launching investigations into the matter.

Fraud risk, employee cheating, harassment risk — and those are just the easy, recent examples. We could also mention receipt fraud, defamatory falsehoods spread by ChatGPT, and lord knows what else. AI-enhanced risks are surging.

Controls for AI-Amplified Risks

What strikes me about all three risks above is how companies and regulators are trying to fight these AI-induced threats. None of the defenses devised so far is particularly effective.

Start with the food frauds. Essentially, food delivery companies and restaurants are stuck trying to use other AI tools to detect the AI-generated frauds. That’s not a new idea; schools, for example, have started using AI-powered plagiarism detection tools to identify AI-written homework from students.

[Image: the real burger on the left; the doctored version on the right. Source: The Times]

OK, that’s better than nothing, but those are just detective controls meant to identify a fraudulent incident after it happens. If you’re only using technology to detect AI scams, you’ll always be one step behind scammers who are endlessly innovating new ways to fleece your organization via AI. Put another way: you’ll always be in an AI arms race you can never win.

The ACCA was in a similar situation. Unethical students were cheating on exams so often (a problem endemic to public accounting firms around the world) that ACCA’s anti-cheating tools simply couldn’t keep up. “We’re seeing the sophistication of [cheating] systems outpacing what can be put in, in terms of safeguards,” one ACCA executive said.

Now consider the ACCA’s return to in-person exams. That’s a step in the right direction because it’s a preventive control introduced at the process level, which is miles ahead of detective controls chasing down one doctored food photo after another. But it’s an astonishing admission, too: AI, a technology meant to deliver all sorts of operational improvements and efficiency gains, also delivered a new risk so poisonous that the ACCA had to revert to in-person processes to solve it. Wow.

That’s the issue compliance officers, internal auditors, and risk managers need to contemplate here. How can you move beyond transaction-level detective controls, which will always leave you one step behind the AI scammers and cheaters, toward process-level preventive controls that thwart them in the first place?

More to come in future posts, but that’s the lens through which we need to view these AI-amplified risks.

A Whole Other AI-Amplified Weakness

Then we have Twitter and its reprehensible new nudifying capabilities. First, just imagine this happening in the analog, in-person world: an employee takes a headshot photo of a female coworker, pastes it onto a nude centerfold from Penthouse magazine, and then hangs that creation in the breakroom, the office lobby, and on utility poles out on the street. HR and legal teams would be incandescent with rage and set a new speed record for typing up a termination letter. If that employee used photos of minors, he could be facing child pornography charges.

Yet somehow this is OK when the employee — or anyone else — does it behind an anonymous Twitter account? 

The Twitter/nudify mess underscores the weak accountability mechanisms society currently has for misuse of artificial intelligence. Ideally, we should have a legal and regulatory framework that holds vendors of tools such as Grok accountable for facilitating sexploitation, child pornography, and similar materials. We don’t. Some jurisdictions (Britain, India, France, Malaysia, and the European Union) are trying to hold Grok and its owner Elon Musk accountable, but so far the United States isn’t. Given Musk’s financial support of President Trump, I don’t expect that it will.

For the record, Twitter said in a corporate statement that it will move to take down sexually suggestive material involving children and permanently shut down the offending accounts. So what about the women subjected to nudify harassment? Musk has only responded with laughing and fire emojis. That tells those women all they need to know about his stance.

Nor is Twitter our only example of the poor regulatory framework. Google’s AI recently concocted an entirely false summary accusing Canadian musician Ashley MacIsaac of being a child sex offender. A music venue read that summary and canceled an upcoming MacIsaac concert.

MacIsaac is threatening to sue Google, and good for him — but is that really the proper recourse here? That victims are left to clean up the mess of AI gone wrong, assuming they even know about the offense at all? (How would you know when ChatGPT says something false about your business, your CEO, or even you personally? Those false answers aren’t published somewhere you can monitor them; they’re generated privately, one user at a time.)

The bleak truth, however, is that the companies behind these frontier AI models and their perverse capabilities — xAI (maker of Grok), OpenAI, Anthropic, Google, Facebook — are all governed by a vanishingly small number of ultra-wealthy men divorced from the consequences of their AI systems’ actions. They claim that they want accountability, but as soon as regulators come within a country mile of that idea, those men lean on friends in the Trump Administration to pull levers of executive branch power in their favor.

Don’t forget, we’ve already seen this happen. Last month Trump sanctioned five EU officials involved in hitting Twitter with a €120 million fine for flouting transparency obligations under the Digital Services Act. Those sanctions came after Musk complained about the fine and called for the EU to be “abolished.” If regulators around the world try to hold AI companies accountable now — for Grok’s nudifying tendencies, generative AI’s potential for defamation, or similar abuses — rest assured, the Trump-Broligarch industrial complex will rear up again.

That will leave the rest of us to clean up the messes that AI keeps foisting upon us. By “the rest of us,” I mean you — and right now, we don’t have the tools to do that.