Thoughts on the AI Job-Pocalypse
By now you may have seen one of the numerous recent media reports claiming that artificial intelligence is poised to decimate the white-collar job market in coming years. I’m not sure I agree with those dire reports, but clearly AI is going to transform how white-collar workers — such as compliance and audit professionals — arc through their careers. Let’s talk about that.
One good place to start is an interview that Dario Amodei, CEO of AI software firm Anthropic, gave to Axios the other week. Amodei predicted that AI could wipe out half of all entry-level white-collar jobs, and cause unemployment to spike as high as 20 percent by 2030. Most people “are unaware that this is about to happen,” Amodei said. “It sounds crazy, and people just don’t believe it.”
Other recent headlines do give Amodei ammunition here. Procter & Gamble has announced that it will lay off 7,000 people, roughly 15 percent of its non-manufacturing workforce, as part of a two-year restructuring program. Walmart is laying off 1,500 members of its corporate technology workforce. Microsoft laid off 6,000 people last month, roughly 3 percent of its workforce, even as the company pours more resources into AI. Keep looking and you’ll find a steady stream of other examples.
Is AI entirely responsible for recent layoffs? Of course not; President Trump’s tariffs, sudden cuts to previously promised government spending, and other macro-economic factors all play a role too, and probably a large one. But advances in AI are responsible for at least some job destruction, and that factor is likely to keep gaining steam over time.
So back to Amodei at Anthropic. He, like any good technology guru, says that we are at an inflection point. Until now, advances in AI have largely been about augmenting employees’ ability to do work. The AI helps a rank-and-file employee handle the scut work of his or her job more efficiently, so the employee can devote their human brainpower to more important, strategic tasks.
Now, however, AI has advanced to where it can automate the work of those employees entirely — and in that case, what use does the company have for that employee any longer?
Who’s Being Augmented, Really?
This idea of augmentation versus automation is important, so let’s unpack it more thoroughly.
Say you’re a law firm, and you want to use artificial intelligence to draft standard commercial contracts. Historically, the senior partner would outline the basic objectives and terms of the contract; then a junior associate would do the boring work of drafting and proofreading the actual text; then the senior partner would give a final review before sending it off to the client.
Now along comes AI. The senior partner can just dictate those basic objectives and terms to the AI agent, which generates the whole contract in a few minutes. Maybe the firm still has a senior associate check the AI’s work, but the phalanx of first-year associates checking commas across 100-plus pages of text is now unemployed.
Appreciate what happened here. AI didn’t augment the first-year associate’s ability to do scut work, so he or she could focus on more important tasks. AI augmented the senior partner’s ability to get scut work done efficiently, while the senior partner still focuses on more important tasks.
The future of work is going to be a lot of that, I think; and compliance and audit professionals need to think through the implications of that shift carefully.
Let’s consider another example closer to the compliance officer’s home. Say you’re a financial firm, and you want to use AI to identify potentially suspicious activity. Historically that would have been work for an entry-level compliance analyst, who would flag potentially suspicious activity and then pass those incidents up to an AML compliance manager; that compliance manager would then decide whether to file a Suspicious Activity Report with regulators.
Now the AI does that first pass, screening for suspicious activity and sending alerts to the AML manager for final review and SAR filing. So did AI automate the analyst out of a job, or augment the manager’s ability to get more screening done at scale?
The most accurate answer seems to be that AI did both things at the same time; or did either one, depending on your perspective.
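To make that workflow concrete, here is a minimal sketch of the division of labor it describes: an automated first pass that flags transactions and queues them for a human AML manager, who keeps the final SAR-filing decision. Everything in it — the transaction fields, the thresholds, the jurisdiction codes — is hypothetical and purely illustrative; a real screening system would rely on trained models and vendor rule sets, not a handful of if-statements.

```python
from dataclasses import dataclass

# Hypothetical, simplified transaction record. Real AML screening
# uses far richer data (customer history, network links, etc.).
@dataclass
class Transaction:
    txn_id: str
    amount: float
    country: str
    is_structured: bool  # e.g., split into sub-threshold chunks

HIGH_RISK_COUNTRIES = {"XX", "YY"}  # placeholder jurisdiction codes
REPORTING_THRESHOLD = 10_000        # illustrative USD threshold

def first_pass_screen(txn: Transaction) -> list[str]:
    """The 'analyst' role: list reasons a transaction looks suspicious."""
    reasons = []
    if txn.amount >= REPORTING_THRESHOLD:
        reasons.append("amount at or above reporting threshold")
    if txn.country in HIGH_RISK_COUNTRIES:
        reasons.append("counterparty in high-risk jurisdiction")
    if txn.is_structured:
        reasons.append("possible structuring pattern")
    return reasons

def triage(transactions: list[Transaction]) -> list[dict]:
    """Route flagged transactions to the human manager's review queue.
    The SAR-filing decision itself stays with the human reviewer."""
    queue = []
    for txn in transactions:
        reasons = first_pass_screen(txn)
        if reasons:
            queue.append({"txn_id": txn.txn_id, "reasons": reasons})
    return queue
```

Even in this toy version, the design choice is visible: the automation ends at the review queue, and a human decides what gets reported.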
AI and Compliance Programs
Let’s keep going with our AI-as-AML analyst example, because it raises other questions that will need to be answered as the compliance profession marches onward into this brave new world.
First, how would you, the AML compliance manager, gain comfort that the suspicious activity alerts the AI is sending your way truly are suspicious? How would you identify and correct hallucinations? How would you assess the explainability of the AI’s answers, especially since explainability is a cornerstone requirement of the EU AI Act and other AI laws proliferating around the world? How will you monitor the AI to be sure it hasn’t picked up some bad habit, such as sourcing personal data without permissions?
Plenty of AML compliance software vendors will promise that they’ve already conquered these issues, and who knows? Some of those promises might actually be true. My point is simply that these are the deeper questions compliance officers will need to ask and understand as we move into the AI era.
There’s also a straightforward personnel issue: Who will be your AML compliance manager in 2035 if the junior compliance analysts are all automated away by 2027?
After all, a company can’t really entrust those more senior AML compliance duties and the attendant decision-making to AI. Just imagine the damage that would come from an AI system filing an erroneous or hallucinated SAR with regulators on your behalf. Regulators would be irate at the waste of time and crawl up your butt with a microscope at annual regulatory examinations for years to come.
So clearly some human would need to make the important decisions about compliance at your firm — but where would that person’s judgment come from, if he or she never had the chance to develop good judgment earlier in their career?
I don’t know what the answers to these questions are. But I do think we need to understand and answer them before anyone can run an AI-driven compliance program with any real comfort.