The Many Risks of Mandating Employee AI Usage

Here’s a thorny question quickly bubbling into existence at the intersection of HR, risk management, corporate ethics, regulatory compliance, and technology. To what extent should a company require its employees to use artificial intelligence as part of their jobs? 

This is on my mind because numerous stories have streaked across the headlines lately of companies doing exactly that: requiring employees to use AI as part of their daily work, and then monitoring that AI usage so it can serve as a performance criterion for promotions, pay raises, and job security.

For example… 

And for a bonus disturbing article, don’t miss a recent Wall Street Journal column exploring how companies might require employees to use AI so the AI system can learn how that specific employee thinks, and then replace that employee because the AI has memorized his or her judgments and thought processes.

All of the above is creepy, sure. At a more practical level, however, it also points to a host of ethical, compliance, internal control, and legal issues that companies have barely begun to consider, much less resolve.

So Many AI Adoption Risks

Let’s start with the legal and HR issues since they’re the most thorny.

Imagine that you, big corporate employer, require me, solitary and powerless employee, to use an AI chatbot at work to help me do my job. That chatbot is always engaging and thought-provoking and complimentary, and I fall in love with it. Eventually I tell my spouse I want a divorce because my spouse just doesn’t get me like CompanyGPT does. Could my spouse then sue you for ruining our marriage? 

Even better: I’m emotionally dependent on my CompanyGPT bot, but you just got a better enterprise software deal and are going to switch LLM platforms. My beloved companion will be gone! Can I claim mental distress and sue you for damages? Could I negotiate taking my CompanyGPT with me as part of an exit package? What if my state enacts legislation saying I can? 

Maybe I have a history of depression or abusive relationships, and I worked hard to remedy those unhealthy mental habits. Now you, by requiring me to use AI, risk pushing me into AI psychosis or some other mental health relapse. What workplace accommodations can I demand? Could I request a change in the AI system’s behavior to make it less sycophantic, so my mental health is protected? 

After all, if the office chair gives me a bad back, I can ask for lumbar support. If the computer terminal causes me eye strain, I can ask for a larger, no-glare screen. So if another piece of office equipment (AI) also causes me distress (mental rather than physical), why wouldn’t I be able to seek accommodations for that, too? 

My point is that by requiring employees to use certain pieces of equipment, companies assume a duty of care to ensure that employees can use that equipment safely and properly. Except that AI risks extend into psychological and mental health realms companies have rarely needed to consider before. Now they do. 

So what policies do you have to address those issues? How are you ensuring that those policies square with legal protections employees might have? What if you don’t need to offer protections in one jurisdiction (read: United States) but do need to offer them in another (read: Europe)? How will you run a global workforce juggling so many differences? 

Security and Internal Control Risks

Now for all you IT audit and internal control enthusiasts, let’s take a closer look at the Amazon glitch and the GRC risks.

According to the Financial Times, the outage happened in December. Engineers allowed Amazon’s internal AI coding tool, Kiro, to make autonomous decisions on behalf of its users, and in this particular instance Kiro decided to delete and recreate an AWS system that customers use to measure cost of services. The system went down for 13 hours.

Standard Amazon procedures would require two human engineers to co-sign a decision like that. Here, however, the engineer involved had expanded privileges, which extended to Kiro’s decision-making authority as well. Hence Amazon describes the whole incident as “user error, not AI error” because “the same issue could occur with any developer tool or manual action.”

Please, Amazon. That’s a technicality meant to distract everyone from the real issue: that AI agents can work so quickly that we need to revisit the permission structures governing what the humans in charge of AI agents are allowed to do. Which is a GRC, security, and internal control challenge.

That is, perhaps a single human engineer could cause such an outage if you look at his or her role’s permissions in the abstract — but as a practical matter, the chance of that person doing something so dumb is low. Humans have a sense of loyalty, judgment, and fear (“I am so fired if I f—k this up”) that makes them proceed cautiously. 

AI systems have no such constraints, so companies will need to reassess all their role-based permissions for humans using AI. If that means curbing your human employee’s permissions, because the AI now acting on behalf of said human could cause worse damage, then so be it. 
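To make that reassessment concrete, here is a minimal sketch of the idea in Python. All names here are hypothetical (nothing comes from Amazon’s actual tooling); the point is that an AI agent acting on a human’s behalf gets its own, narrower permission set rather than inheriting the human’s role, and destructive actions still require two human co-signers, per the two-engineer rule described above.

```python
from dataclasses import dataclass, field

# Actions considered destructive enough to need two human co-signers.
DESTRUCTIVE = {"delete_system", "recreate_system"}

@dataclass
class Principal:
    name: str
    is_agent: bool
    permissions: set = field(default_factory=set)
    acting_for: "Principal | None" = None  # the human an agent acts for

def effective_permissions(p: Principal) -> set:
    """An agent's permissions are the intersection of its own grants and
    its human sponsor's -- elevated human privileges don't flow through."""
    if p.is_agent and p.acting_for is not None:
        return p.permissions & effective_permissions(p.acting_for)
    return p.permissions

def authorize(action: str, actor: Principal, approvers: list) -> bool:
    """Allow an action only within effective permissions; destructive
    actions additionally require two human co-signers besides the actor."""
    if action not in effective_permissions(actor):
        return False
    if action in DESTRUCTIVE:
        humans = [a for a in approvers if not a.is_agent and a is not actor]
        return len(humans) >= 2
    return True

engineer = Principal("eng-1", is_agent=False,
                     permissions={"read_metrics", "delete_system"})
agent = Principal("coding-agent", is_agent=True,
                  permissions={"read_metrics"}, acting_for=engineer)

# The agent cannot delete, even though its human sponsor can:
assert not authorize("delete_system", agent, approvers=[])
# Nor can the engineer acting alone -- two human co-signers are required:
assert not authorize("delete_system", engineer, approvers=[])
```

The design choice worth noting: the agent’s privileges are bounded separately from the human’s, so expanding one engineer’s role no longer silently expands what the agent can do.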

Think of it this way. We train police officers on how to use firearms as part of their job, and they only shoot under precise circumstances. If we then give them bazookas, capable of doing terrible damage, we wouldn’t let the officers follow the same rules for using firearms; we’d change the permissions officers have because the risk of damage has escalated.

That’s the GRC exercise that needs to happen now, before AI agents acting on humans’ behalf start causing terrible damage.

Shameless Self-Promotion

If you want to ponder these questions further, it just so happens that Radical Compliance and Ethena are co-hosting a webinar this week, Tuesday Feb. 24 at 2 pm ET, on the risks of employees slipping into “ethical autopilot” as they entrust their judgments and critical thinking to artificial intelligence.

I didn’t write this post specifically to support that upcoming webinar, but the two subjects are very much entwined. We have some great speakers lined up and our Rewired AI webinar series has already been a great success, so please register, join us, and tell us what you think!