An Accountability Model for AI

Who’s looking for another thoughtful speech about the compliance implications of artificial intelligence? Because we have one from a top banking regulator in the United States, who raises deep issues about AI that go well beyond the banking sector.

The speech comes from Michael Hsu, acting head of the Office of the Comptroller of the Currency, which makes him one of the top banking regulators in the country. Hsu spoke last week at a conference sponsored by the Financial Stability Oversight Council exploring AI’s potential risks to financial stability. 


Why are Hsu’s remarks so worth a compliance professional’s time? Because he put his finger on the principal headache for businesses trying to embrace artificial intelligence today: lack of trust and accountability. If we can’t figure out how to enshrine and preserve those two values in the AI systems we’re trying to develop, this whole project could collapse under the weight of its own hype.

The problem, Hsu said, is that AI allows for diffused accountability. Nobody knows why ChatGPT hallucinates a wrong answer to a simple question, nor how other AI systems decide to discriminate against certain groups or commit some other offense that leaves the public anywhere from exasperated to aghast. The errors just happen, literally ex machina, and so far society has no clear method to assign blame.

“With AI, it is easier to disclaim responsibility for bad outcomes than with any other technology in recent memory,” Hsu said. “The implications for trust are significant. Trust not only sits at the heart of banking, it is likely the limiting factor to AI adoption and use more generally.” 

That’s the crux of the matter: we need to develop a clear, recognized model for accountability for AI. Only then can businesses move forward with confidence that their AI adoption plans will stay within ethical and regulatory guardrails.

A Misalignment of Risk and Accountability

Before we get to what a possible model of accountability for AI might look like, let’s consider a real-world example of AI gone wrong today.

The example Hsu cited is one we’ve explored on Radical Compliance already: the case of an AI chatbot at Air Canada that misstated the airline’s refund policies and gave a customer incorrect advice that cost him hundreds of dollars. The customer took Air Canada to a Canadian tribunal, which ruled earlier this year that the airline was responsible for its chatbot’s mistake. The customer ended up winning $650 in ticket refunds plus interest and other costs. 

As Hsu noted, Air Canada tried to argue that the chatbot was more akin to “a separate legal entity” responsible for its own actions than to a corporate web page or employee. That sounds ridiculous to us on the receiving end of bad chatbot advice (it did to the Canadian tribunal, too), but Hsu urged people to consider things from Air Canada’s perspective. 

“With a faulty web page or incompetent employee, a company can identify who is at fault and then put in place controls to mitigate a repeat of that problem in the future,” he said. “With a black-box chatbot that is powered by third parties, most companies are likely to struggle to identify whom to hold accountable for what or how to fix it.”

That’s the accountability challenge here. The technology infrastructure companies use to implement AI is heavily dependent on third parties developing AI algorithms that the companies then use. The legal infrastructure people use to settle disputes, however, is heavily dependent on strict liability. The chatbot is operating under your corporate umbrella, Big Company XYZ; so you’re responsible for its bad actions and dumb advice even if some other tech company built the impenetrable algorithm that serves as its brain.

That misalignment brings us, Hsu said, to this topsy-turvy situation:

Today with AI the companies most capable of affecting outcomes have limited responsibility for them. For instance, the ability of Air Canada to fix its chatbot pales in comparison to the ability of the AI platform, which runs the large language model upon which the chatbot was built. Yet Air Canada was the one held responsible for its chatbot misinforming [the customer].

Right now, nobody is dwelling on this misalignment too much because the examples of AI gone wrong are mostly about consumer harm: a customer quoted a made-up refund policy; people erroneously identified as potential shoplifters based on bad data. We have legal mechanisms to remedy such problems. 

But we don’t know that the mistakes of AI will always stay in the realm of consumer harm. What if AI creates a liquidity crisis in the banking sector? Who should be held responsible for a disaster like that? 

Or, more to the point of Hsu and other regulators: How can we create a better system of accountability for AI now, before such a disaster happens, so that companies embracing AI will know their responsibilities for doing so wisely? 

Which brings us to Hsu’s other real-world example. 

A Shared Responsibility Model of Accountability

Hsu pointed to the world of cloud computing, which these days undergirds pretty much the entire corporate technology world. Companies rent applications from vendors; they rent storage and computing capacity to house and run applications they’ve developed themselves; and they mix and match those two ideas in countless other ways to get the IT infrastructure they need. 

Regulators and companies have worried for years about how to manage privacy and security risks in such a complex world. They have developed a system of shared responsibilities, as shown by this god-awful image:

[Image: shared responsibility model for cloud computing, from on-premises through SaaS]

The more your company wants direct control over its technology, the further to the left-hand side of the image you are, and the more responsibility you have for IT risks (seen in blue). The more you’re willing to relinquish control to cloud-based providers in exchange for lower cost, the further you slide over to the right, and the more responsibility they have for managing risk (seen in gray).
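That left-to-right sliding scale is easy to express as a lookup table. The sketch below is an illustrative, simplified version of the matrix such images typically show; the layer names and assignments are assumptions drawn from the standard on-premises/IaaS/PaaS/SaaS breakdown, and real vendor models draw the boundaries somewhat differently:

```python
# Illustrative sketch of a cloud shared-responsibility matrix.
# Layer names and assignments are assumptions based on the common
# on-prem / IaaS / PaaS / SaaS breakdown; actual vendor models vary.

RESPONSIBILITY = {
    # stack layer:       {service model: party responsible for its risks}
    "applications":      {"on-prem": "customer", "iaas": "customer", "paas": "customer", "saas": "provider"},
    "data":              {"on-prem": "customer", "iaas": "customer", "paas": "customer", "saas": "customer"},
    "runtime":           {"on-prem": "customer", "iaas": "customer", "paas": "provider", "saas": "provider"},
    "operating_system":  {"on-prem": "customer", "iaas": "customer", "paas": "provider", "saas": "provider"},
    "virtualization":    {"on-prem": "customer", "iaas": "provider", "paas": "provider", "saas": "provider"},
    "physical_hardware": {"on-prem": "customer", "iaas": "provider", "paas": "provider", "saas": "provider"},
}

def responsible_party(layer: str, model: str) -> str:
    """Return who manages the risk for a given stack layer under a given service model."""
    return RESPONSIBILITY[layer][model]

# Moving left to right (on-prem -> SaaS), responsibility shifts to the provider --
# except for data, which stays with the customer even in SaaS.
print(responsible_party("operating_system", "on-prem"))  # customer
print(responsible_party("operating_system", "saas"))     # provider
```

The point of the analogy is that every cell in the matrix has an owner; the equivalent matrix for AI (model developer, platform, deploying company) simply doesn't exist yet.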

Somehow, Hsu said, we — the great collective “we” of AI developers, corporations, regulators, auditors, consumers, and everyone else — need to develop a similar system of shared responsibilities for AI. 

We don’t have one today, but we’ll never be able to embrace the full potential of AI without one, because we’ll never have a universally recognized, scalable way to assign accountability. In turn, if we can’t hold parties accountable for AI gone wrong, nobody will trust it — and then, why are we bothering with any of this stuff at all? 
