Hertz Lessons on AI Governance
I didn’t plan on writing about artificial intelligence again so soon, but sometimes clumsy human intelligence at major corporations forces my hand. Hertz, please join us here in the spotlight today. Bring your AI adoption strategy along with you.
The story is as follows. In April, car rental giant Hertz rolled out a new AI technology to identify damage to its vehicles as customers return them. First, AI-powered cameras take detailed, 360-degree pictures of the vehicle as you drive it off the lot; then the cameras take another set of pictures as you return it. Software compares the before and after images to see whether you dinged the car — and if you did, the AI software sends you a bill.
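To make that pipeline concrete, here is a minimal sketch of how such a before-and-after comparison might be wired together. To be clear, every name, type, and step below is my own illustration of the process described above, not Hertz’s or its vendor’s actual system.

```python
from dataclasses import dataclass

# A hypothetical damage-scan pipeline. The names, types, and matching
# logic are illustrative only; the real vendor's model is proprietary.

@dataclass
class Blemish:
    location: str          # e.g., "front-left tire"
    confidence: float      # model confidence that this is real damage, 0 to 1
    estimated_cost: float  # repair estimate in dollars

def detect_blemishes(images: list[bytes]) -> list[Blemish]:
    """Stand-in for the computer-vision model that scans the 360-degree photos."""
    raise NotImplementedError  # placeholder for the proprietary model

def find_new_damage(checkout_images: list[bytes],
                    return_images: list[bytes]) -> list[Blemish]:
    """Keep only blemishes present at return that were absent at checkout."""
    seen_at_checkout = {b.location for b in detect_blemishes(checkout_images)}
    return [b for b in detect_blemishes(return_images)
            if b.location not in seen_at_checkout]

def draft_bill(damage: list[Blemish]) -> float:
    """Sum repair estimates; the real system also adds processing and admin fees."""
    return sum(b.estimated_cost for b in damage)
```

Note the design choice baked into that last step: nothing sits between the model’s output and the customer’s bill.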
OK, that’s a straightforward use case. What could go wrong?
Lots, according to unhappy Hertz customers. They complain that the AI software is calibrated so aggressively that it flags any blemish as damage: scratches on a tire, smudges of dirt, or even small pre-existing dings that might be ignored from one angle but identified as damage from another.
Then comes the AI-imposed consequence for the Hertz customer: an automated alert threatening to bill you for hundreds of dollars. Of course, you can receive a small discount on the damage fee if you pay immediately; or you can request a review by a human Hertz employee, who may or may not get back to you at some point in the future.
For example, one Hertz customer was billed $440 for a one-inch scratch on his tire. That included $250 for the repair, $125 for processing, and a $65 “administrative fee” (which seems like good money for an AI system intended to make human damage inspectors unnecessary). He tried to reach customer service through the “Contact Us” link on the rental website, but responses via that channel can take more than a week. By then, the discount for paying right away has expired.
Another critic on LinkedIn — who works in AI automation, no less — complained about receiving a $190 bill for “one small ding on the roof (but possibly just dirt or anything else that could throw off a camera) and one similar artifact on the hood. Nothing any human would detect or reasonably consider damages.” He vowed never to use Hertz again.
So what does all this populist, consumer ire have to do with compliance and audit? Nothing, yet also everything.
It’s About Governance
Hertz says the AI camera system is meant to ensure that customers aren’t charged for damage they didn’t incur, “while bringing greater transparency, precision, and speed to the process when new damage is detected.” More than 97 percent of scanned cars show no damage at all, the company said.
In the strict sense, Hertz’s use of AI in this manner is not a compliance issue. It’s perfectly legal, and at the abstract level one can see the business logic of the idea, too. Car rental companies need to offer nice cars to customers, and repairing damage is expensive. So the more a rental car company can shift those costs back to the customer — either by tempting customers to buy liability insurance they’ll probably never use, or just by making them more cautious drivers overall — the better.
But we can all see the fiery consumer outrage here, right? Nobody is against using AI to identify damage per se; the issue is Hertz using AI in a manner that leaves the customer feeling powerless against a system imposing costs on them. That’s the misstep causing bad headlines and fomenting online outrage. I suspect it will also soon cause Hertz to say it is “constantly learning from customer feedback” or some such language, while the company scrambles to retool its AI algorithms and customer service processes.
None of that is a compliance or audit problem — but it is very much a governance risk.
That is, as your company implements AI systems across the enterprise (and especially as you roll out systems that interact with customers), you need a governance model that can identify likely missteps in your AI adoption, and then either (a) change your processes to reduce the cost of those mistakes to acceptable levels; or (b) confirm that, yes, the risks you’ve identified are ones the company is willing to accept.
It’s possible that Hertz did indeed run through all those governance processes and decided that the risk of bad headlines and customer dismay was worth the cost savings its AI cameras deliver. We here on the outside of the organization don’t know what deliberations happened inside the business.
My point is simply that all companies need some way to analyze these AI use cases and think through the possible risks before they proceed.
For example, did the Hertz marketing and customer care teams fully understand the risk of customer dismay before Hertz rolled out the AI cameras? Did the technology team confirm that the AI systems (developed by UVeye, an Israeli company) can be recalibrated as desired, so that trivial wear and tear goes unflagged? What about revising customer complaint processes so that drivers can reach human support reps more quickly? Because you know that’ll be a sore point.
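On that recalibration question: in practice, calibration usually comes down to where a few thresholds sit and where flagged items get routed. Here is a minimal sketch of the idea, with invented numbers and a made-up function (my illustration, not UVeye’s actual system):

```python
# Illustrative recalibration thresholds. These numbers are invented,
# not the vendor's real configuration.
MIN_CONFIDENCE = 0.90    # below this, treat the detection as dirt or glare
DE_MINIMIS_COST = 75.00  # below this, absorb the cost as ordinary wear and tear

def disposition(confidence: float, estimated_cost: float) -> str:
    """Decide what happens to a single AI-flagged blemish."""
    if confidence < MIN_CONFIDENCE:
        return "ignore"        # likely a smudge or a camera artifact
    if estimated_cost < DE_MINIMIS_COST:
        return "ignore"        # trivial wear and tear; don't bill for it
    return "human_review"      # a person confirms before any bill goes out
```

Raise or lower those two numbers and you trade repair-cost recovery against customer goodwill. The governance question is who in the company gets to set them, and whether anyone pressure-tested them before launch.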
AI and ‘The Human Point’
Another lesson that strikes me from this episode is that business about customers struggling to reach a person at Hertz to complain. It’s another reminder that as your business rolls out artificial intelligence, you need to think carefully about where to place what I call “the human point” in your business processes.
The human point is that point in the business process where AI machinations end and human judgment begins. For example, historically, the human point in damage inspection for car rentals was very near the front of the process: you pulled into the return lot, and there was an employee with a camera, a laser gun, or a notebook, walking around the car while you were there, noting potential damage. If you wanted to quibble with a ding, you could, right there and then.
With these AI cameras, Hertz has now moved the human point to the end of that process: you return the car, and AI decides what damage has or hasn’t happened. If you want to dispute its decisions, your only choice is to flag the AI-generated alert, and a human employee will review that appeal sometime later. At best, those human agents can provide feedback through an AI chatbot, but customers on the receiving end have little say in the timing or direction of the conversation. (According to an article in USA Today, “The company is also working on integrating live agents into the app.”)
First and foremost, your decision about where to place the human point in a business process will be driven by finance (“How much money will we save by using AI in this manner?”) and by sales and marketing (“How much will customers complain when we do this, and will we look like idiots in the press for doing so?”).
Those aren’t the only variables in this equation, however. Moving the human point around might create new security risks (“Could hackers manipulate our AI when it’s running this business process for us?”) — and now, quite possibly, new compliance risks, too. If you place the human point too near the end, making it nearly impossible for people to appeal an AI decision to a human or to understand how an AI decision was made, you might trigger regulatory concerns.
That idea of a right to appeal an AI decision to people is not new. The Biden Administration included it in its “AI Bill of Rights” unveiled in 2022, dubbing the idea “human alternatives.” Various U.S. states are moving ahead with legislation to regulate AI, and many of those bills include some version of the idea, too. The EU AI Act stipulates that “final decision-making must remain a human-driven activity.”
So Hertz has given us lots to think about. Check your AI governance roadmap before you get lost.
