You can learn a lot from Google — including, apparently, how corporations might approach the delicate matter of ethics and artificial intelligence. Google tried exactly that not long ago, failed in a painfully public way, and left numerous lessons for ethics and compliance professionals to consider before their own companies try this at home.
The tale is not pretty. On March 26, Google announced the creation of an “Advanced Technology External Advisory Council,” a panel of eight people who would meet four times in 2019, to help Google understand the ethical issues around developing and using artificial intelligence.
The idea went wrong immediately. Some employees objected to one ATEAC member who is an outspoken critic of transgender rights. Others objected to another member whose company works on drone technology for the U.S. military.
Then a third member of the ATEAC board announced that he wouldn’t serve. A fourth was asked whether she would serve with the first two, and answered, “Believe it or not, I know worse about one of the other people.”
By the following week Google had decided to dissolve the board, without a single meeting ever held. “We’ll continue to be responsible in our work on the important issues that AI raises, and will find different ways of getting outside opinions on these topics,” the company said in a statement.
Google was right to try confronting issues of ethics and AI, no matter how this particular effort failed. This is an issue companies shouldn’t ignore, and one that, increasingly, companies can’t ignore: either your firm is developing AI for its own products, or it’s using AI to manage business processes.
Which brings us back to Google, and its short-lived effort. What lessons can compliance officers take away from that experience?
Searching for Better Results
First, set a clear scope for what you want an AI review board to do. The ethical implications of AI are vast, and society won’t fully understand them for many years. So consider how your organization might start small, on specific questions, with a clear timeline for articulating practical answers.
For example, an AI ethics advisory board could be tasked with exploring: “How might our product recommendation algorithms lead to different recommendations for different demographics? At what point might that be considered discriminatory?”
Or the board might explore how the company should respond to requests by outsiders to see the code behind AI making decisions that affect those outsiders. Or how to let people opt out of its technology in an easy, practical way if they choose.
Regardless of your company’s specific goals for an AI advisory board, they should be specific goals.
Second, include people who can identify real answers and actions. Notice our examples above about product recommendation algorithms, or requests to see code, or opt-out processes. Those questions are tied to specific issues your company might face, which means an AI advisory panel should have people who understand how to address them.
An advisory panel of outsiders, no matter how thoughtful those people are, can only identify ethical goals a company should achieve. The ideal panel also includes insiders — that is, employees — who can offer ideas about how to achieve those goals. Those insiders might include lawyers and compliance officers, as well as coders, marketers, sales executives, or HR representatives. They can keep the panel grounded in reality.
Third, focus on key principles. Those deep thinkers about AI and ethics do have a role here, because they can keep the insiders focused on broader issues. For example, an ability to opt out of facial recognition technology or product recommendation algorithms isn’t just about privacy. It is, fundamentally, about consumers’ fear that they will lose control of their experiences in daily life.
That’s a subtle concept, but once you grasp it, you can start tailoring your business processes to anticipate what the public will demand for governance of AI in the future. Any advisory panel you establish will need an ability to identify basic principles like that, before translating them into practical steps for your company to take.
So make that part (but not all) of the advisory panel’s mission, and include minds who know how to fulfill it.
Fourth, understand what your company should really be trying to do. Grappling with ethics and AI isn’t really an exercise in corporate compliance, because AI is moving too fast for regulators to establish a set of rules for businesses to comply with.
The challenge with AI is more about ethics and governance — and we’d do well here to remember the origin of the word “governance.” It derives from the Latin gubernare, itself from the Greek kybernan, both meaning “to steer or pilot a ship.”
In other words, governance never ends. Governance is about prudence and restraint, to avoid foundering on rocky shores because your business made poor decisions.
That’s a great way to think about the ethical implications of AI-driven technologies, too: as a continuous discussion, grounded in strong corporate leadership, and guided by advisory panels that can tie your specific business risks and objectives to the broader concerns AI raises.