What Boards Are Saying About AI
Today I want to start an occasional series about artificial intelligence, and how businesses can take a more risk- and compliance-aware approach to integrating AI into their operations. We might as well start that exploration at the top: What are boards saying about how they oversee the adoption of artificial intelligence?
Not much, apparently.
At least, that’s the conclusion I draw after reviewing the proxy statements that large companies have been filing lately. Right around now is when most large companies file their proxies with the Securities and Exchange Commission, describing the board’s approach to governance and reviewing what board directors did in the prior year. So I went to Calcbench.com, one of the most comprehensive data warehouses around, and searched for mentions of “artificial intelligence” in the texts of proxy statements from the S&P 500.
Of the 429 proxy statements filed so far by those companies, only 178 contain the phrase “artificial intelligence.” Most companies mention artificial intelligence in little more than a cursory way: listing it as a skill possessed by one of their directors, or including a sentence about the importance of AI in an introductory letter to shareholders. Several faced shareholder proposals calling for management to publish a report about artificial intelligence (and in every instance I found, management opposed that idea).
Very few of those 178 proxy statements discussed board oversight of artificial intelligence in real depth, such as identifying a board-level committee responsible for thinking about AI risks or specifying how often the whole board debated AI’s strategic threats and opportunities. Where boards did talk about their oversight of artificial intelligence, usually it was some version of — wait for it — “the audit committee does that.”
Specific Examples of AI Disclosure
Let’s step away from those broad statements and consider some specific examples.
- Williams Cos. ($WMB), a natural gas processing company based in Oklahoma, said that its audit committee “discussed implications of generative artificial intelligence in regard to cybersecurity and overall risk.” That’s good, but it also narrowly frames AI as a security and compliance risk, when the technology is vastly more than that.
- Kraft Heinz ($KHC) mentioned that the company “leveraged proprietary artificial intelligence (“AI”) to power the platform to drive efficiencies across our supply chain.” How, exactly? Is this project complete or still underway? We don’t know. A rather odd statement to include in the proxy if that’s all you’re going to say about the matter.
- Walt Disney Co. ($DIS) said that the board’s nominating and governance committee receives an annual report on human rights risks, “which has included risks associated with artificial intelligence.” The full board also reviews reports on “certain potential uses of generative artificial intelligence and the development of generative artificial intelligence governance principles.”
- Boeing ($BA) said that some of its board directors received outside training on artificial intelligence, and that the full board “participates in regular briefings with management” on a variety of issues, artificial intelligence among them.
That’s all reasonable enough, since none of the above are technology companies in the traditional sense of the term. Their boards are aware of AI and taking some steps to understand how management is trying to integrate AI into business operations. Fair enough for now.
More interesting are the tech giants themselves. These are the ones bringing artificial intelligence into the world, after all — and they too had precious little to say about how their boards approach artificial intelligence.
For example, Google ($GOOG) had three shareholder proposals in its proxy statement calling for various reports about how the board oversees artificial intelligence or the risks that Google’s AI products might create. The first proposal specifically called for Google to amend the charter of the board’s Audit and Compliance Committee so that artificial intelligence is expressly included in the charter.
Google rebutted that proposal with a terse: “Oversight of risks and exposures associated with AI is already being effectively carried out at both our full board and audit committee levels. Explicitly calling out AI in the Audit Committee Charter is unnecessary as it is already subsumed within the broader risk assessment areas set forth in its charter and would provide no incremental benefit to our stockholders.”
OK, no surprise that a company will recommend against a shareholder proposal; but the charter for Google’s audit and compliance committee never actually mentions artificial intelligence. Nor does Google have a committee dedicated to technology risk. Is that really a good idea when technology is the lifeblood of your company’s products and operations?
Microsoft ($MSFT) faced a similar shareholder proposal last fall, calling for a report on threats of AI misinformation. In its rebuttal to that idea, Microsoft noted that it had more than 120 full-time employees dedicated to responsible AI development (plus another 200 who devote part of their time to it), and cited a company blog post from May 2023 on the subject. That post does lay out an impressive approach that management takes to responsible AI; but the board isn’t mentioned at all. (Microsoft’s board does, however, have an “Environmental, Social, and Public Policy Committee,” and that committee lists responsible AI as one of its oversight issues.)
Numerous other tech companies don’t have a board-level committee dedicated to technology risks, where artificial intelligence would presumably be a paramount concern. Not Apple ($AAPL), not Netflix ($NFLX), not Google. Amazon ($AMZN) does have a security committee, and Meta ($META) does have a privacy committee, and I bet artificial intelligence does crop up in conversations there as part of larger discussions around cybersecurity or privacy regulation. But is that the robust oversight and discussion that AI deserves?
Why Are We Talking About This at All?
We’re talking about this because artificial intelligence is a groundbreaking technology, one that will spawn all manner of risks and opportunities. Corporate boards will need to provide guidance and oversight about how management should handle that risk — and right now, it looks like boards are all over the place on that score.
That’s not necessarily bad governance right now, because artificial intelligence (and specifically, generative AI) is still so new. But it could be bad governance by 2030 if boards don’t come up with a more thoughtful plan for achieving good oversight of AI by the time AI permeates all business operations.
So, for example, a chief audit executive might want to have frank conversations with the audit committee chair about whether AI should be part of the committee’s purview, or should be handed off to a dedicated technology risk committee. Or, if the organization is too small to support another board committee, perhaps have a follow-up conversation with the chair of the nominating and governance committee about recruiting more AI-savvy directors double-quick, so that even within the board’s existing structure, directors can still give AI the attention it deserves.
Separately, we should remember that plenty of boards already do get briefings about artificial intelligence. They just get those briefings as a full board, to ponder how AI might help or hinder the company’s strategic objectives. Well, is that approach going to suffice? Will it allow a thorough debate about AI’s compliance, security, financial, and ethical risks? Will the board end up depending on management too much for those discussions, because the board doesn’t have a committee able to debate those issues (and then brief their board colleagues) in depth?
And where do investors fit into this? What transparency should they have into the board’s ability to oversee artificial intelligence? After all, if the company’s adoption of AI somehow goes screwy, one of the first questions unhappy investors will shriek is, “Where was the board???”
The proxy statement is where the board first tries to answer that question, ahead of trouble rather than after it. Right now, those answers look inconclusive.