Call for More Corporate Disclosure on AI
An advisory committee to the Securities and Exchange Commission will meet next week to consider whether publicly traded companies should be required to disclose more about artificial intelligence, such as whether boards have formal oversight of AI and what the company believes its material AI risks are.
The proposals come from the SEC Investor Advisory Committee, which will meet on Dec. 4 to discuss those ideas and several other measures. According to a draft discussion paper, the committee will recommend three new requirements to the SEC:
- That companies define what they mean when they use the term “artificial intelligence”;
- That companies disclose what mechanisms the board uses (if any) to oversee adoption of AI at the business; and
- That companies report on how they are adopting AI, and on the effects of that adoption on internal business operations and “consumer-facing matters.”
To be clear, these recommendations are still only proposals. The Investor Advisory Committee might decide not to pass them along to the SEC, or it might vote to send along an amended version instead.
Nor is it clear that the SEC would adopt the committee’s recommendations anyway. You’d typically expect SEC Chairman Paul Atkins to oppose anything that would add to companies’ disclosure burdens, but AI is one subject where even Atkins and his fellow right-wing zealots on the SEC might be tempted to support more disclosure.
All that said, the Investor Advisory Committee stressed in its discussion paper that while investors are interested in what companies have to say about AI, right now those discussions are all over the map:
There are several reasons beyond lack of guidance for why reporting remains uneven, including, but not limited to the fact that there is no single accepted definition of what is AI, the technology is rapidly evolving, companies have not yet captured what they are investing into AI or developed sufficient metrics for measuring its impact in operations, lack of training and adoption, and different industries deploy AI for both internal operations and to support business lines.
So my bet is that the SEC probably will do something here to clarify what companies are expected to say about AI. Whether that takes the form of a dedicated sub-chapter in Regulation S-K or just tweaks to existing items in S-K is anyone’s guess, but I suspect something will get done in 2026.
Good Disclosure of AI Governance
Let’s assume that the SEC does adopt those three disclosure proposals along the lines of what the Investor Advisory Committee suggests. That brings us to some interesting (and long overdue) questions about corporate governance and IT risk management that companies should think about.
For starters, does your board need a dedicated AI governance committee? I would say no, but only because your board should already have some sort of technology risk committee, and AI governance issues belong there.
The audit committee already has plenty to do reviewing the company’s financial statements and internal control over financial reporting; it will never have enough time to exercise proper oversight over cybersecurity, artificial intelligence, and related technology issues. The best solution is to establish a separate, dedicated committee to oversee how the company’s technology strategies affect security, operational, and compliance risks; and then lump AI risks into that purview.
The bad news is that most companies don’t treat IT risks that way. For example, one recent study found that even among the S&P 500, 70 percent of firms assign oversight of cybersecurity to the audit committee. That alone is a questionable governance decision, and the rise of AI is making the default approach of “Um, it’s gotta go somewhere, so give it to the audit committee” no longer tenable. Good. It’s a bad habit we need to break.
Second, can we move to a clearer, more precise definition of artificial intelligence? Because companies and investors alike sorely need one.
For example, if your company uses a predictive analytics tool to offer product recommendations or price quotes, then you’re already using AI (and probably have been for years, since predictive analytics has been around for at least a decade). On the other hand, if you’re using ChatGPT to summarize reports or write marketing copy, you’re using a different type of AI. If you’re using robotic process automation to expedite accounting transactions, that’s another type of AI.
Each type of AI has its own unique risk profile. Predictive analytics might put you at greater risk of algorithmic discrimination and enforcement under state anti-discrimination statutes; generative AI could carry higher security risks, such as prompt injection, where attackers feed the AI poisoned prompts to generate polluted results.
Plus, as the Investor Advisory Committee noted, different industries and companies will use AI (or multiple types of AI) in different ways. That’s confusing to investors, business partners, and employees alike, so more precision around what a company means when it says “we use AI!” would be helpful.
Third, once we have those clear definitions of AI, can companies then define their material risks from AI? Some of those risks will center around security and operations, others around regulatory compliance, and still others around financial projections, workforce development, and the like.
Management and the board should think about those material risks so that oversight can be allocated to the proper people. For example, while I wholly endorse a board-level technology committee that can oversee AI’s security and regulatory risks, that committee might not be the right place to manage AI’s risks for workforce development or strategic market position; those are issues for the full board to consider in close consultation with the CEO.
From Good Idea to Good Guidance
All in all, these proposals from the Investor Advisory Committee make a lot of sense. The whole point of the committee is to give the SEC advice on what matters to investors, and what matters to investors is that they (a) understand corporate disclosures; and (b) trust that the board and management are taking a thoughtful approach to managing the company’s risks while advancing business objectives.
Right now AI is a huge risk, and investors want more clarity about it. You don’t need ChatGPT to tell you that.
