Stating Your Ethical AI Principles
Today we have another chapter in our ongoing series about artificial intelligence, and how companies can take a more compliance-aware approach to integrating AI into their operations. This time around I want to look at what the companies themselves are disclosing to the public.
The idea came to me as I was researching my previous post in this series about what boards have said about AI — because most boards haven’t been saying that much. Where boards have said something, usually they were pointing investors to some other statement about AI that company management had already posted online.
OK, fair enough; so what do such statements actually say?
We can start with the example of Facebook, which devotes a whole portion of its corporate website to the company’s efforts at artificial intelligence. There you can find a statement from 2021 declaring Facebook’s “five pillars of responsible AI development.” Those pillars are:
- Privacy and security
- Fairness and inclusion
- Robustness and safety
- Transparency and control
- Accountability and governance
Those pillars sound reasonable enough, but one immediate question is how well they align with the five pillars listed in the Biden Administration’s AI Bill of Rights. After all, regulation of artificial intelligence is still in its infancy. Enforcement actions that we can parse to better understand improper use of AI are rare, and case law rarer still. For lack of anywhere better to start, one might as well start with those five pillars the Biden Administration put forth as the foundation for future regulation.
Figure 1, below, is my best guess at how the Biden Administration’s pillars line up with Facebook’s pillars. As you can see, they line up quite well. Even more interesting is that Facebook published its pillars more than a full year before the White House did. (Which makes me wonder who is really regulating whom, but we’ll debate that issue another day.)
You have to start somewhere with AI, and as every compliance professional knows, the best place to start is by defining your core values and ethical priorities. This is one plausible way to do that.
Other Ethical AI Statements
Another interesting example comes from Adobe, maker of graphic design software. Adobe caught my eye because it has a corporate responsibility page that clearly identifies one artificial intelligence risk directly relevant to the company’s business model: misinformation and other fake content, quite possibly created by Adobe’s own products.
Adobe then talks about various efforts it supports to combat fake content, such as the Content Authenticity Initiative (launched by Adobe in 2019) and the EU’s 2022 Code of Practice on Disinformation. It also points people to a separate Adobe Statement of AI Ethics.
That statement of AI ethics principles starts with three broad values: responsibility, accountability, and transparency. The document then elaborates on each one, and what the value means in practice.
For example, Adobe defines transparency as disclosing the following (among other things) to its customers:
- When an individual’s data will be collected for AI training, and what controls a user will have over the collection;
- How datasets are used in building AI models;
- How Adobe is testing for and resolving issues related to unfair bias.
The other example for today is Microsoft. Like Facebook, Microsoft has dedicated a part of its website to artificial intelligence, including a page that outlines the company’s principles for responsible AI development. Those principles are much the same as Facebook’s (and by extension, the Biden Administration’s):
- Fairness
- Reliability and safety
- Privacy and security
- Inclusiveness
- Transparency
- Accountability
Microsoft also unpacked what those principles mean in practice in a May 2023 blog post written by its “chief responsible AI officer,” Natasha Crampton.
Thoughts and Observations
So what are corporate ethics and compliance officers supposed to take away from these examples, especially for the vast majority who don’t work at giant technology companies thinking about AI every moment of the day?
First, we should appreciate that these lofty statements of responsible AI are really about defining governance and the control environment for artificial intelligence. Most organizations already have similar statements of ethical principles for whatever business they’ve been conducting all along; now you need to graft those ethical values (the foundation of your control environment, defining how senior leaders want the business to be run) onto the rapidly emerging use cases for AI.
Second, those ethical AI principles seem to arise from regulatory compliance risks that companies have faced for years. For example, when companies throw around words like “fairness” and “inclusion,” what they’re really saying is, “We don’t want sloppy use of AI to spawn discrimination lawsuits.” When they say “privacy” and “security,” they really mean they don’t want regulatory enforcement for violations of the GDPR or shareholder lawsuits for a cybersecurity meltdown.
In other words, perhaps we don’t need to overthink what these ethical AI principles should be, because they do have a certain common-sense nature to them. Look at what your business conduct risks have been all along, and gather the right people within your enterprise to debate how artificial intelligence might twist those risks into new shapes. Ultimately, though, moral values tend to be fairly constant over time. If you have a strong leadership team that wants to behave ethically, laying down the foundations to do so in the era of AI shouldn’t tax your collective brainpower too much.
One Nitty-Gritty Detail
We should note that statements like these do expose a company to legal risk, such as enforcement from the Securities and Exchange Commission. Recall that when the SEC sued SolarWinds in 2023 over poor disclosure of cybersecurity risks, the agency specifically cited a “security statement” that SolarWinds had published for all the world to see — a statement that promised all manner of lofty cybersecurity protection, when in fact SolarWinds had been penetrated by Russian hackers.
Well, AI statements such as those that Facebook, Adobe, and Microsoft publish to the world pose the same sort of disclosure risk for artificial intelligence. The question is how you can assure that your organization follows through on all the lofty goals outlined in its statement of ethical AI usage.
For example, go back to Adobe’s disclosures to customers about how it uses AI. The company says it will disclose (1) how datasets are used in building AI models; and (2) how Adobe is testing for and resolving issues related to unfair bias.
What if, down in the depths of Adobe’s software operations, those processes don’t work as described? Would that count as misleading investors? Because that’s the premise of the SEC’s lawsuit against SolarWinds; the agency claims that the company either knew it wasn’t living up to the goals in its security statement, or should have known that it wasn’t. (One could probably make an equally plausible case with the Federal Trade Commission suing a company for misleading statements about AI to customers rather than investors.)
To overcome this risk, you’ll need to put those lofty governance principles into practice. That will be the subject of a future post.