Compliance, AI, and Corporate Strategy
Compliance officers are always striving to articulate their role in setting corporate strategy. Over the weekend I stumbled across an excellent example of how compliance officers might do that, even though the article’s principal point was that corporate leaders shouldn’t define any strategy for everyone’s favorite issue of the day, artificial intelligence.
The article, appearing in the Wall Street Journal and written by business professor Joe Peppard, argued that most companies shouldn’t rush to adopt a strategy for AI because (a) the technology is still so new that most companies wouldn’t be able to figure out a smart strategy for it; and (b) even if they did, most companies lack the other technologies and personnel necessary to put said AI strategy into practice.
Peppard makes some compelling points. For example, say a manufacturer decides, “We’re going to use AI to predict maintenance needs and keep our production lines running more efficiently!” Well, you’d need tons of historical data about when and under what circumstances your production lines failed in the past. You probably don’t have that data. Even if you do, you’d need to store it in the cloud somewhere; and then install sensors and other monitoring technologies so the AI could predict future failures. You probably don’t have any of that set up either.
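To see why those prerequisites matter, it helps to sketch what even a toy version of that predictive-maintenance idea demands. The Python below is a minimal illustration, not anyone’s real system; it fabricates synthetic stand-in data precisely because the historical sensor logs and failure labels it needs are exactly what most manufacturers don’t have. Every name and number is hypothetical.

```python
# A toy failure-prediction model. The point is what it presupposes:
# labeled history plus live sensor feeds, neither of which comes free.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Stand-in for years of sensor readings a real plant would have to collect.
history = pd.DataFrame({
    "temperature": rng.normal(70, 10, 1000),
    "vibration": rng.normal(0.5, 0.2, 1000),
    "runtime_hours": rng.uniform(0, 10000, 1000),
})
# Stand-in for failure labels you'd have to mine from maintenance records.
history["failed_within_7_days"] = (history["vibration"] > 0.8).astype(int)

model = RandomForestClassifier(random_state=0)
model.fit(history.drop(columns="failed_within_7_days"),
          history["failed_within_7_days"])
# In production, live sensor feeds (yet another prerequisite) would be
# scored with model.predict_proba() to flag lines at risk of failing.
```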
Given all those challenges, Peppard said, management teams would be better served by letting AI adoption evolve more naturally within the enterprise. Then came the crucial passage in his essay:
The priority shouldn’t be about building a top-down overarching AI strategy. This can come later. It is about encouraging employees to use AI tools, to experiment and try things out, and to pursue ideas organically rather than following “management direction.” It is also important that there are guardrails to ensure that any tool is used properly, responsibly, and in a way that doesn’t put the organization at risk. The best ideas are most likely to come from the bottom-up, by those engaging in their day-to-day work and supporting customers. Technology doesn’t drive change; people do.
A good idea — but also one that has ethics and compliance risk written all over it.
Compliance and AI Adoption
To be fair, Peppard sees that risk; hence the part about guardrails to ensure that AI is used properly and responsibly. My point is that corporate compliance officers should be the advisers helping management to design and erect those guardrails.
You have the necessary expertise in risk assessment, policy management, and control design. You also have the necessary experience, since ethics and compliance officers have spent years encouraging employees to think carefully about the right way to behave and to act in a manner that keeps the organization safe. All those things are exactly what companies need to embrace now as they find their way forward on artificial intelligence.
Indeed, the more I think about Peppard’s arguments, the more I like them. First, they have a certain libertarian flavor to them: management doesn’t know how best to run the day-to-day operations of the enterprise; the employees do. Employees certainly agree with that principle, and I bet most would love the idea that they can chart new ways that AI could help the business grow. That’s where innovation comes from.
Second, however: innovation won’t flourish unless employees are nudged to think about the compliance and ethics risks involved in what they’re trying to do. So management needs to support both employee-led innovation around AI and the culture of ethics and compliance necessary to channel that innovation in the right direction. What compliance professional wouldn’t support that?
How would all this work in practice? We can think of a few examples.
Say your sales and marketing teams want to use AI to generate on-the-spot price quotes for products you offer, such as insurance or auto loans. Great idea, but AI can pick up bad habits and start discriminating against minorities, which violates state and federal anti-discrimination laws. It’s not the compliance team’s job to solve that problem directly; but it is your job to help the sales and marketing teams perceive that risk and then develop appropriate controls to avoid it. (Say, frequent audits of the AI’s price quotes, or careful testing at the front end to be sure the AI meets anti-discrimination criteria.)
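To make that audit idea concrete, here is a minimal sketch of the sort of parity check a compliance team might run against a log of AI-generated quotes. It assumes the company retains quotes alongside demographic attributes gathered for testing purposes; all of the column names are hypothetical.

```python
# Sketch of a periodic parity check on AI-generated price quotes.
# Column names are illustrative, not from any specific system.
import pandas as pd

def audit_quote_parity(quotes: pd.DataFrame,
                       group_col: str = "demographic_group",
                       quote_col: str = "quoted_price",
                       tolerance: float = 0.05) -> pd.DataFrame:
    """Flag any group whose average quote deviates from the overall
    mean by more than `tolerance` (here, 5 percent)."""
    overall = quotes[quote_col].mean()
    report = quotes.groupby(group_col)[quote_col].mean().to_frame("mean_quote")
    report["deviation"] = (report["mean_quote"] - overall) / overall
    report["flagged"] = report["deviation"].abs() > tolerance
    return report

# Usage: run monthly against the quote log and escalate flagged groups.
# quotes = pd.read_csv("quote_log.csv")  # hypothetical log file
# print(audit_quote_parity(quotes))
```

A flagged group isn’t proof of discrimination, of course; it’s a trigger for the closer review that the sales, legal, and compliance teams would design together.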
Or say the HR team wants to use AI in hiring, to identify the most promising job applicants. Again, great idea — but will HR use AI to whittle 5,000 applicants down to 1,000, or 5,000 applicants down to four finalists? What must you disclose to applicants, at which part of the hiring process? What right of appeal to a human will you want to include? What about assurances that the AI doesn’t discriminate against women, minorities, older applicants, or immigrants?
There are compliance issues in that HR example, such as New York City’s Local Law 144 requiring bias audits of automated hiring tools. There are also ethical issues with no clear answer, such as whittling down 5,000 applicants to some “correct” smaller number. It’s your job (or at least should be your job, if management is wise) to get everyone thinking about what the right answers are.
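The arithmetic behind such an audit is simpler than the policy questions around it. A bias audit of the sort New York City requires centers on selection rates and impact ratios; the sketch below shows that core calculation, with hypothetical column names and the classic four-fifths rule of thumb as the review threshold.

```python
# Core selection-rate math behind a hiring-AI bias audit.
# Column names are hypothetical.
import pandas as pd

def impact_ratios(candidates: pd.DataFrame,
                  group_col: str = "category",
                  selected_col: str = "advanced_by_ai") -> pd.DataFrame:
    """Compute each group's selection rate and its ratio to the
    best-performing group's rate. Ratios below 0.8 (the four-fifths
    rule of thumb) warrant closer review."""
    rates = candidates.groupby(group_col)[selected_col].mean().to_frame("selection_rate")
    rates["impact_ratio"] = rates["selection_rate"] / rates["selection_rate"].max()
    rates["needs_review"] = rates["impact_ratio"] < 0.8
    return rates

# Toy demonstration with six applicants in two categories.
pool = pd.DataFrame({
    "category": ["A", "A", "A", "B", "B", "B"],
    "advanced_by_ai": [1, 1, 1, 1, 1, 0],
})
print(impact_ratios(pool))
```

The numbers answer none of the ethical questions above; they just ensure someone is looking at them regularly, which is precisely the compliance officer’s contribution.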
Compliance-Aware Strategy & Culture
Ultimately what we’re talking about here is an organization’s ability to keep ethics and compliance risks in mind as it innovates its way forward. That’s true whether the issue is defining profound, bet-the-future strategies; or mundane, incremental improvements to daily workflows and routines. Lunging forward recklessly, without considering the ethics and compliance risks, is folly. Stepping forward deliberately, with everyone in agreement about the risks you’re taking, is the way to corporate success.
That’s been true for years, as companies went through globalization, digital transformation, green transformation, and every other sort of transformation. It’s true today, as we all embark on an AI transformation whose final form remains unclear. And it will always be true that corporate success depends on a frank analysis of risk and integrity.
Hence I’m always in favor of compliance officers being part of the conversation.