When Bots Rip Apart Your Business
As a corporate compliance officer, you're always thinking about how to make sure that your organization's various stakeholder groups — employees, shareholders, customers, donors, business partners — all understand your corporate culture, values, and ethical priorities. Now we have a new stakeholder group threatening that goal: stakeholders that don't actually exist.
So says a fascinating article from the Wall Street Journal that deserves attention from compliance leaders everywhere. It describes how bot networks on social media are dragging companies into online battles over inflammatory culture war issues. Disagreements that would normally simmer for a few days and then fizzle can now roar into huge conflagrations, prompting panicked changes that senior management would never normally make, all driven by online voices that are nothing more than AI algorithms designed to provoke conflict.
The example given in the WSJ article is Cracker Barrel and its misfire six weeks ago when rolling out a new corporate logo. As you might recall, Cracker Barrel unveiled a new, rather antiseptic logo on Aug. 19 to replace its more folksy “Uncle Herschel” logo that had been around since the 1970s.
At first, a few right-wing activists on Twitter complained that the new logo was somehow too woke and betrayed Cracker Barrel’s American roots. Starting on Aug. 20 they called for a boycott of Cracker Barrel. OK, those activists were actual people; that’s all fair.
As the day progressed, however, automated bot networks started posting about Cracker Barrel too; hundreds of them, all posting or retweeting automated complaints either for or against the new logo. By the end of the day, Twitter had roughly 400 posts about Cracker Barrel every minute. Seventy percent of tweets calling for a boycott bore telltale signs that they came from automated bot networks. Ultimately, social media analysts estimate, bots or suspected bots were responsible for half of all social media posts calling for a boycott.
That fictional pressure had real-world consequences. Cracker Barrel scrapped the new logo within days. Its share price dropped 8.7 percent in less than a week, and new CEO Julie Masino took withering criticism. Meanwhile, the company still hasn't addressed its fundamental problem, which is that customers aren't going to Cracker Barrel; the company posted underwhelming results on Sept. 19 and forecast more glum numbers for the coming fiscal year.
Shredding Your Corporate Culture for Profit
So what does all this have to do with ethics and compliance? Potentially quite a lot.
What intrigued me most about the Cracker Barrel debacle was that the bots blowing the logo controversy far out of proportion made money doing so. There is now a whole economic infrastructure that supports this, so why wouldn’t online miscreants use AI and automation technology to amplify the revenue they can gain by putting your corporate reputation through the blender? It’s smart business.
The incentive works as follows. Verified users on Twitter who hit minimum thresholds for followers and impressions can get paid based on the likes, replies, and shares their posts receive from other verified users. Bots pretending to be humans can help meet those thresholds and spread posts. That means more engagement, which means more money. The bots deliberately pursue divisive topics, since those are more likely to trigger human responses.
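To see why this really is smart business, here's a minimal back-of-the-envelope sketch in Python. Every number in it (the payout rate per engagement, the bot amplification factor, the engagement counts) is an illustrative assumption of mine, not a figure from the WSJ article; the only point is that payouts scale with engagement, and bots multiply engagement cheaply.

```python
# Illustrative model of the engagement-payout incentive described above.
# Every number here is an assumption for demonstration, not real platform data.

def estimated_payout(human_engagements: int,
                     bot_amplification: float,
                     payout_per_engagement: float = 0.005) -> float:
    """Rough payout estimate: the platform pays per qualifying engagement,
    and a bot network multiplies the engagement a post receives."""
    total_engagements = human_engagements * bot_amplification
    return total_engagements * payout_per_engagement

# A bland post: modest engagement, no bot help.
bland = estimated_payout(human_engagements=200, bot_amplification=1.0)

# A divisive post: provokes far more human replies, and a bot network
# retweets it to push it past the platform's monetization thresholds.
divisive = estimated_payout(human_engagements=2_000, bot_amplification=5.0)

print(f"Bland post:    ~${bland:,.2f}")
print(f"Divisive post: ~${divisive:,.2f}")
```

Under those made-up numbers, the divisive, bot-amplified post earns fifty times more. The exact figures don't matter; the asymmetry does.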
Who’s actually doing this remains unclear. Experts’ best guess is that some of the responsible parties are lowlifes operating troll farms to make a quick buck. Others are probably Russian, Chinese, or North Korean operatives eager to undermine the traditional Western political order.
Either way, let’s be precise about the threat here. Social media platforms (above all, Twitter) are designing themselves to let automated bots drag your company into social media battles that might not otherwise happen, and then tear apart your company’s reputation for profit. There’s no way something like that does not affect your corporate culture.
For example, earlier this month we saw online activists doxxing people who supposedly made offensive statements about the assassination of Charlie Kirk, and demanding that the employers of those people fire them — except that sometimes the activists were doxxing innocent people by mistake. How do you investigate allegations like that, when they might be inherently unprovable? How do you discern whether the online pressure is real or artificial, when bots are working to inflame the supposed controversy?
What about when regulators add threats of investigation (looking at you, FCC chairman Brendan Carr and gullible conspiracy-monger President Trump) based on artificially ginned-up accusations? What communications do you deliver if you conclude that the allegation is false?
The evidence-driven response would be, “We know everyone seems to be talking about this, but actually that chatter isn’t real so we’re sticking with our decision.” Will that still work in a world based on AI-generated hysteria?
Prepare Your Culture for Battle
Now that an economic incentive exists for outsiders to sow discord among your stakeholders, using technology to amplify the speed and ferocity of those attacks, let's think about how these attacks stress your corporate culture and what you could do to defend it.
First, the stress. That comes from breeding confusion and uncertainty about what your company “stands for,” and whether those judgments are right. Go back to the new Cracker Barrel logo: fundamentally, the bots (and the right-wing critics too) were questioning why the company was making the change.
As soon as people are questioning why you're doing something, they're questioning your motives, and your motives are a reflection of your company's business and ethical priorities. That's how the culture comes under strain.
For example, was Cracker Barrel trying to reach new audiences so it could grow? Or was it failing to grow because it had abandoned its roots, and the new logo was the latest example of that?
Bots could pose the same inflammatory accusations about DEI, green energy, sacking a late-night TV host, speaking out against tariffs, or much more. The important thing to remember is that the bots don’t actually care about the divisive issue, the political ideology behind it, or what decision your business actually makes. They are not people, after all. But the actual humans who do make up your stakeholder groups will be plunged into turmoil.
Second, the defensive measures you can take to keep the AI-enhanced hounds at bay. Some are forensic in nature. For example, services exist to determine how much social media chatter comes from real people versus artificial bots. Other tools could help you determine whether a social media account supposedly associated with an employee truly does belong to him or her. (Although there’s no guarantee those tools will give a definitive answer.)
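For illustration only, here is a crude Python sketch of the kind of heuristics such forensic services might apply. The account handles are invented, and the signals and thresholds (account age, posting rate, duplicated text) are my own assumptions for demonstration; real vendors use far richer models, and as noted above, none of them guarantees a definitive answer.

```python
from dataclasses import dataclass

# Toy bot-likelihood heuristic. The features and thresholds are illustrative
# assumptions only; commercial detection services use richer signals.

@dataclass
class Account:
    handle: str
    age_days: int           # how long the account has existed
    posts_per_day: float    # average posting rate
    duplicate_ratio: float  # fraction of posts duplicating other accounts' text

def bot_score(acct: Account) -> float:
    """Return a 0..1 score; higher means more bot-like."""
    score = 0.0
    if acct.age_days < 30:        # brand-new accounts are suspicious
        score += 0.3
    if acct.posts_per_day > 100:  # inhuman posting volume
        score += 0.4
    score += 0.3 * min(acct.duplicate_ratio, 1.0)  # copy-paste behavior
    return min(score, 1.0)

accounts = [
    Account("patriot_grill_fan_84721", age_days=4, posts_per_day=400, duplicate_ratio=0.9),
    Account("longtime_customer", age_days=3_200, posts_per_day=2, duplicate_ratio=0.0),
]

suspected_bots = [a.handle for a in accounts if bot_score(a) >= 0.5]
print(f"Suspected bots: {suspected_bots}")
print(f"Estimated bot share of chatter: {len(suspected_bots) / len(accounts):.0%}")
```

Even a toy score like this illustrates the output you'd want from a vendor: an estimate of what share of the chatter about your company comes from accounts that behave like machines rather than customers.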
The more strategic defense, however, is to make sure that your company's corporate values and ethical priorities are clear and consistent. Communicate to employees; communicate to shareholders; communicate to customers and business partners. At all times, they should know what your company "stands for," as imprecise as that phrase may be.
Not all of this menace is within a compliance officer's ability to address, of course — but your company won't be able to withstand these online onslaughts without a strong, clear, well-understood ethical culture, supported by an infrastructure of policies, procedures, audit capability, and level-headed leaders.
Work to put those things in place, because the bots could strike anytime.