AI is now a Corporate Affairs problem, but most boards haven't noticed

When Diageo's global head of digital consumerism, Jim Wolff, told Advertising Week Europe that the company is working to make its brands "machine-readable" for large language models (LLMs), the room presumably filed it under marketing innovation.

While it’s undoubtedly innovative, it belongs somewhere else. It belongs in the boardroom, and specifically on the agenda of every Corporate Affairs director in a regulated industry.

Why machine-readable brands matter

Diageo's insight is correct. LLMs respond well, as Wolff put it, to "clear, consistent direction." When the company tested Baileys Irish Cream against AI systems, the models returned attributes like indulgence and premium quality but missed playfulness and creativity entirely.

The fix Diageo is pursuing covers imagery, metadata, crawlability and authoritative sourcing. Reasonable for a consumer brand, but nowhere near sufficient for a corporation whose reputation is also being formed in the minds of investors, parliamentarians, regulators and journalists, many of whom now use AI tools as a primary research interface.
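To make the metadata part of that work concrete: one widely used approach is embedding schema.org JSON-LD in brand pages so crawlers and models ingest attributes explicitly rather than inferring them. The sketch below is illustrative, not Diageo's actual markup; the brand name, description and URL are stand-ins.

```python
import json

def brand_jsonld(name: str, description: str, same_as: list[str]) -> str:
    """Serialise a minimal schema.org Brand record for embedding in a page.

    The `sameAs` links point to authoritative sources that crawlers can
    corroborate against, which is one way to signal source authority.
    """
    record = {
        "@context": "https://schema.org",
        "@type": "Brand",
        "name": name,
        "description": description,
        "sameAs": same_as,
    }
    return json.dumps(record, indent=2)

# Hypothetical example values, echoing the attributes in the Baileys test.
markup = brand_jsonld(
    "Baileys",
    "A playful, indulgent Irish cream liqueur.",
    ["https://www.baileys.com"],
)
print(markup)
```

The same pattern extends beyond consumer brands: policy positions and technical pages can carry equivalent structured records, which is the gap the rest of this piece addresses.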

The shift in how people search and decide

Research published in February 2025 found that 80 percent of consumers now rely on AI-written summaries for at least 40 percent of their searches, and that 60 percent of searches end without a click. What the user receives is drawn from whatever structured, consistent, factually dense material the model has been able to index and weight. If that material is thin, incomplete or contradictory, the model fills the gap with inference, and inference tends toward the generic or the damaging.

This is where the stakes for Corporate Affairs become acute. A consumer misled about whether Baileys is playful is a mild inconvenience. A regulator who queries an LLM about a company's environmental record or its track record on safety disclosures, and receives a hallucinated or outdated synthesis, presents a problem far harder to correct.

LLMs consistently prioritise corporate-owned sources and carry a long memory, surfacing older narratives months or years after publication. Reputational crises organisations believed they had put behind them are being revived inside AI responses, unprompted by any current coverage.

The end of human-first communications

Traditional corporate communications has always operated on the assumption that the audience is human and reads sequentially. The message is carried through human intermediaries who exercise their own judgement. As AI tools become the first point of contact for stakeholder research, that intermediation is changing. The machine does not read a company's side of the story charitably. It weights frequency, consistency, source authority and structural clarity. A company that has spent years building relationships but neglected to build a coherent, publicly accessible narrative infrastructure is now at a measurable disadvantage.

Diageo's chief innovation officer Mark Sandys told McKinsey last year that AI will not replace creative judgement but will expose the brands that lack a clear identity. The same logic applies to corporate identity, with considerably higher stakes.

A company that has never clearly articulated its regulatory positions, its scientific basis for product decisions, or its public interest case has no raw material for an LLM to draw on. When a journalist or a parliamentary researcher queries that company, the model returns a composite picture assembled from third-party coverage and whatever fragments of owned content happen to be crawlable. That picture is unlikely to be flattering, and it arrives without a right of reply.

What corporate affairs must build now

The work that Corporate Affairs must now undertake goes well beyond website copy or Wikipedia maintenance. It requires what might be called narrative infrastructure: structured, sourced, consistently maintained public content covering the full range of questions a model might be asked about the organisation. That means published policy positions with citations, and scientific or technical content that models can weight as authoritative. It means consistent messaging across every owned channel, because models reward consistency and penalise contradiction. And it means regularly testing what the major models currently believe about your organisation and where those beliefs diverge from the truth.
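That last step, testing what a model currently believes, can start very simply: query the model, then check its answer against the attributes you intend it to associate with you. The sketch below assumes you already have the model's answer as text; the attribute set and the sample answer are hypothetical, modelled on the Baileys gap described earlier.

```python
def attribute_coverage(model_answer: str, intended: set[str]) -> dict:
    """Report which intended brand attributes a model's answer mentions.

    A crude substring check; in practice you would want synonym matching
    and repeated sampling across several models.
    """
    answer = model_answer.lower()
    found = {a for a in intended if a.lower() in answer}
    return {"found": found, "missing": intended - found}

# Hypothetical model answer, echoing the gap Diageo observed:
# indulgence and premium quality surface, playfulness and creativity do not.
answer = "Baileys is associated with indulgence and premium quality."
report = attribute_coverage(
    answer, {"indulgence", "premium", "playfulness", "creativity"}
)
print(report["missing"])  # the divergence to fix in owned content
```

Run periodically against each major model, the "missing" set becomes a tracked metric, which is what puts this work on the same footing as other Corporate Affairs monitoring.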

A closing window of opportunity

Gartner has projected that traditional search engine volume will fall by 25 percent by 2026 as users migrate to conversational AI tools. The organisations that treat search and conversational AI as mere extensions of their existing communications model will pay for it in reputational exposure they didn't see coming.

The window for building a structural position in how LLMs represent your organisation is open, but it will not stay open indefinitely. The companies that begin this work now, with the same rigour Corporate Affairs applies to regulatory engagement, will be difficult to displace. The ones that wait for the next crisis to make the case internally will find that the model has already formed its view.

Diageo has recognised this and is already acting. But the honest question for most Corporate Affairs directors is whether their organisation has even begun to ask the right questions.

The two-year reality

The practical window for establishing a structural position inside the major LLMs is approximately two years. After that, the models will be sufficiently trained and weighted that displacing an established narrative becomes exponentially harder. The organisations that move now will shape how they are represented.

Those who wait will inherit whatever the machine has already decided. If you want to understand where your organisation currently stands and what it would take to change it, SPQR can help.

Public belief takes shape fast. Follow us on LinkedIn to keep up.