The Reputation You Didn’t Write
The absence of information is no longer protection
A wealth manager introduces a client to a co-investment discussion. The client is credible, well-capitalised, and known within their network. Before moving forward, someone on the other side runs a quick search. Not on Google, but on ChatGPT.
The answer comes back in seconds. It sounds confident. It references a legal issue from years ago, a Wikipedia entry, and a Reddit discussion. No one checks the sources. The deal doesn’t progress. No explanation is given. This is starting to happen more often. And in most cases, the advisor has no visibility that it happened at all.
From Search Results to Synthesised Narratives
For years, reputation online meant Google. You could see what appeared, assess sources, and form a view. That process is changing.
AI platforms do not return links. They return a single answer. Information from multiple sources is combined into one narrative. The user does not see where each piece comes from unless they go looking for it.
The scale of this shift is already significant. ChatGPT receives around 6 billion monthly visits. Google Gemini has roughly 750 million users. Gartner projects that organic search traffic will fall by more than 50% by 2028 as this behaviour becomes standard. The practical implication is simple. For many professionals, the first impression of an individual is no longer a list of sources. It is a summary. And that summary is treated as fact.
Where That Summary Comes From
There is an assumption that AI platforms rely mainly on reliable, verified sources. That is only true when those sources exist. A Yext study of 6.8 million AI citations found that 86% come from controllable sources such as company websites and official profiles. But that figure depends entirely on those sources being available. For private individuals, they often are not.
When there is little first-party content, AI systems pull from whatever is available. Research shows that Wikipedia accounts for nearly half of ChatGPT’s most-cited sources. Separate analysis indicates Reddit appears in roughly 40% of responses. In addition, around 90% of cited pages sit beyond position 21 in traditional search rankings. In other words, the material shaping the narrative is often the material that would previously have been ignored.
“The model doesn’t understand authority in the way a human does,” says Tony McChrystal, Founder of Pavesen. “It understands availability. It draws from what it can find, not necessarily from what is most accurate.”
The 8-to-1 Imbalance
This becomes clearer when you look at the structure of the information itself. Pavesen’s research shows that, for many high-profile individuals, uncontrolled sources outnumber controlled ones by roughly eight to one. Controlled sources are limited. A company biography. A LinkedIn profile. Perhaps an interview.
Uncontrolled sources are not limited. Old articles, forum discussions, court records, third-party commentary, Wikipedia edits. These accumulate over time and remain accessible. AI systems do not separate these cleanly. They combine them. When one category outweighs the other by that margin, the outcome is predictable. The narrative reflects the largest pool of information, not the most accurate one.
The Privacy Problem
This is where the issue becomes more acute. Many high-net-worth individuals have deliberately kept a low profile. That has traditionally been a sensible approach. Less visibility meant less exposure.
AI changes that dynamic.
If there are no authoritative sources, the system does not stop. It fills the gaps with whatever exists. That often means outdated or low-context material. “The absence of information is no longer neutral,” McChrystal says. “It creates a vacuum that gets filled with whatever the model can access.” The result is that the individuals who have been most careful about privacy often have the least control over how they are represented.
Why This Matters in Practice
This is not just a theoretical issue. AI-generated summaries are increasingly used in early-stage due diligence. They are quick, accessible, and appear complete. In many cases, they are used before any formal checks take place. The problem is not that the information is always wrong. It is that it is incomplete in ways that are difficult to detect.
A single detail can introduce uncertainty. And in investment decisions, uncertainty tends to slow things down or stop them entirely. There is rarely a clear rejection. Opportunities simply do not move forward. From the outside, there is no indication that reputation played a role.
The Overlap with Cyber Risk
There is also a connection with cybersecurity that is often overlooked. Deloitte reports that 43% of family offices have experienced a cyberattack in the past two years. Omega Systems found that 83% are concerned about deepfake or impersonation risks.
When an incident occurs, the information rarely disappears. It becomes part of the available dataset. AI systems can then reference it repeatedly, long after the issue has been resolved. What starts as a technical problem becomes a reputational one.
A Risk That Sits Outside the Usual Framework
Wealth managers already monitor market risk, credit exposure, and regulatory issues. These are structured, measurable, and regularly reviewed. AI-generated reputation does not yet sit in that framework. But it has similar characteristics.
It can influence transactions. It can affect counterparties’ perceptions. It can change outcomes without being visible. That makes it difficult to manage unless it is actively considered.
A Different Way to Think About Reputation
The starting point is understanding how these systems behave. AI cannot cite what does not exist. If there are no authoritative sources, it will rely on uncontrolled ones. That is not a flaw. It is how the system works. The implication is that reputation is no longer defined only by what is published intentionally. It is also defined by what exists elsewhere, regardless of accuracy.
For advisors, this introduces a new layer of due diligence. Not just how a client is perceived by people, but how they are represented by systems that are increasingly shaping those perceptions. That shift is already underway. In many cases, it has already happened. The difference is that most people have not yet noticed.