Technology
How AI Search Reshapes Reputation Risk For Family Offices
There are all kinds of ways in which a family office's reputation can be harmed. AI poses fresh challenges but, addressed in the right way, the risks can be tackled, the author of this article argues.
Tony McChrystal, founder of Pavesen, a London-based firm that advises family offices and high-profile individuals on digital reputation strategy, examines how AI-powered searches are creating new vulnerabilities for privacy-focused wealth holders. The editors are pleased to share these opinions; the usual editorial disclaimers apply to views of guest writers. Email tom.burroughes@wealthbriefing.com and amanda.cheesley@clearviewpublishing.com if you have any questions or comments.
Nowadays, when a potential co-investor, journalist, or counterparty decides to research a family office principal, they may not run a traditional Google search. Instead, they look for answers on AI large language model (LLM) platforms such as ChatGPT, Perplexity, or Google Gemini and receive a single, synthesised narrative. There is no list of links to check and no chance to weigh sources, just a confident, authoritative-sounding answer built from whatever the AI can find.
The scale of this shift is difficult to overstate. At its peak, ChatGPT attracted approximately six billion monthly visits, making it one of the most visited sites globally, while Google Gemini has over 750 million monthly users. Research by Semrush in 2025 indicates that AI-powered search could match or even surpass the economic value of traditional Google search by 2027, partly because AI search visitors convert at 4.4 times the rate of traditional Google traffic. First impressions created by AI therefore have an outsized impact.
For family offices, these changes demand an immediate response. Reputation is no longer only about the first page of Google results, but about the story AI platforms tell and the references they draw on.
Where AI gets information from
An October 2025 study by Yext analysed 6.8 million AI citations across ChatGPT, Gemini, and Perplexity, and found that 86 per cent of sources are content that brands control: first-party websites make up 44 per cent of citations, and business listings another 42 per cent. Forums like Reddit contribute just 2 per cent when location and query context are applied.
This is comforting for businesses with a strong online presence. It is a very different matter for private individuals and family offices who have deliberately kept their online footprint to a minimum. If there are no first-party sources, AI platforms have no authoritative sources to cite.
So what does AI cite instead? A study of 30 million AI citations by Profound shows that Wikipedia accounts for nearly half of ChatGPT's most cited sources. A separate Semrush analysis of 150,000 citations across 5,000 keywords found Reddit mentioned in about 40 per cent of LLM-generated responses. The average Reddit post cited by AI platforms is about two years old, with some cited content dating back to 2019.
Most importantly, Semrush found that approximately 90 per cent of ChatGPT's citations are webpages ranking in traditional search positions 21 and beyond. Traditional reputation strategies would dismiss content on page three or page 10 of Google as irrelevant, yet this content may well be the main source AI uses to build its narrative.
The information imbalance
Recent research by Pavesen examined the information ecosystems of top-tier business leaders and ultra-wealthy individuals to establish which sources AI platforms draw on. Each source was categorised as controlled (editorial authority rests with the individual), semi-controlled (like Wikipedia, which can be monitored but not directly edited), or uncontrolled (including news articles, Reddit threads, court records, books, activist sites, and third-party social media posts).
In every case the same pattern emerged: uncontrolled sources outnumbered controlled sources by approximately eight to one. Most people had only two to five controlled sources (official website, company biography, LinkedIn profile, perhaps an authorised interview), while uncontrolled sources outside their sphere of influence numbered from 30 to over 300.
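As a rough illustration of how this categorisation and ratio could be computed, here is a minimal sketch; the domain lists and citation sample are hypothetical examples, not the study's data:

```python
# Hypothetical sketch: classify cited source domains into the three
# categories used in the Pavesen study and compute the
# uncontrolled-to-controlled ratio. Domain lists are illustrative only.

CONTROLLED = {"familyoffice-example.com", "linkedin.com"}  # editorial authority rests with the individual
SEMI_CONTROLLED = {"wikipedia.org"}                        # can be monitored but not directly edited
# Everything else (news sites, Reddit, court records...) counts as uncontrolled.

def categorise(domain: str) -> str:
    if domain in CONTROLLED:
        return "controlled"
    if domain in SEMI_CONTROLLED:
        return "semi-controlled"
    return "uncontrolled"

def control_ratio(citations: list[str]) -> float:
    """Ratio of uncontrolled to controlled citations in a sample."""
    counts = {"controlled": 0, "semi-controlled": 0, "uncontrolled": 0}
    for domain in citations:
        counts[categorise(domain)] += 1
    return counts["uncontrolled"] / max(counts["controlled"], 1)

citations = ["reddit.com", "nytimes.com", "wikipedia.org",
             "activist-site.org", "courtlistener.com",
             "familyoffice-example.com", "reddit.com",
             "books.google.com", "tabloid-example.co.uk"]
print(control_ratio(citations))  # 7 uncontrolled vs 1 controlled -> 7.0
```

In this toy sample the ratio is seven to one, close to the roughly eight-to-one imbalance the study reports.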
This imbalance matters because AI platforms synthesise all sources into a single, unified narrative. Although official websites carry somewhat greater weight, a company biography and a Reddit thread can feed the same answer. When controlled content is scarce, the AI-generated story is built largely from material the individual had no role in creating or approving.
Why the damage deepens
The Pavesen analysis identified several patterns that increase the risk to certain individuals. Controversy sections in Wikipedia articles were among the most heavily cited AI source material: for every individual analysed, AI systems drew disproportionately on the parts of Wikipedia articles relating to controversies. This is consistent with Wikipedia being the single most cited source for ChatGPT.
Comprehensive investigative reporting fuels negative narratives that AI systems reproduce for years. Popular books exposing corporate scandals give AI platforms a quotable bank of precise negative detail that routine news reporting does not: a book forms a permanent, structured archive that AI systems can draw on long after news articles fade from view.
Attempts to manipulate Wikipedia content have backfired in multiple documented cases: when paid editing by individuals or their representatives was uncovered, the discovery itself became an AI-citable scandal that amplified the original reputation issue.
Activist campaign websites maintain organised, permanent archives of negative content. Because these purpose-built sites are structured and regularly updated, AI platforms can parse them easily, unlike news articles that fall out of circulation.
The overlapping threat
These AI-specific vulnerabilities coincide with a pre-existing issue. The Deloitte Family Office Cybersecurity Report 2024 revealed that 43 per cent of family offices worldwide had been victims of a cyberattack within the last 24 months. Family offices handling more than $1 billion in assets were significantly more susceptible (62 per cent, compared with 38 per cent for smaller offices). Yet 31 per cent still lack a cyber incident response plan, and digital reputation management is almost non-existent in family office governance.
The Omega Systems 2025 survey underlined the issue, showing that 83 per cent of family offices worry about being impersonated in deepfake campaigns. However, only 60 per cent were confident that their employees could recognise AI-based social engineering threats. In this environment, a data breach translates directly into reputational harm.
Taking control of the narrative
Managing reputation in the AI era demands entirely new skill sets. A family office can start with an AI reputation audit: query ChatGPT, Google Gemini, Perplexity, Claude, and Microsoft Copilot about the office and its principals, note the cited sources, and check the narratives against verified, up-to-date facts.
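As a rough sketch of how the first step of such an audit could be organised, the following builds a consistent set of queries per platform and a record for logging what each platform cites; the platform list follows the article, but the query templates, names, and record fields are hypothetical:

```python
# Hypothetical sketch: generate a repeatable AI reputation audit plan.
# Each record pairs one AI platform with one query; cited sources and
# factual errors are filled in manually (or via API) as the audit runs.

from dataclasses import dataclass, field

PLATFORMS = ["ChatGPT", "Google Gemini", "Perplexity",
             "Claude", "Microsoft Copilot"]

QUERY_TEMPLATES = [
    "Who is {name}?",
    "What is {name} known for?",
    "Has {name} been involved in any controversies?",
    "Who are the principals of {office}?",
]

@dataclass
class AuditRecord:
    platform: str
    query: str
    cited_sources: list[str] = field(default_factory=list)   # noted during the audit
    factual_errors: list[str] = field(default_factory=list)  # checked against verified facts

def build_audit_plan(name: str, office: str) -> list[AuditRecord]:
    """One record per platform/query pair, ready to fill in."""
    return [
        AuditRecord(platform=p, query=t.format(name=name, office=office))
        for p in PLATFORMS
        for t in QUERY_TEMPLATES
    ]

plan = build_audit_plan("Jane Example", "Example Family Office")
print(len(plan))  # 5 platforms x 4 queries = 20 records
```

Running the same query set against every platform on a regular schedule makes drift in the AI-generated narrative visible over time.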
Addressing the vulnerabilities the audit uncovers means creating verified content that AI systems will recognise as authoritative: verified-domain biographies, published interviews, thought-leadership pieces, and media coverage from high-authority outlets.
The quality of media coverage matters. Reactive coverage, written without the subject's input, becomes permanent AI background material regardless of its accuracy. Proactive placement, in which the subject helps shape the story, ensures that AI reflects reality.
Gartner predicts that, by 2028, organic visits to websites could fall by 50 per cent or more as AI-first search grows. Families that build a robust, authoritative digital presence now will be the winners.
The figures speak for themselves: 86 per cent of AI citations can come from sources an individual or organisation controls, but only if those sources exist. Digital silence as a privacy strategy may therefore leave AI to characterise an individual through content they neither control nor endorse.
Tony McChrystal
About the author
Tony McChrystal is the founder of Pavesen, a London-based reputation management firm advising high-profile individuals, family offices and C-suite executives on reputation risk and digital footprint strategy.