
What AI Search Doesn’t Want You to Know

90% of AI Page One results don't make it into your customers' answers

By Stephen Young · 3 min read

I've just run a simple experiment. The results will shock you. I asked ChatGPT (GPT-5 with thinking) 20 key questions about AI agent site-readability. It "found" 160 pages to answer those questions. I then asked it to visit each page to check that the page exists and that it matches the query. The results say plenty about what really goes on behind the curtain of AI search.
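
For the curious, the verification step looks roughly like the sketch below. This is a minimal illustration, not the exact procedure I ran: it assumes the `requests` library, the `urls` list is a placeholder for the 160 links ChatGPT returned, and it only sorts pages into loaded, blocked, or missing.

```python
# Minimal sketch of the verification step (assumes the `requests` library;
# the `urls` list is a placeholder for the URLs ChatGPT returned).
import requests

urls = [
    "https://example.com/ai-overviews-guide",   # hypothetical entries
    "https://example.com/schema-for-ai",
]

results = {"ok": [], "blocked_or_error": [], "not_found": []}

for url in urls:
    try:
        # Many sites block unfamiliar clients, so a plain GET often fails
        # in exactly the way the experiment observed.
        resp = requests.get(url, timeout=10, allow_redirects=True)
        if resp.status_code == 404:
            results["not_found"].append(url)         # likely fabricated
        elif resp.status_code == 200:
            results["ok"].append(url)                # exists; content still needs checking
        else:
            results["blocked_or_error"].append(url)  # 403, 429, paywalls, etc.
    except requests.RequestException:
        results["blocked_or_error"].append(url)      # timeouts, DNS failures, TLS errors

for bucket, items in results.items():
    print(f"{bucket}: {len(items)}")
```

Even a check this basic surfaces the pattern: some links 404 outright, and many more sit behind blocks a scripted client can't get past. Whether a page that does load actually answers the query still takes a human (or a model) reading it.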


The Numbers Nobody Tells You About

Out of 160 URLs, only 12 could be checked conclusively. The remaining 148, or 92.5 percent, failed verification because the sites blocked or complicated access.

Of those 12 that ChatGPT could check:

  • 2 did not exist or redirected elsewhere (fabrication rate: 16.7 percent).
  • 1 existed but did not answer the query.
  • 9 existed and matched the query.

That means most of the links looked real but in practice could not be confirmed. And a noticeable chunk was simply fabricated.**


Why This Should Worry You

Most people trust AI search. When you get an answer with a neat list of links, you assume they’re the best ones. The assistant or agent looks confident and its answers speak to you with authority, but the reality is much messier. Pages are invented. Pages can’t be fetched. And agents often move ahead anyway, because their job is to give you an answer and keep the conversation flowing.

Here’s the hidden truth. In this test, I asked GPT-5 to visit each page and check if it really existed. In real AI search, assistants often collect page context to feed into their answer, not to validate the link. If your site sits among the 90 percent that can’t be reliably fetched, your content simply doesn’t make it into the assistant’s context window. That means your voice is excluded from the synthesis that users actually read.
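
If you want to know which side of that line your own site falls on, one quick check is whether your robots.txt even lets AI crawlers in. Here's a sketch using only Python's standard library; the user-agent tokens are real ones used by major AI crawlers (GPTBot is OpenAI's, ClaudeBot is Anthropic's), but the URLs are placeholders you'd swap for your own.

```python
# Check whether common AI crawlers are allowed to fetch a page,
# according to your robots.txt. Standard library only.
from urllib.robotparser import RobotFileParser

SITE = "https://example.com"      # placeholder: your domain
PAGE = f"{SITE}/your-key-page"    # placeholder: a page you want cited

# Real user-agent tokens used by major AI crawlers.
AI_AGENTS = ["GPTBot", "ClaudeBot", "PerplexityBot", "Google-Extended"]

parser = RobotFileParser()
parser.set_url(f"{SITE}/robots.txt")
parser.read()  # fetches and parses robots.txt

for agent in AI_AGENTS:
    allowed = parser.can_fetch(agent, PAGE)
    print(f"{agent}: {'allowed' if allowed else 'blocked'}")
```

robots.txt is only the first gate, of course: CDN bot protection, login walls, and JavaScript-only rendering can all stop a fetch even when robots.txt says yes.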

Think about the consequence. Your competitors’ content gets included. Yours is invisible, even if it’s better and on Google's Page One. And you’ll never see that loss in your dashboards. Your product or service just won't be in the answer.


Two Failures, One Outcome

The test revealed two intertwined failures.

  1. Fabrication under pressure. Ask for many sources, and ChatGPT often produces URLs that don't exist. They look right, but they’re hallucinations.
  2. Verification collapse. Nine out of ten times in this test, the assistant couldn’t confirm the page because the site blocked or obscured it. That leaves the model stitching answers together from the roughly ten percent it can read.

Both failures matter. Fabrication chips away at trust. Verification failure means the right content gets excluded. Together, they create AI answers that are polished on the surface but sub-optimal underneath, because they don't factor in all the available context.


The Stakes for Your Brand

Your customers will not second-guess what they see. They’ll trust the AI answer. They’ll assume the sources are real and complete. But if your content can't be fetched cleanly, it’s cut out before the agent even starts writing. That means fewer citations, fewer mentions, and less presence in the conversations that shape choices.

This isn’t about SEO rankings or impressions. It’s about whether your content can even participate in the new layer of AI-driven answers. If assistants can’t fetch it, they won’t use it.


The Queries

For transparency, here are the 20 queries I used to test ChatGPT. If you're reading this article, you're probably asking many of these questions yourself.

  • optimise for Google AI Overviews
  • make website content accessible to AI agents
  • improve visibility in AI assistants
  • generative engine optimisation GEO
  • structured data for AI assistants
  • schema.org for AI Overviews
  • how to get cited by AI assistants
  • measure share of AI voice
  • prepare documentation for LLMs
  • RAG website content best practices
  • site architecture for AI retrieval
  • AI agent readiness audit
  • JSON-LD examples for product catalogues
  • sitemaps for AI assistants
  • optimise knowledge base for ChatGPT
  • reduce hallucinations using company site content
  • improve FAQ content for AI search
  • monitor share of AI answers over time
  • best practices for AI retrievability audits
  • optimise compliance documentation for AI agents

In almost every case, the majority of the URLs provided could not be verified, and a large number were hallucinations. You can't rely on ChatGPT, Claude, Grok, etc. to give you a definitive, well-researched answer on topics this new.


What’s Really Going On

The polished surface of AI search hides a messy reality. Behind the answers, assistants struggle with fabrications, redirects, paywalls, and JavaScript-heavy pages. They don’t always check if the link works. They often can’t. And when they can’t, your content may never be seen or cited, even if it’s exactly what the user needs.
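
One rough way to see whether your own pages fall into that last bucket is to compare what a simple fetch returns with what a browser renders. The heuristic below is a crude sketch (it assumes the `requests` library, and the URL is a placeholder), but a page that comes back as mostly script tags with little visible text is a page many AI fetchers will effectively see as blank.

```python
# Crude heuristic: does this page carry its content in the raw HTML,
# or does it rely on JavaScript to render? (Assumes `requests`;
# the URL is a placeholder.)
import re
import requests

url = "https://example.com/your-key-page"  # placeholder
html = requests.get(url, timeout=10).text

# Strip <script> and <style> blocks, then strip all remaining tags.
visible = re.sub(r"(?is)<(script|style)[^>]*>.*?</\1>", "", html)
visible = re.sub(r"(?s)<[^>]+>", " ", visible)
text_len = len(" ".join(visible.split()))

ratio = text_len / max(len(html), 1)
print(f"visible text: {text_len} chars ({ratio:.1%} of the HTML)")
if ratio < 0.05:
    print("Warning: little visible text; the page may need JS to render.")
```

A full agent with a headless browser can sometimes get past this, but many fetchers won't wait for a render. If your content only exists after JavaScript runs, you're betting on the patient minority.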

That’s the hidden failure of AI search. The answers look so good that you trust them. But the process is fragile, and your content’s inclusion is far from guaranteed.


** All numeric results above come from the described test run.

About the author

Stephen Young
Updated on Sep 10, 2025