What OpenAI’s “How People Use ChatGPT” paper tells us about the future of the agentic Web

AI agents using websites are today’s biggest opportunity for content providers and merchants.

By Stephen Young · 4 min read

OpenAI’s new NBER paper on ChatGPT usage is a wake-up call for leaders. By July 2025, “18 billion messages were sent each week by 700 million users,” a reach that rivals the largest consumer platforms in record time.

The headline is not only the growth; it’s how people use AI. Nearly 80% of usage falls into “Practical Guidance, Seeking Information, and Writing,” which are everyday tasks in every organisation.

Work is big, but home and personal use are now even bigger. The paper reports daily messages classed as non-work rose from 53% in June 2024 to 73% in June 2025, which means consumer behaviour is normalising around AI help for everything, not just jobs.

When people do use ChatGPT for work, writing dominates. In June 2025, writing accounted for about 40% of work-related messages, with two-thirds focused on editing or improving text the user already has.

Some popular narratives don’t match the data. Only about 4.2% of all messages were about computer programming, and topics like companionship were a very small share.

Intent matters, and the mix is shifting. The authors estimate that “about 49% of messages are Asking, 40% Doing, and 11% Expressing,” and Doing is even more prominent within work usage.

Zooming in on work activities shows a clear pattern. Nearly half of all messages map to three Generalised Work Activities (GWAs): Getting Information, Interpreting Information for Others, and Documenting or Recording Information, which together account for 45.2% of usage.

Why this matters for your digital strategy

These findings point to a Web where agents, not only humans, do the first pass over your content. If most AI activity is asking questions, fetching facts, and turning knowledge into clear writing, then your website is no longer only a human experience; it is an input to automated reasoning and action, often without a human reading every page first.

In that world, your site either gets used by agents or it gets ignored. Agents will prefer content that is consistent, machine-navigable, and easy to transform into decisions, emails, briefs, FAQs, and forms, which mirrors the three dominant use cases identified in the paper.

This is also why writing quality and structure now drive discoverability. If writing is the top work task inside ChatGPT, then sites that supply concise, structured explanations, policy summaries, product attributes, and decision criteria will be over-represented in agent-generated outputs that users paste into documents, emails, or tickets.

It’s not just about content; it’s about actions. The Asking, Doing, Expressing taxonomy shows that people increasingly expect the model to do something, not just answer, which means your site should expose clear actions like “calculate”, “check”, “book”, or “submit”, with predictable inputs and outputs that an agent can call safely.
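To make that concrete, here is a minimal sketch of what a safe, agent-callable action could look like. The quoting action, field names, and business rules are all hypothetical, invented for illustration rather than taken from the paper.

```typescript
// A minimal sketch (hypothetical action, fields, and rules throughout)
// of an agent-callable action with a predictable contract: typed inputs,
// typed outputs, and machine-readable failure codes instead of prose.

interface QuoteRequest {
  productId: string; // a stable identifier, not a marketing name
  quantity: number;  // whole items
  postcode: string;  // used for delivery eligibility
}

interface QuoteResponse {
  eligible: boolean;
  totalPence?: number; // integer minor units avoid floating-point ambiguity
  reason?: string;     // machine-readable code, e.g. "QUANTITY_OUT_OF_RANGE"
}

// An agent can call this safely because inputs are validated and the
// output shape is the same whether the answer is yes or no.
function quote(req: QuoteRequest): QuoteResponse {
  if (req.quantity < 1 || req.quantity > 100) {
    return { eligible: false, reason: "QUANTITY_OUT_OF_RANGE" };
  }
  if (!/^[A-Z]{1,2}\d/i.test(req.postcode)) {
    return { eligible: false, reason: "POSTCODE_NOT_SERVED" };
  }
  return { eligible: true, totalPence: req.quantity * 1499 };
}
```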

Three takeaways for leaders

  • Design for agents. Treat your site like a product surface for machines as well as people, so every key page has a single purpose, stable identifiers, and the facts an agent needs to answer a question or trigger a task, in a format it can parse (see the sketch after this list).
  • Make information unambiguous. Reflect the GWA pattern by prioritising “Getting Information”, “Interpreting”, and “Documenting” flows, which means clean tables, definitions, process steps, and example outputs that align to common agent prompts.
  • Invest in writing as a capability. Since writing dominates work usage, ensure your pages include authoritative, concise copy plus canonical summaries that agents can reuse verbatim, reducing hallucination risk and improving downstream edits.
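
As one sketch of “a format it can parse”: the facts a page asserts can be mirrored in a JSON-LD block using schema.org vocabulary. The product, identifiers, and values below are invented for illustration.

```typescript
// A sketch of page facts mirrored as JSON-LD (schema.org vocabulary);
// the product, identifiers, and values are invented for illustration.
const productFacts = {
  "@context": "https://schema.org",
  "@type": "Product",
  // A stable, addressable identifier for this exact product:
  "@id": "https://example.com/products/widget-3000#product",
  sku: "WID-3000",
  name: "Widget 3000",
  offers: {
    "@type": "Offer",
    price: "14.99",
    priceCurrency: "GBP",
    availability: "https://schema.org/InStock",
  },
};

// Serialise and embed alongside the human-readable copy.
const embedded =
  `<script type="application/ld+json">${JSON.stringify(productFacts)}</script>`;
```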

From “readable by people” to “usable by agents”

Agents favour clarity, structure, and grounded claims. Give every important concept a dedicated URL, publish machine-readable data alongside human copy, and include explicit constraints, eligibility rules, units, and cut-offs so agents don’t guess at business logic.

Agents need proof as well as prose. Where possible, cite your own sources, include version dates, and expose small JSON snippets or CSV downloads that confirm numbers and definitions, since the model can lift these into the user’s draft or decision tree.
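
For instance, a versioned facts block for a returns policy could look like the sketch below; every name and value here is illustrative, not a real policy.

```typescript
// A sketch of a small, versioned facts block published alongside the
// human-readable policy page (all names and values are illustrative).
// An agent can lift these fields verbatim into a draft or decision tree
// instead of re-deriving them from paragraphs.
export const returnsFacts = {
  policy: "returns",
  windowDays: 30,                        // calendar days from delivery
  restockingFeePercent: 0,               // 0 means returns are free
  excludedCategories: ["perishables", "custom-orders"],
  version: "2025-09-17",                 // date last confirmed
  source: "https://example.com/policies/returns",
} as const;
```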

Agents recognise repeatable workflows. If the same question appears often in support, sales, or operations, create a page that answers it in steps, shows a filled example, and provides a lightweight endpoint or structured form so the agent can complete the step on behalf of the user.
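
A “lightweight endpoint” for such a workflow can be very small. The sketch below uses Express; the route, fields, and eligibility rule are hypothetical stand-ins for whichever question your teams answer most often.

```typescript
// A sketch of a lightweight workflow endpoint using Express; the route,
// fields, and eligibility rule are hypothetical.
import express from "express";

const app = express();
app.use(express.json());

// One repeatable support question, exposed as a single structured step.
app.post("/api/check-eligibility", (req, res) => {
  const { accountAgeDays, region } = req.body ?? {};
  if (typeof accountAgeDays !== "number" || typeof region !== "string") {
    res.status(400).json({ error: "MISSING_OR_INVALID_FIELDS" });
    return;
  }
  // The business rule lives here once, not buried in prose:
  const eligible = accountAgeDays >= 90 && region === "UK";
  res.json({ eligible, rule: "90+ days tenure, UK only", version: "2025-09-17" });
});

app.listen(3000);
```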

A quick checklist for the agentic Web

  • Stabilise identifiers. Use stable URLs, anchors, and schema so facts and actions are addressable by tools, not only visible to readers.
  • Expose structure. Add tables, JSON-LD, and predictable headings so the same fact appears once, in one canonical place, with machine-readable context.
  • Publish exemplars. Include model-ready examples of emails, briefs, queries, and forms to match the dominant “Doing” and “Writing” behaviours in the paper.
  • Show constraints. Put dates, thresholds, and rules in explicit fields, not buried in paragraphs, so agents can reason and validate without guessing.
  • Instrument outcomes. Track which pages agents fetch and which actions follow, then tighten copy and structure around those high-value paths, just as you would for human funnels (see the sketch after this list).
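
A first pass at that instrumentation can be a few lines of middleware. The sketch below tags requests whose User-Agent contains a known agent or crawler token; the token list is illustrative, so check each vendor’s documentation for its current strings.

```typescript
// A sketch of outcome instrumentation: tag requests whose User-Agent
// matches a known AI agent or crawler token and log the path fetched.
// The token list is illustrative; vendors document their current strings.
import express from "express";

const AGENT_TOKENS = ["GPTBot", "ChatGPT-User", "PerplexityBot", "ClaudeBot"];

const app = express();

app.use((req, _res, next) => {
  const ua = req.get("user-agent") ?? "";
  const agent = AGENT_TOKENS.find((token) => ua.includes(token));
  if (agent) {
    // In production, ship this to your analytics pipeline instead.
    console.log(JSON.stringify({ agent, path: req.path, at: new Date().toISOString() }));
  }
  next();
});
```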

What changes next

The centre of gravity moves from search to synthesis. Because the three largest use cases are guidance, facts, and writing, the first answer your customer sees will often be a synthesis produced in an AI window, not a click to your site. You must optimise for being selected and cited inside that synthesis, not only for rankings.

Content quality becomes operational, not just editorial. Teams will standardise definitions, maintain canonical data blocks, and ship small “how-to” endpoints that align to common agent tasks, which reduces friction in the Asking and Doing steps and improves user trust at the point of decision.

Trust signals will be measured by agents, not assumed by readers. The paper also shows satisfaction with interactions improved over time, which suggests models reward clearer inputs with better outcomes, so your goal is to make your site the easiest source to reuse correctly.

Final thought

If you make your website easy for agents to ask, do, and write with, you will win twice. You will appear more often in AI answers, and you will make work faster for your own teams who now draft, decide, and document with these tools every day.

About the author

Stephen Young
Updated on Sep 17, 2025