The ultimate guide to AEO: How to get ChatGPT to recommend your product | Ethan Smith (Graphite)
When chat models stop being curiosities and start answering customers
Across a long conversation about algorithms, citations, and incentives, Ethan Smith reframes a familiar problem: search is not dying; it is multiplying. What used to be a battle for blue links has become a contest to appear in short, authoritative answers assembled by large language models. That shift changes the rules of discovery and rewards a different set of behaviors — speed, breadth of citation, and attention to the long conversational tail of user queries.
Why mentions matter more than single-page rank
Traditional ranking prizes a single canonical result. AEO — answer engine optimization — prizes ubiquity. Because modern answer layers summarize many sources, they are more likely to surface brands that appear repeatedly across video, forums, affiliates, and product write-ups. Ethan points to concrete evidence: companies like Webflow saw dramatically higher conversion rates from LLM-driven traffic, and early-stage ventures can appear in answers almost overnight if they are mentioned across those citation types.
The head and the long tail
The head remains important, but it behaves differently. A brand that occupies the top search result may not win the answer box if rivals are cited more often. Conversely, the long tail — highly specific, multi-step questions that only a conversation can generate — has expanded. That creates an opening for focused content and help articles that address use-case minutiae. Where search once punished fragmentation, chat-friendly discovery rewards being the sole or clearest explanation for a rare question.
Two tracks: on-site depth and off-site breadth
The path forward runs on two rails. On-site, Ethan advocates for topical pages that answer not only the main question but every plausible follow-up: features, integrations, usage patterns, and edge cases. Off-site, the goal is to increase citation density: YouTube and Vimeo videos, trusted media affiliates, and community platforms such as Reddit and Quora. Each citation class needs its own playbook — paid placement can be quick and decisive, while community-driven mentions require authenticity and patience.
Reddit as a credibility engine
Reddit stands out because it is both heavily cited and community-policed. Attempts to game it with fake accounts are quickly neutralized, and in the context of answer engines that human curation has outsized value. The tactical approach that works: participate as a real person, disclose affiliation, and provide helpful, specific answers — a handful of genuine comments can be far more valuable than a thousand spammy mentions.
Measure like a scientist
Ethan emphasizes experimentation and reproducibility. Track how often you appear across different LLMs, treat groups of questions as test and control cohorts, and only trust moves that replicate across runs. The relatively high variance of answers — and the multiplicity of surfaces such as ChatGPT, Gemini, Perplexity, and others — make careful measurement essential to avoid wasting effort on fashionable but ineffective tactics.
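As a rough illustration of that measurement discipline, the sketch below computes a simple share-of-voice metric for a brand across several answer surfaces. Everything here is a placeholder: the recorded answers are invented, and in practice each surface would be sampled repeatedly through its own API, since answers vary run to run.

```python
from collections import defaultdict

# Hypothetical recorded answers: (surface, question, answer_text).
# In practice these would come from repeated queries to each surface.
answers = [
    ("chatgpt", "best website builder?", "Webflow and Wix are popular choices."),
    ("chatgpt", "no-code site for startups?", "Many teams use Webflow."),
    ("gemini", "best website builder?", "Consider Wix or Squarespace."),
    ("perplexity", "best website builder?", "Webflow is frequently recommended."),
]

def share_of_voice(records, brand):
    """Fraction of answers mentioning the brand, grouped by surface."""
    hits, totals = defaultdict(int), defaultdict(int)
    for surface, _question, text in records:
        totals[surface] += 1
        if brand.lower() in text.lower():
            hits[surface] += 1
    return {s: hits[s] / totals[s] for s in totals}

print(share_of_voice(answers, "Webflow"))
# → {'chatgpt': 1.0, 'gemini': 0.0, 'perplexity': 1.0}
```

The same scaffolding extends naturally to test/control cohorts: split the question list into two groups, apply a tactic to one, and compare share-of-voice deltas across repeated runs before trusting the result.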
The peril and promise of automated content
The conversation turns urgent when it considers AI-generated volumes of derivative writing. Ethan’s research suggests that unedited, fully automated content does not reliably win in search or answer layers, and that feeding models derivatives of derivatives risks collapsing diversity of opinion (a phenomenon sometimes called model collapse). The healthier path is AI-assisted content: human expertise amplified by tooling, rather than replaced by it.
Hidden opportunity: help centers and support content
Perhaps the most under-discussed channel is support documentation. Help articles inherently answer follow-up questions about integrations, edge cases, and capabilities — the very queries chat users ask. Small structural changes, like moving support from a subdomain to a subdirectory and improving internal linking, make help content far more visible to the answer layer. In practice, community-sourced help that fills the long tail can become a primary citation source.
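To make the subdomain-to-subdirectory move concrete, here is a minimal sketch of building a redirect map for migrated help articles. The domains and the target subdirectory layout are assumptions for illustration, not a prescription for any particular site or CMS.

```python
from urllib.parse import urlparse

# Hypothetical URLs currently living on a help subdomain.
help_urls = [
    "https://help.example.com/articles/integrations",
    "https://help.example.com/articles/billing-faq",
]

def subdomain_to_subdirectory(url, root="https://example.com/help"):
    """Map a help-subdomain URL to its subdirectory equivalent,
    preserving the article path."""
    path = urlparse(url).path
    return root + path

# old URL -> new URL, e.g. the input to a bulk 301-redirect rule.
redirects = {u: subdomain_to_subdirectory(u) for u in help_urls}
print(redirects["https://help.example.com/articles/integrations"])
# → https://example.com/help/articles/integrations
```

A map like this is only the first step; the migration also needs server-side 301 redirects and updated internal links so the answer layer consolidates authority on the main domain.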
What this means for product and growth
The strategic takeaway is both simple and unsettling: you do not get a choice about whether your product appears in chat-driven discovery — the ecosystem will index and summarize it whether you like it or not. The pragmatic response is to choose participation over abdication: map the questions you want to answer, instrument answer-tracking, target the citation types that matter for your category, and run reproducible experiments to separate hype from impact. The work rewards clarity and specificity — the quieter craftsmanship of answering real user questions — more than the spray-and-pray tactics of earlier eras.
Many channels will continue to coexist, and each will evolve; the businesses that treat LLM-based answers as a new front in product-market fit will capture disproportionate value because chat traffic tends to be highly qualified. In that reality, being the best answer matters less than being the most reliably present answer across contexts — and that consistency often arises from disciplined measurement, community authenticity, and the humility to write something genuinely new.
Key points
- AEO combines LLMs with retrieval (RAG); optimize for citations rather than single-page rank.
- Early-stage companies can show up quickly by earning mentions across video, forums, and blogs.
- Chat-driven traffic often converts at substantially higher rates than traditional search traffic.
- Track share-of-voice across multiple LLM surfaces and treat questions as an experimentable asset.
- Reddit and user-generated content are trusted citations; authenticity beats scaled spam attempts.
- Help center pages and subdirectory support content are high-impact places to answer follow-ups.
- Run test/control experiments on groups of questions and reproduce results before scaling tactics.




