Query Fan-Out for Commerce: The Hidden Retrieval Layer

A shopper types "what's the best stroller for a 6-month-old that also fits in a small SUV trunk." That single sentence does not get answered by one search. Behind the scenes, the AI assistant decomposes it into eight to twelve sub-queries, runs them in parallel, scores the passages it gets back, and assembles an answer. Most product pages are tuned for the original query and invisible to the sub-queries that actually do the work.
TL;DR: Query fan-out is the technique AI search engines use to break one shopping prompt into multiple parallel sub-queries. Google has confirmed it powers AI Mode, and ChatGPT uses similar decomposition. A Surfer SEO study of 173,902 URLs found that 68% of AI-cited pages do not rank in the top 10 organic results, but pages ranking for both the main query and fan-out sub-queries are 161% more likely to be cited. For commerce teams, that means optimizing for one head term is incomplete. You need passage-level visibility on the sub-queries AI assistants actually fire.
What Is Query Fan-Out, In Plain Terms?
Query fan-out is the process by which an AI search system takes one user question and splits it into multiple narrower sub-queries, runs each one against an index, and synthesizes the results into a single answer. Google described it directly at I/O 2025 when launching AI Mode: the system "breaks down your question into subtopics" and "issues a multitude of queries simultaneously." Deep Research mode fires hundreds. Standard AI Mode typically fires eight to twelve.
ChatGPT does the same thing, even when it does not call it that. When a shopper asks ChatGPT a complex shopping question, the model plans the answer by listing the dimensions it needs (price band, use case, fit, ingredients, certifications), then retrieves passages for each. The exact mechanic is internal, but the visible behavior matches the Google pattern. Pages get evaluated at the passage level, not at the page level.
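To make the mechanic concrete, here is a minimal sketch of the fan-out loop in Python. It is a pattern illustration, not Google's or OpenAI's actual pipeline: `llm_decompose`, `search_index`, and `llm_synthesize` are hypothetical stand-ins for the model and index calls, with hardcoded demo values.

```python
# A minimal sketch of the fan-out pattern -- not a real assistant's pipeline.
# The three functions below are hypothetical stand-ins for model/index calls.
from concurrent.futures import ThreadPoolExecutor

def llm_decompose(prompt: str) -> list[str]:
    # Hypothetical: a model plans 8-12 sub-queries; hardcoded for the demo.
    return [
        "best strollers for 6-month-olds",
        "compact fold stroller trunk dimensions",
        "stroller fits small SUV trunk",
    ]

def search_index(sub_query: str) -> list[dict]:
    # Hypothetical: retrieve scored passages for ONE sub-query. Each passage
    # competes on its own merits; its parent page is never ranked as a whole.
    return [{"sub_query": sub_query, "passage": f"<top passage for: {sub_query}>"}]

def llm_synthesize(prompt: str, passages: list[dict]) -> str:
    # Hypothetical: compose one answer that cites the winning passages.
    return f"Answer to {prompt!r} built from {len(passages)} passages."

def answer(prompt: str) -> str:
    sub_queries = llm_decompose(prompt)        # one prompt -> many sub-queries
    with ThreadPoolExecutor() as pool:         # fired in parallel, not serially
        batches = pool.map(search_index, sub_queries)
    passages = [p for batch in batches for p in batch]
    return llm_synthesize(prompt, passages)

print(answer("best stroller for a 6-month-old that fits a small SUV trunk"))
```

The detail that matters for optimization is in `search_index`: retrieval happens per sub-query, so a page's visibility is the sum of the sub-queries its passages win.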
The shift from page-level ranking to passage-level retrieval is the most consequential change in search since mobile-first indexing. It explains a finding that surprises every SEO team that runs the numbers. According to a Surfer SEO study of 173,902 URLs, 67.82% of pages cited in AI Overviews do not rank in the top 10 organic results for the main query at all. They are getting pulled in because they answer specific sub-queries.
Key stat callout: 68% of AI-cited pages are not in the top 10 organic results. Pages that rank for the main query AND fan-out sub-queries are 161% more likely to be cited than pages ranking only for the main query (Surfer SEO, December 2025).
Why This Matters for Commerce
Commerce queries are the densest fan-out targets on the open web. A simple "best running shoes for flat feet under $150" already implies at least seven sub-queries: arch support type, pronation control, cushioning, brand options, recent reviews, price range, sizing reliability. A complex query for an appliance, a stroller, or a skincare product can easily fan out into twelve.
Each sub-query gets answered from a different passage. The product page that wins the "what is the best stroller" sub-query may lose the "fits a Subaru Forester trunk" sub-query because that detail is not on the page, or because it is buried in a review carousel the retrieval system never sees. The shopper sees one answer. The brand sees a partial citation, a competitor product card, or nothing. This is the surface generative engine optimization addresses.
The pattern also explains a frustration commerce teams report. They rank well for head terms in classic Google. Their AI traffic is flat or down. Classic SEO grades the page against one query, while AI search grades passages against many. A page winning on one query and silent on eleven others gets one citation slot at best, often zero, because the synthesizer prefers pages that reinforce the answer across multiple sub-queries.
How AI Decomposes a Commerce Query (Worked Example)
Here is what happens when a shopper sends "best fragrance-free moisturizer for sensitive skin under forty dollars" to AI Mode or ChatGPT. The model emits a plan that looks like this set of fan-out sub-queries:
| Sub-query | What it tests on your product page |
|---|---|
| What are top fragrance-free moisturizers? | Whether your title and primary description include "fragrance-free" |
| Best moisturizers for sensitive skin? | Whether your structured attributes mark sensitive skin compatibility |
| Hypoallergenic moisturizer ingredients to look for | Whether your ingredient list is parseable, not just an image |
| Moisturizer reviews in budget tier | Whether your price is in feed and structured data, not only the cart page |
| Dermatologist-recommended fragrance-free options | Whether endorsements or certifications are surfaced in copy |
| Comparison of CeraVe, Vanicream, La Roche-Posay | Whether you appear in third-party comparison content |
| What ingredients trigger sensitive skin? | Whether your "what to avoid" copy is on-page |
| Long-lasting moisturizer hydration claims | Whether claims are backed by reviewed attributes |
Most product pages answer two or three of these, mostly the head-term ones. The pages that get cited tend to answer six or more. That is the visibility delta. It is also the product-card-vs-mention gap that brands run into when they audit their AI surfaces: a brand might be mentioned 80% of the time but rendered as a card with image and price only 30% of the time, because the structured data the assistant needs to draw a card is not where the assistant looks for it.
What This Means For Optimization
If passage-level retrieval is the rule and fan-out is the mechanism, then commerce optimization needs three new disciplines on top of classic SEO. The first is sub-query mapping. For every priority SKU, list the eight to twelve sub-queries an AI assistant would emit and grade your page against each. This is the same logic answer engine optimization uses for FAQ pages, applied to product detail pages.
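A first pass at sub-query mapping can be scripted. The sketch below assumes each sub-query reduces to a few text signals, which is a simplification; the sub-queries, signals, and page data are all illustrative, and the output is a starting scorecard for human review, not a verdict.

```python
# Sketch: grade one SKU page against its fan-out sub-queries.
# Sub-queries, signals, and page data are illustrative.
PAGE_TEXT = "Fragrance-free daily moisturizer for sensitive skin. Ceramides, glycerin."
STRUCTURED = {"price": 32.00, "skinType": "sensitive", "fragranceFree": True}

SUB_QUERIES = {
    "top fragrance-free moisturizers": ["fragrance-free"],
    "best moisturizer for sensitive skin": ["sensitive skin"],
    "moisturizers under $40": ["price"],  # expect a structured price, not prose
    "dermatologist-recommended options": ["dermatologist"],
}

def grade(page_text: str, structured: dict) -> dict:
    text = page_text.lower()
    attrs = {k.lower() for k in structured}    # attribute names count as signals
    scorecard = {}
    for sub_query, signals in SUB_QUERIES.items():
        hit = any(s in text or s in attrs for s in signals)
        scorecard[sub_query] = "covered" if hit else "MISSING"
    return scorecard

for sub_query, status in grade(PAGE_TEXT, STRUCTURED).items():
    print(f"{status:8} {sub_query}")
```

On the sample data, the page covers three sub-queries and misses the dermatologist-endorsement one, which is exactly the kind of gap the mapping exercise exists to surface.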
The second is structured product data depth. A product page with five attributes wins one or two sub-queries by accident. A product page with thirty well-defined attributes wins six to eight on purpose. Structured product data is the input retrieval systems trust, because it is unambiguous, machine-parseable, and consistent across listings. Google's own guidance, as we covered in the product data infrastructure post, reflects this directly.
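Here is what that depth looks like expressed as schema.org Product markup with `additionalProperty` entries, built in Python and emitted as JSON-LD. The attribute names are illustrative; use whatever vocabulary your category and feed spec actually define.

```python
# Sketch: a product record with deep, machine-parseable attributes,
# serialized as schema.org JSON-LD. Attribute names are illustrative.
import json

def prop(name: str, value) -> dict:
    return {"@type": "PropertyValue", "name": name, "value": value}

product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Daily Barrier Moisturizer",
    "offers": {"@type": "Offer", "price": "32.00", "priceCurrency": "USD"},
    "additionalProperty": [
        prop("fragranceFree", True),
        prop("skinType", "sensitive"),
        prop("hypoallergenic", True),
        prop("keyIngredients", "ceramides, glycerin, niacinamide"),
        prop("freeOf", "fragrance, parabens, essential oils"),
        prop("dermatologistTested", True),
        prop("useCase", "daytime, winter, post-retinoid"),
        # a deep page carries 25-30 of these; each one answers a sub-query
    ],
}
print(json.dumps(product, indent=2))
```

Each `PropertyValue` is a potential passage-level answer: "fragranceFree: true" wins the fragrance-free sub-query without depending on where the phrase lands in body copy.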
The third is passage-level monitoring. Track which sub-queries return your products and which do not, by category, by persona, by AI surface. Tools like our AI Readiness Report decompose category prompts into fan-out sub-queries and grade visibility against each. The point is not to win every sub-query. The point is to see which ones you lose, and decide which are worth fixing first.
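A monitoring baseline can be as simple as logging one record per probe and rolling up citation rate by sub-query. The records and field names below are illustrative, not any particular tool's schema.

```python
# Sketch: citation rate by sub-query from logged probe results.
# Records and field names are illustrative.
from collections import defaultdict

probes = [
    {"sub_query": "stroller fits small SUV trunk", "surface": "ai_mode", "cited": False},
    {"sub_query": "stroller fits small SUV trunk", "surface": "chatgpt", "cited": False},
    {"sub_query": "best stroller for 6-month-old", "surface": "ai_mode", "cited": True},
    {"sub_query": "best stroller for 6-month-old", "surface": "chatgpt", "cited": True},
]

tally = defaultdict(lambda: [0, 0])            # sub-query -> [citations, probes]
for p in probes:
    tally[p["sub_query"]][0] += p["cited"]
    tally[p["sub_query"]][1] += 1

# Sort ascending so the losing sub-queries surface first.
for sub_query, (cited, total) in sorted(tally.items(), key=lambda kv: kv[1][0] / kv[1][1]):
    print(f"{cited / total:5.0%}  {sub_query}")
```

The ascending sort is deliberate: the output is a fix list, so the sub-queries you lose everywhere appear at the top.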
This is also why classic SEO and AI search are converging in their mid-game and diverging in their endgame. Classic SEO still rewards a clean head-term page, and that traffic is still real. But citation share, the new currency that determines whether your brand is the one recommended in the AI's answer, is decided at the passage level, on sub-queries you may never have targeted as keywords.
What to Do This Week
- Pick five priority SKUs. For each, write down the head-term query a shopper would use. Then list the eight to twelve sub-queries that head term fans out into. Use Google AI Mode or ChatGPT to generate the list.
- Grade each SKU page passage by passage. For each sub-query, mark whether the answer is on the page in plain text, in structured data, in image alt text, or missing. Missing entries are your prioritized fix list.
- Audit attribute depth. Count the structured attributes on each page. Fewer than 15 is a meaningful gap. Aim for 25 to 30, including use cases, allergens, fit notes, certifications, compatible accessories, sizing. A counting sketch follows this list.
- Run the same SKUs through an AI shopping assistant directly. Note where you appear as a product card, where you are mentioned by name, and where a competitor wins. The card-vs-mention gap is the easiest gap to fix because it usually traces to missing feed data.
- Set a passage-level monitoring baseline. Track citation rate by sub-query, not just head term. Re-measure monthly. The brands that win the next 18 months will operate with sub-query dashboards, not page-rank dashboards.
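The attribute-depth audit from step three is a one-loop script. The feed records below are hypothetical, and the thresholds mirror the numbers above.

```python
# Sketch: flag thin product records in a feed. Records are hypothetical;
# thresholds follow the 15 / 25-30 guidance above.
catalog = {
    "SKU-1042": {"name": "Daily Barrier Moisturizer", "price": 32.00,
                 "skinType": "sensitive"},
    "SKU-2210": {"name": "Compact City Stroller", "price": 329.00,
                 "foldType": "one-hand", "foldedDepthIn": 13.5,
                 "weightLb": 18.9, "ageRange": "6m+"},
}

DEPTH_TARGET = 25
for sku, attrs in catalog.items():
    depth = len(attrs)
    flag = "ENRICH" if depth < 15 else ("ok" if depth >= DEPTH_TARGET else "thin")
    print(f"{sku}: {depth} attributes -> {flag}")
```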
Frequently Asked Questions
Is query fan-out the same as query expansion?
No. Classic query expansion adds synonyms or close variants to a single search to recall more documents for the same intent. Fan-out decomposes a complex prompt into multiple distinct sub-queries that test different facets, retrieves passages for each, and synthesizes a single answer from many sources.
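In code, the contrast is stark. Both the expanded query and the fan-out list below are illustrative.

```python
# Illustrative only: the same prompt handled two ways.
prompt = "best fragrance-free moisturizer for sensitive skin under $40"

# Query expansion: ONE query, widened with synonyms, same intent.
expanded = 'moisturizer ("fragrance-free" OR unscented OR "scent-free") sensitive skin under $40'

# Query fan-out: MANY distinct sub-queries, each testing a different facet.
fanned_out = [
    "top fragrance-free moisturizers",
    "ingredients that trigger sensitive skin",
    "dermatologist-recommended fragrance-free moisturizers",
    "moisturizers under $40",
]

print("expansion:", expanded)
print("fan-out  :", len(fanned_out), "separate retrievals, synthesized into one answer")
```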
How many sub-queries does AI Mode or ChatGPT typically emit?
Google has stated AI Mode issues "a multitude of queries simultaneously" and Deep Research can fire hundreds. In practice, observed ranges for standard commerce prompts sit between eight and twelve sub-queries per top-level question, depending on complexity, domain, and the assistant's reasoning depth.
Does ranking on Google for the head term still matter?
Yes, but it is incomplete. The same Surfer SEO study found that 51% of AI Overview citations come from pages ranking for both the main query and at least one fan-out sub-query, while only 20% come from pages ranking for the main query alone. Head-term ranking is necessary but not sufficient for citation share.
What happens if a sub-query has no answer on my product page?
The retrieval system either skips your page for that sub-query or pulls a passage from a third-party source like a review site, a comparison article, or a competitor product page. That third-party citation is what shapes the AI's final answer about your category, often without your brand in it.
Can structured data alone solve this?
No. It is necessary but not sufficient. Structured data raises your floor on factual sub-queries (price, fit, size, certifications). It does not help with narrative sub-queries like "is this good for sensitive skin in winter," which require well-written body copy and review content as well.
How is this different from traditional SEO?
Traditional SEO optimizes a page against one query, with ranking decided at the page level. AI search optimizes by retrieving passages against many sub-queries in parallel, with citation decided at the passage level. The unit shifts from "page" to "passage," and the metric shifts from "rank" to "citation rate."
The brands paying attention to fan-out today are running more sophisticated optimization than the ones still grading themselves on top-10 organic positions. The Surfer numbers are a forecast of how commerce queries get answered by 2027, and they reward teams that treat product pages as a portfolio of passages rather than one ranking surface.
How AI-ready are your products?
Check how ChatGPT, Google AI, and Perplexity evaluate any product page. Free score in 30 seconds.


