Your products are invisible to AI search. Here’s how to fix that

Feb 23, 2026 16 min read Written by Jason Jackson


For a decade, product visibility meant one thing: ranking on page one of Google. Then it became about conversion optimization — turning that visibility into revenue. Now, we’re entering a third era, one where AI agents decide what gets recommended before a customer ever sees a search result.

When a consumer asks ChatGPT, Gemini, or Perplexity to recommend a product, the response isn’t a list of ten blue links. It’s a curated, ranked recommendation of three to five products the AI has already evaluated and compared on the user’s behalf. Your product either makes that shortlist, or it doesn’t exist.

New research from the University of Illinois (U of I) quantifies the scale of this problem. The study, titled “Controlling Output Rankings in Generative Engines for LLM-Based Search” (CORE), tested 3,000 products across 15 categories on four major LLMs: GPT-4o, Gemini 2.5, Claude, and Grok. The findings are stark: products at the bottom of search engine retrieval results had a 0% chance of appearing in the AI’s final recommendations. Zero. Not low, zero.

The question isn’t whether AI-powered search will reshape product discovery. It’s whether your product content is structured to survive the transition.

LLM-based search operates on a two-stage pipeline

Understanding how AI search actually works is the starting point. When a user submits a product query to an LLM, two distinct processes fire in sequence.

Stage one is retrieval. The LLM sends search queries to external engines (Google, Bing, Amazon) and collects the top results. This is where traditional SEO still matters. If your product doesn’t get retrieved, nothing else you do will matter. Your technical foundations, like crawlability, structured data, page speed, and indexation, remain the gatekeeper.

Stage two is synthesis. The LLM takes the retrieved results and generates a ranked recommendation list. This is the new battleground. The research found that LLMs don’t just pass through the retrieval order. They actively re-rank products based on content signals, such as how information is structured, whether comparisons exist, or how reasoning is framed.
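The two-stage flow can be sketched in a few lines of Python. This is a minimal illustration, not a real API: the engine callables and the signal-scoring heuristic are hypothetical stand-ins for what the retrieval engines and the LLM actually do, but the shape mirrors the retrieval-then-synthesis pipeline described above.

```python
# Minimal sketch of the two-stage pipeline. Engines are modeled as plain
# callables returning product dicts; the re-ranking heuristic is hypothetical.

def retrieve_candidates(query, engines):
    """Stage one: collect top results from external search engines."""
    candidates = []
    for engine in engines:
        candidates.extend(engine(query))  # traditional SEO decides this pool
    return candidates

def synthesize_recommendations(candidates, top_n=5):
    """Stage two: re-rank candidates on content signals, not retrieval order."""
    def content_signal_score(product):
        score = 0
        if product.get("has_comparisons"):
            score += 2  # comparative framing
        if product.get("has_reasoning_structure"):
            score += 2  # logical frameworks, feature analysis
        if product.get("has_quantitative_claims"):
            score += 1  # concrete numbers the model can cite
        return score
    ranked = sorted(candidates, key=content_signal_score, reverse=True)
    return ranked[:top_n]
```

The key point the sketch makes concrete: a product that never enters `candidates` can never be ranked, and a product with no content signals sorts to the bottom of synthesis regardless of where retrieval placed it.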

Here’s the truth: most brands have invested heavily in stage one and have done nothing for stage two. Their product pages rank in traditional search but get filtered out during the LLM’s synthesis process because the content doesn’t match what the models weigh during re-ranking.

Content structure, not just content quality, determines AI recommendations

The research from U of I tested three types of content optimization against LLM recommendation engines. The results reveal exactly what these models respond to.

Content type | Description | Top-1 success rate | Detection risk
Unstructured | Basic product descriptions without comparative framing | 0% (baseline) | N/A
Reasoning-based | Structured comparisons, feature analysis, logical frameworks | 80–85% | Moderate (62%)
Review / experience | Authentic comparison reviews, purchase narratives, use-case context | 78–88% | Low (18%)

The gap between 0% and 80%+ is entirely a content structure problem. The products that reached the top of LLM recommendations didn’t have better reviews or higher ratings. They had content that gave the LLM what it needed to construct a recommendation rationale.

LLMs don’t recommend products. They recommend the products they can reason about most effectively. Your content needs to do the reasoning for them.

What does this mean in practice? Product pages need to shift from describing what a product is to explaining why it wins in specific comparison contexts. That means embedding competitive framing, use-case specificity, and quantitative differentiators directly into the content the LLM ingests during synthesis.

Review content is the most powerful, and most underutilized, AI visibility signal

The research data on review-style content is striking. Review-based optimization achieved Top-1 promotion rates between 78% and 88% across all four LLMs tested. More importantly, it scored a 4.6 out of 5.0 on human fluency evaluations — nearly identical to the 4.7 baseline. Human evaluators could only identify it as optimized 18.4% of the time, compared to a 12% false-positive rate on unmodified content.

The implication for eCommerce brands is direct: the quality and structure of your review content is now a ranking factor in AI search. Not in the traditional SEO sense, but in the sense that LLMs use review content as primary input when constructing product comparisons and generating recommendations.

Reviews that contain comparative context (“I switched from Brand X and noticed a 30% improvement in battery life”), specific use-case framing (“For a family of four, the 6-quart model was the right size”), and quantitative claims give LLMs the raw material to build a recommendation narrative. Reviews that say “Great product, 5 stars” contribute nothing to the synthesis stage.

The review quality audit every eCommerce brand should run

Categorize your existing reviews into four tiers. 

  • Tier A: Detailed reviews with competitor comparisons and quantitative claims. 
  • Tier B: Detailed reviews with use-case context but no comparisons. 
  • Tier C: Brief qualitative reviews (“love it, works great”). 
  • Tier D: Star-rating only.

Your AI visibility is directly correlated with the ratio of Tier A and B reviews to total reviews. If that ratio is below 15%, your products are providing LLMs with almost no usable synthesis material. The fix isn’t more reviews. It’s better-structured reviews prompted by specific questions in your post-purchase flows.
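As a starting point, the tiering can be approximated with simple heuristics. The patterns below are illustrative assumptions, not the study's methodology; a production audit would use richer signals, but even this rough pass surfaces the Tier A/B ratio.

```python
import re

# Rough heuristics for the four review tiers above; patterns are illustrative.
COMPARATIVE = re.compile(
    r"\b(switched from|compared to|vs\.?|better than|instead of)\b", re.I
)
QUANTITATIVE = re.compile(r"\d+\s*(%|quart|hour|day|week|minute)", re.I)

def classify_review(text):
    if not text or not text.strip():
        return "D"  # star rating only, no text
    detailed = len(text.split()) >= 15
    if detailed and COMPARATIVE.search(text) and QUANTITATIVE.search(text):
        return "A"  # comparisons plus quantitative claims
    if detailed:
        return "B"  # detailed use-case context, no comparative framing
    return "C"      # brief qualitative review ("love it, works great")

def tier_ab_ratio(reviews):
    """The audit metric: share of Tier A and B reviews in the total."""
    if not reviews:
        return 0.0
    tiers = [classify_review(r) for r in reviews]
    return sum(t in ("A", "B") for t in tiers) / len(tiers)
```

Run `tier_ab_ratio` per product, not per catalog: a store-wide 20% can hide flagship products sitting at 0%.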

Different AI models respond to different content signals. Your strategy needs to account for all of them

One of the more nuanced findings in the research is that LLMs don’t all respond to the same content patterns. GPT-4o and Claude weight structured reasoning content more heavily — logical comparison frameworks, step-by-step feature analysis, and explicit ranking rationales. 

Gemini and Grok respond more strongly to review-style content — experience narratives, past-tense purchase stories, authentic-sounding evaluations.

LLM | Primary content signal | Strategic implication
GPT-4o | Reasoning / structured comparisons | Invest in product page comparison sections and feature analysis frameworks
Gemini 2.5 | Review / experience narratives | Prioritize review quality programs and customer story content
Claude | Reasoning / structured comparisons | Same as GPT-4o; comparison frameworks and explicit differentiators
Grok | Review / experience narratives | Same as Gemini; authentic review depth and purchase context

The bottom line is this: A single content strategy optimized for one model leaves you invisible on the others. The brands that win AI visibility will build both content layers — structured reasoning for GPT-4o and Claude and rich review ecosystems for Gemini and Grok — into every product page.

Structured data is the retrieval-stage gatekeeper. But most implementations are incomplete

The CORE research confirmed that retrieval order is the prerequisite for everything. Products that aren’t retrieved can’t be re-ranked, no matter how strong their content is. Structured data (specifically product schema, review schema, FAQ schema, and organization schema) directly impacts whether a product enters the LLM’s candidate pool in the first place.

Most eCommerce implementations treat structured data as a checklist item: add basic product schema, include price and availability, move on. That’s insufficient for AI search. LLMs and the search engines feeding them parse the full depth of your structured data when constructing candidate pools. Products with comprehensive schema, including aggregate rating, detailed review objects, brand entities, material specifications, and competitive relationship properties, signal higher information density and get retrieved more consistently.
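For reference, a fuller Product payload might look like the sketch below (built as a Python dict for readability, then serialized to the JSON-LD you'd embed in the page). Every value is a placeholder, and the sketch sticks to standard schema.org properties: aggregate rating, detailed review objects, brand entity, and material.

```python
import json

# Placeholder Product schema; all names, prices, and numbers are illustrative.
product_schema = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example 6-Quart Pressure Cooker",
    "brand": {"@type": "Brand", "name": "ExampleBrand"},
    "material": "Stainless steel",
    "offers": {
        "@type": "Offer",
        "price": "89.99",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
    },
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.6",
        "reviewCount": "312",
    },
    "review": [{
        "@type": "Review",
        "reviewRating": {"@type": "Rating", "ratingValue": "5"},
        "author": {"@type": "Person", "name": "A. Customer"},
        "reviewBody": "Switched from a stovetop model; dinner prep dropped by 30 minutes.",
    }],
}

print(json.dumps(product_schema, indent=2))
```

Note the `reviewBody`: embedding Tier A-style review text in the schema itself puts comparative, quantitative material directly in the machine-readable layer.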

FAQ schema deserves particular attention. When users ask LLMs comparative product questions, the model looks for structured Q&A content during retrieval. Product pages with FAQ schema containing comparative questions (“How does this compare to [competitor]?”) create direct retrieval pathways for the exact queries that trigger LLM product recommendations.
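A minimal FAQPage payload with one comparative question might look like this; the product name and claims are placeholders.

```python
import json

# Placeholder FAQPage schema with a comparative question and answer.
faq_schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [{
        "@type": "Question",
        "name": "How does the Example 6-Quart compare to other 6-quart pressure cookers?",
        "acceptedAnswer": {
            "@type": "Answer",
            "text": (
                "It reaches pressure roughly 20% faster than comparable models "
                "and includes a stainless inner pot rather than a nonstick one."
            ),
        },
    }],
}

print(json.dumps(faq_schema, indent=2))
```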

Entity recognition is the long game, and it’s already paying dividends

LLMs rely on knowledge graph entities when making recommendations. Products and brands that exist as recognized entities in knowledge bases receive preferential treatment during the synthesis stage. The model has more context to work with, more confidence in its recommendation, and more structured relationships to reference.

Building entity recognition requires a multi-channel approach: comprehensive organization schema with sameAs properties linking to verified brand profiles, consistent presence across authoritative platforms (Google Business Profile, Wikidata, industry directories), and topical authority through deep content coverage within your product categories.
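A minimal Organization payload with `sameAs` links might look like the following. Every URL here is a placeholder; the Wikidata ID in particular would be your brand's actual entity identifier.

```python
import json

# Placeholder Organization schema; every URL is illustrative only.
org_schema = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "ExampleBrand",
    "url": "https://www.example.com",
    "sameAs": [
        "https://www.wikidata.org/wiki/Q000000",        # brand's Wikidata entity
        "https://www.linkedin.com/company/examplebrand",
        "https://www.instagram.com/examplebrand",
    ],
}

print(json.dumps(org_schema, indent=2))
```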

This isn’t a quick win. Entity building takes months to compound. But the brands investing now will have a structural advantage that’s difficult for competitors to replicate, especially as LLM-based search becomes the default discovery channel for more product categories.

Measuring AI visibility requires a new metric framework

Traditional rank tracking doesn’t capture AI recommendation visibility. A product can rank #1 in Google for a target keyword and still be absent from every LLM recommendation for the same query. These are now separate channels with separate optimization requirements and separate measurement needs.

The metric that matters is what we call LLM Visibility Score: the percentage of tracked product discovery queries where your product appears in the Top-5 recommendations across all major LLM platforms. Tracking this weekly, alongside model-specific breakdowns and competitive benchmarking, gives you the feedback loop needed to iterate on content and structured data strategies.
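The score itself is simple to compute once you are tracking recommendation positions. The sketch below assumes a results structure of `{platform: {query: rank or None}}`; that input shape is our illustration, not a standard format.

```python
# LLM Visibility Score: share of tracked queries where the product appears
# in the Top-5 recommendations, overall and per platform. Ranks are 1-based;
# None means the product was absent from that platform's recommendations.

def llm_visibility_score(results, top_n=5):
    per_platform = {}
    hits = total = 0
    for platform, queries in results.items():
        platform_hits = sum(
            1 for rank in queries.values() if rank is not None and rank <= top_n
        )
        per_platform[platform] = platform_hits / len(queries) if queries else 0.0
        hits += platform_hits
        total += len(queries)
    overall = hits / total if total else 0.0
    return overall, per_platform
```

The per-platform breakdown matters because of the model-specific signal differences above: a healthy overall score can mask a platform where you are invisible.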

Simply put, if you aren’t measuring LLM recommendation presence today, you’re optimizing blind for a channel that’s already shaping purchase decisions.

The convergence of SEO and AI visibility is happening now

The research makes one thing clear: traditional SEO and AI visibility optimization are converging into a single discipline. Retrieval-stage optimization (technical SEO, structured data, crawlability) determines whether your products enter the candidate pool. Content-stage optimization (comparison frameworks, review quality, reasoning structure) determines where your products land in the final recommendation.

Brands that treat these as separate initiatives will find themselves optimizing for a search paradigm that’s already shifting beneath them. The ones that build integrated strategies, where every product page, every review prompt, every schema implementation serves both traditional search and AI synthesis, will own the recommendation slot when it matters most.

The question isn’t whether AI search will change product discovery. It’s whether your store will be visible when it does.

Ready to future-proof your product visibility? 

Codal’s SEO team builds AI-ready content architectures, structured data implementations, and visibility monitoring systems that position your products for both traditional search and LLM-based discovery. 

Let’s talk about where your product pages stand today — and what it takes to own the recommendation slot tomorrow. Contact us today to get started.

Source: Jin, H., Chen, R., Zhang, P., Luo, Y., Zeng, H., Luo, M., & Wang, H. (2026). Controlling Output Rankings in Generative Engines for LLM-based Search. University of Illinois at Urbana-Champaign. arXiv:2602.03608v1.

Jason Jackson

Lead Technical SEO Strategist

Jason is a Lead Technical SEO Strategist at Codal, managing enterprise SEO strategy and implementation. He has over a decade of experience helping brands across industries, such as eCommerce, healthcare, and finance, achieve their digital growth goals.


Want more insights to fuel your digital strategy?

Explore our latest expertise on innovation, design, and technology, or connect with us directly to see how we can help accelerate your digital transformation.