Research · April 2, 2026 · 12 min read

How AI Engines Decide Which Brands to Cite

Original research using AP Index data reveals the factors that determine which brands AI engines recommend.


Methodology

Data come from the Optagen AP Index, which tracks 500+ brands across ChatGPT (GPT-4o), Claude (Claude 3.5 Sonnet), Perplexity (Sonar), and Gemini (1.5 Flash). Each brand is scored on five dimensions: Discoverability, Navigability, Operability, Recoverability, and Transparency. Data were collected from January through March 2026.

Finding 1: Structured Data is the #1 Citation Driver

Brands with comprehensive Schema.org markup (Organization, Product, Service, FAQ, HowTo) scored 64 on Discoverability, versus 20 for brands with none.

| Structured Data Level | Avg Discoverability Score | AI Citation Rate | Brands in Sample |
|---|---|---|---|
| Comprehensive (5+ schema types) | 64 | 82% | 47 |
| Moderate (2-4 schema types) | 48 | 61% | 128 |
| Basic (1 schema type) | 33 | 39% | 186 |
| None | 20 | 17% | 152 |

Without Schema.org markup, AI systems must infer brand facts from unstructured text, which introduces errors. The highest-impact types were Organization, Product/Service, and FAQ; brands that added these three saw Discoverability increase by 18 points within one model update cycle.
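As a concrete illustration of the three highest-impact types, here is a minimal sketch of Organization and FAQ markup built as JSON-LD. The brand name, URL, and question text are invented placeholders, not examples from the study.

```python
import json

def organization_jsonld(name, url, same_as):
    """Build Organization JSON-LD, one of the highest-impact schema types."""
    return {
        "@context": "https://schema.org",
        "@type": "Organization",
        "name": name,
        "url": url,
        "sameAs": same_as,  # links to authoritative profiles aid disambiguation
    }

def faq_jsonld(pairs):
    """Build FAQPage JSON-LD from (question, answer) pairs."""
    return {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in pairs
        ],
    }

# Hypothetical brand; embed the output in a <script type="application/ld+json"> tag.
markup = faq_jsonld([("What does Acme do?", "Acme sells example widgets.")])
print(json.dumps(markup, indent=2))
```

Serving this alongside the human-readable page gives crawlers and retrieval pipelines an unambiguous statement of the same facts the prose conveys.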

Finding 2: Entity Clarity Determines Recommendation Quality

Entity clarity measures how unambiguously a model can identify and categorize a brand: consistent name, clear category, and distinct positioning.

Brands with high entity clarity received accurate descriptions 78% of the time. Brands with low entity clarity (ambiguous names, multiple business lines, overlapping positioning) were accurate only 31% of the time.

Common failures:

Finding 3: Trust Signals Transfer to AI

AI models weight authoritative sources more heavily when forming brand perceptions.

Trust signal density (count of analyst reports, major publication mentions, award citations, academic references) correlated with AX Score at r=0.72, second only to structured data.

| Trust Signal Density | Avg AX Score | Recommendation Confidence |
|---|---|---|
| High (50+ authoritative references) | 68 | AI recommends with high confidence |
| Medium (15-49 references) | 49 | AI recommends with caveats |
| Low (5-14 references) | 36 | AI mentions but does not recommend |
| Minimal (0-4 references) | 22 | AI omits or describes inaccurately |

Investing in analyst relations, PR in authoritative publications, and industry recognition has a measurable impact on AI perception.

Finding 4: API Readiness is the Next Frontier

Only 41 of 513 brands (8%) offer any agent-accessible API. Those that do score 40% higher on Operability.

As agents move from information retrieval to task execution, API readiness becomes a differentiator. Brands with APIs enable agents to check pricing, configure trials, or retrieve real-time data.

Operability currently contributes 20% of the AX Score. Expect that weight to increase as agent capabilities mature.
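What counts as "agent-accessible" can be very modest. Below is a sketch of the read-only pricing lookup described in Finding 4, with invented plan names and prices; a real deployment would expose this behind an HTTP endpoint rather than a local function.

```python
import json

# Hypothetical pricing catalog; all plans and prices are invented.
PRICING = {
    "starter":    {"price_usd": 29, "seats": 5},
    "growth":     {"price_usd": 99, "seats": 25},
    "enterprise": {"price_usd": None, "seats": None},  # contact sales
}

def get_pricing(plan=None):
    """Return pricing as JSON text: the whole catalog, or one plan."""
    if plan is None:
        return json.dumps(PRICING)
    if plan not in PRICING:
        # Structured errors matter: agents recover from them, not from HTML 404 pages.
        return json.dumps({"error": f"unknown plan: {plan}"})
    return json.dumps(PRICING[plan])

print(get_pricing("growth"))
```

Even this level of machine-readable access lets an agent answer "what does the growth plan cost?" from live data instead of a possibly stale training snapshot.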

Finding 5: Model Consistency Varies Wildly

The same brand can score dramatically differently across models. Cross-model variance:

| Metric | Value |
|---|---|
| Average cross-model score range (max - min) | 27 points |
| Brands with >30 point range | 38% |
| Brands with >50 point range | 11% |
| Most consistent model pair | Claude + Perplexity (r=0.81) |
| Least consistent model pair | ChatGPT + Gemini (r=0.54) |

A brand scoring 85 on Claude might score 45 on ChatGPT. The variance stems from differences in training data, retrieval pipelines, and recommendation algorithms; optimizing for one model does not guarantee visibility in the others.

Brands need cross-model monitoring. Optimizing only for ChatGPT leaves blind spots on Claude, Perplexity, and Gemini.
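The monitoring itself is simple arithmetic once per-model scores exist. A minimal sketch, using invented scores for a single hypothetical brand:

```python
# Hypothetical per-model AX Scores for one brand.
scores = {"chatgpt": 45, "claude": 85, "perplexity": 78, "gemini": 61}

# The cross-model range (max - min) the study reports averaging 27 points.
score_range = max(scores.values()) - min(scores.values())

# 38% of brands in the sample exceed a 30-point range; flag them for review.
wide_variance = score_range > 30

print(f"range = {score_range}, needs cross-model attention: {wide_variance}")
```

Run across a brand portfolio, the same check surfaces exactly the blind spots the paragraph above warns about.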

Implications for Brand Teams

  1. Structured data is not optional. Comprehensive Schema.org markup is the highest-ROI AI visibility investment.
  2. Clarify your entity. AI models must unambiguously identify who you are and what category you belong to.
  3. Invest in trust signals. Analyst relations, authoritative PR, and recognition impact AI recommendation confidence.
  4. Build APIs. Even a read-only API for product and pricing gives agents something to work with.
  5. Monitor across models. Use Signal Track for cross-model consistency tracking.

Methodology Details

Sample: 513 brands across 12 industry categories from the AP Index. Query sets: five standardized prompts per dimension (25 per brand per model), covering discovery, description, comparison, error handling, and explanation. Scoring: automated pattern matching plus Claude-assisted evaluation, on a 0-20 scale per dimension. Period: January 6 to March 28, 2026. API availability was determined by manual review and automated endpoint detection.
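The scoring described above implies a straightforward aggregation: five dimensions at 0-20 each, summing to a 0-100 AX Score. The sketch below assumes equal weighting, which is consistent with Operability's stated 20% contribution, though the production formula may differ.

```python
DIMENSIONS = ["discoverability", "navigability", "operability",
              "recoverability", "transparency"]

def ax_score(dimension_scores):
    """Sum five 0-20 dimension scores into a 0-100 AX Score.

    Equal weighting is an assumption inferred from Operability's 20%
    contribution, not a published formula.
    """
    missing = set(DIMENSIONS) - set(dimension_scores)
    if missing:
        raise ValueError(f"missing dimensions: {sorted(missing)}")
    for dim in DIMENSIONS:
        if not 0 <= dimension_scores[dim] <= 20:
            raise ValueError(f"{dim} out of range: {dimension_scores[dim]}")
    return sum(dimension_scores[dim] for dim in DIMENSIONS)

print(ax_score({dim: 13 for dim in DIMENSIONS}))  # 65
```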

Get Your Brand's Data

See where you rank against 500+ brands. Download the full dataset.
