Key Findings
- Structured data is the #1 predictor. Brands with comprehensive Schema.org markup score 3.2x higher on Discoverability than brands with none.
- Entity clarity determines accuracy. Clear entity definitions get 78% accurate recommendations, versus 31% for ambiguous brands.
- Trust signals transfer to AI. Third-party citation density correlates with AX Score at r=0.72 and tracks AI recommendation confidence.
- Only 8% of brands have agent-accessible APIs, but those that do score 40% higher on Operability.
- Model consistency is poor. The same brand can score 85 on Claude and 45 on ChatGPT.
Methodology
Findings are drawn from the Optagen AP Index, which tracks 500+ brands across ChatGPT (GPT-4o), Claude (Claude 3.5 Sonnet), Perplexity (Sonar), and Gemini (1.5 Flash) on five dimensions: Discoverability, Navigability, Operability, Recoverability, and Transparency. Data was collected January through March 2026.
Finding 1: Structured Data is the #1 Citation Driver
Brands with comprehensive Schema.org markup (Organization, Product, Service, FAQ, HowTo) scored 64 on Discoverability, versus 20 for brands with none.
| Structured Data Level | Avg Discoverability Score | AI Citation Rate | Brands in Sample |
|---|---|---|---|
| Comprehensive (5+ schema types) | 64 | 82% | 47 |
| Moderate (2-4 schema types) | 48 | 61% | 128 |
| Basic (1 schema type) | 33 | 39% | 186 |
| None | 20 | 17% | 152 |
Without Schema.org, AI systems infer from unstructured text, introducing errors. The highest-impact types were Organization, Product/Service, and FAQ. Brands adding these three saw Discoverability increase 18 points within one model update cycle.
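As a sketch, the three highest-impact types named above can be emitted as JSON-LD, the format most crawlers consume. The brand name, URLs, and answer text below are hypothetical placeholders, not data from the Index.

```python
import json

# Minimal JSON-LD for the three highest-impact schema types.
# All names, URLs, and text are illustrative placeholders.
organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "ExampleCo",
    "url": "https://example.com",
    "sameAs": ["https://www.linkedin.com/company/exampleco"],
}

product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "ExampleCo Platform",
    "description": "Workflow automation for finance teams.",
    "brand": {"@type": "Brand", "name": "ExampleCo"},
}

faq = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [{
        "@type": "Question",
        "name": "What does ExampleCo do?",
        "acceptedAnswer": {
            "@type": "Answer",
            "text": "ExampleCo automates finance workflows.",
        },
    }],
}

# Each object belongs in its own <script type="application/ld+json"> tag.
for block in (organization, product, faq):
    print(json.dumps(block, indent=2))
```

Each object validates independently, so the three types can be rolled out one page at a time.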
Finding 2: Entity Clarity Determines Recommendation Quality
Entity clarity measures how unambiguously a model can identify and categorize a brand: consistent name, clear category, and distinct positioning.
Brands with high entity clarity received accurate descriptions 78% of the time. Brands with low entity clarity (ambiguous names, multiple business lines, overlapping positioning) were accurate only 31% of the time.
Common failures:
- Ambiguous names (e.g., "Notion" could be the app or the concept)
- Category confusion when a brand spans multiple categories
- Competitor conflation when messaging overlaps
- Outdated information from pivots that left legacy descriptions in place
Finding 3: Trust Signals Transfer to AI
AI models weight authoritative sources more heavily when forming brand perceptions.
Trust signal density (count of analyst reports, major publication mentions, award citations, academic references) correlated with AX Score at r=0.72, second only to structured data.
| Trust Signal Density | Avg AX Score | Recommendation Confidence |
|---|---|---|
| High (50+ authoritative references) | 68 | AI recommends with high confidence |
| Medium (15-49 references) | 49 | AI recommends with caveats |
| Low (5-14 references) | 36 | AI mentions but does not recommend |
| Minimal (0-4 references) | 22 | AI omits or describes inaccurately |
Investing in analyst relations, PR in authoritative publications, and industry recognition has a measurable impact on AI perception.
Finding 4: API Readiness is the Next Frontier
Only 41 of 513 brands (8%) offer any agent-accessible API. Those that do score 40% higher on Operability.
As agents move from information retrieval to task execution, API readiness becomes a differentiator. Brands with APIs enable agents to check pricing, configure trials, or retrieve real-time data.
Operability currently contributes 20% of the AX Score. Expect that weight to increase as agent capabilities mature.
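A read-only agent endpoint can be as small as a single JSON route. The sketch below (the endpoint path, field names, and prices are all illustrative, not from the Index) uses only the Python standard library:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def pricing_payload() -> dict:
    """Illustrative pricing data an agent could retrieve; values are made up."""
    return {
        "product": "ExampleCo Platform",
        "currency": "USD",
        "plans": [
            {"name": "Starter", "monthly_price": 29, "trial_days": 14},
            {"name": "Team", "monthly_price": 99, "trial_days": 14},
        ],
    }

class PricingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/v1/pricing":
            body = json.dumps(pricing_payload()).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

# To serve: HTTPServer(("", 8080), PricingHandler).serve_forever()
```

A stable, documented, unauthenticated (or API-key) read path like this is enough for an agent to quote current pricing instead of hallucinating it.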
Finding 5: Model Consistency Varies Wildly
The same brand can score dramatically differently across models. Cross-model variance:
| Metric | Value |
|---|---|
| Average cross-model score range (max - min) | 27 points |
| Brands with >30 point range | 38% |
| Brands with >50 point range | 11% |
| Most consistent model pair | Claude + Perplexity (r=0.81) |
| Least consistent model pair | ChatGPT + Gemini (r=0.54) |
A brand scoring 85 on Claude might score 45 on ChatGPT. The variance stems from differences in training data, retrieval pipelines, and recommendation behavior, so optimizing for one model does not transfer to the others. Brands need cross-model monitoring: optimizing only for ChatGPT leaves blind spots on Claude, Perplexity, and Gemini.
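The range statistic in the table above (max minus min across models) is straightforward to compute. The scores below are invented for illustration, not Index data.

```python
# Per-brand cross-model score range (max - min), as reported above.
# Scores are invented for illustration only.
scores = {
    "BrandA": {"chatgpt": 45, "claude": 85, "perplexity": 78, "gemini": 60},
    "BrandB": {"chatgpt": 62, "claude": 66, "perplexity": 64, "gemini": 59},
}

def cross_model_range(model_scores: dict) -> int:
    """Spread between a brand's best and worst model score."""
    return max(model_scores.values()) - min(model_scores.values())

for brand, model_scores in scores.items():
    r = cross_model_range(model_scores)
    flag = " (inconsistent: range > 30)" if r > 30 else ""
    print(f"{brand}: range {r}{flag}")
```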
Implications for Brand Teams
- Structured data is not optional. Comprehensive Schema.org markup is the highest-ROI AI visibility investment.
- Clarify your entity. AI models must unambiguously identify who you are and what category you belong to.
- Invest in trust signals. Analyst relations, authoritative PR, and recognition impact AI recommendation confidence.
- Build APIs. Even a read-only API for product and pricing gives agents something to work with.
- Monitor across models. Use Signal Track for cross-model consistency tracking.
Methodology Details
Sample: 513 brands across 12 industry categories from the AP Index. Query sets: 5 standardized prompts per dimension (25 per brand per model) covering discovery, description, comparison, error handling, and explanation. Scoring: automated pattern matching and Claude-assisted evaluation on 0-20 per dimension. Period: January 6 to March 28, 2026. API availability determined by manual review and automated endpoint detection.
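Assuming equal 20% weights per dimension (consistent with Operability's stated 20% share), the 0-20 dimension scores would aggregate to a 0-100 AX Score like this; the dimension values shown are illustrative, not real Index data.

```python
# Hypothetical AX Score aggregation: five dimensions scored 0-20 each,
# equally weighted, summing to a 0-100 score. Values are illustrative.
DIMENSIONS = ("discoverability", "navigability", "operability",
              "recoverability", "transparency")

def ax_score(dimension_scores: dict) -> int:
    for name in DIMENSIONS:
        if not 0 <= dimension_scores[name] <= 20:
            raise ValueError(f"{name} must be in 0-20")
    return sum(dimension_scores[name] for name in DIMENSIONS)

example = {"discoverability": 16, "navigability": 13, "operability": 8,
           "recoverability": 12, "transparency": 15}
print(ax_score(example))  # 64
```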
Get Your Brand's Data
See where you rank against 500+ brands. Download the full dataset.