What is an AI Visibility Audit?
An AI visibility audit measures how ChatGPT, Claude, Perplexity, Gemini, and other agents perceive, describe, and recommend your organization.
When a procurement team asks Claude to compare CRM vendors, the response shapes a purchasing decision. Your AI visibility determines whether you appear and whether the information is accurate.
The audit produces an AX Score: a 0-100 rating across five dimensions.
The Five Dimensions of Your AX Score
Discoverability (0-20)
Whether your brand appears when agents search your category. A strong score requires Schema.org markup, authoritative source references, and consistent entity definitions.
Navigability (0-20)
Whether AI systems understand your site structure and locate product information. Poor Navigability means agents find your homepage but cannot reach pricing, features, or documentation.
Operability (0-20)
Whether your brand offers APIs or programmatic interfaces. Operability is the difference between being recommended and being implemented.
Recoverability (0-20)
Whether AI systems can recover when they encounter incomplete or contradictory content on your properties. High Recoverability means clear redirects and fallback information keep agents from dead-ending.
Transparency (0-20)
Whether AI systems can accurately describe what your brand does and its limitations. Transparent brands receive more confident recommendations. Opacity leads to hedging or omission.
Reading Your Score Report
Your report includes an overall score, a letter grade, and a per-dimension breakdown.
The overall score sums your five dimension scores (each 0-20). It is the single number you track over time and compare against competitors; a minimal scoring sketch follows the grade table below.
Your letter grade:
| Score Range | Grade | What It Means |
|---|---|---|
| 70-100 | Good | AI agents can reliably find, describe, and recommend your brand |
| 50-69 | Fair | Agents know you exist but present incomplete or inconsistent information |
| 30-49 | Poor | Significant gaps in AI perception; competitors are likely favored |
| 0-29 | Critical | Your brand is essentially invisible to AI agents or actively misrepresented |
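To make the arithmetic concrete, here is a minimal sketch of how the overall score and grade are derived, assuming the overall score is a straight sum of the five dimensions. The function and variable names are ours, not part of any AP Index tooling:

```python
# Minimal sketch of the scoring arithmetic. Names are illustrative,
# not part of any AP Index tooling.

DIMENSIONS = ("discoverability", "navigability", "operability",
              "recoverability", "transparency")

# Grade thresholds mirror the table above.
GRADES = [(70, "Good"), (50, "Fair"), (30, "Poor"), (0, "Critical")]

def ax_score(scores: dict[str, int]) -> int:
    """Sum the five dimension scores (each 0-20) into the 0-100 overall."""
    return sum(scores[d] for d in DIMENSIONS)

def grade(overall: int) -> str:
    """Map an overall score to its letter grade."""
    return next(g for threshold, g in GRADES if overall >= threshold)

scores = {"discoverability": 14, "navigability": 11, "operability": 4,
          "recoverability": 9, "transparency": 7}
overall = ax_score(scores)
print(overall, grade(overall))  # -> 45 Poor
```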
The dimension breakdown shows your strengths and weaknesses. Most brands find that one or two weak dimensions pull the overall score down dramatically.
What a Good Score Looks Like
From AP Index data on 500+ brands:
- Average AX Score: 42/100. Most brands perform poorly.
- Only 12% of brands score above 70. The "Good" tier is exclusive.
- 34% of brands score below 30. More than a third are Critical.
- Top performers average 82.
- Discoverability is the strongest dimension on average (52 on a normalized 100-point scale).
- Transparency is the weakest (38, normalized the same way).
The gap between AI-visible brands and invisible ones is widening. Organizations that invest in AX today are building a compounding advantage.
Common Findings
1. Missing or Incomplete Structured Data
The most common finding. Over 70% of brands lack comprehensive Schema.org markup. Without it, agents infer from unstructured text, leading to inaccurate descriptions. Adding Organization, Product, Service, and FAQ schemas is the highest-impact fix.
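As an illustration, a minimal sketch of Organization markup follows, generated as JSON-LD with Python's standard library. All field values are placeholders, and real coverage would add Product, Service, and FAQPage schemas alongside it:

```python
import json

# Minimal Organization schema as JSON-LD. All values are placeholders;
# real markup should also cover Product, Service, and FAQPage types.
organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Corp",
    "url": "https://www.example.com",
    "logo": "https://www.example.com/logo.png",
    "description": "Example Corp makes a CRM platform for mid-market sales teams.",
    "sameAs": [
        "https://www.linkedin.com/company/example-corp",
        "https://en.wikipedia.org/wiki/Example_Corp",
    ],
}

# Emit the script tag that belongs in the page's <head>.
print('<script type="application/ld+json">')
print(json.dumps(organization, indent=2))
print("</script>")
```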
2. Poor Entity Clarity
Agents struggle with ambiguous brand names or positioning that overlaps with competitors. Brands with clear entity definitions score significantly higher on Discoverability and Transparency.
3. No Agent-Accessible APIs
Only 8% of audited brands offer programmatic access. This caps Operability scores. As agents move from information retrieval to task execution, brands without APIs will be structurally disadvantaged.
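To make "programmatic access" concrete, the sketch below exposes a machine-readable catalog endpoint that an agent could call instead of scraping pages. Flask, the route, and the data are all illustrative choices, not a prescribed interface:

```python
from flask import Flask, jsonify

app = Flask(__name__)

# Hypothetical catalog data an agent could consume without scraping HTML.
CATALOG = [
    {"sku": "crm-pro", "name": "CRM Pro", "price_usd": 49,
     "billing": "per seat / month",
     "docs": "https://www.example.com/docs/crm-pro"},
]

@app.get("/api/v1/catalog")
def catalog():
    """Return the product catalog as structured JSON for agents."""
    return jsonify({"products": CATALOG})

if __name__ == "__main__":
    app.run()
```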
4. Inconsistent Information Across Sources
When your website, LinkedIn profile, and third-party reviews disagree, agents cannot determine which source is authoritative. This drags down both Recoverability and Transparency.
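One way to catch this class of problem is to pull the JSON-LD each property publishes and diff the core fields. The sketch below assumes each source embeds Organization markup; the source list and field choices are hypothetical:

```python
import json
import re
import urllib.request

# Hypothetical properties that should agree on core brand facts.
SOURCES = ["https://www.example.com", "https://blog.example.com"]
FIELDS = ("name", "url", "description")

LD_JSON = re.compile(
    r'<script[^>]*type="application/ld\+json"[^>]*>(.*?)</script>',
    re.DOTALL,
)

def brand_facts(page_url: str) -> dict:
    """Extract the first JSON-LD block from a page and keep the core fields."""
    html = urllib.request.urlopen(page_url).read().decode("utf-8", "replace")
    match = LD_JSON.search(html)
    data = json.loads(match.group(1)) if match else {}
    return {field: data.get(field) for field in FIELDS}

facts = {source: brand_facts(source) for source in SOURCES}
for field in FIELDS:
    values = {facts[source][field] for source in SOURCES}
    if len(values) > 1:  # sources disagree (or one is missing the field)
        print(f"Inconsistent {field!r}: {values}")
```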
5. Thin or Missing Trust Signals
Agents weight third-party validation heavily. Brands without analyst coverage, awards, named testimonials, or authoritative backlinks receive less confident recommendations.
What to Do With Your Results
Your action plan depends on where you fall in the scoring spectrum:
| Grade | Priority Actions | Timeline |
|---|---|---|
| Critical (0-29) | Implement basic Schema.org markup. Fix entity clarity issues. Ensure your homepage clearly states what you do and for whom. Add FAQ structured data. | 2-4 weeks |
| Poor (30-49) | Expand structured data coverage. Build authoritative third-party references. Standardize brand information across all digital properties. Create a machine-readable product/service catalog. | 1-3 months |
| Fair (50-69) | Optimize for specific weak dimensions. Consider API development for Operability. Invest in content that helps agents explain your differentiation. Monitor drift across AI models. | 2-4 months |
| Good (70+) | Focus on maintaining consistency. Monitor for hallucinations and drift. Build agent-native workflows. Expand trust signal coverage. Target competitive gaps. | Ongoing |
The first step is establishing a baseline. AI perception changes as models update, your presence evolves, and competitors improve. Monitoring through Signal Track catches regressions early.
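If you want a crude do-it-yourself check between audits, you can ask a model the same question on a schedule and diff its answers over time. This is a sketch, not Signal Track; `ask_model` is a placeholder for whichever model API you query, and the prompt is hypothetical:

```python
import difflib
import json
from datetime import date
from pathlib import Path

HISTORY = Path("perception_history.json")
PROMPT = "Compare the leading mid-market CRM vendors."  # hypothetical prompt

def ask_model(prompt: str) -> str:
    """Placeholder: call whichever model API you monitor and return its text."""
    raise NotImplementedError

def record_and_diff() -> None:
    """Store today's answer and show what changed since the previous run."""
    history = json.loads(HISTORY.read_text()) if HISTORY.exists() else {}
    answer = ask_model(PROMPT)
    if history:
        previous = history[max(history)]  # ISO dates sort chronologically
        diff = difflib.unified_diff(
            previous.splitlines(), answer.splitlines(), lineterm=""
        )
        print("\n".join(diff) or "No change since last run.")
    history[str(date.today())] = answer
    HISTORY.write_text(json.dumps(history, indent=2))
```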
Get Your AX Score
Find out where your brand stands in 60 seconds. Free, no account required.