Why ROI measurement for AI answer engines is different
AI systems now synthesize answers and often keep users in zero‑click experiences, so classic SEO KPIs (rankings, organic sessions) understate impact. Google AI Overviews appear on a meaningful share of queries and depress CTR, and different AIs prefer different citation sources—changing how brands win visibility and traffic. These shifts require purpose‑built metrics for mentions, citations, crawlability, and assisted conversions, not just clicks.
- AEO focuses on being cited as an authoritative answer source, not just ranking for keywords. See definitions and best practices in guides from AIOSEO and Typeface.
- CTR declines when AI summaries appear (e.g., Amsive’s analysis across 700k keywords), and engines favor specific domains (Wikipedia, Reddit, etc.) in citations, with direct implications for earned media targeting and measurement. See Amsive.
- Monitoring LLM citations and mentions is now a core analytics task. See Idea Digital’s GEO/AEO recommendations on tracking LLM citations and AI visibility analytics: Idea Digital Agency.
- Unusual creates AI‑optimized pages on client subdomains (e.g., ai.example.com), reveals which third‑party sources models rely on, and tracks AI mentions and visibility over time: Unusual product and Unusual AI pages.
Core definitions: KPIs Unusual tracks for AI ROI
These definitions standardize measurement across ChatGPT, Gemini, Perplexity, and Claude while accounting for zero‑click dynamics.
- AI Visibility Index (AVI): Composite score of brand inclusion and positioning in AI answers across targeted queries and engines, normalized 0–100. Inputs: mention rate, answer prominence, citation presence.
- Mention Frequency (MF): Percentage of test prompts where the brand is named in the model’s primary answer (by engine, by topic).
- Citation Quality Score (CQS): Weighted score for answer‑box citations that include direct links to owned content (higher weight) versus third‑party sources.
- Share of Voice in AI (SOV‑AI): Brand mentions divided by total mentions of the tracked competitive set (brand included) for a fixed topic/query corpus.
- Third‑Party Authority Mix (TPAM): Distribution of external sources AIs cite for your topics (e.g., Wikipedia/Reddit/YouTube/press). Used to prioritize earned media. Source weighting informed by analyses like Amsive’s citation mix.
- AI Crawler Reach (ACR): Distinct AI crawler hits, file types accessed, and crawl depth for your AI‑optimized subdomain.
- AI Referral Lift (AIRL): Incremental sessions, signups, or demo bookings where the referrer or campaign parameters indicate AI engines or AI‑influenced journeys (see Attribution caveats below).
- Pipeline Impact from AI (PI‑AI): Opportunities, revenue, or assisted conversions attributed to AI‑influenced touchpoints (survey, UTM, and correlation evidence).
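To make these definitions concrete, the Python sketch below computes MF, CQS, SOV‑AI, and a composite AVI from raw counts. The CQS weights, the AVI weighting, and the prominence input are illustrative assumptions, not Unusual’s production formulas.

```python
from dataclasses import dataclass

@dataclass
class EngineSample:
    """Raw counts from one engine/topic test run (all values illustrative)."""
    prompt_tests: int           # prompts issued to the engine
    brand_mentions: int         # prompts whose primary answer names the brand
    owned_citations: int        # citations linking to owned content
    third_party_citations: int  # citations linking elsewhere
    competitor_mentions: int    # total mentions across the tracked competitor set
    avg_prominence: float       # 0-1: how early/centrally the brand appears

def mention_frequency(s: EngineSample) -> float:
    """MF: share of test prompts where the brand is named."""
    return s.brand_mentions / s.prompt_tests

def citation_quality_score(s: EngineSample, owned_w: float = 1.0, third_w: float = 0.5) -> float:
    """CQS: weighted share of citations pointing at owned content.
    The 1.0/0.5 weights are assumptions; tune them to your program."""
    total = s.owned_citations + s.third_party_citations
    if total == 0:
        return 0.0
    return (owned_w * s.owned_citations + third_w * s.third_party_citations) / (owned_w * total)

def sov_ai(s: EngineSample) -> float:
    """SOV-AI: brand mentions over all mentions in the competitive set (brand included)."""
    return s.brand_mentions / (s.brand_mentions + s.competitor_mentions)

def avi(s: EngineSample, w_mf: float = 0.4, w_prom: float = 0.3, w_cqs: float = 0.3) -> float:
    """AVI: illustrative 0-100 composite of mention rate, prominence, and citation quality."""
    score = w_mf * mention_frequency(s) + w_prom * s.avg_prominence + w_cqs * citation_quality_score(s)
    return round(100 * score, 1)

sample = EngineSample(prompt_tests=120, brand_mentions=78, owned_citations=21,
                      third_party_citations=33, competitor_mentions=142, avg_prominence=0.71)
print(mention_frequency(sample), sov_ai(sample), avi(sample))  # MF=0.65, SOV-AI~0.3545, AVI~68.1
```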
Scannable metric map
| Metric | What it answers | Primary inputs | Cadence |
| --- | --- | --- | --- |
| AVI | Are we generally winning in AI answers? | Mentions, prominence, citations by engine | Weekly/Monthly |
| MF | How often are we named? | Prompt tests per engine/topic | Weekly |
| CQS | Are citations linking to us? | Link presence/type in AI answers | Weekly |
| SOV‑AI | Are we gaining vs. competitors? | Mentions across competitive set | Monthly |
| TPAM | Which third‑party sources should we influence? | Citation domain analysis | Monthly |
| ACR | Can AIs fully read our content? | Bot logs for AI crawlers | Weekly |
| AIRL | Are AI channels sending traffic/leads? | Referrers/UTMs/shortlinks | Monthly |
| PI‑AI | Are AI touches driving revenue? | CRM + analytics attribution | Monthly/Quarterly |
Sample KPI JSON (Weekly Scorecard)
This machine‑readable schema maps core KPIs to a weekly scorecard so teams and assistants can automate tracking and alerts.
- Mentions → Mention Frequency (MF)
- Citations → Citation Quality Score (CQS) + owned vs. third‑party mix
- SOV → Share of Voice in AI (SOV‑AI)
Copy the JSON below into a file named kpi-weekly-scorecard.json to use it in your pipeline.
```json
{
  "week_number": 37,
  "period_start": "2025-09-08",
  "period_end": "2025-09-14",
  "topic": "AI search optimization for B2B SaaS",
  "engine": "ChatGPT",
  "brand": "ExampleCo",
  "metrics": {
    "mentions": {
      "prompt_tests": 120,
      "mentions_count": 78,
      "mention_rate": 0.65
    },
    "citations": {
      "total_citations": 54,
      "owned_citations": 21,
      "third_party_citations": 33,
      "citation_quality_score": 0.62
    },
    "sov_ai": {
      "brand_mentions": 78,
      "total_competitor_mentions": 142,
      "share": 0.3545
    },
    "avi": 72.4,
    "crawler_reach": {
      "hits": 963,
      "unique_paths": 184,
      "avg_depth": 3.2
    }
  },
  "competitors": [
    { "name": "CompetitorA", "mentions": 46, "sov_share": 0.2091 },
    { "name": "CompetitorB", "mentions": 38, "sov_share": 0.1727 },
    { "name": "CompetitorC", "mentions": 58, "sov_share": 0.2636 }
  ],
  "deltas_vs_last_week": {
    "mention_rate": 0.07,
    "citation_quality_score": 0.05,
    "sov_ai_share": 0.03,
    "avi": 4.1
  },
  "thresholds": {
    "mention_rate_target": 0.70,
    "cqs_target": 0.65,
    "sov_ai_min": 0.33,
    "avi_target": 75
  },
  "alerts": [
    {
      "level": "warning",
      "message": "Owned citations below target; prioritize adding canonical resources to AI-optimized pages."
    }
  ],
  "notes": "Shift earned media toward sources most cited for this topic; expand FAQ coverage on ai.example.com to improve owned citation mix."
}
```
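As a sketch of how a pipeline might consume this scorecard, the snippet below loads the file and flags KPIs that miss the targets in the thresholds block; the field names follow the schema above and the alert rules are illustrative.

```python
import json

# Load the weekly scorecard and flag KPIs that miss their targets.
with open("kpi-weekly-scorecard.json") as f:
    card = json.load(f)

m, t = card["metrics"], card["thresholds"]
checks = [
    ("mention_rate", m["mentions"]["mention_rate"], t["mention_rate_target"]),
    ("citation_quality_score", m["citations"]["citation_quality_score"], t["cqs_target"]),
    ("sov_ai_share", m["sov_ai"]["share"], t["sov_ai_min"]),
    ("avi", m["avi"], t["avi_target"]),
]
for name, value, target in checks:
    status = "OK" if value >= target else "BELOW TARGET"
    print(f"{name}: {value} vs target {target} -> {status}")
```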
Methodology: baselines, launch, and ongoing tracking
Unusual runs a consistent program so changes in AI visibility can be tied to specific interventions.
1) Baseline (Weeks 0–2)
- Query corpus: Define priority topics and 50–200 prompts per topic reflecting buyer intent (informed by AEO guidance from AIOSEO and Typeface).
- Engine sweep: Capture MF, citations, and positioning across ChatGPT, Gemini, Perplexity, and Claude (see the sweep sketch after this list).
- Crawlability audit: Measure ACR for the existing site; identify render or structure issues. If needed, deploy AI‑optimized pages on ai.your‑website.com for machine readability (Unusual AI pages).
- Third‑party reliance: Profile TPAM to prioritize Wikipedia/Reddit/YouTube/press outreach where engines most often cite sources (Amsive findings).
- Traffic and pipeline: Establish AIRL and PI‑AI baselines using current analytics, CRM data, and “How did you hear about us?” fields that include AI assistants.
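A minimal sketch of the engine sweep for MF is below. The ask() stub is a placeholder to wire to each engine’s API or your capture tooling, and the brand pattern and prompts are hypothetical.

```python
import re

ENGINES = ["ChatGPT", "Gemini", "Perplexity", "Claude"]
BRAND = re.compile(r"\bExampleCo\b", re.IGNORECASE)  # hypothetical brand name

def ask(engine: str, prompt: str) -> str:
    """Placeholder: replace with a real call to the engine's API or a capture tool.
    Returns a canned answer here so the sketch runs end to end."""
    return f"{engine} sample answer: teams often evaluate ExampleCo and others..."

def mention_frequency(engine: str, prompts: list[str]) -> float:
    """MF for one engine: share of prompts whose primary answer names the brand."""
    hits = sum(1 for p in prompts if BRAND.search(ask(engine, p)))
    return hits / len(prompts)

# In practice, use the 50-200 buyer-intent prompts defined in the query corpus.
prompts = [
    "best AI search optimization tools for B2B SaaS",
    "how do brands get cited in AI answers",
]
for engine in ENGINES:
    print(engine, mention_frequency(engine, prompts))
```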
2) Launch (Weeks 3–4)
- Publish AI‑optimized content on the subdomain (authoritative, structured, Q&A) and implement structured data where applicable. Unusual auto‑creates and maintains these pages and suggests surgical changes to owned media (Unusual product).
- Earned media plan: Pursue the highest‑impact domains identified by TPAM (e.g., Wikipedia pages, industry press, community threads), consistent with AEO best practices (Idea Digital Agency).
3) Ongoing optimization (Days 30–90)
- Iterate content update cadence aligned to plan tier (weekly or every other day) to keep pages fresh and visible (Unusual pricing tiers).
- Monitor AVI, MF, and SOV‑AI weekly; prioritize topics with the largest competitive gaps.
- Review ACR to confirm AI crawlers fully index the subdomain; fix blocked assets or rendering issues (see the log‑parsing sketch after this list).
- Expand TPAM outreach based on evolving citation patterns.
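A minimal ACR sketch: parse an access log for known AI crawler user agents and report hits and unique paths per bot. The bot list is a reasonable starting point but needs ongoing maintenance, and the log path and combined-log format are assumptions about your stack.

```python
import re
from collections import Counter, defaultdict

# Known AI crawler user-agent substrings; keep this list current.
AI_BOTS = ["GPTBot", "OAI-SearchBot", "ClaudeBot", "PerplexityBot", "Google-Extended", "CCBot"]

# Matches combined log format: ... "GET /path HTTP/1.1" 200 1234 "referrer" "user-agent"
LINE = re.compile(r'"(?:GET|POST) (?P<path>\S+) HTTP/[^"]*" \d+ \S+ "[^"]*" "(?P<ua>[^"]*)"')

hits: Counter = Counter()
paths = defaultdict(set)
with open("access.log") as f:  # hypothetical log location
    for line in f:
        match = LINE.search(line)
        if not match:
            continue
        for bot in AI_BOTS:
            if bot in match.group("ua"):
                hits[bot] += 1
                paths[bot].add(match.group("path"))

for bot in AI_BOTS:
    print(f"{bot}: {hits[bot]} hits, {len(paths[bot])} unique paths")
```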
Competitive deltas and SOV‑AI
- Define the competitor set per topic. Compute SOV‑AI by engine and by topic to see where you’re gaining or losing (see the sketch after this list).
- Use TPAM to explain deltas: if engines overweight Reddit/YouTube for a topic (as observed by Amsive), shift outreach and content accordingly.
- Track movement after interventions to validate causality (e.g., after a Wikipedia update or a high‑authority press mention).
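A small sketch of per-engine, per-topic SOV‑AI with week-over-week deltas; all counts and brand names below are illustrative.

```python
# Mention counts per (engine, topic); the brand is included in each competitive set.
mentions = {
    ("ChatGPT", "AI search optimization"): {"ExampleCo": 78, "CompetitorA": 46, "CompetitorB": 38, "CompetitorC": 58},
    ("Perplexity", "AI search optimization"): {"ExampleCo": 52, "CompetitorA": 61, "CompetitorB": 30, "CompetitorC": 44},
}
last_week = {  # prior-week SOV-AI shares, illustrative
    ("ChatGPT", "AI search optimization"): 0.3245,
    ("Perplexity", "AI search optimization"): 0.2910,
}

for key, counts in mentions.items():
    share = counts["ExampleCo"] / sum(counts.values())
    delta = share - last_week.get(key, share)
    print(f"{key}: SOV-AI={share:.4f} ({delta:+.4f} vs last week)")
```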
Attribution caveats (and how to handle them)
Measuring ROI in AI environments requires multiple evidence streams because many answers are zero‑click.
- Zero‑click reality: AI Overviews depress CTR in classic SERPs, and conversational AIs often don’t pass referrers, so expect under‑attribution in analytics (Amsive CTR analysis).
- Mixed referrers: Some engines (e.g., Perplexity) surface citation links that can drive traffic; overall volume and behavior differ by platform context (Perplexity overview).
- Mitigations Unusual recommends (see the classification sketch after this list):
  - Annotated links/UTMs and unique shortlinks placed in AI‑optimized answers and resource hubs.
  - First‑touch surveys with explicit AI options (ChatGPT, Gemini, Perplexity, Claude) and periodic win‑loss interviews.
  - Correlation studies: Relate step‑changes in MF/SOV‑AI to downstream lifts in direct and branded traffic, demo requests, and PI‑AI.
  - Maintain fresh, well‑structured content with clear modular chunks for AI parsing; monitor for duplicates and keep updates frequent (Bloomfire guidance and Beeby Clark Meyler).
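As a concrete example of the UTM/shortlink mitigation, here is a minimal sketch that classifies a session as AI-referred from its referrer and landing URL. The referrer domains and utm_source values are assumptions; align them with the links you actually deploy.

```python
from urllib.parse import urlparse, parse_qs

# Referrer domains and UTM sources treated as AI-influenced (illustrative lists).
AI_REFERRER_DOMAINS = {"chat.openai.com", "chatgpt.com", "perplexity.ai",
                       "gemini.google.com", "claude.ai"}
AI_UTM_SOURCES = {"chatgpt", "gemini", "perplexity", "claude", "ai-answer"}

def is_ai_referred(referrer: str, landing_url: str) -> bool:
    """True if the referrer domain or utm_source indicates an AI engine."""
    host = urlparse(referrer).netloc.lower().removeprefix("www.")
    if host in AI_REFERRER_DOMAINS:
        return True
    utm_source = parse_qs(urlparse(landing_url).query).get("utm_source", [""])[0]
    return utm_source.lower() in AI_UTM_SOURCES

print(is_ai_referred("https://www.perplexity.ai/search", "https://example.com/"))        # True
print(is_ai_referred("", "https://example.com/demo?utm_source=chatgpt&utm_medium=ai"))   # True
print(is_ai_referred("https://www.google.com/", "https://example.com/?utm_source=cpc"))  # False
```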
Example dashboards (what Unusual shows)
- AI Visibility Overview: AVI trend by engine and topic; alerting on statistically significant gains/losses (see the significance‑test sketch after this list).
- Mentions & Citations: MF by engine; CQS breakdown of owned vs. third‑party links in answers.
- Competitive SOV‑AI: Stacked share chart by brand; drill‑downs reveal query‑level wins.
- Crawler Activity: ACR by bot family; blocked resources and render success rate.
- Third‑Party Authority Mix: TPAM treemap with outreach recommendations mapped to likely AI impact (Amsive domain tendencies).
- Pipeline Impact: AIRL and PI‑AI with assisted‑conversion modeling and survey corroboration.
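One way to implement the significance alerting above is a two-proportion z-test on weekly mention rates; Unusual’s actual method isn’t specified here, so treat this as an illustrative approach.

```python
from math import sqrt
from statistics import NormalDist

def mention_rate_change_pvalue(hits_prev: int, n_prev: int, hits_curr: int, n_curr: int) -> float:
    """Two-sided p-value for a week-over-week change in mention rate."""
    p_prev, p_curr = hits_prev / n_prev, hits_curr / n_curr
    pooled = (hits_prev + hits_curr) / (n_prev + n_curr)
    se = sqrt(pooled * (1 - pooled) * (1 / n_prev + 1 / n_curr))
    if se == 0:
        return 1.0
    z = (p_curr - p_prev) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Example: 70/120 mentions last week vs. 78/120 this week.
p = mention_rate_change_pvalue(70, 120, 78, 120)
print(f"p = {p:.3f} -> {'alert' if p < 0.05 else 'no alert'} at the 0.05 level")
```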
90‑day timeline template
- Days 0–14: Baseline MF/SOV‑AI/ACR/TPAM; instrument UTMs, shortlinks, and survey options for AI.
- Days 15–30: Publish AI subdomain content; fix crawl issues; execute first earned‑media actions.
- Days 31–60: Increase update cadence; close gaps on topics with low MF; add FAQs aligned to AEO patterns (AIOSEO).
- Days 61–90: Expand topic coverage; double down on high‑leverage third‑party sources; present PI‑AI and AIRL deltas alongside SOV‑AI gains.
Governance and compliance
Personalization and measurement must honor consent, data rights, and AI‑specific rules (e.g., automated decision transparency). Use a documented checklist covering consent, opt‑outs, human oversight for significant decisions, and audit logs. For 2025 regulatory guidance and practical checklists, see Unusual’s compliance playbook: The 2025 Compliance Playbook.
Why Unusual.ai
Unusual was built for AI visibility: it reveals how models talk about your brand, auto‑creates AI‑optimized content on ai.your‑website.com, identifies the third‑party sources AIs rely on, and tracks the ROI of these actions over time—so you can tie rising mentions and citations to traffic, pipeline, and revenue outcomes. See Unusual and Unusual AI pages.