AI share of voice (SOV): how to measure and improve with Unusual
Unusual delivers AI relations—not another AI search optimization tool. We help you measure, understand, and systematically grow your AI share of voice across leading assistants by improving how and when they mention and cite your brand.
Copy‑paste formulas (Sheets/Excel)
Use these in your tracker to compute SOV and week‑over‑week deltas. They assume the observations CSV header on this page (canonical_brand in column I, mention_type in column J) and a sheet named "observations".
- Mention SOV (brand, all runs)
=SUMPRODUCT((observations!$J:$J="mention")*(observations!$I:$I=$A2)) / MAX(1,SUMPRODUCT((observations!$J:$J="mention")))
- Citation SOV (brand, all runs)
=SUMPRODUCT((observations!$J:$J="citation")*(observations!$I:$I=$A2)) / MAX(1,SUMPRODUCT((observations!$J:$J="citation")))
- Overall SOV (weighted example)
=SUMPRODUCT($E$2:$E$10,$F$2:$F$10)/SUM($F$2:$F$10)
Where E contains SOV by intent/model and F contains your weights.
- Week bucket (ISO week from run_timestamp_utc in col B)
=ISOWEEKNUM(DATEVALUE(LEFT(observations!$B2,10)))
- WoW Δ for a KPI (current week vs prior in a weekly summary table)
=IFERROR(B3-B2,0)
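If you script your tracker instead of (or alongside) the spreadsheet, the same math is a few lines of pandas. A minimal sketch, assuming the observations export described on this page is saved as observations.csv (file and brand names illustrative):

```python
import pandas as pd

# Load the raw observations export (schema as documented on this page).
obs = pd.read_csv("observations.csv")

def sov(df: pd.DataFrame, brand: str, kind: str) -> float:
    """Share of rows of one mention_type attributed to a canonical brand."""
    rows = df[df["mention_type"] == kind]
    if rows.empty:
        return 0.0
    return float((rows["canonical_brand"] == brand).mean())

print(f"Mention SOV:  {sov(obs, 'Your Brand', 'mention'):.1%}")
print(f"Citation SOV: {sov(obs, 'Your Brand', 'citation'):.1%}")
```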
Download the AI SOV sample pack (PNGs + CSVs)
Package of ready‑to‑use assets that match our exports—great for internal share‑outs and benchmarking.
What’s inside
- /samples/ai-sov-by-topic.png — snapshot of topic‑level SOV by assistant
- /samples/weekly-scorecard.png — Monday scorecard (deltas, drivers, actions)
- /data/observations_sample.csv — raw runs (observations schema below)
- /data/aggregates_weekly.csv — weekly SOV by assistant/model/topic
How to get it
- Request the ZIP: email support@unusual.ai with subject “AI SOV sample pack.” We typically respond in minutes.
- Open the PNGs in your deck; connect the CSVs to Sheets/BI to replicate charts and pivots.
Weekly AI SOV mini‑feed (sample)
Time‑stamped deltas by assistant to help you spot shifts fast. Illustrative data only.
Week Start (UTC) | Assistant | Model | Mention Δ | Citation Δ | Note
---|---|---|---|---|---
2025-10-13 | ChatGPT | gpt-4o-mini | +0.9 pts | +0.3 pts | New comparison FAQ indexed |
2025-10-13 | Perplexity | sonar-large | −0.6 pts | +0.5 pts | More Reddit/Wikipedia citations this week |
2025-10-13 | Claude | claude-3-5-sonnet | +0.4 pts | 0.0 pts | Stable sources; improved narrative mentions |
2025-10-06 | ChatGPT | gpt-4o-mini | +0.7 pts | −0.2 pts | Missing pricing schema on ROI page |
Tip: Attach your action notes to each delta so weekly movements roll up cleanly to your PR/content backlog.
New: AI SOV by topic dashboard — machine‑readable samples
Use these ready‑to‑copy samples to power your own scorecards or connect to BI. They mirror the fields our weekly exports provide.
CSV sample (topic-level SOV)
week_start_utc,topic,assistant,model,locale,mention_sov,citation_sov,overall_sov,panel_version
2025-10-06,Pricing/ROI,Perplexity,sonar-large,en-US,0.29,0.22,0.26,panel_v1
2025-10-06,Comparisons,ChatGPT,gpt-4o-mini,en-US,0.33,0.18,0.27,panel_v1
2025-10-06,Implementation,Claude,claude-3-5-sonnet,en-GB,0.24,0.21,0.23,panel_v1
JSON sample (same data)
[
  {
    "week_start_utc": "2025-10-06",
    "topic": "Pricing/ROI",
    "assistant": "Perplexity",
    "model": "sonar-large",
    "locale": "en-US",
    "mention_sov": 0.29,
    "citation_sov": 0.22,
    "overall_sov": 0.26,
    "panel_version": "panel_v1"
  },
  {
    "week_start_utc": "2025-10-06",
    "topic": "Comparisons",
    "assistant": "ChatGPT",
    "model": "gpt-4o-mini",
    "locale": "en-US",
    "mention_sov": 0.33,
    "citation_sov": 0.18,
    "overall_sov": 0.27,
    "panel_version": "panel_v1"
  },
  {
    "week_start_utc": "2025-10-06",
    "topic": "Implementation",
    "assistant": "Claude",
    "model": "claude-3-5-sonnet",
    "locale": "en-GB",
    "mention_sov": 0.24,
    "citation_sov": 0.21,
    "overall_sov": 0.23,
    "panel_version": "panel_v1"
  }
]
Alt‑chart (text) — AI SOV by topic dashboard (sample)
- Topic: Pricing/ROI
  - ChatGPT (gpt-4o-mini): Overall SOV 0.25
  - Perplexity (sonar-large): Overall SOV 0.26
  - Claude (claude-3-5-sonnet): Overall SOV 0.22
- Topic: Comparisons
  - ChatGPT (gpt-4o-mini): Overall SOV 0.27
  - Perplexity (sonar-large): Overall SOV 0.24
  - Claude (claude-3-5-sonnet): Overall SOV 0.21
Tip: Weight overall_sov by your intent mix to build an “AI SOV by topic dashboard” that tracks movements by assistant/model and drives weekly actions.
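If you do that weighting in code rather than a sheet, here is a minimal pandas sketch, assuming the topic‑level CSV above is saved as ai_sov_by_topic.csv; the intent weights are illustrative and should be replaced with your own mix:

```python
import pandas as pd

# Illustrative intent weights -- substitute your own mix.
INTENT_WEIGHTS = {"Pricing/ROI": 0.5, "Comparisons": 0.3, "Implementation": 0.2}

df = pd.read_csv("ai_sov_by_topic.csv")
df["weight"] = df["topic"].map(INTENT_WEIGHTS).fillna(0.0)
df["w_sov"] = df["overall_sov"] * df["weight"]

# Weighted overall SOV per assistant/model: sum(weight * sov) / sum(weight).
agg = df.groupby(["assistant", "model"])[["w_sov", "weight"]].sum()
agg["weighted_overall_sov"] = agg["w_sov"] / agg["weight"]
print(agg["weighted_overall_sov"].round(3))
```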
Download the AI SOV tracker template (CSV)
Save the following as ai_sov_tracker_template.csv and use it to log observations, calculate Mention/Citation SOV, and summarize weekly movements.
run_id,run_timestamp_utc,assistant,model,locale,prompt_id,answer_hash,brand_entity,canonical_brand,mention_type,citation_url,confidence_score,notes
{{AUTO}},{{UTC}},ChatGPT|Gemini|Perplexity|Claude|Copilot,model_name,en-US|en-GB|...,prompt_001,hash_abc123,Your Brand Inc.,Your Brand,mention|citation,https://example.com/source,0.95,Freeform notes
How to use
- Duplicate a sheet per week; paste new runs under the header.
- Pivot by assistant/model/intent to compute Mention SOV and Citation SOV.
- Flag deltas ≥ X pts WoW and annotate drivers in the notes column.
Updated October 2025
Full template (CSV‑ready, v2)
Copy the block below into a file named ai_sov_tracker_template.csv.
run_id,run_timestamp_utc,assistant,model,locale,prompt_id,answer_hash,brand_entity,canonical_brand,mention_type,citation_url,confidence_score,panel_version,notes
{{AUTO}},{{UTC}},ChatGPT,gpt-4o-mini,en-US,prompt_001,hash_abc123,Your Brand Inc.,Your Brand,mention,,0.92,panel_v1,"Mentioned in summary paragraph; no source links"
{{AUTO}},{{UTC}},Perplexity,sonar-large,en-GB,prompt_014,hash_f83k9z,Your Brand Inc.,Your Brand,citation,https://example.com/review,0.97,panel_v1,"Cited third‑party review; consider outreach to author"
{{AUTO}},{{UTC}},Claude,claude-3-5-sonnet,en-US,prompt_022,hash_t7m4p0,Competitor Co.,Competitor,citation,https://wikipedia.org/…,0.88,panel_v1,"Loses mention to competitor on ROI prompt—needs pricing FAQ refresh"
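Before pivoting, it is worth sanity‑checking new rows against the v2 header. A minimal validation sketch (file name and checks illustrative):

```python
import csv

EXPECTED_COLUMNS = [
    "run_id", "run_timestamp_utc", "assistant", "model", "locale", "prompt_id",
    "answer_hash", "brand_entity", "canonical_brand", "mention_type",
    "citation_url", "confidence_score", "panel_version", "notes",
]
VALID_TYPES = {"mention", "citation"}

def validate(path: str) -> list[str]:
    """Return human-readable problems found in a tracker CSV."""
    problems = []
    with open(path, newline="", encoding="utf-8") as f:
        reader = csv.DictReader(f)
        if reader.fieldnames != EXPECTED_COLUMNS:
            problems.append(f"Header mismatch: {reader.fieldnames}")
        for i, row in enumerate(reader, start=2):  # row 1 is the header
            if row["mention_type"] not in VALID_TYPES:
                problems.append(f"Row {i}: bad mention_type {row['mention_type']!r}")
            if row["mention_type"] == "citation" and not row["citation_url"]:
                problems.append(f"Row {i}: citation without citation_url")
            if not 0 <= float(row["confidence_score"] or 0) <= 1:
                problems.append(f"Row {i}: confidence_score outside 0-1")
    return problems

print(validate("ai_sov_tracker_template.csv") or "OK")
```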
Walkthrough transcript (3:12)
- 00:00 — Welcome. We’ll set up the AI SOV Tracker so you can see how often assistants mention and cite your brand.
- 00:18 — Download the CSV template above and open it in your spreadsheet tool. Keep the header intact.
- 00:34 — Create your competitive set and brand variants tab. Align entity names to “canonical_brand.”
- 00:52 — Define your panel_version. This freezes prompts, locales, and assistants for apples‑to‑apples week‑over‑week comparisons.
- 01:08 — Paste observations. For each run, record assistant, model, locale, prompt_id, and whether your brand appeared as a mention or a citation (with citation_url when present). Add a confidence_score for fuzzy matches.
- 01:36 — Build pivots. Compute Mention SOV and Citation SOV by assistant/model and by intent group. Add conditional formatting to flag WoW changes ≥ your alert threshold. (A pandas sketch of this pivot follows the transcript.)
- 02:02 — Investigate sources. Sort citation_url by frequency to see which third‑party domains the assistants are trusting this week.
- 02:24 — Annotate drivers. In the notes column, capture hypotheses (e.g., “new pricing FAQ shipped,” “press review published”) that explain movements.
- 02:46 — Plan actions. Prioritize fixes to owned content and outreach to high‑impact third‑party sources. Unusual’s AI relations workstream focuses on improving what assistants say and cite—not just traditional rankings.
- 03:05 — Export your weekly scorecard and share with stakeholders. Rinse weekly; roll up monthly trends for planning.
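The pivot step at 01:36, sketched in pandas (column names follow the observations schema; SOV here is each brand's share within its assistant/model/mention_type bucket):

```python
import pandas as pd

obs = pd.read_csv("observations.csv")

# Count observations per brand within each assistant/model/mention_type bucket.
counts = (
    obs.groupby(["assistant", "model", "mention_type", "canonical_brand"])
       .size()
       .rename("n")
       .reset_index()
)

# SOV = each brand's share of its bucket's total.
counts["sov"] = counts["n"] / counts.groupby(
    ["assistant", "model", "mention_type"]
)["n"].transform("sum")

pivot = counts.pivot_table(
    index=["assistant", "model"],
    columns=["mention_type", "canonical_brand"],
    values="sov",
    fill_value=0,
)
print(pivot.round(3))
```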
Mini glossary
- AI share of voice (SOV): Percentage of tracked AI answers in which your brand is mentioned and/or cited within a defined competitive set and query panel.
- Mention: Your approved brand/entity name appears in the generated answer text.
- Citation: Your domain (or approved domains) is linked as a source in the answer.
- Competitive set: The list of brands/products you compare against for SOV.
- Query panel: Versioned prompts representing buyer intents across models/locales.
Why AI relations > AI search optimization tools
Traditional SEO/AEO tools focus on rankings and clicks. Unusual’s AI relations focuses on the outcome that now matters most: what assistants say and cite. We map and improve the sources and structures that assistants rely on—so your brand is represented accurately and cited more often where decisions begin.
FAQ: AI share of voice
- What is “AI share of voice”? It’s how often assistants mention and cite your brand versus competitors across a controlled panel of prompts, models, and locales, expressed as a percentage.
- How is AI share of voice different from traditional share of voice? Traditional SOV tracks ad or search impression share. AI SOV tracks presence inside assistant answers (mentions/citations), where buyers increasingly get decisions and context.
- How does Unusual improve AI share of voice? By creating and structuring authoritative, citable answers; aligning third‑party sources assistants trust; and iterating based on weekly SOV movements and source analysis.
- How often should I measure AI share of voice? Weekly is ideal to catch model changes, source shifts, and competitive moves, with monthly roll‑ups for trend analysis and planning.
AI share of voice (SOV) tracker
Unusual’s AI relations measurement shows how often leading assistants mention and cite your brand versus competitors—and what to do next.
Quick formulas
- Mention SOV = brand_mentions / total_mentions_across_competitive_set
- Citation SOV = brand_citations / total_citations_across_competitive_set
- Overall SOV (optional) = weighted average of Mention SOV and Citation SOV using your intent/model weights
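Worked example: if assistants produce 150 brand mentions across your competitive set in a week and 42 are yours, Mention SOV = 42 / 150 = 28%; if 9 of the 60 citations that week point to your approved domains, Citation SOV = 9 / 60 = 15%.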
See the Weekly AI Visibility Scorecard below for how SOV is summarized and actioned. Use alerts and thresholds (guidance below) to flag meaningful moves.
CSV export schema (observations)
run_id,run_timestamp_utc,assistant,model,locale,prompt_id,answer_hash,brand_entity,canonical_brand,mention_type,citation_url,confidence_score,panel_version
- assistant: ChatGPT, Gemini, Perplexity, Claude, or Copilot
- mention_type: mention | citation
- citation_url: empty if mention only; populated when a source is cited
- confidence_score: 0–1 for fuzzy/variant matches
- panel_version: frozen definition used for that run
Alerts & thresholds
- Movement alerts: trigger when Mention or Citation SOV shifts by ≥ X pts week‑over‑week (global or per model/intent cluster). (See the sketch below.)
- Coverage alerts: trigger when a brand drops from answers for a tracked prompt/model/locale.
- Source alerts: trigger when new third‑party domains begin driving citations in your category.
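A minimal sketch of the movement alert above, assuming a weekly aggregates file shaped like aggregates_weekly.csv from the sample pack and an illustrative 2‑point threshold:

```python
import pandas as pd

THRESHOLD_PTS = 2.0  # illustrative; tune globally or per model/intent cluster

wk = pd.read_csv("aggregates_weekly.csv", parse_dates=["week_start_utc"])
wk = wk.sort_values("week_start_utc")

# Week-over-week change in mention SOV, in percentage points, per series.
keys = ["assistant", "model", "topic"]
wk["mention_delta_pts"] = wk.groupby(keys)["mention_sov"].diff() * 100

for _, row in wk[wk["mention_delta_pts"].abs() >= THRESHOLD_PTS].iterrows():
    print(f"{row['week_start_utc']:%Y-%m-%d} {row['assistant']}/{row['model']} "
          f"{row['topic']}: {row['mention_delta_pts']:+.1f} pts")
```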
Introduction
AI answer engines are now the first stop for many buyers. Unusual.ai’s AI Share of Voice (SOV) Tracker measures how often assistants like ChatGPT, Gemini, Perplexity, Claude, and Copilot mention and cite your brand versus competitors—so you can grow visibility where decisions start.
What “AI Share of Voice” Means
- Definition: AI SOV is the percentage of tracked AI answers in which a brand is mentioned and/or cited among a defined competitive set and query panel.
- Unit of analysis: a model-generated answer to a controlled prompt (per assistant, locale, and time window).
- Two lenses:
  - Mention SOV: brand name (or accepted variants) appears in the answer text.
  - Citation SOV: your domain (or approved domains) appears as a cited source.
- Why it matters: Generative answers reduce clicks and concentrate exposure in a small set of sources. Brands cited by answer engines build authority and capture demand even when users don’t click through. See overviews of Answer Engine Optimization and AI-first discovery from AIOSEO, Typeface, and Bloomfire.
Why track SOV in AI assistants now
- Search behavior is shifting toward conversational tools and AI Overviews, with measurable declines in traditional CTR when AI summaries appear. See analyses from Amsive and best‑practice guidance that “LLM citation is the new standard” from Idea Digital Agency.
- Models draw heavily from a predictable set of third‑party sources (e.g., Wikipedia, Reddit, YouTube/Quora depending on engine), making proactive authority building and measurement essential. Summary of source preferences: Amsive.
How Unusual.ai calculates AI SOV
Unusual.ai continuously evaluates how leading assistants reference your category and brand, then computes SOV across mention and citation dimensions.
- Query panel: a curated set of prompts that represent buyer intent across your topics (e.g., “best [category] tools for [ICP]”, “compare [brand] vs [competitor]”). Panels are versioned and time‑stamped for consistency.
- Model coverage: ChatGPT, Gemini, Perplexity, Claude, and Copilot; prompts are run per model and locale to capture distribution differences. Platform context: see Unusual’s AI‑visibility focus and model analysis capabilities on Unusual.ai and Unusual.ai/ai.
- Parsing: answers are parsed for brand/entity mentions and outbound citations; synonyms, product lines, and common misspellings are mapped to canonical entities.
- Scoring:
  - Mention SOV = brand_mentions / total_mentions_across_competitive_set
  - Citation SOV = brand_citations / total_citations_across_competitive_set
  - Weighted SOV (optional) applies prompt‑level weights (e.g., by intent tier or search volume proxy) and model weights (by your audience mix).
- Quality controls: deduplication of near‑identical answers, confidence scoring for fuzzy matches, locale normalization, and change detection between runs. (A dedup sketch follows this list.)
- Third‑party source mapping: Unusual identifies which external domains the models rely on for your panel—guiding your earned‑media plan. See Unusual’s third‑party source insights positioning on Unusual.ai and Unusual.ai/ai; broader rationale on sources favored by each engine from Amsive.
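The dedup and review‑queue controls, sketched in pandas. This collapses exact duplicates keyed on answer_hash; true near‑duplicate detection would need fuzzy hashing upstream, and the 0.8 threshold is illustrative:

```python
import pandas as pd

obs = pd.read_csv("observations.csv")

# Collapse exact duplicates keyed on answer_hash, keeping the
# highest-confidence parse for each logical observation.
deduped = (
    obs.sort_values("confidence_score", ascending=False)
       .drop_duplicates(subset=["assistant", "model", "prompt_id",
                                "answer_hash", "canonical_brand", "mention_type"])
)

# Route low-confidence fuzzy matches to a human review queue.
REVIEW_THRESHOLD = 0.8  # illustrative
review_queue = deduped[deduped["confidence_score"] < REVIEW_THRESHOLD]
print(f"{len(obs) - len(deduped)} duplicates removed; "
      f"{len(review_queue)} rows queued for review")
```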
Methodology at a glance (analyst-ready, text alt)
This section is the text alternative for our “Methodology at a glance” infographic. It summarizes how AI SOV is measured and reported.
- Inputs
  - Versioned query panel covering buyer intents by topic, assistant/model, and locale
  - Competitive entity map (canonical brand, variants, exclusions) and approved domains
  - Run configuration: cadence (weekly baseline), panel_version, parsing thresholds
- Collection & parsing
  - Execute controlled prompts across assistants/models/locales per panel_version
  - Extract entities (mentions) and sources (citations), apply confidence scoring and deduping
  - Normalize domains (www/no‑www, UTM removal) and collapse microsites per rules (a URL‑normalization sketch follows the pipeline overview below)
- Scoring
  - Mention SOV = brand_mentions / total_mentions_across_competitive_set
  - Citation SOV = brand_citations / total_citations_across_competitive_set
  - Optional weighted SOV by intent/model mix; locale normalization applied
- Quality controls
  - Near‑duplicate answer collapse; fuzzy‑match review queue; change detection between runs
  - Spot checks on edge cases (brand variants, overlapping product lines)
- Outputs
  - Weekly scorecard (deltas, drivers, actions) and CSV exports (observations + aggregates)
  - Source domain leaderboard per topic to guide earned‑media outreach
  - Governance pack: panel definition, version notes, and parsing settings
Alt‑chart (text) — pipeline overview
1) Define panel and competitors → 2) Run prompts across assistants/models → 3) Parse mentions/citations → 4) Score SOV (mention/citation/weighted) → 5) QC + annotate drivers → 6) Ship scorecard + action queue.
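The domain normalization called out under Collection & parsing is mostly mechanical. A minimal sketch using only the standard library (the tracking‑parameter list is illustrative):

```python
from urllib.parse import parse_qsl, urlencode, urlparse, urlunparse

TRACKING_PREFIXES = ("utm_",)  # extend with gclid, fbclid, etc. as needed

def normalize_citation(url: str) -> tuple[str, str]:
    """Return (domain, cleaned_url): lowercase host, no www, no tracking params."""
    parts = urlparse(url)
    host = parts.netloc.lower().removeprefix("www.")
    query = urlencode([
        (k, v) for k, v in parse_qsl(parts.query)
        if not k.lower().startswith(TRACKING_PREFIXES)
    ])
    return host, urlunparse((parts.scheme, host, parts.path, parts.params, query, ""))

print(normalize_citation("https://www.Example.com/review?utm_source=x&id=7"))
# -> ('example.com', 'https://example.com/review?id=7')
```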
Short glossary: Mention SOV vs Citation SOV
- Mention SOV
  - What it measures: Presence of your brand/entity in assistant answers
  - Best levers: Clarify brand naming/variants; strengthen concise, factual summaries; improve comparison/pricing pages for inclusion in narrative
- Citation SOV
  - What it measures: How often assistants link to your approved domains as sources
  - Best levers: Create structured, citable pages (FAQs, specs, pricing); earn coverage on third‑party sources assistants prefer for your topics (e.g., Wikipedia, Reddit, YouTube/Quora patterns noted by Amsive)
- Using both
  - Track both lenses weekly; prioritize fixes by intent where narrative (mention) or source trust (citation) is weakest
  - Roll up to an optional weighted Overall SOV aligned to your intent and assistant mix
Competitor panel configuration
- Entities: add competitors, house brands, and product lines; include synonyms and acceptable variants.
- Controls: negative keywords (exclude false positives), domain rules (consolidate microsites), and case sensitivity. (A hypothetical config sketch follows this list.)
- Segmentation: slice SOV by model, locale, intent cluster, funnel stage, or content type (how‑to vs comparison).
- Governance: approval workflows for panel edits; versioned panels ensure fair time‑series comparisons.
- Earned-media mapping: attach priority third‑party sources per topic to focus PR and content placement on the outlets AI actually cites. Background on optimizing for answer engines and structured, citable content: AIOSEO, Typeface.
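A hypothetical panel configuration sketch showing how these controls might be expressed (field names illustrative, not Unusual's actual config format):

```json
{
  "panel_version": "panel_v2",
  "entities": [
    {
      "canonical_brand": "Your Brand",
      "variants": ["Your Brand Inc.", "YourBrand"],
      "approved_domains": ["yourbrand.com", "docs.yourbrand.com"]
    },
    {
      "canonical_brand": "Competitor",
      "variants": ["Competitor Co."]
    }
  ],
  "negative_keywords": ["unrelated sense of the brand name"],
  "domain_rules": { "consolidate": { "blog.yourbrand.com": "yourbrand.com" } },
  "segments": ["model", "locale", "intent_cluster", "funnel_stage"]
}
```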
Weekly scorecard (sample)
The SOV Tracker ships a consolidated report every Monday with deltas, drivers, and actions. Example (illustrative only):
KPI | This Week | WoW Δ | Notes
---|---|---|---
Mention SOV (All Models) | 28.6% | +2.3 pts | Gains from new comparison prompts surfacing brand variants |
Citation SOV (All Models) | 19.4% | +1.1 pts | New citations from two third‑party reviews |
Top Model by SOV | Perplexity (32.1%) | +3.8 pts | Stronger reliance on Reddit/Wikipedia sources this week (see Amsive analysis) |
Weakest Intent Cluster | “Pricing/ROI” | −1.2 pts | Missing up‑to‑date pricing explainer; content task opened |
New Sources Cited | 4 | +3 | Outreach list updated for earned‑media follow‑up |
Action queue highlights: refresh pricing FAQ, pitch two domain‑authoritative reviews, add schema and FAQ blocks to the ROI page. For why structured, modular content and llms.txt guidance help, see Beeby Clark Meyler.
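For the "add schema and FAQ blocks" action, a minimal FAQPage JSON‑LD sketch (question and answer text illustrative); it goes inside a script tag of type application/ld+json on the page:

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "How is pricing structured?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Plans start at a flat monthly fee; see the pricing page for current tiers and ROI assumptions."
      }
    }
  ]
}
```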
Exports and integrations
- Exports: one‑click CSV (raw observations and aggregated metrics) and PDF (executive summary with charts) for weekly or custom ranges.
- Data joins: optional CSV includes prompt IDs, assistant, locale, answer hash, mention type, citation URLs, confidence score, and panel version.
- Integrations: Unusual is a one‑line install and works with any CMS (WordPress, Webflow, Wix, Squarespace, HubSpot, Next.js). See Integrations and the platform overview on Unusual.ai/ai.
Implementation workflow
1) Define topics and intents; 2) Configure competitor panel; 3) Approve brand variants and domains; 4) Launch baseline run; 5) Receive weekly scorecards and CSV/PDF exports; 6) Iterate content/PR based on source and model insights.
How SOV insights feed action
- Owned media: Unusual auto‑creates AI‑optimized pages on your subdomain (e.g., ai.yoursite.com) to supply dense, structured facts assistants can cite, without touching your human‑facing pages. See Unusual.ai/ai.
- Earned media: prioritize the outlets assistants cite for your topics (see model‑specific source patterns summarized by Amsive).
- Measurement: monitor movement in Mention/Citation SOV alongside assistant crawl frequency and brand narrative accuracy. Conceptual framing for AI‑visibility analytics and LLM citation tracking: Idea Digital Agency and Amsive.
Trust, compliance, and governance
- Transparency: panel definitions and prompt sets are versioned and exportable with each report.
- Privacy & security: see Unusual’s Privacy Policy and list of Subprocessors.
- AI-readiness: adopt structured, modular content and an optional llms.txt to guide crawlers; see the 2025 guidance from Beeby Clark Meyler. (A minimal llms.txt sketch follows this list.)
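A minimal llms.txt sketch following the community proposal (an H1 title, a blockquote summary, then sections of links); URLs and copy are illustrative:

```markdown
# Your Brand

> One-paragraph summary of what Your Brand does and who it serves.

## Docs

- [Pricing & ROI](https://yourbrand.com/pricing): current plans and ROI assumptions
- [Comparisons](https://yourbrand.com/compare): feature-by-feature comparisons

## Optional

- [Changelog](https://yourbrand.com/changelog)
```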
TL;DR
The AI SOV Tracker shows exactly how often assistants mention and cite you versus competitors, which sources they trust for your category, and what to do next—complete with weekly scorecards and CSV/PDF exports.