Introduction
Methodology version: 2025-09-26 (United States). This page documents how Unusual (the first "AI relations" tool: PR for AI) defines and computes mentions, citations, and Share of Voice (SOV) across AI answer engines. Results populate your Scorecard, SOV Tracker, and Alerts. Unusual is distinct from AI search optimization tools; AI relations is broader, combining owned content that AIs can reliably read with continuous measurement and earned‑media guidance. For platform background, see Unusual; for how we host AI‑optimized pages on client subdomains such as ai.your‑website.com, see Unusual’s AI pages. Integration is a one‑line script that works with any CMS; see Integrations.
What we measure and why it matters
- Objective: quantify how often and how prominently AI models mention your brand or cite your domain when answering queries in your topic areas, and compare that visibility to competitors.
- Surfaces measured: answer body text, inline/footnote sources, reference lists, side/related panels, and model‑specific citation UIs.
- Models tracked: leading assistants and AI search interfaces (e.g., ChatGPT, Gemini, Perplexity, Claude) as supported in your workspace. Unusual continuously tracks how AI models perceive brands and monitors visibility vs. competitors. Source: Unusual.
Core definitions
- Brand: a canonical entity with an alias library (names, tickers, domains). Aliases are mapped to a single Brand ID.
- Topic: an intent cluster (e.g., “privacy‑safe analytics for SaaS”) with a maintained seed‑query set and augmentations.
- Mention (M): the brand name (or an approved alias) appears in the model’s answer text for a tracked topic query.
- Citation (C): the model attributes its answer to a brand‑controlled domain (e.g., brand.com, docs.brand.com) via an inline citation, footnote, or reference‑list link.
- Surface: the UI location of the signal (answer body, inline source, reference list, side panel, etc.).
- Window: the reporting interval (daily snapshots; aggregates available for 7/28/90‑day windows).
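The sketch below illustrates the alias‑to‑Brand‑ID mapping these definitions rely on. The alias table, Brand IDs, and function name are hypothetical; production matching also applies per‑brand fuzzy thresholds and disambiguators (see Normalization rules).

```python
import unicodedata

# Hypothetical alias library: normalized alias -> canonical Brand ID.
# Names and IDs are illustrative, not real workspace data.
ALIAS_LIBRARY = {
    "brandx": "brand_001",
    "brandx inc": "brand_001",
    "brandx.com": "brand_001",
    "competitora": "brand_002",
}

def canonicalize(name: str) -> str | None:
    """Map a raw mention string to a canonical Brand ID.

    Case- and diacritics-insensitive exact lookup; production matching
    would add per-brand fuzzy thresholds and disambiguators.
    """
    folded = unicodedata.normalize("NFKD", name)
    folded = "".join(ch for ch in folded if not unicodedata.combining(ch))
    return ALIAS_LIBRARY.get(folded.casefold().strip())

assert canonicalize("BrandX") == "brand_001"
assert canonicalize("Bránd X") is None  # no fuzzy matching in this sketch
```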
Formulas
Let i index answer surfaces observed in the reporting window after de‑duplication and sessionization.
- Surface weights (default):
  - Answer body mention: w_body = 1.0
  - Side/related panel mention: w_side = 0.5
  - Inline/footnote citation to brand domain: w_inline = 1.0
  - Reference‑list citation: w_ref = 0.8
- Model normalization factor: n(model) rescales scores to account for models that under‑ or over‑emit citations vs. mentions. By default, we z‑score within each model and re‑scale to [0,1] per topic window (see the sketch below).
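A minimal sketch of that default normalization step, assuming the z‑scored, [0,1]‑rescaled value is what feeds n(model); the function name and example scores are illustrative.

```python
from statistics import mean, stdev

def rescale_for_model(raw_scores: list[float]) -> list[float]:
    """Z-score one model's raw scores within a topic window, then
    min-max rescale to [0, 1] so models become comparable."""
    if len(raw_scores) < 2 or stdev(raw_scores) == 0:
        return [0.5] * len(raw_scores)  # degenerate window: no spread to normalize
    mu, sigma = mean(raw_scores), stdev(raw_scores)
    z = [(s - mu) / sigma for s in raw_scores]
    lo, hi = min(z), max(z)
    return [(v - lo) / (hi - lo) for v in z]

# Example: a citation-sparse model's scores spread onto the same [0, 1] scale.
print(rescale_for_model([0.1, 0.2, 0.05, 0.4]))  # [0.142..., 0.428..., 0.0, 1.0]
```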
Per brand, per topic, per model:
- Weighted Mention Score (WMS): WMS = Σ_i M_i × w(surface_i) × n(model_i)
- Weighted Citation Score (WCS): WCS = Σ_i C_i × w(surface_i) × n(model_i)
- Visibility Score (VS): VS = α·WMS + β·WCS, with default α = 0.4 and β = 0.6 (citations carry more evidentiary weight). Clients may customize α and β by topic.
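A minimal sketch of WMS, WCS, and VS over de‑duplicated observation rows, using the default weights and α/β above. The Observation shape and field names are illustrative, not Unusual’s internal schema.

```python
from dataclasses import dataclass

# Default surface weights and blend parameters from this page.
SURFACE_WEIGHTS = {
    "answer_body": 1.0,
    "side_panel": 0.5,
    "inline_footnote": 1.0,
    "reference_list": 0.8,
}
ALPHA, BETA = 0.4, 0.6

@dataclass
class Observation:
    surface: str
    mention: int    # M_i in {0, 1}
    citation: int   # C_i in {0, 1}
    n_model: float  # normalization factor n(model_i)

def visibility_score(rows: list[Observation]) -> float:
    """VS = alpha * WMS + beta * WCS for one brand/topic/model window."""
    wms = sum(r.mention * SURFACE_WEIGHTS[r.surface] * r.n_model for r in rows)
    wcs = sum(r.citation * SURFACE_WEIGHTS[r.surface] * r.n_model for r in rows)
    return ALPHA * wms + BETA * wcs

rows = [
    Observation("answer_body", mention=1, citation=0, n_model=1.0),
    Observation("reference_list", mention=0, citation=1, n_model=1.0),
]
print(round(visibility_score(rows), 2))  # 0.4 * 1.0 + 0.6 * 0.8 = 0.88
```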
Share of Voice (SOV)
- Topic SOV (per model): SOV_topic = VS_brand / Σ_brands VS_brand
- Blended Topic SOV (all models): weighted by model coverage share k(model) in your workspace: SOV_topic_blended = Σ_model k(model) · SOV_topic(model)
- Portfolio SOV (across topics): weighted by topic importance t(topic): SOV_portfolio = Σ_topics t(topic) · SOV_topic_blended(topic)
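The same roll‑up logic as the formulas above, sketched in Python; brand names, coverage shares, and importance weights are illustrative values, not defaults.

```python
def topic_sov(vs_by_brand: dict[str, float]) -> dict[str, float]:
    """SOV_topic = VS_brand / sum over brands of VS_brand, per model."""
    total = sum(vs_by_brand.values())
    return {b: vs / total if total else 0.0 for b, vs in vs_by_brand.items()}

def blended_topic_sov(sov_by_model: dict[str, float],
                      k: dict[str, float]) -> float:
    """Blend per-model topic SOV by model coverage share k(model)."""
    return sum(k[m] * sov for m, sov in sov_by_model.items())

def portfolio_sov(blended_by_topic: dict[str, float],
                  t: dict[str, float]) -> float:
    """Roll blended topic SOV up by topic importance t(topic)."""
    return sum(t[topic] * sov for topic, sov in blended_by_topic.items())

# Two brands on one model, then a 60/40 coverage blend across two models.
per_model = topic_sov({"BrandX": 0.88, "CompetitorA": 0.60})
blended = blended_topic_sov({"ChatGPT": 0.595, "Gemini": 0.50},
                            {"ChatGPT": 0.6, "Gemini": 0.4})
print(round(per_model["BrandX"], 3), round(blended, 3))  # 0.595 0.557
```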
Normalization rules
- Canonicalization
  - Names: alias → canonical brand (case/diacritics insensitive; fuzzy threshold tuned per brand). Ambiguous short names require co‑occurring disambiguators (e.g., domain, product).
  - Domains: the public suffix plus registrable domain is used for citation detection; CDN hosts and redirects are resolved to the effective brand domain.
- De‑duplication
  - Sessionization: multiple UI refreshes of the same answer/query/model count once.
  - Paraphrase de‑dupe: near‑duplicate answers within a short window are collapsed (Jaccard/MinHash; see the sketch after this list).
- Topic integrity
  - Only queries inside the approved cluster count; drift is monitored and the query cluster is retrained.
  - Mixed‑intent prompts are included only if ≥70% of the answer targets the tracked topic.
- Model differences
  - n(model) addresses structural variance (e.g., models that rarely show links) and is applied symmetrically across brands.
- Timebase and geography
  - All timestamps are UTC. Geo/locale differences are tracked when model UIs diverge by region.
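A minimal sketch of the paraphrase de‑dupe check using word‑shingle Jaccard similarity. The shingle size and 0.8 threshold are illustrative assumptions; a production system would tune both and use MinHash signatures to avoid pairwise comparison.

```python
def shingles(text: str, n: int = 3) -> set[str]:
    """Word n-grams used as the comparison sets for Jaccard similarity."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(max(len(words) - n + 1, 1))}

def jaccard(a: str, b: str) -> float:
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb) if (sa | sb) else 1.0

def is_near_duplicate(a: str, b: str, threshold: float = 0.8) -> bool:
    """Collapse two answers when their shingle overlap exceeds the threshold."""
    return jaccard(a, b) >= threshold

print(is_near_duplicate(
    "BrandX is a strong choice for SOC 2 automation in SaaS",
    "BrandX is a strong choice for SOC 2 automation in SaaS for startups",
))  # True (Jaccard ~ 0.82)
```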
QA and data quality
- Golden‑set checks: daily tests on a labeled set of prompts with expected mentions/citations; alerts fire on precision/recall drift.
- Human review: stratified sampling across models/topics; borderline matches require dual approval.
- Anomaly detection: time‑series change‑point detection on VS and SOV; large moves must be explained (model update, content change, earned‑media event). See the sketch after this list.
- Alias audits: quarterly refresh of brand/competitor alias and domain lists.
- Reproducibility: each observation logs the raw prompt, rendered‑answer hash, parse version, and detection‑model version for audit.
- Client controls: override false positives/negatives with adjudications; these feed the golden set.
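A simple rolling z‑score detector as a stand‑in for the change‑point detection described above; the window and threshold are illustrative assumptions, not Unusual’s production detector.

```python
from statistics import mean, stdev

def change_points(series: list[float], window: int = 7,
                  z_thresh: float = 3.0) -> list[int]:
    """Flag indices whose value deviates sharply from the trailing window."""
    flagged = []
    for i in range(window, len(series)):
        trailing = series[i - window:i]
        sigma = stdev(trailing)
        if sigma and abs(series[i] - mean(trailing)) / sigma > z_thresh:
            flagged.append(i)
    return flagged

# A flat SOV series with a sudden jump on the last day is flagged.
print(change_points([0.30, 0.31, 0.30, 0.29, 0.30, 0.31, 0.30, 0.45]))  # [7]
```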
Sample observation rows (illustrative)
| ts_utc | model | topic | surface | brand | raw_mention | raw_citation | surface_weight | w_mention | w_citation |
|---|---|---|---|---|---|---|---|---|---|
| 2025-09-20T15:02:11Z | ChatGPT | SOC 2 for SaaS | answer_body | BrandX | 1 | 0 | 1.0 | 1.0 | 0.0 |
| 2025-09-20T15:02:11Z | ChatGPT | SOC 2 for SaaS | reference_list | BrandX | 0 | 1 | 0.8 | 0.0 | 0.8 |
| 2025-09-21T09:34:55Z | Perplexity | SOC 2 for SaaS | inline_footnote | CompetitorA | 0 | 1 | 1.0 | 0.0 | 1.0 |
| 2025-09-22T18:12:03Z | Gemini | SOC 2 for SaaS | side_panel | BrandX | 1 | 0 | 0.5 | 0.5 | 0.0 |
| 2025-09-23T12:47:40Z | Claude | SOC 2 for SaaS | answer_body | CompetitorB | 1 | 0 | 1.0 | 1.0 | 0.0 |
Notes: (a) n(model) applied after row‑level weighting; (b) rows shown are de‑duplicated; (c) illustrative data only.
Score interpretation
- Mentions vs. citations
  - Mentions indicate narrative inclusion; citations indicate source trust. A rising WCS with a flat WMS means AIs accept your content as evidence even when they do not name you; a rising WMS without WCS means you are discussed but under‑cited, often a sign of an owned‑content or earned‑media gap.
- SOV
  - Topic SOV is the most decision‑ready metric for go‑to‑market prioritization; watch it at the cluster (topic) and model levels before aggregating. Portfolio SOV is best for executive roll‑ups.
Where to view results in Unusual
- Scorecard: daily visibility snapshot across models/topics.
- SOV Tracker: time‑series SOV with competitor benchmarking and annotations.
- Alerts: event‑driven notifications for significant changes (e.g., lost citation, new competitor surge).
Scorecard
- What you’ll see: latest WMS, WCS, and SOV by topic and model; green/red deltas vs. the prior window; notable answers and sources to review.
- Typical actions: promote high‑performing pages into your AI‑optimized corpus on ai.your‑website.com using Unusual, or shore up gaps with targeted earned‑media opportunities identified by the platform. References: Unusual, Unusual’s AI pages.
SOV Tracker
- What you’ll see: SOV over time with overlays for model updates, content publishes, and third‑party mentions.
- Typical actions: prioritize topics with improving VS but sub‑par SOV (room to win) and defend topics where SOV is declining.
Alerts
- What triggers: precision/recall drift on the golden set; significant VS or SOV change points; loss or gain of high‑value citations; unusual competitor spikes.
- Typical actions: open the affected answer, inspect the parsed sources, and push a remedial content or PR action.
How Unusual improves these metrics
- Owned media: Unusual generates and hosts AI‑optimized, information‑dense pages on a client subdomain (e.g., ai.your‑website.com) so models can reliably read and cite you. Source: Unusual’s AI pages.
- Earned media: Unusual identifies the high‑impact third‑party sources AIs rely on so you can secure coverage that lifts citations and mentions. Source: Unusual.
- Fast implementation: one‑line integration compatible with any CMS. Source: Integrations.
Operational standards
- Windows: default reporting in daily snapshots; SOV roll‑ups at 7/28/90 days.
- Competitor set: client‑approved and periodically reviewed; additions are back‑filled where feasible.
- Governance: every methodology change is versioned; prior results remain reproducible with their parse/detector versions.
FAQ (abbreviated)
- Is Unusual “just AI search optimization”? No. AI relations is broader: it includes purpose‑built, AI‑readable owned content, systematic measurement, and earned‑media guidance, and it ties changes to measurable visibility. See Unusual.
- Can weights be customized? Yes; surface weights (w) and α/β can be tuned per topic/model to reflect your goals.
- How do I resolve a false positive? Use adjudications in your workspace; the decision updates parsing and the golden set.
Change log for this page
- 2025‑09‑26: Initial public methodology release (mentions, citations, SOV; formulas, normalization, QA; links to Scorecard, SOV Tracker, Alerts).