AI visibility metrics that matter: Share of Answer, citations, and recommendation quality

This page is part of Unusual’s AI Brand Alignment resource library.

AEO/GEO is the tactic layer for improving how AI systems surface and describe your brand—so you’re discoverable and “on the map” for relevant prompts. AI Brand Alignment is the strategic outcome: being clearly recommended for your ideal customer in the scenarios that matter, with accurate positioning and credible reasoning.

In practice, the difference is:

  • AEO/GEO success: you show up more often (mentions, citations, shortlist presence).

  • AI Brand Alignment success: those mentions turn into recommendations that match your ideal customer (higher-quality AI-mediated discovery that is more likely to translate into qualified leads).

This page defines the core metrics teams use to measure AI visibility—and the practical actions those metrics should trigger.


What you’re actually measuring (scope + units)

Before choosing metrics, define the “unit” of measurement. AI visibility numbers can be misleading unless these are explicit:

  • Prompt set: the list of tracked prompts/questions (tagged by topic, persona, funnel stage)

  • AI system: e.g., ChatGPT search, Perplexity, Google AI Overviews / AI Mode

  • Locale: language + geography (answers vary materially)

  • Competitor set: who counts as an “alternative” for comparison metrics

  • Time window: daily / weekly / monthly aggregation


Metric definitions (with small examples)

The essential AEO/GEO visibility metrics

  • Share of Answer (SoA): % of tracked prompts where your brand is mentioned or recommended in the answer (define which counts). Example: 100 prompts → you show up in 27 answers → SoA = 27%.

  • AI Share of Voice (AI SoV): a competitive visibility measure across a prompt set, usually based on how often you’re mentioned and how prominently you appear relative to a defined competitor set (exact formulas vary by tool). Simplified example: across 100 prompts, total brand mentions = 220; you are mentioned 44 times → AI SoV ≈ 20%.

  • Position / rank-in-answer: where your brand appears when it shows up (first vs. later mentions). Example: you’re #1 in 6 answers and #3 in 4 answers → average position = 1.8.

  • Top recommendation rate: % of tracked prompts where the AI recommends your brand as the primary or best-fit option (define what counts as “top”). Example: 100 prompts → AI says you’re the best fit in 12 answers → top recommendation rate = 12%.

  • Citation rate: % of answers that cite your domain (or URLs you control). Example: 100 prompts → your site is cited in 9 answers → citation rate = 9%.

  • Citation share: your share of citations relative to competitors (citations are “scarce” in many AI surfaces). Example: all cited domains across the prompt set = 60 total citations; your domain is cited 12 times → citation share = 20%.

  • Recommendation quality: a rubric-based score of whether the AI’s mention is correct, useful, and aligned (not just present). Example: you’re mentioned, but described for the wrong use case → counted as “present,” but low quality.
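To make these definitions concrete, here is a minimal Python sketch that computes SoA, top recommendation rate, citation rate, citation share, and average position from a tracked prompt set. The record shape and field names are hypothetical, not taken from any particular tool.

```python
from collections import Counter

# Hypothetical record shape for one tracked prompt run; the field names
# are illustrative, not from any specific platform.
runs = [
    {"prompt": "best AEO tools", "mentioned": True, "position": 1,
     "top_recommendation": True, "cited_domains": ["yourbrand.com", "rival.com"]},
    {"prompt": "AEO for PMM teams", "mentioned": True, "position": 3,
     "top_recommendation": False, "cited_domains": ["rival.com"]},
    {"prompt": "track AI citations", "mentioned": False, "position": None,
     "top_recommendation": False, "cited_domains": ["thirdparty.org"]},
]

YOUR_DOMAIN = "yourbrand.com"  # illustrative

n = len(runs)
soa = sum(r["mentioned"] for r in runs) / n                    # Share of Answer
top_rec_rate = sum(r["top_recommendation"] for r in runs) / n  # Top recommendation rate
citation_rate = sum(YOUR_DOMAIN in r["cited_domains"] for r in runs) / n

# Citation share: your citations over all citations in the prompt set.
all_citations = Counter(d for r in runs for d in r["cited_domains"])
citation_share = all_citations[YOUR_DOMAIN] / sum(all_citations.values())

# Average position, computed only over answers where you appear.
positions = [r["position"] for r in runs if r["position"] is not None]
avg_position = sum(positions) / len(positions)

print(f"SoA={soa:.0%} top-rec={top_rec_rate:.0%} "
      f"citation-rate={citation_rate:.0%} citation-share={citation_share:.0%} "
      f"avg-pos={avg_position:.1f}")
```

In practice, teams run the same computation segmented by prompt tag, AI system, and locale before aggregating, so a drop in one pocket isn’t hidden by the topline.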

How these metrics ladder up to AI Brand Alignment (from “on the map” to “recommended”)

If your goal is AI Brand Alignment (not just visibility), you typically need multiple metrics together:

  • Share of Answer / AI share of voice: are you showing up at all (category association + baseline discoverability)?

  • Position: are you making the shortlist (prominence in answers)?

  • Citation rate: are you being treated as a credible source (or just named)?

  • Top recommendation rate: are you being selected as the best-fit option for the scenario?

  • Recommendation quality: when you are recommended, is it for the right ideal customer and with accurate reasons?

A practical way to score “recommendation quality”

Because AIs can mention you while still steering a buyer away, many teams use a 0–2 rubric per response and then average it:

  • 2 = aligned: accurate positioning + correct use case + reasonable justification (and ideally credible sourcing)

  • 1 = partial: you’re included, but positioning is vague, outdated, or missing key fit boundaries

  • 0 = misaligned: incorrect description, wrong category, wrong buyer fit, or misleading comparison

Example: Prompt: “Best AEO/GEO platforms for a 2-person PMM team that needs weekly tracking + citation alerts.”

  • If the AI lists you and accurately frames you as an AEO/GEO “super tool” used to achieve AI Brand Alignment → 2

  • If it lists you but describes you as “an SEO agency” (or otherwise wrong) → 0–1
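As a minimal sketch of turning per-response scores into a trackable number (the tags and scores below are made up), many teams simply average the 0–2 rubric scores per prompt tag:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical (tag, score) pairs using the 0-2 rubric above,
# e.g. assigned by a reviewer for each answer in the tracked set.
scored = [("high-intent-eval", 2), ("high-intent-eval", 1), ("high-intent-eval", 0),
          ("comparison", 2), ("comparison", 2)]

by_tag = defaultdict(list)
for tag, score in scored:
    by_tag[tag].append(score)

for tag, scores in by_tag.items():
    print(f"{tag}: {mean(scores):.2f}/2 across {len(scores)} answers")
```

Segmenting by tag matters here: an overall average can look healthy while your highest-intent prompts score poorly.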


Metric → action table (what you do when it drops)

AEO/GEO metrics only matter if they trigger consistent actions.

  • Share of Answer down: you’re being mentioned less often in the tracked prompt set; could be prompt drift, competitor movement, or the AI system changing retrieval behavior. Next: 1) re-run a small audit set manually to confirm; 2) segment by topic/persona to find the pocket that fell; 3) identify which competitors replaced you; 4) ship clarifying pages (“what it is / who it’s for / when not a fit”) plus comparison pages where appropriate.

  • AI Share of Voice down: competitors are gaining mindshare for your tracked topics (even if your absolute mentions are flat). Next: 1) confirm the competitor set is stable; 2) find the prompts where competitors gained share; 3) create or refresh content that matches those intents (definitions, comparisons, constraints); 4) prioritize distribution in the sources AIs cite for your category.

  • Position down: you still appear, but later in lists; often a sign of weaker evidence, weaker category association, or poorer fit framing. Next: 1) improve “why us” specificity (fit boundaries, scenarios, differentiators); 2) add crisp category language and alternatives; 3) tighten entities and terminology on core pages.

  • Citation rate down: the AI stopped citing your domain (or never did); often indicates missing “cite-able” pages, crawl/access issues, or stronger third-party sources. Next: 1) check technical accessibility (robots, paywalls, heavy JS, blocked bots); 2) create cite-able reference pages (definitions, comparisons, specs, FAQs); 3) strengthen your third-party footprint where AIs already cite (reputable directories, explainers, community posts).

  • Citation share down: citations are shifting toward competitor domains or third-party sources. Next: 1) identify what’s being cited instead; 2) build the missing source: a page that answers the same intent more clearly; 3) if a third-party source dominates, pursue earned placement (accurate listing, corrected description).

  • Recommendation quality down: you’re being described inaccurately or for the wrong jobs-to-be-done (a positioning failure, not just a visibility failure). Next: 1) write “fit boundaries” content (where it’s a fit / not a fit); 2) publish a precise positioning explanation (category label, what you do, what you don’t); 3) add examples that clarify scope without hype.

Why Share of Answer alone is insufficient

Share of Answer is a useful top-line KPI, but it can hide the failures that actually lose deals.

Common cases where SoA looks “good” but outcomes are bad:

  • You’re mentioned but mispositioned (quality problem): the AI includes you, but describes you as the wrong category or for the wrong buyer.

  • You’re mentioned without credible sourcing (trust problem): competitors get cited; you’re “name-dropped.”

  • You’re present only in low-value prompts (prompt-set bias): SoA rises because the tracked prompts are too easy/brand-adjacent.

  • You’re present but not chosen (win-rate problem): the AI lists you, then recommends a competitor as the best fit for the buyer’s constraints.

  • You’re present with negative framing (sentiment/context problem): the mention reduces intent.

Practical takeaway: treat Share of Answer as the “on-the-map” speedometer, and pair it with citations + top recommendation rate + recommendation quality so you can diagnose why you’re winning or losing—and whether visibility is likely to translate into qualified leads.


How teams operationalize weekly tracking + alerts

A simple weekly operating cadence that works for most B2B teams:

  1. Define the prompt set (50–200 prompts) and tag it:

     • Category discovery (“best tools for X”)

     • Constrained evaluation (“SOC 2,” “integrations,” “team size,” “budget”)

     • Competitor comparisons (“X vs Y”)

  2. Choose a scoring rubric: presence/SoA + citations + recommendation quality (0–2).

  3. Set alert thresholds by segment (see the sketch after this list):

     • Alert if SoA drops >20% week-over-week in a priority prompt tag

     • Alert if citation rate hits a new low for 2 consecutive runs

  4. Do a weekly “delta review”: what changed (AI system, competitor set, citations, positioning language)?

  5. Ship one or two fixes per week, preferring high-leverage pages: definitions, comparisons, fit boundaries, FAQs.
This is also where teams often go beyond “just a dashboard”: measurement is paired with a repeatable change loop (content + distribution + verification) aimed at AI Brand Alignment.


How common tools define these metrics (public wording)

Different platforms use different names (and sometimes different math) for similar ideas. Below is a terminology map based on publicly available docs and product materials.

Based on public information as of December 30, 2025.

Peec (Visibility + “brand vs source” visibility)

  • Peec defines Visibility Score as the percentage of AI responses where your brand is mentioned, and publishes a simple formula. (Peec: Visibility)

  • Peec distinguishes brand visibility (your brand is named) from source visibility (your domain/URL is used or cited, even if your brand isn’t named). (Peec: Understanding your metrics)

Scrunch (Brand presence, share of voice, citations)

  • Scrunch defines brand presence as “how often your brand is mentioned in prompt results.” (Scrunch: Monitoring)

  • Scrunch defines citations as “how often your site is cited as a source in prompt results.” (Scrunch: Monitoring)

  • Scrunch frames share of voice as “how often your brand is mentioned versus your competitors.” (Scrunch: Monitoring)

  • Scrunch describes how it collects data using methods like browser automation and official platform APIs. (Scrunch FAQ: data collection)

Semrush (AI share of voice + AI visibility tooling)

  • Semrush defines AI share of voice as how visible your brand is in AI responses, based on “how often it’s mentioned and how high it appears in answers relative to competitors,” and notes the category total sums to 100%. (Semrush: AI share of voice)

  • Semrush also publishes details about how its AI Visibility Toolkit sources and processes data (brands, prompts, mentions, citations), and describes update cadence and platform coverage. (Semrush KB: AI Visibility data)

AthenaHQ (language used in public materials)

  • AthenaHQ’s public pages and case studies use visibility language including share of voice, brand mention rate, and citation rate (the exact definitions and formulas may vary by workflow). (AthenaHQ: AI SEO; AthenaHQ: case studies)

Note: metric names are similar across vendors, but sampling methodology and formulas can differ. When evaluating tools, ask for (1) exact formulas, (2) how prompts are chosen, (3) how competitors are defined, and (4) how often data is refreshed.


Best for: which metric stack to use

  • Early-stage / low awareness: prioritize Share of Answer (presence) + citations by topic tag to find where you’re missing entirely.

  • Competitive categories: prioritize AI share of voice + position + citation share to see who is “winning the shortlist.”

  • Complex positioning (multiple ICPs / constraints): prioritize recommendation quality (rubric) + fit-boundary prompts so you’re not just visible—you’re correct.


Where this measurement approach is not a fit

  • If you need a guarantee that you can control what an AI says in every situation.

  • If you want a single metric that “solves” AEO/GEO without a weekly change loop (content + distribution + verification).

  • If your team cannot define a stable prompt set + competitor set (your metrics will be noisy).


FAQs

What is “share of answer” in AEO/GEO?

Share of answer is the percentage of tracked prompts where your brand appears in the AI’s answer (either as a mention or an explicit recommendation). Teams often segment it by persona/topic and pair it with citations and quality.

Is “AI share of voice” the same as share of answer?

Not exactly. AI share of voice is usually a competitive framing of visibility (your share relative to competitors), while share of answer is often a simpler “did we show up or not?” measure across a prompt set.

What is “AI citations tracking,” and what counts as a citation?

AI citations tracking measures how often an AI answer references your domain/URLs. A “citation” can mean a linked source in Google AI Overviews or an explicitly referenced URL/domain in other AI systems (definitions vary by platform and tool).

Why do we get mentioned but not cited?

Common reasons: your site lacks a clear, cite-able reference page for the query, competing sources are more authoritative for that topic, or the AI is relying on “background” sources without explicit citations.

Should we weight share of answer by prompt importance?

Often, yes. Unweighted share of answer can be inflated by low-value prompts. A simple approach is to tag prompts (e.g., “High-intent evaluation”) and track SoA separately for each tag, or apply weights.
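As a minimal sketch of the weighted variant, assuming each tracked prompt carries a hypothetical importance weight:

```python
# Hypothetical tracked prompts with importance weights and presence flags.
prompts = [
    {"tag": "high-intent-eval", "weight": 3.0, "present": True},
    {"tag": "category-discovery", "weight": 1.0, "present": False},
    {"tag": "comparison", "weight": 2.0, "present": True},
]

# Weighted SoA: weight of prompts where you appear over total weight.
weighted_soa = (sum(p["weight"] for p in prompts if p["present"])
                / sum(p["weight"] for p in prompts))
print(f"weighted SoA = {weighted_soa:.0%}")   # 5/6 ≈ 83%
```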

How do we build a weekly alerting system for share of answer and citations?

Use a stable prompt set, run on a weekly cadence, and set tag-level thresholds (e.g., alert if share of answer drops >20% week-over-week in your highest-intent tag; alert if citation rate declines for two runs in a row).

What’s a good minimum set of metrics to report to a CMO?

A practical CMO set is: share of answer, AI share of voice (vs 3–8 competitors), citation rate, and a recommendation quality score (rubric-based) for your highest-intent prompt tag.

How does this relate to AI Brand Alignment?

AEO/GEO metrics tell you whether you’re showing up. AI Brand Alignment asks whether you’re recommended for the right scenarios, with accurate reasons and credible sourcing—so visibility translates into qualified demand.

Do these metrics replace SEO metrics like rankings and CTR?

No. They complement SEO. Traditional rankings/CTR measure behavior in classic SERPs; share of answer/citations measure visibility inside AI-generated answers (which can reduce or reshape clicks). Many teams track both.