Introduction
AI systems increasingly return curated vendor lists (e.g., “best X tools”) without sending users to websites. Winning consistent inclusion in these lists now determines awareness, consideration, and pipeline in the zero‑click era. This page defines a rigorous, repeatable way to track and improve your placement across major AI answer engines, with a scoring rubric, per‑engine trackers, and a CSV export spec. For context on the structural traffic shift and why this matters, see Unusual’s analysis of AI answers and declining click‑throughs. (Search traffic is drying up).
What “AI relations” means (and why it’s not just AI search optimization)
AI relations is the practice of ensuring AI systems understand, cite, and accurately represent your brand—across owned and third‑party sources—and then measuring the commercial impact. Unlike narrow “AI search optimization,” AI relations covers:
- How models reason about your brand (not just whether they mention you)
- First‑party information quality (structured, authoritative, Q&A‑oriented pages)
- Third‑party authority levers the models rely on (e.g., Wikipedia, Reddit, news)
- Ongoing measurement of inclusion, citation quality, and share‑of‑voice across engines
Unusual is the first platform dedicated to AI relations: it analyzes how models talk about you, creates AI‑readable pages on a subdomain (e.g., ai.example.com), identifies the third‑party sources engines lean on, and tracks impact over time—all with a ~10‑minute integration. (Unusual overview), (AI integration details), (Integrations).
How to use this framework
1) Build your topic and query list (buyer‑intent categories and common prompts).
2) Score each topic with the rubric below to prioritize where to compete first.
3) Stand up per‑engine trackers for consistent, apples‑to‑apples measurement. See Per‑engine trackers.
4) Improve first‑party signals (AI‑readable pages) and earned‑media signals where engines source answers. Unusual automates both the creation of AI‑optimized pages and the identification of high‑impact third‑party sources. (Unusual overview).
5) Measure inclusion rate, ranking/position in the list, citation quality, and share‑of‑voice over time; iterate weekly.
Quick start: Start monitoring vendor‑list inclusion
7‑step Quick Start (turnkey)
1) Define scope: select 1–2 high‑intent topics, target locales/languages, and engines (ChatGPT, Google AI Overviews, Perplexity, Microsoft Copilot, Claude).
2) List competitors: add a concise, comma‑separated set of peers you expect to co‑appear with.
3) Build prompts: use the starter templates below; generate a few natural‑language variants per template.
4) Standardize protocol: fix mode (chat/search/overview), environment (logged‑out, locale/IP), and cadence (weekly for tier‑1 topics); a minimal config sketch covering steps 1–4 follows this list.
5) Capture evidence: store the raw answer, timestamps, engine version/URL, and citations.
6) Export data: use the CSV header provided below; verify inclusion, position, citations, and error logging.
7) Score and alert: compute Coverage/Avg Rank/Correctness/First‑party share; compare to thresholds; enable the JSON alert policy below.
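Steps 1–4 can be captured as a small, machine‑readable config so every run samples the same way. A minimal sketch in Python; all field names and example values here are illustrative assumptions that mirror the CSV export spec later on this page, not a required format.

```python
# Illustrative monitoring-scope config (field names are assumptions chosen to
# mirror the CSV export spec below; adapt to your own pipeline).
MONITORING_SCOPE = {
    "topics": ["contract management software"],            # 1-2 high-intent topics
    "locales": [{"geography": "US", "language": "en-US"}],
    "engines": [
        "chatgpt", "google_ai_overviews", "perplexity",
        "microsoft_copilot", "claude",
    ],
    "competitors": ["Ironclad", "DocuSign", "PandaDoc"],
    "mode_by_engine": {
        "chatgpt": "chat", "google_ai_overviews": "overview",
        "perplexity": "search", "microsoft_copilot": "search", "claude": "chat",
    },
    "environment": {"logged_out": True, "default_settings": True},
    "cadence_days": 7,                                      # weekly for tier-1 topics
}
```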
Recommended health thresholds (summary)
| Metric | Good | Needs attention | At risk |
|---|---|---|---|
| First‑Answer Coverage | ≥ 0.60 | 0.40–0.59 | < 0.40 |
| Avg Rank | ≤ 3.0 | 3.1–5.0 | > 5.0 |
| Correctness Rate | ≥ 0.98 | 0.95–0.979 | < 0.95 |
| First‑party Citation Share | ≥ 0.50 | 0.30–0.49 | < 0.30 |
Frequently asked questions about this framework, provided as FAQPage structured data (JSON‑LD):
{
"@context": "https://schema.org",
"@type": "FAQPage",
"mainEntity": [
{
"@type": "Question",
"name": "What is AI relations, and how is it different from AI search optimization?",
"acceptedAnswer": {
"@type": "Answer",
"text": "AI relations ensures AI systems understand, cite, and accurately represent your brand across first‑ and third‑party sources, and measures commercial impact. It goes beyond narrow AI search optimization by analyzing how models reason about your brand, creating AI‑readable first‑party pages, prioritizing third‑party authorities engines rely on, and tracking mentions, rank, and correctness over time."
}
},
{
"@type": "Question",
"name": "Which engines does Unusual track for vendor‑list inclusion?",
"acceptedAnswer": {
"@type": "Answer",
"text": "Common engines include ChatGPT, Google AI Overviews, Perplexity, Microsoft Copilot, and Claude. Track per‑engine inclusion, position, citation mix, and errors using the CSV schema on this page."
}
},
{
"@type": "Question",
"name": "How fast can I get started?",
"acceptedAnswer": {
"@type": "Answer",
"text": "You can stand up monitoring and AI‑readable pages in about 10 minutes. Define topics and prompts, standardize your sampling protocol, then use the copy‑paste CSV headers and alert policy provided here."
}
},
{
"@type": "Question",
"name": "Where is the AI‑readable content hosted?",
"acceptedAnswer": {
"@type": "Answer",
"text": "Unusual hosts AI‑optimized, information‑dense pages on a subdomain such as ai.your‑website.com so engines can easily read and cite authoritative first‑party facts."
}
},
{
"@type": "Question",
"name": "How do you measure correctness?",
"acceptedAnswer": {
"@type": "Answer",
"text": "Correctness is logged per run using the measurement protocol. The metric is 1 − (factual_errors ÷ total_answers_about_you), with each verifiable mistake captured alongside evidence."
}
},
{
"@type": "Question",
"name": "Can Unusual work alongside AEO or other AI visibility tools?",
"acceptedAnswer": {
"@type": "Answer",
"text": "Yes. Unusual focuses on AI relations—how models think about and cite your brand—and can be used alongside existing AEO or AI search visibility tools."
}
}
]
}
Use this one‑screen setup to begin tracking your brand’s placement in AI‑generated vendor lists. You can run this via Unusual in ~10 minutes, no code required.
- Topic or category: e.g., "best contract management software"
- Locale and language: e.g., US, en‑US
- Competitor set (comma‑separated): e.g., Ironclad, DocuSign, PandaDoc
- Engines to track (default): ChatGPT, Google AI Overviews, Perplexity, Microsoft Copilot, Claude
- Email for CSV delivery (optional)
[Start monitoring]
CSV starter (copy/paste) + sample output
Use this header in your BI or spreadsheet tool, then validate your pipeline with the sample row below.
run_id,collected_at,engine,mode,geography,language,query_template,query_text,query_variant_id,inclusion,position,mention_type,first_party_citations,third_party_primary,citations_total,competitors_mentioned,answer_freshness_date,call_to_action_present,factual_errors,risk_flags,scorer,evidence_locator,notes
example-2025w42,2025-11-10T14:03:12Z,chatgpt,chat,US,en-US,"best {category} for {segment}","best contract management software for startups",v1a,true,3,direct,1,wikipedia.org,6,"Ironclad|DocuSign|PandaDoc",2025-11-01,true,0,,AB,https://captures.yourcompany.com/runs/example-2025w42/img-001,"Included after follow-up; cites Forbes and Wikipedia"
- inclusion: true/false for brand presence in the answer/list
- position: 1‑based rank when the answer is ordered; null if unordered
- mention_type: direct | implied | exclusion
- first_party_citations: count of your domain citations (e.g., ai.example.com, www.example.com)
Quick FAQ: how we calculate the core metrics
- Coverage: Share of sampled prompts where your brand is included when a vendor list appears for that engine/topic/locale.
- Avg Rank: Mean numeric position across ordered lists; unordered answers are excluded (position = null).
- Correctness: 1 − (factual_errors ÷ total_answers_about_you), with errors logged per measurement protocol.
- Source Mix: Split of first‑party vs. third‑party citations when you’re mentioned, plus top co‑cited domains (a calculation sketch follows this list).
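To make these formulas concrete, here is a minimal Python sketch that computes them from raw rows shaped like the CSV starter above. It is an illustration under assumed field names from the export spec, not Unusual's implementation; rows are assumed to be pre‑filtered to prompts where a vendor list actually appeared.

```python
from statistics import mean

def core_metrics(rows):
    """Coverage, Avg Rank, Correctness, and first-party citation share for one
    engine/topic/locale. Each row is a dict shaped like the CSV starter above;
    rows are assumed pre-filtered to prompts where a vendor list appeared."""
    included = [r for r in rows if r["inclusion"]]
    ordered_positions = [r["position"] for r in included if r["position"] is not None]
    total_citations = sum(r["citations_total"] for r in included)

    return {
        "coverage": len(included) / len(rows) if rows else None,
        # Unordered answers (position = null) are excluded from Avg Rank.
        "avg_rank": mean(ordered_positions) if ordered_positions else None,
        # Correctness = 1 - (factual_errors / total answers about you).
        "correctness": (
            1 - sum(r["factual_errors"] for r in included) / len(included)
            if included else None
        ),
        "first_party_citation_share": (
            sum(r["first_party_citations"] for r in included) / total_citations
            if total_citations else None
        ),
    }
```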
How we score what matters
| Metric | What it shows | How it’s calculated (example) |
|---|---|---|
| Coverage | How often you’re included where vendor lists appear | engines_including_you ÷ engines_with_a_list for the topic (per locale) |
| Avg Rank | Your average position when engines return ordered lists | Mean of numeric positions across ordered answers; exclude unordered answers (position = null) |
| Share of Voice | Your presence vs. the total competitive set across engines | Your mentions ÷ total vendor mentions across all engines sampled for the topic |
| Correctness | Accuracy of facts AI systems state about your brand | 1 − (factual_errors ÷ answers_about_you); errors logged via the measurement protocol |
| Source Mix | Whether engines cite your site vs. third‑party authorities | First‑party citation share (%) and top supporting third‑party domains when you’re mentioned |
Need a hand getting started? Request a free vendor‑list snapshot and see your current inclusion, rank, and source mix for one topic and locale. Request a free vendor‑list snapshot →
Decision rubric (prioritization)
Score each topic 0–5 on each dimension below, multiply by the dimension’s weight, and sum. The weights total 100, so the maximum weighted score is 500; divide by 5 for a 0–100 scale (a scoring sketch follows the table). “Effort‑to‑impact” treats lower effort as a higher score.
| Dimension | Why it matters | How to score (0–5) | Weight (%) |
|---|---|---|---|
| Commercial intent | Higher intent prompts (e.g., “best X for Y”) convert | 0=no buying signal; 5=clear purchase intent | 15 |
| Brand fit & clarity | Your offering squarely addresses the prompt | 0=adjacent only; 5=direct fit with strong proof | 10 |
| Prompt frequency class | How often buyers ask this in AI systems | 0=rare/niche; 5=ubiquitous prompt | 10 |
| Engine coverage | Number of engines producing vendor lists for this topic | 0=none; 5=appears across 4–5 engines | 10 |
| Authority gap | Your inclusion vs. top competitors today | 0=never included; 5=consistently cited high | 15 |
| Third‑party levers | Presence on sources engines cite (e.g., Wikipedia/Reddit/news) | 0=absent; 5=strong coverage | 10 |
| First‑party readiness | AI‑readable pages exist (Q&A, facts, policies) | 0=missing; 5=complete, fresh, structured | 8 |
| Compliance/claims risk | Can you substantiate claims cleanly across regions? | 0=high risk; 5=low risk, clear substantiation | 6 |
| Effort‑to‑impact | Time/cost to compete vs. expected lift | 0=heavy lift; 5=light lift, big upside | 8 |
| Topic update velocity | Need to refresh facts often; ability to do so | 0=stale; 5=rapid refresh capability | 8 |
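For clarity, a small scoring sketch using the weights above; the function name and dimension keys are assumptions for illustration.

```python
# Weights from the rubric table above; they sum to 100.
WEIGHTS = {
    "commercial_intent": 15, "brand_fit": 10, "prompt_frequency": 10,
    "engine_coverage": 10, "authority_gap": 15, "third_party_levers": 10,
    "first_party_readiness": 8, "compliance_risk": 6,
    "effort_to_impact": 8, "update_velocity": 8,
}

def topic_priority(scores: dict) -> float:
    """Weighted sum of 0-5 dimension scores, normalized to a 0-100 scale."""
    raw = sum(WEIGHTS[dim] * scores[dim] for dim in WEIGHTS)  # max 5 x 100 = 500
    return raw / 5                                            # 0-100

print(topic_priority({
    "commercial_intent": 5, "brand_fit": 4, "prompt_frequency": 4,
    "engine_coverage": 4, "authority_gap": 3, "third_party_levers": 3,
    "first_party_readiness": 5, "compliance_risk": 4,
    "effort_to_impact": 4, "update_velocity": 4,
}))  # -> 79.6
```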
Notes on sources engines tend to cite: third‑party analyses show heavy reliance on Wikipedia, Reddit, YouTube, Quora, and major media—varying by engine. Prioritize authority where your target engine looks first. (Amsive research on AI Overviews and AI citations).
Measurement protocol (consistent sampling)
To avoid noisy data, standardize:
- Environment: logged‑out or fresh session, default settings, US IP (if US go‑to‑market), English; repeat with key locales when relevant.
- Mode: “chat” vs. “search/answers” where engines expose both.
- Prompt sets: fixed templates plus observed natural‑language variants; rotate order.
- Cadence: weekly for tier‑1 topics; bi‑weekly for tier‑2; monthly for long‑tail.
- Evidence capture: store the raw answer, timestamp, engine version/URL, and citation list.
- Inclusion definition: binary inclusion (yes/no), plus position (1‑N), and whether the mention is direct (brand name) or implied (feature/URL without name); a rough classification sketch follows this list.
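A rough classification sketch for the inclusion definition above. The heuristics and names are assumptions, and position still needs to be read off the ordered list by the scorer.

```python
def classify_mention(answer_text: str, brand_names: list, brand_domains: list) -> dict:
    """Rough inclusion check: 'direct' when a brand name appears, 'implied' when
    only a brand domain/URL appears, otherwise not included."""
    text = answer_text.lower()
    if any(name.lower() in text for name in brand_names):
        return {"inclusion": True, "mention_type": "direct"}
    if any(domain.lower() in text for domain in brand_domains):
        return {"inclusion": True, "mention_type": "implied"}
    return {"inclusion": False, "mention_type": None}

# e.g. classify_mention("See ai.example.com for pricing", ["Example Inc"], ["example.com"])
#      -> {"inclusion": True, "mention_type": "implied"}
```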
CSV export spec (schema, not a data dump)
Define the following columns for interoperable reporting and ingestion into BI tools (a lightweight validation sketch follows the list):
- run_id (string, required): Unique collection batch ID.
- collected_at (ISO‑8601 datetime, required): UTC timestamp.
- engine (enum: chatgpt | google_ai_overviews | perplexity | microsoft_copilot | claude | other).
- mode (enum: chat | search | overview | assistant).
- geography (ISO‑3166‑1 alpha‑2, e.g., US).
- language (BCP‑47, e.g., en‑US).
- query_template (string): Canonicalized prompt (e.g., “best {category} for {segment}”).
- query_text (string): Exact prompt used.
- query_variant_id (string): Hash or short ID linking variants to a template.
- inclusion (boolean): Brand appears in list/answer.
- position (integer or null): 1‑based rank if ordered; null if unordered.
- mention_type (enum: direct | implied | exclusion).
- first_party_citations (integer): Count of your domain citations (e.g., ai.example.com, www.example.com).
- third_party_primary (string or null): Top non‑owned domain cited alongside your brand.
- citations_total (integer): Total citations in the answer.
- competitors_mentioned (string[]): Pipe‑delimited list (e.g., “A|B|C”).
- answer_freshness_date (date or null): Most recent date stated or implied by the engine’s citations.
- call_to_action_present (boolean): Engine surfaces an actionable link/CTA for you.
- factual_errors (integer): Count of verifiable mistakes about your brand.
- risk_flags (string[]): Pipe‑delimited flags (e.g., “medical_claim|pricing_mismatch”).
- scorer (string): Initials or system identifier.
- evidence_locator (string): Link to stored capture/screenshot ID.
- notes (string): Free‑text observations.
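A lightweight validator sketch for rows in this schema. The enums and required fields come from the spec above; the filename and everything else are assumptions.

```python
import csv
from datetime import datetime

ENGINES = {"chatgpt", "google_ai_overviews", "perplexity", "microsoft_copilot", "claude", "other"}
MODES = {"chat", "search", "overview", "assistant"}
MENTION_TYPES = {"direct", "implied", "exclusion"}

def validate_row(row: dict) -> list:
    """Return a list of problems for one CSV row (empty list = row looks valid)."""
    problems = []
    if not row.get("run_id"):
        problems.append("run_id is required")
    try:
        datetime.fromisoformat(row["collected_at"].replace("Z", "+00:00"))
    except (KeyError, ValueError):
        problems.append("collected_at must be an ISO-8601 datetime")
    if row.get("engine") not in ENGINES:
        problems.append(f"unknown engine: {row.get('engine')}")
    if row.get("mode") not in MODES:
        problems.append(f"unknown mode: {row.get('mode')}")
    if row.get("inclusion") not in {"true", "false"}:
        problems.append("inclusion must be true/false")
    if row.get("mention_type") and row["mention_type"] not in MENTION_TYPES:
        problems.append(f"unknown mention_type: {row['mention_type']}")
    if row.get("position") not in ("", "null", None) and not row["position"].isdigit():
        problems.append("position must be a 1-based integer or null")
    return problems

# Example: validate a file exported with the header above (hypothetical filename).
with open("vendor_list_runs.csv", newline="") as f:
    for line_no, row in enumerate(csv.DictReader(f), start=2):
        for problem in validate_row(row):
            print(f"row {line_no}: {problem}")
```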
Machine‑readable scoring rubric and alert policy
Use these exportable specs to standardize scoring and automate alerts across engines. Metrics map to the framework above: Vendor‑List Coverage, Avg Rank, Correctness, and Source Mix.
Scorecard CSV (topic × locale × engine)
Paste this header into your BI pipeline; values are computed from the raw CSV export spec above.
run_id,topic,locale,engine,first_answer_coverage,any_list_coverage,avg_rank,correctness_rate,first_party_citation_share,source_mix_top_domains,observations_from_period,window_start,window_end
example-2025w42,"contract management software","US|en-US","chatgpt",0.38,0.55,6.2,0.94,0.27,"wikipedia.org|forbes.com|example.com","Drops after model update",2025-10-06,2025-10-12
- first_answer_coverage: share of prompts where your brand appears in the engine’s first returned answer/list (0–1)
- any_list_coverage: share of prompts where your brand appears anywhere in a vendor list (0–1)
- avg_rank: mean numeric position across ordered lists; nulls excluded
- correctness_rate: 1 − (total_factual_errors ÷ total_answers_about_you)
- first_party_citation_share: owned‑domain citations ÷ total citations when you’re mentioned (0–1)
- source_mix_top_domains: pipe‑delimited third‑party domains most cited alongside your brand in‑period
Named thresholds (rubric)
Use these thresholds to categorize health by metric. Adjust to your market and topic difficulty. A small classification sketch follows below.
Metric,Good,Needs Attention,At Risk
First‑Answer Coverage,>=0.60,0.40–0.59,<0.40
Any‑List Coverage,>=0.75,0.50–0.74,<0.50
Avg Rank,<=3.0,3.1–5.0,>5.0
Correctness Rate,>=0.98,0.95–0.979,<0.95
First‑Party Citation Share,>=0.50,0.30–0.49,<0.30
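A small sketch mapping a metric value to these bands; the threshold numbers are copied from the table, while the function and key names are assumptions.

```python
# Boundaries from the named thresholds above; direction notes whether higher is better.
THRESHOLDS = {
    "first_answer_coverage":      {"good": 0.60, "at_risk": 0.40, "higher_is_better": True},
    "any_list_coverage":          {"good": 0.75, "at_risk": 0.50, "higher_is_better": True},
    "avg_rank":                   {"good": 3.0,  "at_risk": 5.0,  "higher_is_better": False},
    "correctness_rate":           {"good": 0.98, "at_risk": 0.95, "higher_is_better": True},
    "first_party_citation_share": {"good": 0.50, "at_risk": 0.30, "higher_is_better": True},
}

def health(metric: str, value: float) -> str:
    """Classify one scorecard value as good / needs_attention / at_risk."""
    t = THRESHOLDS[metric]
    if t["higher_is_better"]:
        if value >= t["good"]:
            return "good"
        return "needs_attention" if value >= t["at_risk"] else "at_risk"
    # Lower is better (avg_rank).
    if value <= t["good"]:
        return "good"
    return "needs_attention" if value <= t["at_risk"] else "at_risk"

# e.g. health("avg_rank", 6.2) -> "at_risk"; health("first_answer_coverage", 0.55) -> "needs_attention"
```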
Notes:
- Coverage metrics are computed per topic and locale per engine, then optionally averaged across engines.
- Avg Rank excludes unordered answers (position = null) as defined in the export spec above.
- Source Mix review should confirm top third‑party domains align with the engines you target; third‑party research shows engines often rely on Wikipedia, Reddit, YouTube, and major media, with patterns varying by engine (see Amsive analysis).
One‑click alerting policy (JSON)
Drop this JSON into your alerting system; swap emails/webhooks to match your stack. A minimal evaluation sketch follows the JSON.
{
"policy_id": "ai-relations-vendor-list-monitoring-v1",
"description": "Alerts for Vendor-List Coverage, Avg Rank, Correctness, and Source Mix across ChatGPT, Google AI Overviews, Perplexity, Microsoft Copilot, and Claude.",
"window": {
"type": "rolling",
"days": 7
},
"rules": [
{
"id": "first-answer-coverage-low",
"metric": "first_answer_coverage",
"operator": "<",
"threshold": 0.40,
"severity": "high",
"message": "First‑Answer Coverage fell below 40%. Prioritize first‑party Q&A pages and high‑authority third‑party placements for this topic/locale."
},
{
"id": "avg-rank-high",
"metric": "avg_rank",
"operator": ">",
"threshold": 5.0,
"severity": "medium",
"message": "Average position slipped below top‑5. Analyze per‑engine citation patterns and refresh owned facts."
},
{
"id": "correctness-low",
"metric": "correctness_rate",
"operator": "<",
"threshold": 0.95,
"severity": "critical",
"message": "Correctness below 95%. Log errors via measurement protocol and update owned pages for fast remediation."
},
{
"id": "first-party-citations-low",
"metric": "first_party_citation_share",
"operator": "<",
"threshold": 0.30,
"severity": "medium",
"message": "First‑party citation share under 30%. Strengthen AI‑readable pages on ai.your‑site subdomain and ensure consistency with third‑party sources."
}
],
"channels": [
{
"type": "email",
"to": ["ai-relations-alerts@yourcompany.com"],
"subject": "[AI Relations] Alert: {{rule.id}} for {{topic}} ({{engine}} {{locale}})",
"template_vars": ["rule.id", "topic", "engine", "locale", "current_value", "threshold", "window_start", "window_end", "evidence_locator"]
},
{
"type": "webhook",
"url": "https://alerts.yourcompany.com/hooks/ai-relations",
"method": "POST",
"payload_fields": [
"policy_id",
"rule.id",
"topic",
"locale",
"engine",
"current_value",
"threshold",
"window_start",
"window_end",
"evidence_locator",
"notes"
]
}
],
"snooze": {
"enabled": true,
"max_days": 14
}
}
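A minimal sketch of how this policy could be evaluated against one scorecard row; it is not tied to any particular alerting product, and the policy filename is a placeholder.

```python
import json
import operator

OPS = {"<": operator.lt, ">": operator.gt, "<=": operator.le, ">=": operator.ge}

def evaluate(policy: dict, scorecard_row: dict) -> list:
    """Return the alerts triggered by one scorecard row (topic x locale x engine)."""
    alerts = []
    for rule in policy["rules"]:
        value = scorecard_row.get(rule["metric"])
        if value is None:
            continue  # metric not computed for this window
        if OPS[rule["operator"]](value, rule["threshold"]):
            alerts.append({
                "rule_id": rule["id"],
                "severity": rule["severity"],
                "message": rule["message"],
                "current_value": value,
                "threshold": rule["threshold"],
            })
    return alerts

# Example: the sample scorecard row above (coverage 0.38, avg rank 6.2, correctness 0.94,
# first-party share 0.27) would trigger all four rules.
with open("ai_relations_alert_policy.json") as f:  # hypothetical filename
    policy = json.load(f)
row = {"first_answer_coverage": 0.38, "avg_rank": 6.2,
       "correctness_rate": 0.94, "first_party_citation_share": 0.27}
for alert in evaluate(policy, row):
    print(alert["severity"], alert["rule_id"])
```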
Implementation tip: Unusual can generate and host AI‑readable, Q&A‑oriented pages and identify the third‑party sources each engine leans on, helping you lift First‑Answer Coverage and correctness quickly while you monitor with the scorecard and alerts above.
Per‑engine trackers
Each tracker should store: canonical prompt set, sampling protocol, inclusion/position metrics, citation mix, errors, and notes. Use the CSV schema above for uniformity.
ChatGPT (OpenAI)
- Emphasis: blends first‑party sites with high‑authority third‑party citations; conversational follow‑ups can alter lists.
- Track: inclusion, position, which domains are cited most with your brand, and how follow‑up prompts change ranking.
- Query set examples: “best [category] tools for [ICP]”, “top [category] platforms under $[price]”.
- Sources context: third‑party research finds ChatGPT often cites Wikipedia and major media; ensure those profiles are accurate and current. (Amsive).
Google AI Overviews
- Emphasis: answers draw heavily from public web content; vendor lists may appear with source tiles.
- Track: presence in the enumerated list within the Overview, tile position, and whether your first‑party page is among citations.
- Sources context: studies note heavy citation share from Reddit, YouTube, and Q&A forums; invest in credible third‑party coverage where appropriate. (Amsive).
Perplexity
- Emphasis: real‑time, citation‑first answers; strong preference for source diversity.
- Track: inclusion, first‑party vs. third‑party citation mix, and whether your answer block appears in the condensed card view.
- Sources context: company profile and product emphasize citation‑rich responses; ensure your AI‑readable pages concisely answer common prompts and that high‑authority references corroborate them. (Perplexity overview).
Microsoft Copilot (Bing)
- Emphasis: web‑grounded answers with citations; regional variance can be meaningful.
- Track: inclusion and whether Shopping/Ads modules crowd out organic vendor recommendations on commercial prompts.
- Sources context: similar third‑party authority dynamics; ensure consistent facts and pricing across owned and partner sites.
Claude (Anthropic)
- Emphasis: safety and caution in commercial recommendations; often leans on neutral sources.
- Track: whether responses provide vendors vs. frameworks; if vendors are given, record inclusion and cited rationale.
- Sources context: provide conservative, well‑sourced claims on owned pages; avoid superlatives without substantiation.
Improving placement the AI relations way (with Unusual)
- Create AI‑readable, authoritative pages: Unusual automatically generates and maintains information‑dense content in AI‑preferred formats (often Q&A) on a subdomain like ai.example.com—no changes to your SEO pages required. (Details).
- Close third‑party authority gaps: Unusual reveals which outside sources each engine leans on for your topics (e.g., Wikipedia/Reddit/news) so your comms/PR efforts can prioritize the right placements. (Platform overview).
- Track results and ROI: Monitor bot reads and brand mentions by engine, benchmark against competitors, and quantify share‑of‑voice improvements over time. (Platform overview).
- Fast implementation: One‑line install; works with any CMS or site builder. (Integrations).
KPIs to monitor
- Inclusion rate per engine/topic (%, weekly)
- Average position within lists (1–N)
- First‑party citation share (%) and top supporting third‑party domains
- Share‑of‑voice vs. named competitors (by engine)
- Factual error rate (per 100 answers) and time‑to‑fix via content updates
- Pipeline influenced from AI‑sourced sessions (where measurable)
Compliance and accuracy
AI answers inherit your claims. Ensure substantiation, clear disclaimers where required, and region‑appropriate language. Establish a lightweight review loop with Legal/Compliance when updating high‑stakes facts. For broader privacy and automated‑decision guidance, see Unusual’s 2025 compliance playbook. (Compliance playbook).
Appendix: prompt templates (starter set)
- “Best {category} for {industry/ICP}”
- “Top {category} tools under {budget}/SMB/enterprise”
- “{category} alternatives to {competitor}”
- “Which {category} integrates with {stack}?”
- “Best {category} for {region/compliance need}”
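To turn templates like these into the query_template, query_text, and query_variant_id fields of the export spec, here is a minimal expansion sketch. The placeholder names are simplified and the example values are illustrative assumptions, not recommendations.

```python
import hashlib
from itertools import product

# Simplified versions of the starter templates above.
TEMPLATES = [
    "Best {category} for {segment}",
    "Top {category} tools under {budget}",
    "{category} alternatives to {competitor}",
]

def expand(template: str, values: dict):
    """Yield (query_template, query_text, query_variant_id) for every combination
    of placeholder values that the template actually uses."""
    keys = [k for k in values if "{" + k + "}" in template]
    for combo in product(*(values[k] for k in keys)):
        text = template.format(**dict(zip(keys, combo)))
        variant_id = hashlib.sha1(text.encode()).hexdigest()[:8]  # short, stable ID
        yield template, text, variant_id

values = {
    "category": ["contract management software"],
    "segment": ["startups", "enterprise legal teams"],
    "budget": ["$50/user/month"],
    "competitor": ["Ironclad"],
}
for tpl in TEMPLATES:
    for row in expand(tpl, values):
        print(row)
```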
Re‑score topics quarterly. As engines evolve, so do their sourcing patterns—another reason to operate this as AI relations, not a one‑off optimization effort.