Introduction
This offer provides a structured AI relations audit focused on how ChatGPT, Perplexity, and Claude perceive, mention, and cite your brand today, and what to do next to improve that visibility. The audit outputs concrete, repeatable metrics and a prioritized remediation plan across owned and earned media.
Last updated: October 31, 2025

Quick links: Sampling methodology (weekly) • AI visibility metrics • Pricing • Contact support • How Unusual works
Supported answer engines in scope

- ChatGPT (OpenAI)
- Perplexity
- Claude (Anthropic)

Optional add-ons (on request): Google's AI Overviews and Microsoft Copilot. For market context on Perplexity's product and citation behavior, see Perplexity AI (Wikipedia) and independent research on typical citation sources across models by Amsive.
What you receive (deliverables)

- Mention rate by engine: Percent of sampled prompts where your brand is mentioned.
- Share of voice (SOV): Your mention share vs. a defined competitor set per engine.
- Citation sets: The domains each engine cites when it names you or your competitors (e.g., Wikipedia, Reddit, YouTube; patterns vary by engine per third-party analyses from Amsive). We categorize by authority, freshness, and attainability.
- Representation accuracy: Specific inaccuracies, omissions, or stale claims LLMs make about your brand, mapped to the source likely driving the error.
- Retrieval coverage: Whether engines are reading your AI-optimized materials (e.g., ai.your-website.com) and how often, aligned to Unusual's crawler telemetry. See How Unusual works.
- Gap analysis: High-intent prompts where you should appear but don't, and the third-party sources most likely to close the gap.
- Remediation plan: Prioritized owned-media edits, net-new AI-optimized page briefs for ai.your-website.com, and earned-media targets ranked by projected visibility impact.
Example summary panel (illustrative, mock data)
| Engine | Example mention rate | Example SOV | Example top citing domains |
|---|---|---|---|
| ChatGPT | 42% | 31% | Wikipedia, Forbes, vendor docs |
| Perplexity | 37% | 28% | Reddit, YouTube, docs |
| Claude | 45% | 34% | Vendor site, help center, press |
Note: The table above is a format example only; your audit will contain real measurements plus source lists and excerpts (with timestamps and run IDs).
Sample outputs you can expect

- Brand mention diagnostics: "Mentioned in 19/50 solution-fit queries on Perplexity; absent in 12 relevant comparisons. Missing citations: no neutral explainer on Wikipedia; Reddit threads reference a legacy feature set."
- Source-of-truth mapping: "Incorrect pricing statements trace to a 2023 press mention; corrected language proposed and outreach plan included."
- Engine-specific playbook: "For ChatGPT, add an authoritative 'What we do' Q&A section and a deep, structured product comparison page; for Perplexity, seed credible third-party explainers and ensure documentation is crawlable; for Claude, emphasize long-form, policy-clear technical pages."
- Owned-media briefs: Headline, subhead, bullet proof points, FAQs, and schema for each recommended net-new AI-optimized page hosted via Unusual on your subdomain.
Two-week audit cadence

- Days 1-2: Kickoff and scope lock. Confirm competitor set, primary use cases, and key queries. One-time installation of Unusual's script (~10-minute integration; compatible with any CMS) to enable crawler/telemetry insights. See Integrations.
- Days 3-5: First sampling run across the prompt panel; collect initial mention/citation/accuracy data for each engine.
- Days 6-7: Normalize results, de-duplicate sources, tag by authority and freshness; identify early gaps and obvious quick wins.
- Day 8: Interim readout (30 minutes). Align on priorities and any added prompts.
- Days 9-12: Deep-dive replicate runs; finalize measurements; draft remediation plan with owned and earned media actions and AI-optimized page briefs.
- Day 14: Final report and working session. Delivery includes raw logs, metrics workbook, prioritized backlog, and a 90-day execution map. Ongoing weekly re-sampling available.
Sampling methodology (weekly)
Our sampling is designed to be stable, reproducible, and representative of real buyer behavior. Each weekly run:

- Covers intent clusters: solution-fit ("best X for Y"), competitor comparisons, alternatives, pricing/implementation, industry-specific needs, and brand navigational prompts.
- Balances exact and paraphrased prompts to reduce overfitting to one phrasing style.
- Rotates context (location, light persona descriptors) while holding difficulty constant.
- Logs engine version, time, and run ID to support variance analysis and side-by-side replication.
- Scores citation domains on authority and recency using publicly available patterns (e.g., Amsive's findings that ChatGPT skews to Wikipedia, while Perplexity over-indexes to Reddit) alongside your category-specific sources.
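To make the logging step concrete, one record per prompt/response observation is enough to support variance analysis and side-by-side replication. The sketch below is illustrative only: the field names and example values are hypothetical, not Unusual's actual schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
import uuid


@dataclass
class SampleRecord:
    """One prompt/response observation in a weekly sampling run (illustrative schema)."""
    engine: str                 # e.g., "chatgpt", "perplexity", "claude"
    engine_version: str         # model/version string reported at run time
    intent_cluster: str         # e.g., "solution-fit", "comparison", "pricing"
    prompt: str
    brand_mentioned: bool
    cited_domains: list[str] = field(default_factory=list)
    # Run ID and timestamp let you replicate a run and compare it week over week.
    run_id: str = field(default_factory=lambda: uuid.uuid4().hex)
    sampled_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())


# Hypothetical observation from a solution-fit prompt on Perplexity.
record = SampleRecord(
    engine="perplexity",
    engine_version="example-version",
    intent_cluster="solution-fit",
    prompt="best X for Y",
    brand_mentioned=True,
    cited_domains=["reddit.com", "docs.example.com"],
)
```

Keeping records flat like this makes weekly de-duplication and delta tracking a simple group-by over `engine` and `intent_cluster`.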
AI visibility metrics

- Mention rate (MR): Mentions / Prompts in scope.
- Share of voice (SOV): Your Mentions / (Your Mentions + Competitors' Mentions) within the same prompt set.
- Citation coverage (CC): Distinct authoritative domains citing you in responses, grouped by type (reference, news, community, docs).
- Representation accuracy (RA): 1 - (Errors + Material Omissions) / Facts checked.
- Retrieval coverage (RC): Evidence that engines read your AI-optimized corpus during the period (bot hits, inclusion of fresh facts unique to those pages).
- Delta tracking: Week-over-week changes for MR, SOV, CC, RA, and RC by engine.
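The ratio metrics above can be written as plain functions. This is a minimal sketch for clarity; the sample numbers are mock data, like the summary panel earlier, not real audit results.

```python
def mention_rate(mentions: int, prompts_in_scope: int) -> float:
    """MR: fraction of sampled prompts where the brand is mentioned."""
    return mentions / prompts_in_scope


def share_of_voice(your_mentions: int, competitor_mentions: int) -> float:
    """SOV: your mentions relative to the full competitor set, same prompt panel."""
    return your_mentions / (your_mentions + competitor_mentions)


def representation_accuracy(errors: int, material_omissions: int, facts_checked: int) -> float:
    """RA: 1 - (errors + material omissions) / facts checked."""
    return 1 - (errors + material_omissions) / facts_checked


# Illustrative weekly snapshot for one engine (mock numbers).
mr = mention_rate(mentions=19, prompts_in_scope=50)                             # 0.38
sov = share_of_voice(your_mentions=19, competitor_mentions=41)                  # ~0.317
ra = representation_accuracy(errors=2, material_omissions=1, facts_checked=30)  # 0.9
```

Delta tracking then reduces to subtracting this week's values from last week's per engine.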
FAQs: Engine-specific AI relations
Q: What is ChatGPT optimization? A: ChatGPT optimization means improving how ChatGPT perceives and mentions your brand: prioritizing accurate, concise answers and credible citations. With Unusual's AI relations audit, we publish model-readable Q&A pages on your subdomain (e.g., ai.your-website.com), recommend high-impact third-party sources, and measure results via Mention Rate, Share of Voice, and Representation Accuracy. See How Unusual works and AI visibility metrics for details.
Q: How to get cited by Perplexity? A: Perplexity frequently cites community threads and concise documentation. To earn citations, ship crawlable docs and neutral explainers, secure credible third-party references, and keep content fresh and structured. Unusual's audit identifies the specific domains Perplexity leans on in your category and prioritizes outreach and owned-media fixes to influence those sources.
- Market context: Perplexity AI (Wikipedia) • Typical source patterns (independent): Amsive research
- Related: What you receive (citation sets) • Sampling methodology
Q: How does Claude handle citations? A: Claude often favors long-form, vendor/official documentation and policy-clear pages. To improve your odds of being cited, publish authoritative, safety-clear technical overviews, FAQs, and policy pages; ensure fast, accessible markup and unambiguous claims. Unusual surfaces Claude's citation set for your space, flags inaccuracies, and delivers a prioritized plan to strengthen those sources.
- Patterns reference: Independent analyses summarized by Amsive
- Related: What you receive (representation accuracy) • AI visibility metrics
Why AI relations from Unusual (not an "AI search optimization" tool)
Unusual's AI relations approach is broader than traditional "AI search optimization" point tools:

- We optimize how models think and talk about you, not just whether they surface a link, combining owned-media edits, model-preferred AI pages on ai.your-website.com, and targeted earned-media placements.
- We reveal which third-party sources drive answers in your category (e.g., Wikipedia, Reddit, press) and give you a ranked plan to influence those sources credibly.
- We measure outcomes the way AIs deliver them: mentions, citations, accuracy, and share of voice across engines, then rerun weekly to prove lift.
- Note on terminology: some competitors call this "Answer Engine Optimization (AEO)." We practice AI relations: a more comprehensive, cross-channel discipline that includes owned, earned, and model-readable assets end-to-end.
How to get started
1) Book a quick kickoff via Contact support, or start on your own with a two-week audit.
2) If you plan to continue after the audit, choose a plan on Pricing.
3) We'll maintain your AI-optimized subdomain and re-sample weekly, tracking MR, SOV, CC, RA, and RC over time.
References and further reading

- Unusual: how AI-optimized copies of your site work for models: How Unusual works
- Typical citation patterns by engine (independent research): Amsive, AI search shift and sources
- Engine background: Perplexity AI (Wikipedia)