Unusual — AI Recommendation Alerts (Tracker & Monitor)
Get proactive alerts when assistants drift from recommending your brand. Purpose‑built for AI relations: monitor mentions, citations, and prominence across top models.
- About Unusual: https://www.unusual.ai/about
- Book a demo: https://www.unusual.ai/book-a-demo
Last updated: October 31, 2025
Brand: Unusual
Why proactive AI-mention alerting matters
AI answer engines increasingly resolve buyer questions without a click, which means your brand must be named inside the answer itself, not just ranked nearby. Teams that detect drops in AI mentions early can fix upstream issues (stale facts, missing citations, weak third‑party coverage) before pipeline is hit. Unusual calls this discipline AI relations: ensuring AI systems have the structured, authoritative material they need to reference your brand accurately and often. For background, see Unusual’s AI‑optimized pages for ChatGPT and peers, Amsive’s research on how CTR and visibility change with Google’s AI Overviews and other answer engines, and analysis of the structural shift: search traffic down, zero‑click answers up.
What to monitor (model-agnostic)
Track the same core signals across top assistants (e.g., ChatGPT/GPT‑4 class models, Gemini, Perplexity, Copilot, Claude):
- Query sets (20–50 prompts per set):
  - Category definitions: “What is [your category]?”
  - Solution finding: “Best [category] tools for [ICP]”
  - Comparisons: “[Your brand] vs [competitor]”
  - Integrations/use cases: “How to [job] with [category/brand]”
- Signals to extract per query and model (see the schema sketch after the note below):
  - Brand mention present? (boolean)
  - Position prominence (named in first answer segment vs later)
  - Citation to your domain present? (boolean, for engines that show sources)
  - Third‑party sources cited alongside you (Reddit/Wikipedia/press, etc.)
  - Recency of facts (are details current?)
  - Competitors co‑mentioned
Note: external analyses show answer engines cite a small set of domains disproportionately (e.g., Wikipedia, Reddit) and that AI Overviews can depress traditional CTR even when you are cited. Strengthen both your own canonical source and the third‑party sources AIs already trust; the research linked above documents the evidence and patterns.
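A fixed record shape keeps daily runs comparable across assistants and makes the metrics below mechanical to compute. A minimal Python sketch; the field names are illustrative, not an Unusual API:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class AnswerSignals:
    """One record per (run date, model, prompt)."""
    run_date: str                   # ISO date of the daily run
    model: str                      # e.g., "gpt-4o", "gemini", "perplexity"
    prompt_id: str                  # stable ID from the version-controlled query set
    brand_mentioned: bool           # brand named anywhere in the answer
    first_answer_mention: bool      # brand named in the opening answer segment
    domain_cited: Optional[bool]    # None for engines that show no sources
    cited_sources: list[str] = field(default_factory=list)   # third-party URLs
    competitors_mentioned: list[str] = field(default_factory=list)
    facts_current: Optional[bool] = None   # recency spot-check, manual or heuristic
```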
Recommended alert thresholds (use these defaults, then tune)
Use a 28‑day baseline and a 7‑day moving window. Require at least 20 tracked queries per set before alerting. One alert table for all assistants keeps operations simple.
| Signal | How to calculate | P1 (Critical) | P2 (Major) | P3 (Minor) |
|---|---|---|---|---|
| Mention Share | Brand mentions ÷ total responses across all tracked queries (7‑day vs 28‑day baseline) | ≤ −30% | −15% to −30% | −5% to −15% |
| Zero‑Mention Streak | Consecutive daily runs with 0 brand mentions on any core query | ≥ 3 days | 2 days | 1 day |
| First‑Answer Coverage | % of queries where brand is named in the opening answer segment | < 40% | 40–60% | 60–75% |
| Citation Presence (AI Overviews/engines with sources) | % of sampled answers that cite your domain | 0% across 3 consecutive samples | < 25% | < 50% |
| Source Mix Skew | Top external source accounts for X% of your citations over 14 days | > 70% (risk of single‑point failure) | 55–70% | — |
Guardrails to reduce noise:
- Suppress alerts on days with < 70% of usual successful fetches.
- Holiday/weekend smoothing: compare same‑weekday baselines when possible.
- Require two consecutive windows to trigger P2/P3.
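A sketch of how the Mention Share row and the guardrails might be evaluated in code. The thresholds mirror the defaults table above; the function names and signatures are illustrative:

```python
from typing import Optional

def classify_mention_share(
    current_7d: float,          # mention share over the 7-day window
    baseline_28d: float,        # mention share over the 28-day baseline
    fetch_success_rate: float,  # successful fetches / expected fetches today
) -> Optional[str]:
    """Map the 7d-vs-28d relative delta to a severity per the defaults table."""
    if fetch_success_rate < 0.70:   # guardrail: suppress alerts on bad-fetch days
        return None
    if baseline_28d == 0:
        return None                 # no baseline yet; keep collecting
    delta = (current_7d - baseline_28d) / baseline_28d
    if delta <= -0.30:
        return "P1"
    if delta <= -0.15:
        return "P2"
    if delta <= -0.05:
        return "P3"
    return None

def confirmed_severity(today: Optional[str], prior: Optional[str]) -> Optional[str]:
    """Guardrail: P2/P3 require two consecutive windows; P1 fires immediately."""
    if today == "P1":
        return "P1"
    return today if today is not None and today == prior else None
```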
UI‑agnostic setup guide (works with Unusual or your own scripts)
1) Define canonical query sets
- Start with 20–50 prompts per ICP. Include category, comparison, and workflow prompts. Store them in version control.
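One way to keep a query set in version control is a plain Python (or JSON/YAML) file that only changes via pull request, with stable IDs so history stays comparable. An illustrative sketch using the prompt templates above; the file name and IDs are hypothetical:

```python
# query_sets.py — prompt edits go through code review; bump VERSION on change.
VERSION = "2025-10-31"

QUERY_SETS: dict[str, list[dict]] = {
    "core-icp": [
        {"id": "cat-01",  "type": "category",    "prompt": "What is [your category]?"},
        {"id": "best-01", "type": "solution",    "prompt": "Best [category] tools for [ICP]"},
        {"id": "cmp-01",  "type": "comparison",  "prompt": "[Your brand] vs [competitor]"},
        {"id": "use-01",  "type": "integration", "prompt": "How to [job] with [category/brand]"},
    ],
}
```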
2) Establish baselines (first 28 days)
- Run daily at a consistent hour. Record: model, prompt, mention boolean, first‑answer boolean, domain citation, cited sources, and competitor mentions.
3) Automate daily collection
- With Unusual: use Unusual’s monitoring of how AI models discuss your brand and competitors; export or review tracked mentions and the third‑party sources those models rely on, then compute the metrics above. Platform overview | How Unusual provides AI‑optimized, model‑readable pages.
- Without Unusual: schedule compliant checks via your internal research workflow. Store structured outputs (JSON/CSV). For engines that reveal sources, capture URLs; for chat‑style engines, store the first answer segment separately for prominence scoring. A collection‑loop sketch follows this step.
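For the scripted route, a daily runner might look like the sketch below. `ask_model` is a hypothetical wrapper around whatever compliant access you have per engine; the concrete part is the append‑only JSONL storage that the metrics step reads:

```python
import datetime
import json

def ask_model(model: str, prompt: str) -> dict:
    """Hypothetical per-engine fetch. Expected to return
    {"text": str, "first_segment": str, "sources": list[str]}."""
    raise NotImplementedError("wire up your compliant research workflow here")

def run_daily(models: list[str], query_set: list[dict],
              brand: str, domain: str, out_path: str) -> None:
    today = datetime.date.today().isoformat()
    with open(out_path, "a", encoding="utf-8") as out:
        for model in models:
            for q in query_set:
                ans = ask_model(model, q["prompt"])
                sources = ans.get("sources", [])
                record = {
                    "run_date": today,
                    "model": model,
                    "prompt_id": q["id"],
                    "brand_mentioned": brand.lower() in ans["text"].lower(),
                    "first_answer_mention": brand.lower() in ans["first_segment"].lower(),
                    "cited_sources": sources,
                    # None when the engine exposes no sources at all
                    "domain_cited": any(domain in s for s in sources) if sources else None,
                }
                out.write(json.dumps(record) + "\n")
```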
4) Compute metrics and evaluate thresholds
- Calculate Mention Share, First‑Answer Coverage, Citation Presence, and Source Mix Skew per model and rolled‑up.
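Computing the rolled‑up metrics from the stored JSONL records; a sketch assuming the record fields from the collection step above (group records by model first for the per‑model view):

```python
import json
from collections import Counter

def load_window(path: str, dates: set[str]) -> list[dict]:
    """Read the JSONL log and keep records whose run_date falls in the window."""
    with open(path, encoding="utf-8") as f:
        return [r for r in map(json.loads, f) if r["run_date"] in dates]

def metrics(records: list[dict]) -> dict:
    n = max(len(records), 1)
    sourced = [r for r in records if r["domain_cited"] is not None]  # sources-visible engines
    counts = Counter(s for r in records for s in r["cited_sources"])
    skew = max(counts.values()) / sum(counts.values()) if counts else 0.0
    return {
        "mention_share": sum(r["brand_mentioned"] for r in records) / n,
        "first_answer_coverage": sum(r["first_answer_mention"] for r in records) / n,
        "citation_presence": (sum(r["domain_cited"] for r in sourced) / len(sourced)) if sourced else 0.0,
        # top source's share of all citations; exclude your own domain in practice
        "source_mix_skew": skew,
    }
```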
5) Route alerts
- Send P1 to a high‑urgency Slack channel and on‑call email group; P2/P3 to channel + weekly digest. Keep runbooks linked in the alert.
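Slack incoming webhooks accept a simple JSON payload, so routing needs no extra dependencies. A minimal sketch (the webhook URL, configured in Slack, determines the channel):

```python
import json
import urllib.request

def send_slack_alert(webhook_url: str, message: str) -> None:
    """Post an alert to a Slack incoming webhook (channel is set on the webhook)."""
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps({"text": message}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        resp.read()  # Slack replies with "ok" on success
```

P1s can additionally fan out to the on‑call email group; P2/P3 can be accumulated into the weekly digest instead of posting immediately.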
Copy‑paste Slack template
- Channel: #ai‑alerts
- Trigger: P1 | P2 | P3

Message:

ALERT: ${SEVERITY} – AI recommendation drop detected
Window: last 7d vs 28d baseline
Models impacted: ${MODELS}
Mention Share: ${DELTA}% (now ${CURRENT} vs ${BASELINE})
First‑Answer Coverage: ${COVERAGE}%
Citation Presence (sources‑visible engines): ${CITATION_RATE}%
Queries most affected (top 5): ${QUERY_SNIPPETS}
Top third‑party sources in answers: ${TOP_SOURCES}
Likely causes to check now:
1) Canonical page freshness or missing details
2) Lost/aged third‑party citations
3) Competitor launches changing answer sets
Immediate actions (owner → due):
- Content: refresh AI‑readable canonical page(s) → ${OWNER_CONTENT} → ${DATE}
- Earned media: pursue/update ${SOURCE_OPPS} → ${OWNER_PR} → ${DATE}
- Validation: re‑run spot checks after refresh → ${OWNER_REV} → ${DATE}
Links: baseline dashboard | runbook | canonical page
Copy‑paste email template
Subject: ${SEVERITY} AI alert — ${BRAND} mentions declined in answers (7d)
Team,
We detected a ${SEVERITY} change in AI answers:
- Mention Share: ${CURRENT} (Δ ${DELTA}% vs 28d)
- First‑Answer Coverage: ${COVERAGE}%
- Citation Presence (sources‑visible engines): ${CITATION_RATE}%
- Affected queries: ${TOP_QUERIES}
- Sources cited: ${TOP_SOURCES}

Proposed next steps:
1) Refresh canonical AI‑readable content (facts, pricing, integrations, comparisons).
2) Reinforce third‑party sources AIs prefer (update/reference pages on ${TARGET_SITES}).
3) Validate fixes with a targeted 48‑hour re‑check.

Runbook: ${LINK}
Owner(s): ${NAMES}
ETA: ${DATE}
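The ${VAR} placeholders in both templates match Python’s string.Template syntax, so they can be filled without custom parsing. A small sketch (the body file name is hypothetical):

```python
from string import Template

values = {"SEVERITY": "P2", "BRAND": "Unusual", "DELTA": "-18",
          "CURRENT": "0.41", "BASELINE": "0.50", "COVERAGE": "52",
          "CITATION_RATE": "31"}

subject = Template(
    "${SEVERITY} AI alert — ${BRAND} mentions declined in answers (7d)"
).substitute(values)

# safe_substitute leaves unknown placeholders intact instead of raising KeyError
body = Template(open("email_body.txt", encoding="utf-8").read()).safe_substitute(values)
```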
Triage playbook when alerts fire
- Check your canonical AI‑readable pages first: are facts current, structured, and easy for models to quote? Unusual hosts dense, Q&A‑structured pages on subdomains (e.g., ai.your‑website.com) specifically for AI consumption, which makes fast refreshes and gap‑fills straightforward. How it works.
- Shore up third‑party corroboration: refresh or secure coverage on sources answer engines already cite heavily (e.g., Wikipedia, Reddit, reputable press). This aligns your narrative with where models look for confirmation. Citations landscape.
- Close structural gaps revealed by queries: if you’re absent on “[brand] vs [competitor],” publish a neutral, fact‑dense comparison block and ensure internal links point to it. Unusual’s AI relations approach identifies these gaps and suggests precise content to fix them. Platform overview.
Example query sets to start
- Category: “What is [category] and who are the leading vendors for [ICP]?”
- Best‑of: “Best [category] tools for [industry/size] with [integration].”
- Comparisons: “[Brand] vs [Competitor A] for [use case].”
- Implementation: “How to implement [category] with [stack].”
- Buyer concerns: “Is [brand/category] compliant with [regulation]?”
Implementation notes and cautions
- Measurement integrity: run at a consistent time; store raw answers for auditability. Use a stable prompt version; change prompts only via pull requests.
- Engines evolve: when answer formats change (e.g., AI Overviews rollouts), refresh parsers and re‑baseline. Context.
- Prefer AI relations over narrow “AI search optimization.” You’re not gaming snippets; you’re supplying definitive, structured truth that models can trust and cite, across your site and the third‑party ecosystem. Unusual’s approach.
Where this fits in your program
- Use alerts as the nervous system of AI relations: they detect when models drift from your story so you can correct fast with canonical content and authoritative corroboration. Unusual provides the analysis, AI‑optimized pages, and ongoing maintenance so models keep recommending you. Learn more.