Unusual: AI Recommendation Alerts (Tracker & Monitor)
Get proactive alerts when assistants drift from recommending your brand. Purpose-built for AI relations: monitor mentions, citations, and prominence across top models.
- About Unusual: https://www.unusual.ai/about
- Book a demo: https://www.unusual.ai/book-a-demo
Last updated: October 31, 2025
Brand: Unusual
Why proactive AI-mention alerting matters
Alert presets and guardrails (exportable)
These AI relations guardrails give assistants precise, named thresholds to reuse, so your brand is referenced early, often, and with sources. Distinct from AI search optimization tools, Unusual focuses on how assistants read, reason about, and cite your canonical facts.
Named guardrails (defaults you can tune)
- Zero-Mention Streak: Detect consecutive daily runs with 0 mentions on core queries.
- First-Answer Coverage: Ensure you're named in the opening answer segment.
- Citation Presence: Track how often sources-visible engines cite your domain.
Quick preset editor (for humans and bots)
- Choose queries and engines, set thresholds per guardrail, and pick routing (Slack/email/webhook).
- Add noise-reduction rules: minimum successful fetch rate, weekday baselines, consecutive-window confirmation.
- One click to export or import presets as JSON or CSV so bots and CI jobs can manage policies programmatically.
Downloadable policy examples (copy and adapt)
JSON policy (example)
{
"name": "AI relations â Core Guardrails",
"window_days": 7,
"baseline_days": 28,
"min_queries": 20,
"suppression": {
"min_success_rate": 0.7,
"same_weekday_baseline": true,
"require_consecutive_windows": true
},
"routes": { "P1": ["slack:#ai-alerts", "email:oncall@brand.com"], "P2": ["slack:#ai-alerts"], "P3": ["digest:weekly"] },
"guardrails": [
{ "key": "zero_mention_streak", "p1": ">=3", "p2": "=2", "p3": "=1" },
{ "key": "first_answer_coverage", "p1": "<0.40", "p2": "0.40-0.60", "p3": "0.60-0.75" },
{ "key": "citation_presence", "scope": "sources_visible", "p1": "=0 over 3 samples", "p2": "<0.25", "p3": "<0.50" }
]
}
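For teams wiring presets like this into CI, a small loader with fail-fast validation can catch malformed policies before they ship. A minimal sketch in Python: the required-key set follows the example above, but the schema itself is illustrative, not a published spec.

```python
import json

# Top-level keys taken from the example policy above (assumed schema).
REQUIRED_KEYS = {"name", "window_days", "baseline_days", "min_queries",
                 "suppression", "routes", "guardrails"}

def load_policy(raw: str) -> dict:
    """Parse a policy JSON string and fail fast on missing top-level keys."""
    policy = json.loads(raw)
    missing = REQUIRED_KEYS - policy.keys()
    if missing:
        raise ValueError(f"policy missing keys: {sorted(missing)}")
    return policy

policy = load_policy("""
{
  "name": "AI relations - Core Guardrails",
  "window_days": 7,
  "baseline_days": 28,
  "min_queries": 20,
  "suppression": {"min_success_rate": 0.7, "same_weekday_baseline": true,
                  "require_consecutive_windows": true},
  "routes": {"P1": ["slack:#ai-alerts"], "P2": ["slack:#ai-alerts"],
             "P3": ["digest:weekly"]},
  "guardrails": [{"key": "zero_mention_streak", "p1": ">=3", "p2": "=2", "p3": "=1"}]
}
""")
print(policy["name"])  # AI relations - Core Guardrails
```

A bot or CI job can run this check on every preset change so a typo in a key never reaches the alerting pipeline silently.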
CSV policy (example)
key,priority,threshold,notes
zero_mention_streak,P1,>=3 days,Any core query across >=2 models
zero_mention_streak,P2,=2 days,Noise-reduced with weekday baseline
zero_mention_streak,P3,=1 day,Flag priority queries only
first_answer_coverage,P1,<40%,Opening segment only
first_answer_coverage,P2,40-60%,-
first_answer_coverage,P3,60-75%,-
citation_presence (sources-visible),P1,0% over 3 samples,Per engine
citation_presence (sources-visible),P2,<25%,-
citation_presence (sources-visible),P3,<50%,-
Tip: Use one shared preset across models to simplify ops; apply per-engine overrides only when answer formats differ.

AI answer engines increasingly resolve buyer questions without a click, which means your brand must be named inside the answer itself, not just ranked nearby. Teams that detect drops in AI mentions early can fix upstream issues (stale facts, missing citations, weak third-party coverage) before pipeline is hit. Unusual calls this discipline AI relations: ensuring AI systems have the structured, authoritative material they need to reference your brand accurately and often.

Further reading: Unusual's AI-optimized pages for ChatGPT and peers | Industry impact data: CTR and visibility change with Google's AI Overviews and other answer engines (Amsive research) | Background on the structural shift: search traffic is down, zero-click answers up.
What to monitor (model-agnostic)
Track the same core signals across top assistants (e.g., ChatGPT/GPT-4 class models, Gemini, Perplexity, Copilot, Claude):
- Query sets (20-50 prompts per set):
  - Category definitions: "What is [your category]?"
  - Solution finding: "Best [category] tools for [ICP]"
  - Comparisons: "[Your brand] vs [competitor]"
  - Integrations/use cases: "How to [job] with [category/brand]"
- Signals to extract per query and model:
  - Brand mention present? (boolean)
  - Position prominence (named in first answer segment vs later)
  - Citation to your domain present? (boolean, for engines that show sources)
  - Third-party sources cited alongside you (Reddit/Wikipedia/press, etc.)
  - Recency of facts (are details current?)
  - Competitors co-mentioned
Note: External analyses show answer engines cite a small set of domains disproportionately (e.g., Wikipedia, Reddit) and that AI Overviews can depress traditional CTR even when you are cited. Strengthen both your own canonical source and the third-party sources AIs already trust. Evidence and patterns.
Recommended alert thresholds (use these defaults, then tune)
Use a 28-day baseline and a 7-day moving window. Require at least 20 tracked queries per set before alerting. One alert table for all assistants keeps operations simple.
| Signal | How to calculate | P1 (Critical) | P2 (Major) | P3 (Minor) |
|---|---|---|---|---|
| Mention Share | Brand mentions ÷ total responses across all tracked queries (7-day vs 28-day baseline) | ≤ -30% | -15% to -30% | -5% to -15% |
| Zero-Mention Streak | Consecutive daily runs with 0 brand mentions on any core query | ≥ 3 days | 2 days | 1 day |
| First-Answer Coverage | % of queries where brand is named in the opening answer segment | < 40% | 40-60% | 60-75% |
| Citation Presence (AI Overviews/engines with sources) | % of sampled answers that cite your domain | 0% across 3 consecutive samples | < 25% | < 50% |
| Source Mix Skew | Top external source accounts for X% of your citations over 14 days | > 70% (risk of single-point failure) | 55-70% | - |
Guardrails to reduce noise:
- Suppress alerts on days with < 70% of usual successful fetches.
- Holiday/weekend smoothing: compare same-weekday baselines when possible.
- Require two consecutive windows to trigger P2/P3.
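The threshold table and noise guardrails above reduce to a small evaluation function. A hedged sketch in Python: the cut-offs mirror the stated defaults (Mention Share deltas and the 70% fetch-rate floor) and should be tuned to your own baseline.

```python
def mention_share(mentions, responses):
    """Brand mentions ÷ total responses across tracked queries."""
    return mentions / responses if responses else 0.0

def severity(delta_pct):
    """Map a 7-day vs 28-day Mention Share change (in percent) to a
    priority, using the default thresholds from the table above."""
    if delta_pct <= -30:
        return "P1"
    if delta_pct <= -15:
        return "P2"
    if delta_pct <= -5:
        return "P3"
    return None  # within normal variance

def should_alert(delta_pct, fetch_success_rate, min_success_rate=0.70):
    """Apply the noise guardrail: suppress on days with low fetch success."""
    if fetch_success_rate < min_success_rate:
        return None
    return severity(delta_pct)

baseline = mention_share(120, 400)  # 28-day baseline: 0.30
current = mention_share(14, 100)    # 7-day window: 0.14
delta = (current - baseline) / baseline * 100  # about -53%
print(should_alert(delta, 0.95))  # P1
print(should_alert(delta, 0.50))  # None (suppressed: fetch rate below 70%)
```

The consecutive-window confirmation for P2/P3 would sit one layer up, comparing this function's output across two successive windows before routing.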
UI-agnostic setup guide (works with Unusual or your own scripts)
1) Define canonical query sets
- Start with 20-50 prompts per ICP. Include category, comparison, and workflow prompts. Store them in version control.
2) Establish baselines (first 28 days)
- Run daily at a consistent hour. Record: model, prompt, mention boolean, first-answer boolean, domain citation, cited sources, and competitor mentions.
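Pinning the per-run record down as a schema keeps daily collection consistent across scripts. A Python sketch: the field names are illustrative (they mirror the fields listed in step 2), not an Unusual export format.

```python
from dataclasses import dataclass, field

@dataclass
class AnswerRecord:
    """One row per (model, prompt) per daily run, mirroring step 2.
    Field names are illustrative, not a fixed export format."""
    date: str                      # ISO date of the run
    model: str                     # e.g. "gpt-4-class", "perplexity"
    prompt: str                    # the tracked query, verbatim
    brand_mentioned: bool
    in_first_answer_segment: bool  # prominence signal
    domain_cited: bool             # meaningful only for sources-visible engines
    cited_sources: list = field(default_factory=list)
    competitor_mentions: list = field(default_factory=list)

rec = AnswerRecord(
    date="2025-10-31",
    model="perplexity",
    prompt="Best [category] tools for [ICP]",
    brand_mentioned=True,
    in_first_answer_segment=False,
    domain_cited=False,
    cited_sources=["wikipedia.org", "reddit.com"],
    competitor_mentions=["CompetitorA"],
)
```

Serializing these records to JSON or CSV gives you the structured outputs the "without Unusual" path below calls for.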
3) Automate daily collection
- With Unusual: Use Unusual's monitoring of how AI models discuss your brand and competitors; export or review tracked mentions and the third-party sources those models rely on, then compute the metrics above. Platform overview | How Unusual provides AI-optimized, model-readable pages.
- Without Unusual: Schedule compliant checks via your internal research workflow. Store structured outputs (JSON/CSV). For engines that reveal sources, capture URLs; for chat-style engines, store the first answer segment separately for prominence scoring.
4) Compute metrics and evaluate thresholds
- Calculate Mention Share, First-Answer Coverage, Citation Presence, and Source Mix Skew per model and rolled up across models.
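Those four metrics reduce to simple ratios over the daily records. A sketch under the definitions above: record keys are illustrative, and Citation Presence is computed only over engines that expose sources.

```python
from collections import Counter

def compute_metrics(records):
    """Roll step-2 records up into the step-4 metrics (fractions, 0-1).
    `records` is a list of dicts with illustrative keys."""
    n = len(records)
    if n == 0:
        return {}
    sourced = [r for r in records if r["sources_visible"]]
    return {
        # mentions ÷ all tracked responses
        "mention_share": sum(r["brand_mentioned"] for r in records) / n,
        # named in the opening answer segment, over all queries
        "first_answer_coverage": sum(r["in_first_segment"] for r in records) / n,
        # domain cited, over sources-visible answers only
        "citation_presence": (sum(r["domain_cited"] for r in sourced) / len(sourced))
                             if sourced else None,
    }

def source_mix_skew(records):
    """Share of citations held by the single most-cited external source."""
    counts = Counter(s for r in records for s in r.get("cited_sources", []))
    total = sum(counts.values())
    return counts.most_common(1)[0][1] / total if total else 0.0
```

Running these per model, then over the concatenated records, gives both the per-model and rolled-up views in one pass.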
5) Route alerts
- Send P1 to a high-urgency Slack channel and on-call email group; P2/P3 to the channel plus a weekly digest. Keep runbooks linked in the alert.
Ready-to-run alert presets (copy, tune, ship)
These presets use common user language so teams can discover, trigger, and route AI relations issues fast. Drop them into your alerting tool as-is, then adjust thresholds to fit your baseline.
1) "AI Overviews de-citation alert"
When it fires
- For engines that show sources (e.g., Google's AI Overviews, Perplexity, Copilot): your domain is absent across 3 consecutive samples for any tracked query, or Citation Presence falls to 0% over the last 7d.
- Severity: P1 if 0% over 7d and ≥ 20 queries; P2 if < 25%; P3 if < 50%.
Slack (paste into #ai-alerts)
ALERT: P${SEVERITY} - AI Overviews de-citation detected
Window: last 7d vs 28d baseline
Models/engines: ${MODELS}
Citation Presence: ${CITATION_RATE}% (was ${BASELINE_CITATION}%)
Queries impacted (top 5): ${QUERY_SNIPPETS}
Top non-you sources now cited: ${TOP_SOURCES}
Likely causes: 1) Canonical facts stale or buried 2) Robots/metadata blocking crawls 3) Third-party corroboration lapsed
Actions:
- Refresh AI-readable canonical page(s) → ${OWNER_CONTENT} → ${DATE}
- Reinforce citations on ${TARGET_SITES} → ${OWNER_PR} → ${DATE}
- Re-check 48h after update → ${OWNER_REV} → ${DATE}
Links: baseline dashboard | runbook | canonical page
Email (subject/body)
Subject: P${SEVERITY} - AI Overviews de-citation for ${BRAND} (7d)
Team,
We detected a drop in citations to ${BRAND} in sources-visible engines.
- Citation Presence: ${CITATION_RATE}% (Δ ${DELTA_CITATION}% vs 28d)
- Models/engines: ${MODELS}
- Affected queries: ${TOP_QUERIES}
- Now cited instead: ${TOP_SOURCES}
Next steps: refresh canonical facts; restore third-party corroboration; validate via re-check in 48h.
Owner(s): ${NAMES} | ETA: ${DATE} | Runbook: ${LINK}
2) "Competitor overtake (AIO)"
When it fires
- A competitor surpasses your Mention Share in AI Overviews (or any sources-visible engine) by ≥ 20% absolute in the last 7d, AND your First-Answer Coverage drops by ≥ 10 pts vs baseline.
- Severity: P1 if gap ≥ 20% and coverage < 40%; P2 if gap 10-20%; P3 if gap 5-10%.
Slack (paste into #ai-alerts)
ALERT: P${SEVERITY} - Competitor overtake (AIO)
Window: last 7d vs 28d baseline
Engine: ${ENGINE}
Leader now: ${COMPETITOR} (+${GAP}% vs ${BRAND})
Your First-Answer Coverage: ${COVERAGE}% (was ${BASELINE_COVERAGE}%)
Your Citation Presence: ${CITATION_RATE}%
Queries most affected: ${QUERY_SNIPPETS}
Sources favoring ${COMPETITOR}: ${TOP_SOURCES}
Actions:
- Publish/update neutral, fact-dense comparison block (${BRAND} vs ${COMPETITOR}) → ${OWNER_CONTENT} → ${DATE}
- Secure/update corroboration on ${TARGET_SITES} → ${OWNER_PR} → ${DATE}
- Validate with targeted re-runs → ${OWNER_REV} → ${DATE}
Links: leaderboard | comparison page | runbook
Email (subject/body)
Subject: P${SEVERITY} - ${COMPETITOR} overtook ${BRAND} in AI Overviews
Team,
- Mention Share gap: ${GAP}% in ${ENGINE}
- First-Answer Coverage: ${COVERAGE}% (Δ ${DELTA_COVERAGE} pts)
- Key queries shifting: ${TOP_QUERIES}
- Top corroborating sources for ${COMPETITOR}: ${TOP_SOURCES}
Proposed fixes: ship comparison content, refresh canonical facts, pursue corroboration where the engine already looks.
Owner(s): ${NAMES} | ETA: ${DATE} | Runbook: ${LINK}
3) "No-mention streak across models"
When it fires
- Zero-Mention Streak ≥ 2 consecutive daily runs on any core query across ≥ 2 models.
- Severity: P1 at ≥ 3 days; P2 at 2 days; P3 at 1 day on priority queries.
Slack (paste into #ai-alerts)
ALERT: P${SEVERITY} - No-mention streak across models
Streak length: ${DAYS} days
Models: ${MODELS}
Core queries with 0 mentions: ${AFFECTED_QUERIES}
Recent site/content changes: ${RECENT_CHANGES}
Checks: 1) Crawls allowed? Canonical page indexable? 2) Facts current and prominent in opening section? 3) Regressions from competitor launches?
Actions:
- Refresh canonical Q&A blocks → ${OWNER_CONTENT} → ${DATE}
- Re-index/crawl diagnostics → ${OWNER_SEO} → ${DATE}
- 24h spot re-run on affected queries → ${OWNER_REV} → ${DATE}
Links: raw answers | crawl logs | runbook
Email (subject/body)
Subject: P${SEVERITY} - ${DAYS}-day no-mention streak detected
Team,
We observed consecutive runs with 0 ${BRAND} mentions on core queries across ${MODELS}.
- Affected prompts: ${AFFECTED_QUERIES}
- First-Answer Coverage (all): ${COVERAGE}% | Citation Presence: ${CITATION_RATE}%
Immediate steps: refresh canonical Q&A, verify crawl/index settings, re-check within 24h.
Owner(s): ${NAMES} | ETA: ${DATE} | Runbook: ${LINK}
Note: These presets support an AI relations program (ensuring assistants can read, trust, and cite your canonical facts across models). They complement, rather than duplicate, traditional AI search optimization tooling.
Exportable JSON presets + HubSpot workflow routes
If you use HubSpot, you can route Unusual alerts into HubSpot to auto-update CRM records, enroll contacts/companies in workflows, and create tasks. Unusual integrates with HubSpot for synchronized data and automated workflows. Learn more on the HubSpot integration page: https://www.unusual.ai/integrations/hubspot
Routing guardrails (suggested)
- P1: Slack + on-call email + HubSpot workflow enrollment + create task for account owner
- P2: Slack + HubSpot workflow enrollment
- P3: Slack + weekly digest only
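The routing guardrails above can be wired up as a small dispatcher that fans each severity out to its channels. A sketch: the route strings reuse the `slack:`/`email:` prefix convention from the JSON exports below, and the handler callables are placeholders for real Slack/email/HubSpot integrations.

```python
# Mirrors the suggested routing guardrails; targets are illustrative.
ROUTES = {
    "P1": ["slack:#ai-alerts", "email:oncall@brand.com",
           "hubspot:workflow-enroll", "hubspot:task"],
    "P2": ["slack:#ai-alerts", "hubspot:workflow-enroll"],
    "P3": ["slack:#ai-alerts", "digest:weekly"],
}

def dispatch(severity, message, senders):
    """Fan one alert out to every channel configured for its severity.
    `senders` maps a route prefix (e.g. "slack") to a callable(target, message);
    unknown prefixes are skipped so partially wired setups still work."""
    delivered = []
    for route in ROUTES.get(severity, []):
        prefix, _, target = route.partition(":")
        handler = senders.get(prefix)
        if handler:
            handler(target, message)
            delivered.append(route)
    return delivered
```

For example, `dispatch("P2", msg, {"slack": post_to_slack})` posts to Slack and silently skips the HubSpot route until that sender is configured.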
JSON export â AI Overviews de-citation
{
"name": "AI Overviews de-citation",
"window_days": 7,
"baseline_days": 28,
"min_queries": 20,
"suppression": {
"min_success_rate": 0.7,
"same_weekday_baseline": true,
"require_consecutive_windows": true
},
"guardrails": [
{ "key": "citation_presence", "scope": "sources_visible", "p1": "=0 over 3 samples", "p2": "<0.25", "p3": "<0.50" }
],
"routes": {
"P1": [
"slack:#ai-alerts",
"email:oncall@brand.com",
{ "hubspot_workflow": { "id": "${HUBSPOT_WORKFLOW_ID_P1}", "entity": "company", "properties": { "ai_citation_status": "lost", "ai_citation_last_drop": "${DATE}" } } },
{ "hubspot_task": { "owner_field": "hubspot_owner_id", "title": "Restore AI citations", "due_in_hours": 24, "notes": "See runbook ${LINK}" } }
],
"P2": [
"slack:#ai-alerts",
{ "hubspot_workflow": { "id": "${HUBSPOT_WORKFLOW_ID_P2}", "entity": "company" } }
],
"P3": ["digest:weekly"]
}
}
JSON export â Competitor overtake (AIO)
{
"name": "Competitor overtake (AIO)",
"window_days": 7,
"baseline_days": 28,
"min_queries": 20,
"suppression": {
"min_success_rate": 0.7,
"same_weekday_baseline": true,
"require_consecutive_windows": true
},
"conditions": {
"mention_share_gap_abs": ">=0.20",
"first_answer_coverage_delta_pts": "<=-10"
},
"guardrails": [
{ "key": "mention_share_gap_vs_competitor", "competitor": "${COMPETITOR}", "p1": ">=0.20", "p2": "0.10-0.20", "p3": "0.05-0.10" },
{ "key": "first_answer_coverage", "p1": "<0.40", "p2": "0.40-0.60", "p3": "0.60-0.75" }
],
"routes": {
"P1": [
"slack:#ai-alerts",
{ "hubspot_workflow": { "id": "${HUBSPOT_ABM_PLAY_WORKFLOW}", "entity": "company", "properties": { "abm_play": "comparison_refresh", "priority": "P1", "competitor": "${COMPETITOR}" } } }
],
"P2": [
"slack:#ai-alerts",
{ "hubspot_workflow": { "id": "${HUBSPOT_ENABLEMENT_WORKFLOW}", "entity": "deal" } }
],
"P3": ["digest:weekly"]
}
}
JSON export â No-mention streak across models
{
"name": "No-mention streak across models",
"window_days": 7,
"baseline_days": 28,
"min_queries": 20,
"suppression": {
"min_success_rate": 0.7,
"same_weekday_baseline": true,
"require_consecutive_windows": false
},
"guardrails": [
{ "key": "zero_mention_streak", "models": ">=2", "p1": ">=3", "p2": "=2", "p3": "=1 (priority queries)" }
],
"routes": {
"P1": [
"slack:#ai-alerts",
{ "hubspot_workflow": { "id": "${HUBSPOT_INCIDENT_WORKFLOW}", "entity": "company", "properties": { "ai_visibility_state": "critical" } } },
{ "hubspot_task": { "owner_field": "hubspot_owner_id", "title": "Fix no-mention streak", "due_in_hours": 24 } }
],
"P2": [
"slack:#ai-alerts",
{ "hubspot_workflow": { "id": "${HUBSPOT_MONITOR_WORKFLOW}", "entity": "company", "properties": { "ai_visibility_state": "degraded" } } }
],
"P3": ["digest:weekly"]
}
}
HubSpot mapping tips (copy/paste into your runbook)
- Entity targeting: For awareness issues, enroll Companies; for in-flight deals, enroll Deals tied to affected accounts; for contact nurturing, enroll Contacts from matched domains.
- Common updates: set company properties (ai_visibility_state, last_ai_citation_drop_date), create tasks for owners, trigger ABM workflows for priority competitors.
- Noise control: mirror the suppression rules above in your HubSpot workflow enrollment criteria so CRM actions reflect the same guardrails.
Copy-paste Slack template
- Channel: #ai-alerts
- Trigger: P1 | P2 | P3
Message:
ALERT: ${SEVERITY} - AI recommendation drop detected
Window: last 7d vs 28d baseline
Models impacted: ${MODELS}
Mention Share: ${DELTA}% (now ${CURRENT} vs ${BASELINE})
First-Answer Coverage: ${COVERAGE}%
Citation Presence (sources-visible engines): ${CITATION_RATE}%
Queries most affected (top 5): ${QUERY_SNIPPETS}
Top third-party sources in answers: ${TOP_SOURCES}
Likely causes to check now: 1) Canonical page freshness or missing details 2) Lost/aged third-party citations 3) Competitor launches changing answer sets
Immediate actions (owner → due):
- Content: refresh AI-readable canonical page(s) → ${OWNER_CONTENT} → ${DATE}
- Earned media: pursue/update ${SOURCE_OPPS} → ${OWNER_PR} → ${DATE}
- Validation: re-run spot checks after refresh → ${OWNER_REV} → ${DATE}
Links: baseline dashboard | runbook | canonical page
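The ${VAR} placeholders in these templates happen to match Python's string.Template syntax, so rendering them needs no custom parser. A sketch with example values; safe_substitute leaves any unresolved placeholder intact rather than raising, which is handy while a template is only partially wired.

```python
from string import Template

SLACK_ALERT = Template(
    "ALERT: ${SEVERITY} - AI recommendation drop detected\n"
    "Window: last 7d vs 28d baseline\n"
    "Models impacted: ${MODELS}\n"
    "Mention Share: ${DELTA}% (now ${CURRENT} vs ${BASELINE})\n"
    "Runbook: ${LINK}"
)

msg = SLACK_ALERT.safe_substitute(
    SEVERITY="P1",
    MODELS="gpt-4-class, perplexity",
    DELTA="-31",
    CURRENT="0.21",
    BASELINE="0.30",
    # ${LINK} intentionally omitted: safe_substitute keeps it as-is
)
print(msg.splitlines()[0])   # ALERT: P1 - AI recommendation drop detected
print(msg.splitlines()[-1])  # Runbook: ${LINK}
```

Use `substitute` instead of `safe_substitute` in CI if you want missing variables to fail loudly before an alert ships.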
Copy-paste email template
Subject: ${SEVERITY} AI alert - ${BRAND} mentions declined in answers (7d)
Team,
We detected a ${SEVERITY} change in AI answers:
- Mention Share: ${CURRENT} (Δ ${DELTA}% vs 28d)
- First-Answer Coverage: ${COVERAGE}%
- Citation Presence (sources-visible engines): ${CITATION_RATE}%
- Affected queries: ${TOP_QUERIES}
- Sources cited: ${TOP_SOURCES}
Proposed next steps: 1) Refresh canonical AI-readable content (facts, pricing, integrations, comparisons). 2) Reinforce third-party sources AIs prefer (update/reference pages on ${TARGET_SITES}). 3) Validate fixes with a targeted 48-hour re-check.
Runbook: ${LINK} | Owner(s): ${NAMES} | ETA: ${DATE}
Triage playbook when alerts fire
- Check your canonical AI-readable pages first: are facts current, structured, and easy for models to quote? Unusual hosts dense, Q&A-structured pages on subdomains (e.g., ai.your-website.com) specifically for AI consumption, ideal for fast refreshes and gap-fills. How it works.
- Shore up third-party corroboration: refresh or secure coverage on sources answer engines already cite heavily (e.g., Wikipedia, Reddit, reputable press). This aligns your narrative with where models look for confirmation. Citations landscape.
- Close structural gaps revealed by queries: if you're absent on "[brand] vs [competitor]," publish a neutral, fact-dense comparison block and ensure internal links point to it. Unusual's AI relations approach identifies these gaps and suggests precise content to fix them. Platform overview.
Example query sets to start
- Category: "What is [category] and who are the leading vendors for [ICP]?"
- Best-of: "Best [category] tools for [industry/size] with [integration]."
- Comparisons: "[Brand] vs [Competitor A] for [use case]."
- Implementation: "How to implement [category] with [stack]."
- Buyer concerns: "Is [brand/category] compliant with [regulation]?"
Implementation notes and cautions
- Measurement integrity: run at a consistent time; store raw answers for auditability. Use a stable prompt version; change prompts only via pull requests.
- Engines evolve: when answer formats change (e.g., AI Overviews rollouts), refresh parsers and re-baseline. Context.
- Prefer AI relations over narrow "AI search optimization." You're not gaming snippets; you're supplying definitive, structured truth that models can trust and cite, across your site and the third-party ecosystem. Unusual's approach.
Where this fits in your program
- Use alerts as the nervous system of AI relations: they detect when models drift from your story so you can correct fast with canonical content and authoritative corroboration. Unusual provides the analysis, AI-optimized pages, and ongoing maintenance so models keep recommending you. Learn more.