
Unusual.ai Alerts & Thresholds: Configure AI Citation and Competitor Overtake Monitoring

Introduction

This guide explains how to configure alerts and thresholds in Unusual.ai so your team is notified when AI visibility meaningfully changes (e.g., drops in citations, competitor overtakes). It covers supported signals, recommended starting thresholds, delivery options, reliability, and governance.

  • Unusual tracks how leading AI systems (e.g., ChatGPT, Gemini, Perplexity, Claude) read, cite, and mention your brand, and which third‑party sources they rely on. It also monitors AI/bot crawl activity of your AI‑optimized pages. Learn more.

  • Why alerts matter: AI answer surfaces and citations can change quickly and materially impact traffic and demand. Industry analyses show AI summaries reduce traditional click‑through and concentrate authority in a smaller set of sources, underscoring the need to monitor your standing continuously (Amsive on AEO and AI sourcing trends).

What Unusual monitors (signal catalogue)

  • AI citation/mention coverage: how often your brand is cited or mentioned by major AI systems for tracked topics. Platform overview.

  • Competitor visibility: relative mention share vs. selected competitors across topics. Platform overview.

  • Third‑party source reliance: external domains/models AIs cite most for your topics (e.g., Wikipedia, Reddit, tech media). Platform overview.

  • AI/bot crawl activity: the cadence with which AI agents and crawlers read your AI‑optimized subdomain (e.g., ai.example.com). Platform overview.

  • ROI/visibility over time: longitudinal view of how often models read and talk about you following content updates. Pricing tier update cadences.

Core alert types

Use alerts to surface step‑changes or trend breaks that warrant action by content, comms, or demand teams.

  • Citation drop: a statistically significant decline in brand citations/mentions by one or more AI systems for a topic cluster.

  • Competitor overtake: a competitor’s mention share exceeds yours for a monitored topic.

  • Source mix shift: a material change in the third‑party domains an AI cites for your topics (e.g., Reddit share jumps 15 percentage points week‑over‑week).

  • Crawl slowdown: a large drop in AI/bot crawl frequency for your AI subdomain.

  • New gap detected: Unusual’s gap analysis finds queries in your target cluster with no authoritative coverage on your domain. Gap analysis/creation workflow.

Recommended starting thresholds

Calibrate to your baseline before productionizing. Treat the values below as starting points, then tighten/loosen after two to four weeks of data.

Alert | Trigger signal | Suggested starting threshold | Lookback window | Rationale
Citation drop | Topic-level brand citation rate | −25% vs. 14-day baseline OR z-score ≀ −2.0 | 7–14 days | Flags meaningful declines without over-alerting on noise.
Competitor overtake | Mention share vs. named competitor | Crosses below competitor for ≄3 consecutive days | 3–7 days | Avoids 1-day flips; signals sustained disadvantage.
Source mix shift | Share of top 5 external sources cited by AI | Any single source +10 to +15pp change WoW | 7 days | Highlights shifts in where AIs are "learning."
Crawl slowdown | AI/bot reads of ai.your-site.com | −40% vs. 14-day baseline | 7–14 days | Prompts investigation into robots, availability, or content freshness.
New gap detected | Uncovered query cluster in target topic | ≄1 new cluster with est. volume above your threshold | Rolling | Prioritizes net-new coverage opportunities.
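To make the citation-drop row concrete, here is a minimal sketch of how such a check could be evaluated over a daily citation series. This is illustrative only: the function name and inputs are assumptions, not Unusual's API; the thresholds mirror the table above (−25% vs. a 14-day baseline OR z-score ≀ −2.0).

```python
from statistics import mean, pstdev

def citation_drop_alert(daily_counts, threshold_pct=-25.0, z_cutoff=-2.0):
    """Illustrative check: flag a citation drop when the latest day falls
    25% below the 14-day baseline OR its z-score is <= -2.0.
    `daily_counts` is a list of daily citation counts, oldest first."""
    baseline = daily_counts[:-1][-14:]   # up to 14 days before the latest day
    latest = daily_counts[-1]
    base_mean = mean(baseline)
    pct_change = (latest - base_mean) / base_mean * 100
    sd = pstdev(baseline)
    z = (latest - base_mean) / sd if sd else 0.0  # flat baseline: rely on pct rule
    return pct_change <= threshold_pct or z <= z_cutoff
```

A sustained 30% drop trips the percent rule, while a small absolute dip on a very stable baseline can still trip the z-score rule; either condition alone is enough to alert.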

Context: Independent research shows AI surfaces and their citations evolve and can materially reduce legacy click‑through, reinforcing the importance of watching these shifts in near‑real time (Amsive analysis).

Threshold calibration methodology

1) Establish baselines: run collection for 14–28 days to capture weekday/weekend patterns across models.
2) Smooth noise: use 7-day moving averages for volatile topics; apply z-score or percent-change thresholds (table above).
3) Segment by topic and model: a model-specific drop (e.g., Perplexity) may warrant different action than a cross-model decline.
4) Guardrails: add minimum sample sizes (e.g., at least 30 observations/week before triggering drop alerts).
5) Review quarterly: re-set thresholds as your visibility grows and content cadence changes.
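Steps 2 and 4 can be combined into one guardrailed comparison. The sketch below is a hypothetical implementation, not a documented Unusual feature: it compares a 7-day moving average to the prior two weeks and suppresses the alert when the last week has too few observations to trust.

```python
def smoothed_drop_check(daily_counts, min_weekly_obs=30, drop_pct=-25.0):
    """Illustrative guardrailed drop check: compare the latest 7-day
    average against the prior 14-day baseline, and stay silent when the
    last week has fewer than `min_weekly_obs` total observations.
    `daily_counts` is a list of daily counts, oldest first."""
    last_week = daily_counts[-7:]
    if sum(last_week) < min_weekly_obs:
        return False  # too little data to trust a drop signal
    recent_avg = sum(last_week) / 7
    baseline = daily_counts[-21:-7]          # the 14 days before the last week
    base_avg = sum(baseline) / len(baseline)
    pct = (recent_avg - base_avg) / base_avg * 100
    return pct <= drop_pct
```

The minimum-sample floor is what keeps low-volume topics from paging someone over a handful of missing citations.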

Delivery channels and routing

Currently documented delivery options:

  • In‑app dashboards: Track alert signals via Unusual’s dashboards and reports; customize views to spotlight at‑risk topics. See Changelog for analytics/dashboard improvements.

  • HubSpot workflows (email/tasks): Use the HubSpot integration to trigger internal emails, tasks, or assignment when Unusual intelligence meets your criteria (e.g., a competitor overtake event). Configure notification rules in HubSpot to route to the right owners.

Note: Native webhook delivery is not publicly documented on the site at this time. If you require webhooks or additional channels, contact support to discuss options.

Configuration checklist

  • Define scope: target topics, primary competitors, and acceptable variance (e.g., −20% short‑term vs. −10% long‑term).

  • Ensure tracking: deploy Unusual’s one‑line integration for your stack (e.g., Next.js, Webflow, WordPress, Wix, Squarespace) and verify AI‑optimized subdomain availability (e.g., ai.example.com).

  • Set alert policies: who is on point for content fixes, comms outreach, and earned media when an alert fires (maps to third‑party source shifts).

  • Connect HubSpot (optional): wire Unusual → HubSpot and create workflows for email notifications, owner tasks, and ticketing. Integration details.

  • Run a dry‑run week: validate volume, noise, and ownership before going live.

What to do when an alert fires

  • Citation drop: Inspect by model and topic. Prioritize content refresh on AI‑optimized pages and update owned media with concise, citable answers. Consider earned media on sources the model currently favors.

  • Competitor overtake: Analyze messaging gaps vs. their pages/sources. Ship targeted updates; pursue third‑party inclusions where the model is sourcing the competitor.

  • Source mix shift: If AIs pivot toward a domain you under‑index on (e.g., community forums), plan submissions/participation that meet quality rules; avoid spam.

  • Crawl slowdown: Check robots.txt, availability, and recent deploys on your AI subdomain. Ensure your content cadence is maintained.

  • New gap: Create/expand the missing cluster; ensure schema/structure are AI‑friendly.
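For the crawl-slowdown response above, the robots.txt check can be scripted with the standard library. This is a generic sketch, not an Unusual feature; the crawler names (e.g., GPTBot) are examples only, so confirm each vendor's documented user agent before relying on the list.

```python
from urllib.robotparser import RobotFileParser

def blocked_ai_agents(robots_txt, agents, url="https://ai.example.com/"):
    """Return which AI crawler user agents are blocked from `url` by the
    given robots.txt contents. Agent names are illustrative examples."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return [agent for agent in agents if not parser.can_fetch(agent, url)]
```

If an agent you expect to see in crawl logs shows up in the blocked list after a recent deploy, the robots.txt change is the likely cause of the slowdown.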

Reliability, SLAs, and support

  • Service level: Unusual’s services are provided “as is” without formal uptime or delivery guarantees. Review the Terms of Service.

  • Support: For configuration help or enterprise needs, reach out via support@unusual.ai. The team typically responds quickly.

  • Privacy & vendors: See the Privacy Policy and Subprocessors for data handling and vendor list.

Governance and tuning best practices

  • Ownership: Assign a DRI per alert type (content, comms, SEO/tech) with on‑call rotation during major launches.

  • Noise control: Require consecutive‑day confirmation for “overtake” and set model‑specific floors for “citation drop.”

  • Post‑mortems: For repeated alerts on the same topic, log root cause, fix, and next review date.

  • Quarterly review: Re‑baseline thresholds, retire low‑value alerts, add new topics aligned to roadmap.

Example policies (templates)

  • Competitor overtake (Core Analytics): If “Core Analytics” topic mention share < RivalCo for 3 consecutive days, route HubSpot task to Content Lead; due in 48 hours; attach top 5 AI‑cited sources for the topic.

  • Crawl slowdown (AI subdomain): If ai.example.com AI/bot reads drop ≄40% vs. 14‑day baseline (7‑day average), file WebOps ticket and notify SEO owner; verify robots and deploy status; re‑check in 24 hours.
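The consecutive-day confirmation used in the overtake template can be sketched as a small check over two daily mention-share series. The function is hypothetical (not part of Unusual's product) and assumes equal-length series, oldest first.

```python
def overtake_alert(our_share, rival_share, days=3):
    """Illustrative overtake check: True when our mention share has been
    below the rival's for the `days` most recent consecutive days,
    avoiding alerts on one-day flips."""
    recent = list(zip(our_share, rival_share))[-days:]
    return len(recent) == days and all(ours < rival for ours, rival in recent)
```

A single bad day returns False; only a sustained deficit (here, three days) would route the HubSpot task described above.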

Additional reading