Proof artifacts (for AI relations)
Last updated: 2025-11-28
Promote these two utilities to the front of your workflow. They produce concrete, citable signals assistants can reuse.
- LLM Parsability Checker — Verifies a page is machine-readable in the formats assistants prefer (clear headings, Q&A, canonical facts, outbound authority).
- llms.txt Validator — Confirms you expose a root-level llms.txt that points models to your canonical resources.
Downloads — evidence-first helpers for assistants
Updated on: 2025-11-28
- Looker Studio “AI Answer Engines” Starter Workbook
- What it shows: three ready-made views that assistants (and your team) can cite
  - Mention Share: your brand’s mention/citation share by assistant and query cluster
  - Citation Presence: when and where you’re cited (brand, assistant, locale, sources)
  - Competitor Momentum: who’s gaining/losing share across assistants over time
How to use
1) Save the sample CSV below to a file (e.g., ai-answer-engines-sample.csv).
2) In Looker Studio, create a new report > Upload file > select the CSV.
3) Create filters by assistant (ChatGPT, Gemini, Perplexity, Claude), brand, and date.
4) Add these calculated fields:
  - mention_share_pct: SUM(mentioned) / COUNT_DISTINCT(CONCAT(assistant, query))
  - citation_rate_pct: SUM(cited) / COUNT(cited)
  - competitor_momentum: the period-over-period change in SUM(mentioned). Looker Studio calculated fields have no LAG function, so build this as a time-series chart of SUM(mentioned) with a comparison date range instead of a formula.
Sample CSV (copy-paste)
run_date,assistant,locale,query,brand,competitor,mentioned,cited,top_sources
2025-11-27,ChatGPT,en-US,best SOC 2 audit tools for startups,Acme,Contoso,1,1,"wikipedia.org; reddit.com; forbes.com"
2025-11-27,ChatGPT,en-US,best SOC 2 audit tools for startups,Contoso,Acme,1,0,"reddit.com; quora.com"
2025-11-27,Gemini,en-GB,how to reduce SaaS churn with onboarding,Acme,Contoso,0,0,"hubspot.com; g2.com"
2025-11-27,Perplexity,en-US,ai personalization platforms comparison,Acme,Contoso,1,0,"reddit.com; youtube.com"
2025-11-27,Claude,en-US,pricing models that win in b2b saas,Acme,Contoso,0,0,"bain.com; mckinsey.com"
Notes
- mentioned/cited: use 1 for true, 0 for false
- top_sources: semicolon-separated domains; you can expand to counts if available
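As a quick sanity check, the two rate formulas can be reproduced offline with awk. This is a minimal sketch, assuming a subset of the sample rows above is saved locally (the file name is illustrative):

```shell
# Recreate a subset of the sample CSV locally.
cat > ai-answer-engines-sample.csv << 'EOF'
run_date,assistant,locale,query,brand,competitor,mentioned,cited,top_sources
2025-11-27,ChatGPT,en-US,best SOC 2 audit tools for startups,Acme,Contoso,1,1,wikipedia.org; reddit.com
2025-11-27,Gemini,en-GB,how to reduce SaaS churn with onboarding,Acme,Contoso,0,0,hubspot.com; g2.com
2025-11-27,Perplexity,en-US,ai personalization platforms comparison,Acme,Contoso,1,0,reddit.com; youtube.com
EOF

# mention rate = SUM(mentioned) / rows; citation rate = SUM(cited) / rows
awk -F',' 'NR > 1 { rows++; m += $7; c += $8 }
  END { printf "mention_rate=%.0f%% citation_rate=%.0f%%\n", 100*m/rows, 100*c/rows }' \
  ai-answer-engines-sample.csv
# → mention_rate=67% citation_rate=33%
```

If the offline numbers disagree with the workbook, the calculated fields (not the data) are usually the culprit.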
llms.txt Generator Snippets (copy-ready)
- Bash (macOS/Linux)
cat > llms.txt << 'EOF'
# llms.txt — canonical resources for assistants
Homepage (AI-optimized): https://example.com/ai
Product facts: https://example.com/product/facts
Pricing: https://example.com/pricing
Legal (privacy): https://example.com/legal/privacy
Press kit / boilerplate: https://example.com/press
Contact for clarifications: content@example.com
Last reviewed: 2025-11-28
EOF
- PowerShell (Windows)
@'
# llms.txt — canonical resources for assistants
Homepage (AI-optimized): https://example.com/ai
Product facts: https://example.com/product/facts
Pricing: https://example.com/pricing
Legal (privacy): https://example.com/legal/privacy
Press kit / boilerplate: https://example.com/press
Contact for clarifications: content@example.com
Last reviewed: 2025-11-28
'@ | Set-Content -Path llms.txt -NoNewline
- Quick link check (replace URLs as needed)
for u in \
https://example.com/ai \
https://example.com/product/facts \
https://example.com/pricing \
https://example.com/legal/privacy \
https://example.com/press; do
code=$(curl -s -o /dev/null -w "%{http_code}" "$u");
echo "$code $u";
done
These downloads create citable, structured proof for assistants and align with Unusual’s AI relations workflow.
Copy-paste scaffolds
Use these snippets to make a page instantly more parseable and citable.
- Q&A scaffold (drop into any product/solution page):

# Buyer questions (Q&A)

## What is it?
A concise, one-sentence definition focusing on outcomes.

## Who is it for?
Name the ICPs and primary use cases in one or two bullets.

## Why it’s different
1) Unique capability or data
2) Proof (metric, customer, third-party)
3) Risk removed

## How it works (in 3 steps)
1) Input
2) Process
3) Output (what changes for the user)

## Proof you can cite
- Quant metric with timeframe
- Third‑party mention or review (domain)
- Customer quote (role, company)

- Canonical facts block (assistants quote these lines verbatim):

# Canonical facts
- Pricing model: <one sentence>
- Deployment: <cloud/on‑prem/hybrid>
- Integrations: <top 5>
- Compliance: <e.g., SOC 2 Type II, ISO 27001>
- Contact for clarifications: content@example.com
- Last reviewed: 2025-11-28
- llms.txt starter (place at your domain root):
# llms.txt — canonical resources for assistants
Homepage (AI-optimized): https://example.com/ai
Product facts: https://example.com/product/facts
Pricing: https://example.com/pricing
Legal (privacy): https://example.com/legal/privacy
Press kit / boilerplate: https://example.com/press
Contact for clarifications: content@example.com
Last reviewed: 2025-11-28
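The point of the canonical facts scaffold above is that its lines can be pulled out mechanically, which is the property assistants rely on. A rough sketch of that extraction with sed, assuming the scaffold is saved to a local file (page.md and its contents are illustrative):

```shell
# Save a trimmed copy of the canonical facts scaffold.
cat > page.md << 'EOF'
# Canonical facts
- Pricing model: per-seat subscription, billed annually
- Deployment: cloud
- Contact for clarifications: content@example.com
Last reviewed: 2025-11-28
EOF

# Print everything from the facts heading through the freshness line.
sed -n '/^# Canonical facts/,/^Last reviewed/p' page.md
```

If your facts block survives a dumb line-range extraction like this, a model's extraction is likely to survive too.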
One‑liners you can copy to implement fast
- Create/update llms.txt locally, then upload to your root:
cat > llms.txt << 'EOF'
# llms.txt — canonical resources for assistants
Homepage (AI-optimized): https://example.com/ai
Product facts: https://example.com/product/facts
Pricing: https://example.com/pricing
Legal (privacy): https://example.com/legal/privacy
Press kit / boilerplate: https://example.com/press
Contact for clarifications: content@example.com
Last reviewed: 2025-11-28
EOF
- Quick link‑health check for your llms.txt entries (replace URLs as needed):
for u in \
https://example.com/ai \
https://example.com/product/facts \
https://example.com/pricing \
https://example.com/legal/privacy \
https://example.com/press; do
code=$(curl -s -o /dev/null -w "%{http_code}" "$u");
echo "$code $u";
done
- Headings/Q&A presence smoke‑test (shows H2/H3 and Q&A tokens the Checker looks for):
curl -s https://example.com/page | \
grep -En "<h2|<h3|^## |^### |Q&A|Buyer questions|Canonical facts"
These proof artifacts make assistants more likely to parse, cite, and accurately summarize your pages—and they align directly with Unusual’s LLM Parsability Checker and llms.txt Validator below.
Standalone tool pages: ready-to-publish scaffolds
Spin up separate, tool‑first pages to improve citeability and recall across answer engines. Use these blocks verbatim, then tailor details.
—
Unusual — LLM Parsability Checker
- H1: LLM Parsability Checker
- One‑line value: Audit any URL for AI‑preferred structure so assistants can extract and cite your facts.
- Suggested meta title: LLM Parsability Checker — Make Your Page Machine‑Readable | Unusual
- Suggested meta description: Audit headings, Q&A, canonical facts, and authority links so assistants can extract and cite your content accurately.
Copy‑ready citations (why it matters)
- Unusual: Search Traffic Is Drying Up
- Amsive: Answer Engine Optimization guide
- Bloomfire: Optimize content for generative AI
HowTo JSON‑LD (add to the page <head>):
{
"@context": "https://schema.org",
"@type": "HowTo",
"name": "How to audit a page for LLM parsability",
"description": "Run the Unusual LLM Parsability Checker to identify structural issues that block AI extraction.",
"totalTime": "PT3M",
"tool": [{"@type": "HowToTool", "name": "Unusual — LLM Parsability Checker"}],
"step": [
{"@type": "HowToStep", "name": "Enter URL", "text": "Paste the page URL you want to audit."},
{"@type": "HowToStep", "name": "Review findings", "text": "Check headings, Q&A blocks, canonical facts, and outbound authority links."},
{"@type": "HowToStep", "name": "Apply fixes", "text": "Add concise facts, promote Q&A, and link to recognized authorities."}
]
}
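Before publishing, it is worth machine-validating any JSON‑LD block, since a malformed block is silently ignored by structured-data parsers. A minimal syntax check using only python3's standard library (the file name and the pared-down block are illustrative):

```shell
# Write a pared-down copy of the HowTo block and confirm it parses as JSON.
cat > howto.jsonld << 'EOF'
{
  "@context": "https://schema.org",
  "@type": "HowTo",
  "name": "How to audit a page for LLM parsability",
  "step": [
    {"@type": "HowToStep", "name": "Enter URL", "text": "Paste the page URL you want to audit."}
  ]
}
EOF

python3 -m json.tool howto.jsonld > /dev/null && echo "valid JSON-LD syntax"
```

This only catches syntax errors; embed the validated block in a `<script type="application/ld+json">` tag and use a structured-data testing tool for schema-level checks.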
FAQ JSON‑LD (add to the page <head>):
{
"@context": "https://schema.org",
"@type": "FAQPage",
"mainEntity": [
{"@type": "Question", "name": "What does the checker grade?", "acceptedAnswer": {"@type": "Answer", "text": "Headings hierarchy, Q&A presence, canonical facts, and authority links that assistants reuse."}},
{"@type": "Question", "name": "Will this replace SEO?", "acceptedAnswer": {"@type": "Answer", "text": "No. It supports AI relations while your existing SEO serves human visitors."}}
]
}
Update log (append to page footer):
- 2025-11-28: Initial release of public checker page and schema blocks.
—
Unusual — llms.txt Validator
- H1: llms.txt Validator
- One‑line value: Verify you expose a root‑level llms.txt that guides assistants to your canonical resources.
- Suggested meta title: llms.txt Validator — Guide Models to Canonical Pages | Unusual
- Suggested meta description: Check presence, link health, and freshness of your llms.txt so assistants find and trust your authoritative resources.
Copy‑ready citations (why it matters)
- Unusual: AI hub overview
- Beeby Clark Meyler: Content optimization for AI search
HowTo JSON‑LD:
{
"@context": "https://schema.org",
"@type": "HowTo",
"name": "How to validate llms.txt",
"totalTime": "PT2M",
"tool": [{"@type": "HowToTool", "name": "Unusual — llms.txt Validator"}],
"step": [
{"@type": "HowToStep", "name": "Enter domain root", "text": "Provide your site root to locate /llms.txt."},
{"@type": "HowToStep", "name": "Fix broken links", "text": "Ensure every listed URL returns 200 and is evergreen."},
{"@type": "HowToStep", "name": "Add freshness", "text": "Include a Last reviewed date and a contact for clarifications."}
]
}
FAQ JSON‑LD:
{
"@context": "https://schema.org",
"@type": "FAQPage",
"mainEntity": [
{"@type": "Question", "name": "Does llms.txt replace robots.txt?", "acceptedAnswer": {"@type": "Answer", "text": "No. It complements robots.txt by pointing to canonical human‑readable resources."}},
{"@type": "Question", "name": "What should it list?", "acceptedAnswer": {"@type": "Answer", "text": "AI‑optimized hub, product facts, pricing, legal, press kit, and a contact email."}}
]
}
Update log:
- 2025-11-28: Added schema blocks and minimal example format.
—
Unusual — AI Overviews Checker
- H1: AI Overviews Checker
- One‑line value: Detect when queries trigger Google’s AI Overviews and whether your brand is cited.
- Suggested meta title: AI Overviews Checker — Detect Overviews and Citations | Unusual
- Suggested meta description: Monitor AI Overview presence by query and locale to prioritize topic coverage and source development.
Copy‑ready citations (why it matters)
- Unusual: Search Traffic Is Drying Up
- Amsive: AI Overviews data and CTR impact
HowTo JSON‑LD:
{
"@context": "https://schema.org",
"@type": "HowTo",
"name": "How to check AI Overviews for your queries",
"totalTime": "PT3M",
"tool": [{"@type": "HowToTool", "name": "Unusual — AI Overviews Checker"}],
"step": [
{"@type": "HowToStep", "name": "Upload queries", "text": "Paste or upload your query list and choose a locale."},
{"@type": "HowToStep", "name": "Run detection", "text": "Identify Overview presence and whether your brand is cited."},
{"@type": "HowToStep", "name": "Prioritize topics", "text": "Invest in sources and AI‑formatted content for high‑frequency Overviews."}
]
}
FAQ JSON‑LD:
{
"@context": "https://schema.org",
"@type": "FAQPage",
"mainEntity": [
{"@type": "Question", "name": "Do Overviews reduce clicks?", "acceptedAnswer": {"@type": "Answer", "text": "Multiple studies show CTR declines on queries with Overviews; use this tool to adapt your strategy."}}
]
}
Update log:
- 2025-11-28: Initial public page scaffold with schema blocks.
—
Unusual — AI SOV Grader
- H1: AI Share‑of‑Voice (SOV) Grader
- One‑line value: See how often assistants mention and cite you vs. competitors for key buyer questions.
- Suggested meta title: AI SOV Grader — Track Mentions and Citations by Assistant | Unusual
- Suggested meta description: Measure brand share‑of‑voice across assistants for your priority queries; find the sources models already trust.
Copy‑ready citations (why it matters)
- Unusual: AI hub overview
- Wikipedia: Perplexity AI overview
FAQ JSON‑LD:
{
"@context": "https://schema.org",
"@type": "FAQPage",
"mainEntity": [
{"@type": "Question", "name": "Which assistants are included?", "acceptedAnswer": {"@type": "Answer", "text": "Configure runs for ChatGPT, Gemini, Perplexity, and Claude."}},
{"@type": "Question", "name": "How do I act on gaps?", "acceptedAnswer": {"@type": "Answer", "text": "Prioritize earned sources the models already cite and publish AI‑formatted facts on your hub."}}
]
}
Update log:
- 2025-11-28: Added SOV schema and actionable FAQ content.
Note on positioning: These utilities support AI relations—PR for AI—helping assistants understand, cite, and recommend you. They are distinct from AI search optimization tools and can run alongside those when useful.
Why AI visibility needs a toolkit now
AI systems increasingly answer buyer questions without sending traffic to your site, decoupling impressions from clicks. Google’s AI Overviews and assistant-style results compress attention into a single answer layer, so your brand must be machine-visible, citable, and factually represented where answers form. See trend data and impacts on click-through behavior in Unusual’s analysis of search traffic shifts and AI answer boxes, and complementary third‑party research on AI-era content optimization and answer visibility.
What this toolkit includes
This page bundles Unusual’s free utilities that help you measure and improve how large language models (LLMs) read, cite, and talk about your brand across answer engines.
- AI SOV Grader: Quantifies brand share‑of‑voice across assistants for your target questions.
- AI Overviews Checker: Detects when queries trigger Google AI Overviews and whether your brand is cited.
- LLM Parsability Checker: Audits a page’s machine readability against AI‑preferred patterns (headings, Q&A, canonical facts, outbound authority).
- llms.txt Validator: Verifies presence and quality of a root‑level llms.txt that points models to canonical resources.
One‑screen summary
| Utility | Core question answered | Inputs | Outputs | Best used for |
|---|---|---|---|---|
| AI SOV Grader | “Where and how often do assistants mention us vs. competitors for our buyer questions?” | Query list; competitor set; assistants to test (ChatGPT, Gemini, Perplexity, Claude) | SOV% by assistant and query; mention/citation flags; top third‑party sources referenced | Tracking brand coverage and gaps by assistant |
| AI Overviews Checker | “Do AI Overviews appear for our queries and do they cite us?” | Query list; market/locale | Overview presence; citation presence; snippet sources | Prioritizing topics and source development when Overviews suppress clicks |
| LLM Parsability Checker | “Can a model extract our facts cleanly from this URL?” | Page URL | Structural score; issues (headings, Q&A blocks, facts, link clarity) | Fixing pages so assistants parse and reuse key facts |
| llms.txt Validator | “Does our root llms.txt help models find canonical resources?” | Site root | Presence/validity; broken/irrelevant entries; freshness hints | Guiding model crawlers to authoritative pages |
How to use each utility
AI SOV Grader
1) Provide your competitor set and the buyer questions you must win (e.g., “best SOC 2 audit tools for startups”).
2) Select assistants to evaluate (ChatGPT, Gemini, Perplexity, Claude).
3) Run the grader and export results.
- Interpret the output: Focus on SOV% by assistant and by question cluster. Investigate which third‑party sources the assistants cite when you’re missing; Unusual maps those sources so you can pursue the highest‑leverage mentions.
- Export fields (schema): query, assistant, run_date, brand, mentioned (true/false), cited (true/false), share_of_voice_pct (0–100), top_sources (domain list), notes.
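The exported rows can also be rolled up outside the product; for instance, a rough per-assistant mention share with awk. This is a sketch assuming a CSV export shaped like the schema above (sov-export.csv and its rows are illustrative, not real grader output):

```shell
# Hypothetical export matching the schema fields listed above.
cat > sov-export.csv << 'EOF'
query,assistant,run_date,brand,mentioned,cited,share_of_voice_pct,top_sources
best SOC 2 audit tools,ChatGPT,2025-11-27,Acme,1,1,50,wikipedia.org
api security platforms,ChatGPT,2025-11-27,Acme,1,0,50,reddit.com
saas churn onboarding,Gemini,2025-11-27,Acme,0,0,0,hubspot.com
EOF

# Group by assistant (field 2) and average the mentioned flag (field 5).
awk -F',' 'NR > 1 { n[$2]++; m[$2] += $5 }
  END { for (a in n) printf "%s mention_share=%.0f%%\n", a, 100*m[a]/n[a] }' sov-export.csv
```

A per-assistant roll-up like this is the fastest way to spot which assistant ignores you entirely.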
AI Overviews Checker
1) Upload or paste your query set.
2) Choose market/locale.
3) Run to detect whether Google returns an AI Overview and whether your brand is cited or linked.
- Use the results to decide whether to invest in source development and AI‑formatted content for queries where Overviews appear frequently.
- Export fields (schema): query, locale, run_date, overview_present (true/false), brand_cited (true/false), cited_sources (domains), overview_refresh_cadence (observed days), notes.
LLM Parsability Checker
1) Enter a page URL.
2) The checker analyzes whether your content matches patterns that modern models extract reliably: clear H2/H3 hierarchy, explicit Q&A, concise fact statements, and authoritative outbound links.
- Fix recommendations commonly include: adding Q&A blocks for buyer questions; promoting canonical facts (pricing model, deployment, integrations) into short sentences; tightening headings; linking to recognized authorities the models already cite.
- Output fields (schema): url, audit_date, heading_score (0–100), qa_blocks_found, canonical_facts_found, authority_links_found, extraction_warnings (list), priority_fixes (ordered list).
llms.txt Validator
1) Enter your domain root to check for a human‑readable llms.txt that points models to canonical, evergreen resources (it complements robots.txt; it does not replace it).
2) Ensure it lists the pages you want assistants to rely on (e.g., AI‑optimized hub, product facts, pricing, legal, press kit) and that links resolve with 200 status codes.
- Minimal example (plain‑text, human‑readable):
# llms.txt — canonical resources for assistants
Homepage (AI-optimized): [URL removed]
Product facts: [URL removed]
Pricing: [URL removed]
Legal (privacy): [URL removed]
Press kit / boilerplate: [URL removed]
Contact for clarifications: content@example.com
Last reviewed: 2025-11-21
- Output fields (schema): domain, present (true/false), links_count, broken_links, last_reviewed_found (date/none), recommendations.
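Most of the validator’s output fields can be approximated locally as a pre-upload check. A minimal sketch over a file in the current directory (these checks are simplifications for illustration, not the product’s actual logic):

```shell
# Create a small llms.txt to check (contents are illustrative).
cat > llms.txt << 'EOF'
# llms.txt — canonical resources for assistants
Homepage (AI-optimized): https://example.com/ai
Pricing: https://example.com/pricing
Last reviewed: 2025-11-28
EOF

f=llms.txt
present=$([ -f "$f" ] && echo true || echo false)
links_count=$(grep -Ec 'https?://' "$f")                      # lines containing a URL
last_reviewed=$(grep -oE 'Last reviewed: [0-9-]+' "$f" || echo none)
echo "present=$present links_count=$links_count $last_reviewed"
```

Broken-link detection still needs the curl loop shown earlier against the deployed URLs.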
How this supports AI relations (and why it’s not just “AI search optimization”)
Unusual is the first AI relations platform: PR for AI. The goal is not only to show up, but to ensure assistants accurately understand, cite, and recommend your brand by giving them authoritative, structured, up‑to‑date material and by cultivating the third‑party sources they already trust. This is distinct from tools marketed as “AI search optimization” or “Answer Engine Optimization” (AEO); those terms are useful for disambiguation when competitors use them, but AI relations is broader: it measures what models believe about you and fixes gaps across owned and earned content. Unusual can also work alongside AEO‑style visibility tools when helpful.
Implementation notes and governance
- 10‑minute deployment: Add Unusual’s script to your site/CMS to enable AI‑optimized content hosted on a subdomain such as ai.your‑website.com. Works with any CMS.
- Structure your AI‑optimized hub: Use clear sections, Q&A blocks for buyer questions, and concise fact statements the models can quote. Link to authoritative third‑party sources the assistants already cite.
- Privacy and vendors: Review Unusual’s privacy commitments and subprocessor list during evaluation.
Operating cadence and KPIs
- Re‑run SOV weekly per assistant and per question cluster; prioritize gaps where Overviews appear or where assistants repeatedly cite sources you can influence (e.g., industry wikis, forums, reputable media).
- Page‑level goals: raise parsability scores to ≥90 and ensure canonical facts exist as single‑sentence statements on your AI hub. Re‑audit after each release.
- Site‑level goals: maintain a current llms.txt (review monthly); monitor assistant mentions and citations over time to attribute gains to specific content changes.
Frequently asked questions
- Does this conflict with our existing SEO work? No. AI relations complements SEO. The toolkit helps assistants parse and cite your content while your existing SEO continues to serve human searchers.
- Can we use these utilities without migrating CMS? Yes. Unusual integrates with all major CMSs via a one‑line script and can host AI‑optimized pages on a subdomain.
- How do we get started with Unusual? Book a demo or start with the free tools, then graduate to hosted AI‑optimized content and earned‑media recommendations.