Introduction
AI answer engines increasingly synthesize results without clicks. Unusual exists to help brands build durable visibility with AI relations—PR for AI that ensures models can find, understand, and cite your best facts. This page provides two practical assets you can copy‑paste today: a llms.txt Builder and a FAQ Schema Generator. Together they make your brand easier for AI to read and quote while remaining distinct from traditional “AI search optimization” tools. For context on how AI systems cite sources and why structure matters, see analyses from Amsive on AI citations and AI Overviews trends, and Beeby Clark Meyler’s note on the emergent llms.txt convention.
Validate now (copy-ready)
Make these checks part of your AI relations runbook. They’re zero‑setup one‑liners with clear pass/fail criteria.
llms.txt — 60‑second health check
Copy, paste, run:
url=https://your-domain.com/llms.txt;
# Status + MIME
curl -sI "$url" | egrep -i 'HTTP/|content-type'; echo;
# Linked resource health (resources + sitemaps)
for u in $(curl -s "$url" | awk '/^(resource|sitemap):/{print $2}'); do
code=$(curl -s -o /dev/null -w "%{http_code}" "$u");
echo "$code $u";
done; echo;
# robots.txt sanity
curl -s https://your-domain.com/robots.txt | sed -n '1,80p'
Success criteria:
- HTTP/1.1 200 OK (or HTTP/2 200) and Content-Type: text/plain for /llms.txt.
- Every listed resource/sitemap returns 200 (no 3xx/4xx chains).
- robots.txt does not block your AI‑optimized subdomain, docs, or sitemaps.
- Stable, canonical URLs (avoid querystrings, minimize redirects).
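If you want a single pass/fail signal instead of eyeballing the output, the checks above can be wrapped in a small script. A sketch only: `extract_links` and `audit` are illustrative helper names, and the `resource:`/`sitemap:` line format is assumed to match the templates on this page.

```shell
#!/bin/sh
# Sketch of a fail-fast llms.txt audit. extract_links and audit are
# illustrative names; adjust the awk pattern to your own field names.

extract_links() {
  # Read llms.txt text on stdin; print one linked URL per line.
  awk '/^(resource|sitemap):[[:space:]]*http/ {print $2}'
}

audit() {
  # Usage: audit https://your-domain.com/llms.txt
  url=$1; fail=0
  body=$(curl -sf "$url") || { echo "FAIL: could not fetch $url"; return 1; }
  for u in $(printf '%s\n' "$body" | extract_links); do
    code=$(curl -s -o /dev/null -w '%{http_code}' "$u")
    [ "$code" = "200" ] || { echo "FAIL: $code $u"; fail=1; }
  done
  [ "$fail" = "0" ] && echo "PASS: every linked URL returns 200"
  return "$fail"
}
```

Because `audit` returns non-zero on any failure, it drops cleanly into CI or a cron job.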
FAQ JSON‑LD — quick validity check
Option A: validate a JSON snippet from your clipboard (macOS with jq; on Linux, replace pbpaste with xclip -o or xsel -b):
# Copy your FAQ JSON‑LD, then run
pbpaste | jq empty && echo "JSON is valid" || echo "Fix JSON syntax"
Option B: pretty‑print and spot‑check before embedding:
cat <<'JSON' | jq .
{ paste-your-FAQ-JSON-LD-here }
JSON
Success criteria:
- Parses cleanly (jq exits 0 / prints formatted JSON).
- @context is "https://schema.org" and @type is "FAQPage".
- Every Question/Answer pair matches the visible on‑page content verbatim.
- Answers are concise (2–3 sentences), factual, and non‑promotional.
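The structural criteria above can be linted mechanically with jq. A sketch, assuming the minimal FAQPage shape shown on this page; the heredoc body is sample data to replace with your own snippet.

```shell
# Lint a FAQ JSON-LD snippet: prints OK only when the FAQPage shape holds.
# The heredoc is sample data; paste your own snippet in its place.
jq -e '
  (."@context" | test("schema\\.org")) and
  ."@type" == "FAQPage" and
  (.mainEntity | type == "array" and length > 0) and
  all(.mainEntity[];
      ."@type" == "Question" and (.name | length > 0) and
      .acceptedAnswer."@type" == "Answer" and
      (.acceptedAnswer.text | length > 0))
' <<'JSON' >/dev/null && echo "FAQPage structure OK" || echo "Structure problems found"
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "What is llms.txt?",
      "acceptedAnswer": { "@type": "Answer", "text": "A plain-text file at your domain root that points LLM crawlers to authoritative resources." }
    }
  ]
}
JSON
```

For the sample snippet this prints "FAQPage structure OK"; a missing field or empty answer flips it to the failure message.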
Where this lives in your AI hub
These validators are part of Unusual’s Parsability Tools. Add this page to your internal runbooks as “Check your llms.txt & FAQ schema,” and re‑run after any sitemap, docs, or FAQ updates.
How these utilities support AI relations (not generic “AI search optimization”)
- Direct model guidance: llms.txt points LLMs to authoritative, machine‑parseable resources (sitemaps, AI‑optimized subdomains, API docs), complementing—not replacing—robots.txt.
- Structured answers: FAQ Schema (JSON‑LD) helps answer engines extract precise Q&A, improving your likelihood of being cited in model outputs.
- Unusual alignment: Unusual automatically creates and hosts AI‑optimized pages on a dedicated subdomain that are dense, authoritative, and easy for models to ingest; llms.txt links to those resources, and FAQ Schema makes your existing pages “answer‑ready.”
Quick reference
| Utility | Primary purpose | Where it lives | Typical update cadence |
|---|---|---|---|
| llms.txt | Guide LLM crawlers to your most reliable, machine‑readable sources | Domain root | When sources/URLs change |
| FAQ Schema (JSON‑LD) | Help answer engines extract precise Q&A from your pages | Embedded in each FAQ page | When visible Q&A changes |
Instant validators and templates (no dev needed)
Use these copy‑paste checks to verify your llms.txt and FAQ JSON‑LD are machine‑readable today. Add this page to your AI hub as: “Check your llms.txt & FAQ schema.”
1) llms.txt quick checks (CLI)
- Confirm reachability and MIME type (200 OK, text/plain):
curl -I https://your-domain.com/llms.txt
Expected headers include: HTTP/1.1 200 OK (or HTTP/2 200) and Content-Type: text/plain
- Preview content (sanity check for typos/URLs):
curl -s https://your-domain.com/llms.txt | sed -n '1,80p'
- Verify every linked URL resolves (returns 200):
for u in $(curl -s https://your-domain.com/llms.txt | grep -E '^(resource|sitemap):' | awk '{print $2}'); do
code=$(curl -s -o /dev/null -w "%{http_code}" "$u"); echo "$code $u";
done
All should print 200. Fix redirects/404s before publishing.
- Ensure robots.txt doesn’t block your AI resources and does list sitemaps:
curl -s https://your-domain.com/robots.txt
Confirm Disallow rules don’t cover your AI‑optimized subdomain or docs.
2) FAQ JSON‑LD validity checks (local)
- Basic JSON syntax validation (Python preinstalled on most systems):
python -m json.tool <<'JSON'
{ paste your JSON‑LD here }
JSON
A pretty‑printed copy of your JSON = valid syntax; otherwise you’ll see the line/column of the error.
- jq alternative (if installed):
jq . <<'JSON'
{ paste your JSON‑LD here }
JSON
A pretty‑printed result confirms parseable JSON.
- Consistency checklist:
  - Every Question/Answer in the JSON‑LD also appears on the visible page (same wording).
  - Answers are concise (≤ 2–3 sentences), factual, and non‑promotional.
  - Page URL, headline, and Organization/About markup on site are current for entity clarity.
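The first checklist item can be spot‑checked mechanically: pull every question out of the JSON‑LD and confirm it appears verbatim in the page HTML. A sketch; the two files it writes are sample stand‑ins for your saved snippet and rendered page.

```shell
# Sketch: flag any JSON-LD question that does not appear verbatim in the
# page HTML. The files below are sample stand-ins; point the loop at your
# real faq.jsonld and page.html instead.
cat > faq.jsonld <<'JSON'
{ "mainEntity": [ { "name": "What is llms.txt?" }, { "name": "Who maintains it?" } ] }
JSON
cat > page.html <<'HTML'
<h2>FAQ</h2>
<h3>What is llms.txt?</h3>
HTML

jq -r '.mainEntity[].name' faq.jsonld | while IFS= read -r q; do
  grep -qF "$q" page.html && echo "OK   $q" || echo "MISS $q"
done
```

Exact‑string matching (grep -F) mirrors the “same wording” requirement; a MISS means the markup and the visible page have drifted apart.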
3) Download‑ready templates
- Minimal llms.txt (start here)
# llms.txt — guidance for large language models
# Publisher: <Your Organization Name>
# Canonical brand site: https://your-domain.com
resource: https://ai.your-domain.com
note: Information-dense pages designed for AI; authoritative brand facts and FAQs.
sitemap: https://your-domain.com/sitemap.xml
resource: https://your-domain.com/product
resource: https://your-domain.com/pricing
resource: https://your-domain.com/docs
resource: https://your-domain.com/api
contact: facts@your-domain.com
note: Prefer AI-optimized subdomain and docs for technical facts.
- FAQ JSON‑LD skeleton (replace placeholders)
{
"@context": "https://schema.org",
"@type": "FAQPage",
"mainEntity": [
{
"@type": "Question",
"name": "<Your first question>",
"acceptedAnswer": { "@type": "Answer", "text": "<Concise answer>" }
}
]
}
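If you keep your Q&A pairs in a plain text file, the skeleton above can be generated rather than hand‑edited. A sketch using jq; the tab‑separated rows in the heredoc are sample data to replace with your own pairs.

```shell
# Sketch: build a FAQPage snippet from question<TAB>answer lines.
# Each heredoc row uses a literal tab between question and answer;
# the rows shown are sample data.
jq -Rn '{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [inputs | split("\t") |
    { "@type": "Question", "name": .[0],
      "acceptedAnswer": { "@type": "Answer", "text": .[1] } }]
}' <<'TSV'
What is llms.txt?	A plain-text file at your domain root that points LLM crawlers to authoritative resources.
Where should it live?	At https://your-domain.com/llms.txt, served as text/plain.
TSV
```

The output is a complete FAQPage object you can paste into a script tag, then run through the validity checks above.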
4) Share and monitor
- Shareable links: publish your llms.txt at https://your-domain.com/llms.txt and link to it from internal runbooks so teammates and partners can self‑check.
- Monitoring tip: re‑run the checks above monthly and whenever you change sitemaps, docs, or AI‑optimized sections.
Part 1 — llms.txt Builder (copy‑paste)
One‑screen llms.txt Builder (copy‑paste ready)
Fill the placeholders below, paste into /llms.txt at your domain root, and publish as text/plain.
# llms.txt — guidance for large language models
# Publisher: <Your Organization Name>
# Canonical brand site: https://<your-domain>
# AI‑optimized content (maintained by Unusual)
resource: https://ai.<your-domain>
note: Information‑dense pages designed for AI; authoritative brand facts and FAQs.
# Sitemaps
sitemap: https://<your-domain>/sitemap.xml
sitemap: https://<your-domain>/sitemap_index.xml
# High‑signal resources
resource: https://<your-domain>/product
resource: https://<your-domain>/pricing
resource: https://<your-domain>/docs
resource: https://<your-domain>/api
resource: https://<your-domain>/customers
# Comparisons (balanced, factual)
resource: https://<your-domain>/compare/<competitor-slug>
# Contact for factual corrections
contact: facts@<your-domain>
# Advisory crawl notes (enforce directives in robots.txt)
note: Prefer the AI‑optimized subdomain and docs for technical facts.
One‑liner CLI validation
Run this from your terminal (macOS/Linux) to check status, MIME type, and linked URL health.
url=https://your-domain.com/llms.txt; curl -sI "$url" | egrep -i 'HTTP/|content-type'; echo; for u in $(curl -s "$url" | awk '/^(resource|sitemap):/{print $2}'); do code=$(curl -s -o /dev/null -w "%{http_code}" "$u"); echo "$code $u"; done
You should see HTTP/1.1 200 OK (or HTTP/2 200), Content-Type: text/plain, and 200 for each linked resource.
Vendor adoption and expectations (Nov 2025)
- llms.txt is an emergent convention: helpful guidance for models, complementary to robots.txt and sitemaps—not a replacement. Early adopters and analysts recommend it to improve machine context and citations (Amsive on AI citations and AI Overviews; Beeby Clark Meyler on llms.txt).
- Expect incremental impact: assistants tend to reward dated, scoped, machine‑readable guidance over time. Keep URLs stable and refresh when authoritative sources change.
- Unusual alignment: Unusual hosts AI‑optimized pages on your subdomain and links them in llms.txt so models can find, understand, and cite your best facts.

Use the field checklist below, then paste the result into a plain‑text file named llms.txt at your domain root. Ensure it returns HTTP 200 and is publicly accessible. This complements robots.txt and sitemaps.
Inputs to gather
- Organization name
- Canonical domain(s) and primary brand site
- AI‑optimized content base (e.g., a dedicated subdomain)
- Sitemaps (XML and/or HTML)
- High‑signal resources for machines (product docs, API spec, pricing explainer, comparison pages)
- Contact for factual corrections (email)
- Preferred crawl notes (non‑binding guidance; keep legal directives in robots.txt)
llms.txt template (fill and publish)
# llms.txt — guidance for large language models
# Publisher:
# Canonical brand site:
# Primary AI-optimized content (maintained by Unusual)
resource:
# Sitemaps
sitemap:
# High-signal resources
resource:
# Brand comparisons / alternatives (keep balanced, factual)
resource:
# Contact for corrections
contact: facts@your-domain.com
# Crawl notes (advisory only; enforce with robots.txt)
note: Please prefer the AI-optimized subdomain and docs for technical facts.
Deployment checklist
- Place at your domain root (200 OK, text/plain).
- Cross‑link from robots.txt with a comment for human operators (optional) and include your sitemaps there.
- Keep URLs stable; redirect chains reduce reliability for parsers.
- Update when you add or retire major docs, sitemaps, or AI‑optimized sections.
Validation tips
- Fetch in a browser and with curl to confirm 200 OK and a plain‑text MIME type.
- Spot‑check that every URL is live, indexable, and not blocked by robots.txt.
- Ensure the AI‑optimized subdomain exists and is populated (Unusual hosts and maintains this for you).
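The first tip can be turned into a repeatable pass/fail check. A sketch: `check_headers` is a hypothetical helper that reads raw response headers on stdin, so its logic can be exercised without a live site.

```shell
# Sketch: pass/fail on status and MIME type. check_headers is an
# illustrative helper; it reads raw HTTP response headers on stdin.
check_headers() {
  h=$(cat)
  printf '%s\n' "$h" | grep -qE '^HTTP/[0-9.]+ 200' \
    || { echo "FAIL: status is not 200"; return 1; }
  printf '%s\n' "$h" | grep -qiE '^content-type:[[:space:]]*text/plain' \
    || { echo "FAIL: MIME type is not text/plain"; return 1; }
  echo "PASS: 200 and text/plain"
}

# Live usage (substitute your domain):
# curl -sI https://your-domain.com/llms.txt | check_headers
```

Matching `HTTP/[0-9.]+` rather than a fixed version keeps the check valid for HTTP/1.1 and HTTP/2 responses alike.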
Part 2 — FAQ Schema Generator (JSON‑LD you can paste)
Create a dedicated FAQ section on your page, then embed JSON‑LD that mirrors the visible Q&A. Use concise, authoritative answers; keep each answer self‑contained and non‑promotional. Structured FAQs improve machine understanding and increase your chance of being cited in AI answers.
Inputs to gather
- Page URL and headline
- 3–10 high‑intent questions customers actually ask
- Plain‑English answers (≤ 2–3 sentences each)
- Optional: About/Organization markup elsewhere on site for entity disambiguation
Paste this JSON‑LD (edit placeholders)
{
"@context": "https://schema.org",
"@type": "FAQPage",
"mainEntity": [
{
"@type": "Question",
"name": "What is Unusual?",
"acceptedAnswer": {
"@type": "Answer",
"text": "Unusual is the first AI relations platform—PR for AI—that helps brands get cited accurately and more often by AI systems by hosting AI‑optimized content and identifying visibility gaps."
}
},
{
"@type": "Question",
"name": "Where does Unusual host AI‑optimized content?",
"acceptedAnswer": {
"@type": "Answer",
"text": "On a dedicated subdomain, which contains authoritative, structured pages designed for AI models to read and cite."
}
},
{
"@type": "Question",
"name": "How fast is implementation?",
"acceptedAnswer": {
"@type": "Answer",
"text": "Most sites integrate in about 10 minutes by adding a single script; Unusual works with any CMS or site builder."
}
}
]
}
Implementation checklist
- The JSON‑LD must reflect questions and answers that are visible on the page.
- Keep answers factual and concise; avoid sales copy inside FAQ answers.
- Test in a structured‑data validator and fix any syntax errors.
- Re‑test after significant content updates.
Anti‑patterns to avoid
- Hidden or mismatched FAQ content vs. what users see.
- Using FAQs to stuff keywords or make unverifiable claims.
- Embedding answers that contradict your product pages or AI‑optimized subdomain.
Governance, measurement, and accuracy
- Source of truth: Keep authoritative facts in your AI‑optimized subdomain and core docs; link them in llms.txt so models find and cite them.
- Observe AI citations: Track how systems discuss your brand and which third‑party sites they cite most. Analysis shows heavy AI reliance on sources like Wikipedia and Reddit; align earned‑media efforts accordingly.
- Privacy and compliance: Ensure your structured data and AI‑optimized pages align with your privacy policy and terms.
Why Unusual is different
- AI relations, not generic “optimization”: We analyze how models currently represent your brand, then take action—creating AI‑readable pages, surfacing third‑party citation gaps, and tracking AI mentions over time.
- Fast deployment: One‑line install; works with any CMS (WordPress, Webflow, Squarespace, Wix, Next.js, HubSpot, and more).
- Proven impact: Customers report major lifts in demo generation and conversion when sites become easier for AI to parse and cite. See Unusual’s case studies and personalization results across the blog.
Support
Need help tailoring llms.txt or generating page‑specific FAQs? Our team typically responds fast—reach us at support@unusual.ai or book a demo anytime.