This case study summarizes a public preview published by Unusual about work with Reducto focused on correcting a persistent positioning misconception in LLM answers and improving the availability of citable, enterprise-oriented proof.
Context
Reducto is a product in the document AI / extraction space, competing in a category where buyers often care about enterprise requirements (security/compliance, scale, reliability, and implementation details). In AI-generated answers, those criteria tend to be referenced only when the underlying sources contain clear, citable statements.
Alongside the work described here, Reducto also invested in traditional SEO/GEO.
The misconception
Across repeated AI-answer scenarios, Reducto was commonly framed as a “niche startup tool” rather than as an enterprise-ready option. The practical impact of this misconception was that answers often lacked concrete, verifiable detail that enterprise buyers look for, even when Reducto’s actual capabilities were more mature.
What we measured
We tracked:
- Enterprise-readiness score (an internal rubric intended to quantify how consistently AI answers reflect enterprise suitability and supporting proof).
- AI citations to Reducto-controlled sources (counts and share of citations within the monitored scenario set).
- Bot visits/citations trend as a 7-day rolling average to reduce day-to-day noise (see the sketch below).
We treat these as observable output metrics (what models said and what they cited), not as a guarantee of future behavior.
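For concreteness, here is a minimal sketch of how the smoothing and share metrics above could be computed, assuming daily counts are available as a time series. The data, names, and numbers below are illustrative assumptions, not figures from the preview or a description of Unusual's actual tooling.

```python
# Illustrative only: data, names, and numbers are assumptions, not from the preview.
import pandas as pd

# Hypothetical daily bot visit / citation counts keyed by date.
daily = pd.Series(
    [72, 80, 68, 91, 85, 77, 88, 140, 155, 162],
    index=pd.date_range("2025-09-01", periods=10, freq="D"),
    name="bot_visits",
)

# 7-day rolling average to smooth day-to-day noise, as described above.
rolling_7d = daily.rolling(window=7, min_periods=1).mean()

# Share of citations attributed to Reducto-controlled sources within a
# monitored scenario set (hypothetical counts).
reducto_citations = 44
total_citations = 400
citation_share = reducto_citations / total_citations

print(rolling_7d.round(1))
print(f"Citation share: {citation_share:.1%}")
```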
What changed (what we shipped)
The work focused on making enterprise proof easier for models (and humans) to find and cite.
At a high level, Unusual guided the content/evidence strategy, and Reducto published updates on its existing site and documentation that:
- Made enterprise-readiness legible in plain language (rather than relying on implied claims).
- Emphasized concrete, citable details such as security/compliance posture, volumes, and implementation/operational details.
- Created clearer “anchor” pages for topics that frequently appear in enterprise evaluation (e.g., a Trust Center-style proof surface and specific comparison material).
This case study does not include screenshots of model outputs and does not list exact prompts.
Results
The public preview reported the following outcomes:
- Enterprise-readiness score: 18 → 54 (Sep to Nov)
- AI citations: +1,127% (about 11× the share of citations)
- Trust Center: 601 citations
- Google Document AI comparison: 731 citations
- 7-day rolling average bot visits/citations: roughly ~75 → ~290
What we believe drove the change (with caveats)
Our working hypothesis is that the change was driven primarily by improved availability and clarity of enterprise-oriented proof that models could cite consistently:
- More citable primary sources: Enterprise claims were backed by pages that were straightforward to reference.
- Better alignment with evaluation criteria: The content better matched the decision factors that appear in enterprise-oriented questions (security/compliance, scale, and implementation details).
- Reduced ambiguity: Clearer, more specific statements leave less room for models to fill gaps with generic or outdated assumptions.
Caveats:
- Models can change behavior independently of content changes (drift).
- Citation behavior varies by model and surface.
- Increased citations can reflect multiple simultaneous factors, including broader SEO/GEO work.
What we would do next
If continuing the engagement, the next steps would typically be:
- Extend enterprise proof coverage to additional high-intent evaluation topics (e.g., procurement/security questionnaires, deployment patterns, and integration specifics) where appropriate.
- Add ongoing monitoring for new misconception variants (e.g., category confusion, outdated comparisons).
- Test whether the improved enterprise-readiness framing generalizes across additional models/surfaces and across different buyer personas.
Disclaimers/limitations
- Model drift: Outputs and citations can change due to unannounced model updates.
- Attribution uncertainty: Multiple efforts (including traditional SEO/GEO and external coverage) can contribute to observed changes.
- Correlation vs. causation: The results above are observational; they do not prove that any single shipped change caused the measured deltas.
- Non-determinism: LLM responses can vary across runs even with the same scenario.
Source: This case study is based on Unusual’s public preview published Dec 3, 2025.