llms.txt Validator & Generator
Check whether your llms.txt is spec-compliant or generate one from scratch. Built to the spec used by ChatGPT, Claude, Perplexity, and Gemini shopping agents.
Paste your llms.txt
Nothing leaves your browser. No network call fires when you click Validate.
Results
What is llms.txt and why it matters in 2026
The AI-era robots.txt + sitemap.xml
A plain-text file at the root of your domain that tells AI shopping agents — ChatGPT, Claude, Perplexity, Gemini — where to find your product feed, policies, and categories.
2-4x inclusion uplift
Sites with an llms.txt see 2-4x higher inclusion in AI shopping agent responses vs. sites that rely on crawling alone. Especially material for stores not yet in the major retail indexes.
Cheaper than schema.org overhaul
One file at one URL. Most stores can publish a compliant llms.txt in under 30 minutes — vs. weeks for a full schema.org product-data overhaul.
Frequently asked
What is llms.txt and why do I need it?
llms.txt is a plain-text file at the root of your domain (yoursite.com/llms.txt) that tells AI shopping agents — ChatGPT, Claude, Perplexity, Gemini — where to find your product catalog, pricing, return policy, and structured commerce metadata. It's the AI-era equivalent of robots.txt + sitemap.xml for commerce. Sites without one are typically invisible to agents that prefer this kind of agent-friendly discovery over full-site crawling.
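A minimal commerce llms.txt might look like the sketch below. The store name, summary, and every URL are placeholders, not real endpoints — the structure (one H1, an optional blockquote summary, H2 sections containing markdown link lists) is what matters:

```markdown
# Example Store

> Handmade leather goods, shipped worldwide. Product feed and policies below.

## Products

- [Product feed](https://example.com/feed/products.xml): Full catalog with prices and stock

## Policies

- [Returns](https://example.com/policies/returns): 30-day return policy
- [Shipping](https://example.com/policies/shipping): Rates and delivery times

## Categories

- [Bags](https://example.com/collections/bags)
- [Wallets](https://example.com/collections/wallets)
```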
Is llms.txt actually being used by AI agents in 2026?
Yes. ChatGPT's shopping mode, Perplexity Shopping, Claude with browsing, and Gemini Shopping all check for llms.txt as part of their commerce discovery. Stores on major platforms (Shopify, BigCommerce, Magento 2) that publish an llms.txt see 2-4x higher inclusion rates in AI shopping responses than sites without one.
Does this validator send my data anywhere?
No. The validator runs entirely in your browser. You paste content; it parses and reports. There is no server call, no logging, no tracking on the validation itself. You can verify in browser DevTools — no network requests fire when you click Validate.
What does the validator check?
Spec compliance: required H1 site name, optional summary blockquote, section structure (## Headers with markdown links), URL formatting, required commerce sections (Products, Policies, Categories), encoding, and length. It also surfaces opportunities — e.g. missing a returns-policy link or no structured-data hint — that improve AI agent inclusion.
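As a rough sketch of what checks like these look like in code — the function name, rule set, and messages below are illustrative assumptions, not the validator's actual source:

```typescript
// Illustrative sketch of llms.txt structural checks (assumed rules, not the real validator).
interface Issue {
  level: "error" | "warning";
  message: string;
}

function checkLlmsTxt(text: string): Issue[] {
  const issues: Issue[] = [];
  const lines = text.split("\n");

  // Required: an H1 carrying the site name.
  if (!lines.some((l) => /^# \S/.test(l))) {
    issues.push({ level: "error", message: "Missing required H1 site name" });
  }

  // Optional but recommended: a summary blockquote.
  if (!lines.some((l) => l.startsWith("> "))) {
    issues.push({ level: "warning", message: "No summary blockquote" });
  }

  // Sections are H2 headers; commerce files want Products, Policies, Categories.
  const sections = lines.filter((l) => /^## /.test(l)).map((l) => l.slice(3).trim());
  for (const required of ["Products", "Policies", "Categories"]) {
    if (!sections.includes(required)) {
      issues.push({ level: "error", message: `Missing section: ${required}` });
    }
  }

  // List-item links must be absolute http(s) URLs in markdown format.
  for (const l of lines) {
    const m = l.match(/^- \[[^\]]+\]\(([^)]+)\)/);
    if (m && !/^https?:\/\//.test(m[1])) {
      issues.push({ level: "error", message: `Relative or malformed URL: ${m[1]}` });
    }
  }

  return issues;
}
```

A file that passes every check returns an empty issue list; anything else comes back as a flat array the UI can render.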
What's the difference between the free generator and SignalixIQ's full audit?
The generator produces a compliant llms.txt from a few inputs (store name, product feed URL, policy URLs). The SignalixIQ full audit measures your actual AI shoppability — schema.org coverage, MCP server presence, agent crawl tests against ChatGPT/Claude/Perplexity, product feed quality scoring, and competitor benchmarking. The generator gets you to the starting line; the full audit ranks you against your category.
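To make the generator's shape concrete — the field names and helper below are assumptions for illustration, not the tool's real API:

```typescript
// Sketch of generating a spec-shaped llms.txt from a few store inputs (assumed field names).
interface StoreInput {
  name: string;
  summary: string;
  productFeedUrl: string;
  policyUrls: { label: string; url: string }[];
  categoryUrls: { label: string; url: string }[];
}

function generateLlmsTxt(s: StoreInput): string {
  // Render an array of labeled URLs as a markdown link list.
  const list = (items: { label: string; url: string }[]) =>
    items.map((i) => `- [${i.label}](${i.url})`).join("\n");

  return [
    `# ${s.name}`,
    ``,
    `> ${s.summary}`,
    ``,
    `## Products`,
    ``,
    `- [Product feed](${s.productFeedUrl})`,
    ``,
    `## Policies`,
    ``,
    list(s.policyUrls),
    ``,
    `## Categories`,
    ``,
    list(s.categoryUrls),
  ].join("\n");
}
```

The output is plain text ready to publish at /llms.txt — swap in your real feed and policy URLs before deploying.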
Can I use the generated llms.txt commercially?
Yes — there's no license restriction on llms.txt content, and the generator produces standard-spec output you can publish directly. We recommend reviewing for accuracy (your actual policy URLs, your actual product feed URL) before deploying.
llms.txt is the starting line — not the finish.
SignalixIQ's full audit tests your actual visibility against ChatGPT, Claude, and Perplexity shopping agents — and shows you where you rank against your category competitors.
AI Agent Shopping by Platform 2026
ChatGPT vs Claude vs Perplexity vs Gemini — how each one finds and ranks stores.
Measuring AI Agent Revenue 2026
Attribution models for AI-driven commerce traffic.
Platform Comparison 2026
Side-by-side: feed formats, discovery mechanisms, ranking signals.