Robots.txt Generator
Build robots.txt files visually. Add user-agent blocks, allow/disallow rules, crawl delays, and sitemap URLs. Presets for blocking AI crawlers, standard SEO config, and more.
Presets
Sitemap URLs
Host Directive (optional)
User-agent: *
Disallow: /admin/
Frequently Asked Questions
What is a robots.txt file?
A robots.txt file is placed at the root of your website (e.g., https://example.com/robots.txt) and tells web crawlers which pages or directories they can and cannot access. It uses the Robots Exclusion Protocol. Most major search engines and bots respect this file, though it is advisory rather than enforced.
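A minimal robots.txt might look like this (the path is illustrative):

```
# Apply to all crawlers; keep them out of /private/
User-agent: *
Disallow: /private/
```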
How do I block AI crawlers like GPTBot and ClaudeBot?
Use the 'Block All AI Crawlers' preset, which generates Disallow: / rules for GPTBot (OpenAI), ChatGPT-User (OpenAI), CCBot (Common Crawl), ClaudeBot (Anthropic), anthropic-ai, Diffbot, and Google-Extended. You can also add custom bot names using the 'custom' user-agent option.
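The preset emits one block per bot. A shortened sketch of the output (three of the listed crawlers):

```
# Block OpenAI's training crawler
User-agent: GPTBot
Disallow: /

# Block Common Crawl's crawler
User-agent: CCBot
Disallow: /

# Block Anthropic's crawler
User-agent: ClaudeBot
Disallow: /
```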
What is Crawl-delay?
The Crawl-delay directive tells a crawler how many seconds to wait between requests to your site. It helps reduce server load from aggressive crawlers. Note that Googlebot ignores Crawl-delay — use Google Search Console's crawl rate settings instead. Other bots like Bingbot and Slurp do respect it.
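For example, to ask Bingbot to wait ten seconds between requests (the delay value is illustrative):

```
User-agent: Bingbot
Crawl-delay: 10
```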
Should my Sitemap URL go in robots.txt?
Yes. Adding a Sitemap directive (e.g., Sitemap: https://example.com/sitemap.xml) in robots.txt helps search engines discover your sitemap automatically. You can add multiple Sitemap lines for multiple sitemap files. Each Sitemap URL must be fully qualified (including the scheme and host); it may point to an XML sitemap, a compressed .xml.gz file, or a sitemap index.
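Sitemap lines are not tied to any user-agent block and can appear anywhere in the file; the URLs below are placeholders:

```
Sitemap: https://example.com/sitemap.xml
Sitemap: https://example.com/blog/sitemap.xml
```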