robots.txt Builder

Create a robots.txt file to control how search engines crawl your site.

Generated robots.txt

User-agent: *
Allow: /
Disallow: /admin
Disallow: /private

Sitemap: https://example.com/sitemap.xml

Deep Dive

Generate a valid `robots.txt` file using a visual rule builder. Add allow/disallow rules for specific user agents (Googlebot, Bingbot, or all bots), specify the sitemap URL, and configure a crawl delay. Download the ready-to-deploy file. All generation happens in the browser.
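The generation step can be sketched in a few lines of Python. The rule-set shape and field names below are illustrative assumptions, not the tool's actual internals:

```python
# Illustrative sketch of robots.txt generation; the rule-set format
# (dicts with 'user_agent', 'allow', 'disallow', 'crawl_delay') is an
# assumption for this example, not the builder's real data model.

def build_robots_txt(rule_sets, sitemap_url=None):
    """Render rule sets into robots.txt text."""
    blocks = []
    for rules in rule_sets:
        lines = [f"User-agent: {rules['user_agent']}"]
        lines += [f"Allow: {p}" for p in rules.get("allow", [])]
        lines += [f"Disallow: {p}" for p in rules.get("disallow", [])]
        if "crawl_delay" in rules:
            lines.append(f"Crawl-delay: {rules['crawl_delay']}")
        blocks.append("\n".join(lines))
    if sitemap_url:
        blocks.append(f"Sitemap: {sitemap_url}")
    return "\n\n".join(blocks) + "\n"

print(build_robots_txt(
    [{"user_agent": "*", "allow": ["/"], "disallow": ["/admin", "/private"]}],
    sitemap_url="https://example.com/sitemap.xml",
))
```

Run on the rules above, this reproduces the sample file shown earlier: one block per user agent, blank lines between blocks, and the `Sitemap:` line at the end.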

Common use cases

  • Blocking search engines from indexing an admin panel
  • Preventing crawlers from accessing test or staging directories
  • Adding a sitemap reference to robots.txt
  • Setting a crawl delay for a resource-constrained server

Examples

Block all bots from /admin/

Input

User agent: `*`; Disallow path: `/admin/`

Output

User-agent: *
Disallow: /admin/

Common Errors & Fixes

Robots.txt is blocking the entire site

`Disallow: /` blocks all content for the matched user agent. Verify you didn't accidentally use `/` as the disallow path.
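You can verify what a rule set blocks with Python's standard-library parser (independent of this tool):

```python
# Check what a rule set actually blocks using the stdlib robots.txt parser.
from urllib.robotparser import RobotFileParser

rules = """\
User-agent: *
Disallow: /
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

# With 'Disallow: /', every URL is blocked for the matched user agent.
print(rp.can_fetch("*", "https://example.com/"))           # False
print(rp.can_fetch("*", "https://example.com/blog/post"))  # False
```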

File is not being respected by a crawler

Ensure the file is at `https://yourdomain.com/robots.txt` (the exact root). Subdomains need their own robots.txt file.

Expert FAQ

Does robots.txt prevent pages from appearing in search results?

Blocking a page in robots.txt prevents crawling but does not always prevent indexing. If other sites link to the page, Google may still index it. Use `noindex` meta tags to prevent indexing.
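For reference, the standard `noindex` directive goes in the page's `<head>` (or, for non-HTML resources, in an `X-Robots-Tag: noindex` response header):

```html
<!-- Tells compliant crawlers not to index this page -->
<meta name="robots" content="noindex">
```

Note that the page must remain crawlable for the directive to be seen; blocking it in robots.txt prevents crawlers from reading the `noindex` at all.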

Is robots.txt case-sensitive?

User-agent names are case-insensitive, but paths are matched case-sensitively: `Disallow: /Admin/` does not block `/admin/`.

Can I block a specific directory?

Yes. `Disallow: /private/` blocks all URLs under `/private/`. The trailing slash is important.
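The effect of the trailing slash can be demonstrated with Python's standard-library parser (independent of this tool):

```python
# Matching is prefix-based on the path, so the trailing slash matters.
from urllib.robotparser import RobotFileParser

rules = """\
User-agent: *
Disallow: /private/
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

print(rp.can_fetch("*", "/private/report.html"))  # False: under /private/
print(rp.can_fetch("*", "/private-notes"))        # True: prefix /private/ doesn't match
```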

Alternatives

  • robots-txt.com
  • Google Search Console robots.txt tester
  • SEMrush Site Audit