How to Create robots.txt in Next.js for SEO and Crawling
In Next.js, create a robots.txt file inside the public folder to serve it automatically at the root URL. This file controls how search engines crawl your site and is accessible at /robots.txt without extra configuration.

Syntax
Place a plain text file named robots.txt inside the public folder of your Next.js project. The file uses simple rules to allow or disallow web crawlers from accessing parts of your site.
Example rules include:
- User-agent: * applies the rules that follow to all crawlers.
- Disallow: /path blocks crawlers from that path.
- Allow: /path explicitly allows crawling a path.
The public folder is special in Next.js; files here are served statically at the root URL.
```plaintext
User-agent: *
Disallow:
```

An empty Disallow value permits crawlers to access the entire site.
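As a sketch, the file can be created from the project root in one step; the commands below only assume a standard Next.js layout with a public folder:

```shell
# Create an allow-everything robots.txt in the public folder
# of a Next.js project (run from the project root).
mkdir -p public
printf 'User-agent: *\nDisallow:\n' > public/robots.txt

# Show what was written
cat public/robots.txt
```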
Example
This example shows how to create a robots.txt file that allows all crawlers to access everything except the /admin path.
```plaintext
User-agent: *
Disallow: /admin
```
Output
When you run your Next.js app and visit https://yourdomain.com/robots.txt, you will see:
```plaintext
User-agent: *
Disallow: /admin
```
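To see this behavior without deploying, you can serve the public folder with any static file server; the sketch below uses Python's built-in http.server as a stand-in for Next.js's static handler, with a hypothetical port:

```shell
# Recreate the example file and serve the public folder locally,
# then fetch robots.txt the way a crawler would.
mkdir -p public
printf 'User-agent: *\nDisallow: /admin\n' > public/robots.txt

# Start a throwaway static server on port 8000 (port choice is arbitrary).
python3 -m http.server 8000 --directory public >/dev/null 2>&1 &
SERVER_PID=$!
sleep 1

# Fetch the file over HTTP, then stop the server.
BODY="$(curl -s http://localhost:8000/robots.txt)"
kill "$SERVER_PID"
echo "$BODY"
```

Against a deployed site, the equivalent check is simply `curl -s https://yourdomain.com/robots.txt`.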
Common Pitfalls
- Not placing the file in the public folder: if you put robots.txt anywhere else, Next.js won't serve it at /robots.txt.
- Incorrect file format: the file must be plain text, not JSON or HTML.
- Forgetting to redeploy: changes to robots.txt require redeploying your Next.js app before the updated file is served.
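The first two pitfalls can be caught before deploying; this is a sketch of a pre-deploy check that assumes it runs from the project root:

```shell
# Pre-deploy sanity check for robots.txt placement and format.
# (The example file is created here so the check is self-contained.)
mkdir -p public
printf 'User-agent: *\nDisallow: /admin\n' > public/robots.txt

# Pitfall 1: the file must live in public/, not pages/ or app/.
if [ ! -f public/robots.txt ]; then
  echo 'error: public/robots.txt is missing' >&2
  exit 1
fi

# Pitfall 2: it must be plain text, not HTML or JSON.
first_char="$(head -c 1 public/robots.txt)"
case "$first_char" in
  '<'|'{') echo 'error: robots.txt must be plain text' >&2; exit 1 ;;
esac

echo 'robots.txt check passed'
```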
Quick Reference
- Put robots.txt in the public/ folder.
- Use plain text format with User-agent and Disallow rules.
- Access it at the /robots.txt URL after deployment.
- Update and redeploy to change rules.
Key Takeaways
- Place your robots.txt file inside the public folder in Next.js to serve it at /robots.txt.
- Use simple plain text rules like User-agent and Disallow to control crawler access.
- Always redeploy your app after changing robots.txt to update it on the server.
- Do not put robots.txt in pages or other folders; it must be in public for static serving.