Next.js framework · ~5 mins

Robots.txt configuration in NextJS - Cheat Sheet & Quick Revision

Recall & Review
beginner
What is the purpose of a robots.txt file in a Next.js project?
The robots.txt file tells search engine crawlers which pages or sections of your website they may or may not visit. It helps control what gets crawled and, in turn, what appears in search results.
beginner
How do you serve a robots.txt file in a Next.js app?
You can place a robots.txt file inside the public/ folder. Next.js will serve it automatically at /robots.txt when the site runs.
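For example, a minimal static file could look like this (the contents are an illustrative sketch, not from the original text):

```txt
# public/robots.txt — Next.js serves this file verbatim at /robots.txt
User-agent: *
Allow: /
```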
beginner
What does this robots.txt content mean?
User-agent: *
Disallow: /admin
Allow: /
It means all crawlers (User-agent: *) are told not to visit or index any pages under /admin, but they may visit and index all other pages.
intermediate
How can you dynamically generate a robots.txt file in Next.js?
You can create an API route (e.g., /api/robots.txt) that returns the robots.txt content with a Content-Type: text/plain header, then add a rewrite in next.config.js so that requests for /robots.txt map to that route.
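A sketch of that approach (file names and the production check are illustrative assumptions, not from the original text): a small helper builds the robots.txt body, the API route serves it with the right header, and a rewrite exposes it at /robots.txt.

```javascript
// Hypothetical helper: builds the robots.txt body, with looser rules in production.
function buildRobotsTxt(isProduction) {
  return isProduction
    ? 'User-agent: *\nDisallow: /admin\nAllow: /\n' // production: hide only /admin
    : 'User-agent: *\nDisallow: /\n';               // previews/staging: block everything
}

// pages/api/robots.js — the API route would look roughly like this:
// export default function handler(req, res) {
//   res.setHeader('Content-Type', 'text/plain');
//   res.status(200).send(buildRobotsTxt(process.env.NODE_ENV === 'production'));
// }

// next.config.js — rewrite /robots.txt to the API route:
// module.exports = {
//   async rewrites() {
//     return [{ source: '/robots.txt', destination: '/api/robots' }];
//   },
// };
```

Splitting the body-building logic out of the handler keeps the environment-dependent rules easy to test in isolation.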
beginner
Why is it important to test your robots.txt file after deployment?
Testing ensures search engines see the correct rules. Mistakes can block important pages from being indexed or let private pages be indexed, hurting your site's search visibility.
Where should you place a static robots.txt file in a Next.js project?
A. Inside the <code>public/</code> folder
B. Inside the <code>pages/</code> folder
C. Inside the <code>components/</code> folder
D. Inside the <code>api/</code> folder
What does User-agent: * mean in a robots.txt file?
A. It applies to no search engines
B. It applies only to Google
C. It applies only to Bing
D. It applies to all search engines
How do you tell search engines not to index the /private folder?
A. Block: /private
B. Allow: /private
C. Disallow: /private
D. Noindex: /private
Which HTTP header should be set when serving robots.txt dynamically?
A. Content-Type: text/plain
B. Content-Type: text/html
C. Content-Type: application/json
D. Content-Type: application/xml
What happens if you block all pages in robots.txt?
A. Search engines will index all pages
B. Search engines will not index any pages
C. Search engines will index only the homepage
D. Search engines will ignore the file
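For reference, blocking every page for all crawlers looks like this (a directive sketch; not something you would normally deploy to production):

```txt
User-agent: *
Disallow: /
```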
Explain how to add a robots.txt file to a Next.js project and what it controls.
Think about where static files go and what search engines read.
Describe how to create a dynamic robots.txt in Next.js and why you might want to do that.
Consider when your rules might change based on environment or conditions.