Easy · 📝 Conceptual · Question 2 of 15
SEO Fundamentals - Technical SEO Basics
Which directive in robots.txt is used to prevent crawlers from accessing specific pages?
A. Allow
B. Crawl-delay
C. Disallow
D. Sitemap
Step-by-Step Solution:
  1. Step 1: Identify the directive that blocks URLs

    The Disallow directive tells crawlers which pages or directories they should not crawl.
  2. Step 2: Differentiate from other directives

    Allow explicitly permits crawling of a path (often used to carve out an exception inside a disallowed folder), Crawl-delay asks crawlers to wait between requests, and Sitemap points crawlers to the XML sitemap file. (A combined robots.txt sketch follows the Quick Check below.)
  3. Final Answer:

    Disallow -> Option C
  4. Quick Check:

    Disallow = Blocks pages ✓
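Example robots.txt (a minimal sketch for illustration; the /private/ and /private/press/ paths and the sitemap URL are hypothetical placeholders, not part of the question):

    # Apply the rules below to all crawlers
    User-agent: *
    # Disallow blocks crawlers from the /private/ folder
    Disallow: /private/
    # Allow re-opens one subfolder inside the blocked folder
    Allow: /private/press/
    # Crawl-delay only asks crawlers to wait between requests; not all crawlers honor it
    Crawl-delay: 10
    # Sitemap points crawlers to the XML sitemap; it does not block anything
    Sitemap: https://example.com/sitemap.xml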
Quick Trick: Disallow blocks URLs from crawlers ✓
Common Mistakes:
  • Using Allow to block pages
  • Confusing Crawl-delay with blocking (contrasted in the sketch below)
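For contrast, a short sketch of the mistaken rules next to the one that actually blocks (the /checkout/ path is a hypothetical placeholder):

    # Wrong: Allow only grants access, it never blocks a page
    #   Allow: /checkout/
    # Wrong: Crawl-delay only slows requests, the page stays crawlable
    #   Crawl-delay: 5
    # Right: Disallow is the directive that blocks the path
    User-agent: *
    Disallow: /checkout/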
