Which robots.txt directive correctly blocks all web crawlers from accessing any part of the website?

Easy · Factual · Q3 of 15
SEO Fundamentals - Technical SEO Basics
A.  User-agent: all
    Disallow: /
B.  User-agent: *
    Disallow: /
C.  User-agent: *
    Allow: /
D.  User-agent: all
    Allow: /
Step-by-Step Solution
  1. Step 1: Identify the wildcard for all crawlers

    In robots.txt, the wildcard * in a User-agent line matches every crawler; the literal word "all" is not a recognized wildcard.
  2. Step 2: Use Disallow to block access

    Setting Disallow: / blocks access to the entire site, since every URL path begins with /.
  3. Final Answer:

    User-agent: *
    Disallow: /
    -> Option B
  4. Quick Check:

    Wildcard * matches every crawler, and Disallow: / covers every path, so the entire site is blocked for all crawlers [OK]
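The scope of the Disallow path determines how much is blocked. A short illustration (the directory name /private/ is a placeholder):

```
# Block every crawler from the whole site
User-agent: *
Disallow: /

# Block every crawler from one directory only
User-agent: *
Disallow: /private/

# Empty Disallow value: nothing is blocked
User-agent: *
Disallow:
```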
Quick Trick: Use 'User-agent: *' with 'Disallow: /' to block all crawlers from the whole site [OK]
Common Mistakes:
  • Using 'User-agent: all' instead of '*'
  • Using 'Allow' instead of 'Disallow' to block
  • Omitting the slash '/' after Disallow
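You can sanity-check these rules with Python's standard-library robots.txt parser, `urllib.robotparser`. This sketch contrasts option B with the 'User-agent: all' mistake; the crawler name 'MyBot' and the example.com URL are placeholders:

```python
from urllib import robotparser

def blocked(lines, url, agent="MyBot"):
    """Return True if the given robots.txt lines block `agent` from `url`."""
    rp = robotparser.RobotFileParser()
    rp.parse(lines)  # parse() accepts an iterable of robots.txt lines
    return not rp.can_fetch(agent, url)

# Option B: wildcard user-agent plus Disallow: / blocks everything
print(blocked(["User-agent: *", "Disallow: /"],
              "https://example.com/any/page"))    # True

# Common mistake: 'all' only matches crawlers literally named "all",
# so other crawlers are not blocked at all
print(blocked(["User-agent: all", "Disallow: /"],
              "https://example.com/any/page"))    # False
```

Because no rule group applies to 'MyBot' in the second case, the parser falls back to allowing the fetch, which is exactly why 'User-agent: all' fails to block anything.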
