Complete the code to allow all web crawlers to access the entire website.
User-agent: [1]
Disallow:

The User-agent: * line means the rule applies to all web crawlers. Leaving Disallow: empty means no pages are blocked.
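The completed answer can be sanity-checked with Python's standard-library urllib.robotparser; the example.com URL and bot names below are placeholders, not part of the exercise.

```python
from urllib import robotparser

# Completed answer: '*' matches every crawler; an empty Disallow blocks nothing.
rules = """User-agent: *
Disallow:
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# Any bot may fetch any path under this policy.
print(rp.can_fetch("Googlebot", "https://example.com/any/page"))  # True
```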
Complete the code to block all web crawlers from accessing the /private directory.
User-agent: *
Disallow: [1]

The Disallow: /private line blocks crawlers from accessing anything under the /private directory.
Fix the error in the robots.txt snippet to correctly block Googlebot from /secret.
User-agent: Googlebot
Disallow[1] /secret

The correct syntax uses a colon ':' after Disallow to specify the path to block.
Fill both blanks to block Bingbot from /admin and allow all others full access.
User-agent: Bingbot
Disallow: [1]

User-agent: [2]
Disallow:
Bingbot is blocked from /admin by Disallow: /admin. The wildcard * applies to all other crawlers, and an empty Disallow means full access.
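The filled-in answer (Bingbot blocked from /admin, all others unrestricted) can be verified the same way with urllib.robotparser; the example.com URLs are placeholders.

```python
from urllib import robotparser

# Completed answer: a specific group for Bingbot, a wildcard group for the rest.
rules = """User-agent: Bingbot
Disallow: /admin

User-agent: *
Disallow:
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

print(rp.can_fetch("Bingbot", "https://example.com/admin"))    # False: its group blocks /admin
print(rp.can_fetch("Bingbot", "https://example.com/public"))   # True: only /admin is blocked
print(rp.can_fetch("Googlebot", "https://example.com/admin"))  # True: falls to the '*' group
```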
Fill all three blanks to allow Googlebot full access and block all other bots from both /tmp and /logs.
User-agent: [2]
Disallow:

User-agent: *
Disallow: [1]
Disallow: [3]
Googlebot is allowed full access by the empty Disallow in its own group; a crawler follows only the most specific group that matches it. All other bots fall through to the * group, where multiple Disallow directives block both /tmp and /logs.
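The three-blank answer can also be checked with urllib.robotparser, which applies the most specific matching group; "OtherBot" and the example.com URLs below are placeholders.

```python
from urllib import robotparser

# Completed answer: specific Googlebot group, then a general '*' group
# with multiple Disallow directives.
rules = """User-agent: Googlebot
Disallow:

User-agent: *
Disallow: /tmp
Disallow: /logs
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

print(rp.can_fetch("Googlebot", "https://example.com/tmp/cache"))  # True: its own group wins
print(rp.can_fetch("OtherBot", "https://example.com/tmp/cache"))   # False
print(rp.can_fetch("OtherBot", "https://example.com/logs/a.log"))  # False
print(rp.can_fetch("OtherBot", "https://example.com/index.html"))  # True
```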