Agentic AI · ~20 mins

Sandboxing Dangerous Operations in Agentic AI - Practice Problems & Coding Challenges

Challenge - 5 Problems
🧠 Conceptual · Intermediate
Why sandboxing is important in AI agents

Imagine you have an AI agent that can execute code on your computer. Why is sandboxing these operations important?

A. To allow the AI to connect to the internet without restrictions.
B. To make the AI run faster by giving it more CPU power.
C. To prevent the AI from accessing or damaging sensitive files or system resources.
D. To let the AI change its own code freely.
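To see why this matters in practice, here is a minimal sketch (the secret file and its contents are hypothetical) showing that plain `exec` gives untrusted code full access to the host filesystem:

```python
import os
import tempfile

# Create a stand-in for a sensitive file on the host (hypothetical contents).
secret = tempfile.NamedTemporaryFile(delete=False, suffix=".txt")
secret.write(b"api_key=12345")
secret.close()

# Untrusted, AI-generated code: with no sandbox, exec() runs it with
# the same privileges as the agent process, so the file is readable.
untrusted = f"data = open({secret.name!r}).read()"
env = {}
exec(untrusted, env)
print(env["data"])  # the "secret" leaks: api_key=12345

os.unlink(secret.name)
```

The same unrestricted access would let generated code delete files or spawn shell commands, which is exactly what sandboxing is meant to prevent.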
💻 Code Output · Intermediate
Output of sandboxed code execution

What will be the output of this sandboxed Python code snippet?

sandbox_env = {'__builtins__': {}}
code = 'result = 5 + 3'
exec(code, sandbox_env)
output = sandbox_env.get('result', None)
print(output)
A. None
B. 8
C. NameError
D. SyntaxError
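The pattern in this question can be explored with a different expression (the values below are illustrative, not the quiz answer): an empty `__builtins__` dictionary still permits pure arithmetic, but blocks access to built-in functions such as `open`.

```python
# Same sandboxing pattern as the question, with a different expression.
sandbox_env = {'__builtins__': {}}

# Pure arithmetic needs no built-ins, so it succeeds.
exec('value = 10 * 4', sandbox_env)
print(sandbox_env.get('value'))  # -> 40

# Built-in functions are gone from the namespace, so open() fails.
try:
    exec('open("/etc/passwd")', sandbox_env)
except NameError as err:
    print('blocked:', err)
```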
Model Choice · Advanced
Choosing a sandboxing method for AI agents

You want to run untrusted AI-generated code safely. Which sandboxing method provides the strongest isolation?

A. Using Python's <code>exec</code> with an empty <code>__builtins__</code> dictionary.
B. Running code in a separate Docker container with limited permissions.
C. Running code directly on the host machine with user confirmation.
D. Using a virtual environment (venv) without additional restrictions.
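As a rough illustration of process-level isolation, here is a portable sketch that runs untrusted code in a separate interpreter process with a timeout. This is a weaker approximation of container isolation; a real container (e.g. `docker run --rm --network none --memory 128m ...` — flags shown only as an example) adds kernel-level namespace and resource isolation on top of this idea.

```python
import subprocess
import sys

# Untrusted code runs in its own process: it cannot touch the agent's
# in-memory state, and a timeout bounds how long it may run.
untrusted = "print(sum(range(10)))"

result = subprocess.run(
    [sys.executable, "-I", "-c", untrusted],  # -I: Python isolated mode
    capture_output=True,
    text=True,
    timeout=5,
)
print(result.stdout.strip())  # -> 45
```

Unlike a container, this child process still shares the host filesystem and network, which is why Docker-style isolation is the stronger choice.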
Hyperparameter · Advanced
Configuring sandbox resource limits

When sandboxing AI code execution, which resource limit is most critical to prevent denial-of-service attacks?

A. Memory limit to prevent excessive RAM usage.
B. Disk space limit to prevent large file creation.
C. Network bandwidth limit to slow down data transfer.
D. CPU time limit to stop infinite loops or heavy computation.
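A CPU-time limit can be sketched with the standard `resource` module (POSIX-only; the one-second cap below is an arbitrary example value): the kernel sends SIGXCPU once the child exceeds its CPU allowance, so a runaway loop is killed instead of hanging the agent.

```python
import resource
import subprocess
import sys

def limit_cpu():
    # Cap the child process at 1 second of CPU time (example value).
    resource.setrlimit(resource.RLIMIT_CPU, (1, 1))

# An infinite loop standing in for runaway AI-generated code.
proc = subprocess.run(
    [sys.executable, "-c", "while True: pass"],
    preexec_fn=limit_cpu,  # applied in the child before exec
    timeout=10,            # belt-and-suspenders wall-clock bound
)
print(proc.returncode)  # non-zero: the kernel terminated the child
```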
🔧 Debug · Expert
Debugging sandbox escape vulnerability

Given this sandboxed Python code, which option describes how an attacker could escape the sandbox?

sandbox_env = {'__builtins__': {}}
code = '''
import os
os.system('echo escaped')
'''
exec(code, sandbox_env)
A. The code raises a NameError because 'import' is not allowed in the sandbox.
B. The code raises an AttributeError because 'os' has no attribute 'system'.
C. The code runs successfully and prints 'escaped' because 'os' is accessible.
D. The code raises a SyntaxError due to missing built-ins.
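Separate from the snippet above, a classic escape technique is worth knowing: even with `__builtins__` emptied, plain attribute access still works, so sandboxed code can walk the type hierarchy back to every loaded class. The sketch below only demonstrates the first step of such an escape.

```python
# Empty __builtins__ does not block attribute access or introspection.
sandbox_env = {'__builtins__': {}}

# From any object, climb to `object` and enumerate all its subclasses.
probe = "subclasses = ().__class__.__base__.__subclasses__()"
exec(probe, sandbox_env)

# The sandboxed code recovered the full list of loaded classes, a
# common first step toward reaching os.system or similar functionality.
print(len(sandbox_env['subclasses']) > 0)  # -> True
```

This is why restricted `exec` namespaces are not considered a real sandbox, and process- or container-level isolation is preferred for untrusted code.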