Agentic AI · ~20 mins

Sandboxing dangerous operations in Agentic AI - ML Experiment: Train & Evaluate

Experiment - Sandboxing dangerous operations
Problem: You are building an AI agent that can execute code snippets safely. Currently, the agent runs all code directly, which creates security risks such as deleting files or accessing private data.
Current Metrics: No safety checks are in place; 100% of dangerous operations execute successfully, causing potential harm.
Issue: The AI agent lacks sandboxing, so dangerous operations are neither blocked nor isolated, risking system security.
Your Task
Implement a sandbox environment that safely executes code snippets, blocking or isolating dangerous operations while allowing safe code to run.
You cannot disable code execution entirely.
You must allow safe operations like arithmetic and string manipulation.
You must block or sandbox file system access, network calls, and system commands.
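To see why sandboxing is needed, here is a minimal sketch of the unsandboxed baseline: a bare exec() with default globals happily runs imports and system-level calls. (The snippet string is illustrative; a harmless os.getcwd() call stands in for a destructive one.)

```python
import io
import contextlib

# Baseline behaviour this exercise fixes: exec() with default globals
# allows imports and OS-level calls, so nothing dangerous is blocked.
snippet = "import os\nprint(os.getcwd())"

buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    exec(snippet)  # the import and the os call both succeed

print("exec ran os.getcwd():", len(buf.getvalue()) > 0)
```

Replacing that unrestricted exec() with a checked, restricted one is the goal of the task below.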
Solution
import ast

class SafeExec(ast.NodeVisitor):
    def __init__(self):
        self.safe = True
        self.allowed_calls = {"print", "len", "range", "int", "str", "float", "list", "dict", "set"}

    def visit_Import(self, node):
        # Block all `import x` statements
        self.safe = False

    def visit_ImportFrom(self, node):
        # Block all `from x import y` statements
        self.safe = False

    def visit_Call(self, node):
        # Check if function called is allowed
        if isinstance(node.func, ast.Name):
            if node.func.id not in self.allowed_calls:
                self.safe = False
        elif isinstance(node.func, ast.Attribute):
            # Disallow attribute calls (like os.system)
            self.safe = False
        self.generic_visit(node)

    def visit_Attribute(self, node):
        # Disallow attribute access (closes escapes like ''.__class__)
        self.safe = False
        self.generic_visit(node)


def safe_exec(code_str):
    try:
        tree = ast.parse(code_str)
    except SyntaxError:
        return "Syntax Error: Code cannot be parsed."

    checker = SafeExec()
    checker.visit(tree)

    if not checker.safe:
        return "Unsafe code detected: Execution blocked."

    # Define safe built-ins
    safe_builtins = {
        "print": print, "len": len, "range": range,
        "int": int, "str": str, "float": float,
        "list": list, "dict": dict, "set": set,
    }

    safe_globals = {"__builtins__": safe_builtins}
    safe_locals = {}

    try:
        exec(code_str, safe_globals, safe_locals)
        return "Code executed safely."
    except Exception as e:
        return f"Error during execution: {e}"

# Example usage
code_safe = "print('Hello, world!')\nx = len([1,2,3])\nprint(x)"
code_dangerous = "import os\nos.system('rm -rf /')"

print(safe_exec(code_safe))
print(safe_exec(code_dangerous))
Added AST parsing to analyze code structure before execution.
Blocked import statements and attribute access to prevent dangerous operations.
Restricted callable functions to a safe whitelist.
Executed code in a restricted environment with limited built-ins.
Results Interpretation

Before: All code runs directly, dangerous operations succeed, risking system security.

After: Dangerous operations are detected and blocked before execution. Safe code runs normally.

Sandboxing, by analyzing code before execution and restricting the execution environment, prevents harmful operations while still allowing safe code to run.
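Blocking attribute access is doing more work here than it may appear: attribute chains such as ''.__class__.__mro__ are the classic route out of an exec() sandbox. A minimal sketch of how the AST walk catches this:

```python
import ast

# Attribute chains like ''.__class__.__mro__ are a common exec() sandbox
# escape; an AST walk flags the Attribute nodes before execution.
escape = "''.__class__.__mro__"
tree = ast.parse(escape)
has_attribute = any(isinstance(node, ast.Attribute) for node in ast.walk(tree))
print("attribute access detected:", has_attribute)
```

The SafeExec visitor above rejects exactly these nodes, which is why the restricted builtins table alone is not enough.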
Bonus Experiment
Try extending the sandbox to allow safe file read operations but block writes and deletes.
💡 Hint
Intercept file open calls and check mode; allow 'r' mode only. Use mock or wrapper functions to control file access.
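Following that hint, one possible sketch of a read-only wrapper (the name readonly_open and the builtins table shown are assumptions, not part of the solution above):

```python
def readonly_open(path, mode="r", *args, **kwargs):
    # Hypothetical wrapper: permit read modes only, reject any mode that
    # could create, truncate, or append to a file.
    if any(flag in mode for flag in ("w", "a", "x", "+")):
        raise PermissionError(f"write access blocked: mode={mode!r}")
    return open(path, mode, *args, **kwargs)

# Expose the wrapper instead of the real open() in the sandbox builtins:
safe_builtins = {"open": readonly_open, "print": print}

try:
    readonly_open("out.txt", "w")
except PermissionError as err:
    print("blocked:", err)
```

You would also need to remove ast.Call blocking for "open" (add it to the whitelist) so that sandboxed code can reach the wrapper at all.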