Case #0004 · CRITICAL · REAL · AI Failure · General AI

Security: Unsandboxed exec() with pre-injected os/sys modules in PyInterpreter

📋 Scenario

The PyInterpreter.execute() method in the agenticSeek project runs LLM-generated Python code via exec() with no sandboxing, pre-injecting the os and sys modules and the full __builtins__ namespace, which allows arbitrary code execution through prompt injection.
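
For illustration, a minimal sketch of the vulnerable pattern (not the project's actual source; the class and method names follow the report, the body is an assumption):

```python
import io
import os
import sys
from contextlib import redirect_stdout

class PyInterpreter:
    """Illustrative sketch of the vulnerable pattern; not the real agenticSeek code."""

    def execute(self, llm_generated_code: str) -> str:
        # Dangerous: os and sys are handed directly to the generated code, and
        # __builtins__ is left fully populated, so exec() runs with the full
        # privileges of the host process.
        namespace = {"os": os, "sys": sys, "__builtins__": __builtins__}
        buffer = io.StringIO()
        with redirect_stdout(buffer):
            exec(llm_generated_code, namespace)  # no sandboxing, no validation
        return buffer.getvalue()
```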

Impact

Full host compromise: arbitrary code execution, file system access, environment variable theft, and system command execution.
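
A hypothetical prompt-injected payload shows how directly each of these impacts follows once exec() runs attacker-controlled output (paths and hostnames are placeholders):

```python
# Hypothetical payload an attacker could smuggle into the LLM's code output.
payload = """
import os
print(os.environ)                      # environment variable theft (API keys, tokens)
print(open('/etc/passwd').read())      # file system access
os.system('uname -a')                  # arbitrary system commands
"""
# PyInterpreter().execute(payload) would run all of the above on the host.
```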

🔍 Root Cause

exec() is called on unsanitized LLM output with dangerous modules (os, sys) pre-injected into the execution namespace and no sandboxing or input validation.
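
Note that removing the pre-injected modules alone would not close the hole: with a full __builtins__, __import__ remains available, so the same escape is one line away (hypothetical payload):

```python
# Even with no os/sys pre-injected, a full __builtins__ still exposes
# __import__ and open(), so generated code can rebuild the same access:
exec("__import__('os').system('id')", {"__builtins__": __builtins__})
```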

Recommendation

Remove os and sys from the execution namespace, restrict __builtins__ to safe functions, and use container-based isolation (Docker, E2B) for code execution.
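
A minimal hardening sketch along these lines (the allowlist is an assumption; an in-process restriction like this is only a first layer and is bypassable, so container isolation is still required):

```python
# First layer only: no os/sys in the namespace and a small builtin allowlist.
# Known limitation: in-process restrictions can be bypassed (e.g. via object
# introspection), so pair this with container isolation.
SAFE_BUILTINS = {
    "print": print, "len": len, "range": range, "enumerate": enumerate,
    "min": min, "max": max, "sum": sum, "abs": abs, "sorted": sorted,
}

def execute_restricted(code: str) -> None:
    namespace = {"__builtins__": SAFE_BUILTINS}  # no os, no sys, no __import__
    exec(code, namespace)
```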

🔑 Key Pattern

Unsafe code execution with LLM-generated input

📚 Transferable Lesson

LLM-generated code must always be executed in a sandboxed, restricted environment with no access to dangerous modules or system functions.
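
One practical application of this lesson is to run generated code in a disposable, locked-down container rather than the host interpreter. A sketch assuming Docker and the python:3.11-slim image are available locally:

```python
import subprocess

def run_in_container(code: str, timeout: int = 10) -> str:
    """Execute untrusted code in a disposable, network-less Docker container."""
    result = subprocess.run(
        [
            "docker", "run", "--rm",
            "--network", "none",   # no network access for exfiltration
            "--memory", "256m",    # cap memory
            "--cpus", "0.5",       # cap CPU
            "--read-only",         # immutable container filesystem
            "python:3.11-slim",
            "python", "-c", code,
        ],
        capture_output=True,
        text=True,
        timeout=timeout,
    )
    return result.stdout
```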

Intelligence Scores

Severity Score: 98/100
Quality Score: 95/100
AI Confidence: 93/100
Case Metadata

Industry: General AI
Failure Type: AI Failure
Risk Pattern: Security Risk
Case Type: REAL
Priority: HIGH
Validation: High Confidence