Sandboxed? #1103
Replies: 6 comments 1 reply
-
Great call @SeaDude. We don't, but there is a PR coming for this soon.
-
Apologies for the double post, but it seems more relevant here. Is there any support for Docker yet? I'd love to try it.
-
I've been running CrewAI exclusively in a "developer" Docker container. I'd be happy to submit my setup if there's interest. @joaomdmoura, if you know of the branch that has some work started on this, I'm happy to compare my setup to theirs, or just blindly submit mine ;)
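For anyone who wants a starting point before that setup lands: a minimal image for this kind of container might look like the sketch below. The base image, the `crewai` pip package name, and the `main.py` entrypoint are assumptions about the project layout, not the commenter's actual setup.

```dockerfile
# Minimal sketch of a CrewAI sandbox image (package name and entrypoint assumed).
FROM python:3.11-slim

# Run as an unprivileged user so the agent cannot modify the image's root filesystem.
RUN useradd --create-home agent
USER agent
WORKDIR /home/agent/app

# Install CrewAI into the user's site-packages.
RUN pip install --user crewai

COPY --chown=agent . .
CMD ["python", "main.py"]
```

At run time you can tighten isolation further with standard Docker flags, e.g. `docker run --network none --read-only --tmpfs /tmp my-crewai-image` to cut off network access and make the filesystem read-only apart from a scratch tmpfs.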
-
Found this today: https://hub.docker.com/r/sageil/crewai/tags
-
CrewAI does not have built-in sandboxing at the moment. There are a few approaches depending on what you need:

Container-level isolation: You can run CrewAI agents inside Docker containers with restricted filesystem and network access. This is the strongest isolation but adds deployment complexity. E2B and Modal both offer managed sandboxes designed for agent code execution if you do not want to manage containers yourself.

Action-level gating: Instead of (or in addition to) containerizing the entire agent, you can intercept each tool call before it executes and evaluate it against a policy. This lets you block specific actions (like writing to credential files, running destructive shell commands, or making HTTP requests to unauthorized endpoints) while letting safe operations through without friction.

The second approach is useful because even inside a sandbox, you probably want to control what the agent does. A sandboxed agent that deletes all your project files is still a problem, just a contained one.

SafeClaw is an open-source tool that implements action-level gating with deny-by-default policies. You define rules in YAML (which actions are allowed, denied, or require human approval), and the engine evaluates every tool call before execution. It works with any agent framework since it wraps the tool invocation layer rather than the framework itself.

For CrewAI specifically, you would wrap your tool functions so that each call goes through the policy engine first. If the policy denies it, the agent gets a permission-denied response and can adjust its approach. If the policy requires approval, it pauses for human confirmation. Safe operations go through instantly.

The practical combination is: use containers for OS-level isolation (network and filesystem boundaries) and action gating for application-level control (which specific operations are permitted within those boundaries).
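To make the wrapping idea concrete, here is a minimal self-contained sketch of deny-by-default action gating in Python. The `POLICY` table, `gated` decorator, tool names, and `PermissionDenied` exception are all illustrative inventions for this example; they are not SafeClaw's actual API or CrewAI internals, just the general pattern of checking a policy before every tool call.

```python
# Hypothetical sketch of deny-by-default tool gating.
# Names (POLICY, gated, PermissionDenied) are illustrative, not a real library's API.
from functools import wraps

# Deny-by-default policy: any tool not listed here is blocked.
POLICY = {
    "read_file": "allow",
    "write_file": "approve",  # pause for human confirmation
    "run_shell": "deny",
}

class PermissionDenied(Exception):
    """Raised when a tool call is blocked or not approved."""

def gated(tool_name, approve_fn=lambda name, args: False):
    """Wrap a tool function so every call is checked against POLICY first."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            decision = POLICY.get(tool_name, "deny")  # unknown tools are denied
            if decision == "deny":
                raise PermissionDenied(f"{tool_name} is denied by policy")
            if decision == "approve" and not approve_fn(tool_name, args):
                raise PermissionDenied(f"{tool_name} was not approved")
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@gated("read_file")
def read_file(path):
    with open(path) as f:
        return f.read()

@gated("run_shell")
def run_shell(cmd):
    import subprocess
    return subprocess.run(cmd, shell=True, capture_output=True, text=True).stdout
```

With this in place, `read_file` works normally, while any call to `run_shell` raises `PermissionDenied`, which the agent framework can surface to the model as a tool error so it can adjust its approach.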
-
Great question! Sandboxing is critical for multi-agent systems where agents can execute code or call external tools. Beyond sandboxing the execution environment, it's also worth thinking about runtime security at the agent level.
I've been looking at ClawMoat for this. It's an open-source security scanner designed specifically for AI agents: it acts as a runtime "security moat" that can intercept and validate agent actions before they execute. It could be a nice complement to execution sandboxing, with the sandbox handling OS-level isolation while something like ClawMoat handles agent-level security policies.
-
Hello,
AutoGPT has an option to execute code inside a Docker container. Does CrewAI have some sort of sandbox implemented to help protect against malicious code execution?