Execute LLM-generated code
We have built a code executor that runs LLM-generated code. This is useful in many situations:
- You want to include only error-free code in your training data. This approach is used in Open Thoughts.
- The LLM generates code to produce visualizations, etc.
- Agents and tool use.
Code execution works by subclassing CodeExecutor. The subclass defines three methods:
- code: Returns the piece of code to be run. This is usually part of the row (you can use curator.LLM to generate this code).
- code_input: Optional; returns JSON representing values to be passed to input() calls in the code.
- code_output: This is where you parse the output of the execution.
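As a rough illustration of how the three methods fit together, here is a stdlib-only sketch of the pattern. The class name SimpleCodeExecutor, the run method, and the row keys are hypothetical stand-ins, not curator's actual API:

```python
import json
import subprocess
import sys


class SimpleCodeExecutor:
    """Hypothetical stand-in for the three-method interface described above."""

    def code(self, row):
        # The code to run is usually part of the dataset row.
        return row["generated_code"]

    def code_input(self, row):
        # Optional: JSON representing values fed to input() calls in the code.
        return json.dumps(row.get("test_input", ""))

    def code_output(self, row, execution_output):
        # Parse the raw stdout of the execution back into the row.
        row["result"] = execution_output.strip()
        return row

    def run(self, row):
        # Toy local backend: run the code in a subprocess, piping code_input
        # to its stdin and capturing stdout for code_output.
        proc = subprocess.run(
            [sys.executable, "-c", self.code(row)],
            input=json.loads(self.code_input(row)),
            capture_output=True,
            text=True,
            timeout=10,
        )
        return self.code_output(row, proc.stdout)


row = {"generated_code": "x = input()\nprint(int(x) * 2)", "test_input": "21"}
print(SimpleCodeExecutor().run(row)["result"])  # → 42
```

In curator itself you would only implement the three methods; the library supplies the execution backend.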
We offer four backends for running the code:
- Multiprocessing: The default backend, activated with CodeExecutor(backend="multiprocessing"). This runs code locally and is therefore the least safe option.
- Docker: Use CodeExecutor(backend="docker") to run the code inside a Docker container. A safer option than multiprocessing.
- Ray: If you have a Ray cluster, you can use it by setting CodeExecutor(backend="ray"). This is useful when your code can take a long time to run.
- E2B: Code can also be run using e2b.dev. Use CodeExecutor(backend="e2b").
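To make the safety trade-off concrete, here is a stdlib sketch of what a multiprocessing-style local backend amounts to: the generated code runs in a child process on your machine, so a timeout protects against hangs but provides no sandboxing. The names here are illustrative, not curator's internals:

```python
import multiprocessing


def _exec_code(code, queue):
    # Runs in a child process. exec() still shares the host's filesystem
    # and network, which is why a local backend is the least safe option.
    namespace = {}
    try:
        exec(code, namespace)
        queue.put(("ok", namespace.get("result")))
    except Exception as exc:
        queue.put(("error", repr(exc)))


def run_locally(code, timeout=5):
    queue = multiprocessing.Queue()
    proc = multiprocessing.Process(target=_exec_code, args=(code, queue))
    proc.start()
    proc.join(timeout)
    if proc.is_alive():
        proc.terminate()  # stops infinite loops, but is not a sandbox
        proc.join()
        return ("timeout", None)
    return queue.get()


print(run_locally("result = sum(range(10))"))
```

Docker and E2B add the isolation this sketch lacks, while Ray distributes long-running executions across a cluster.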