Save $$$ with batch mode
Providers like OpenAI and Anthropic offer batch mode, which lets you upload a large set of prompts to be processed asynchronously at a lower cost (typically a 50% discount). However, these batch APIs are often cumbersome to manage:
- You have to prepare a batch input file, upload it, and then periodically poll for the results.
- Large datasets typically exceed the per-batch size limits, so you have to split your data into multiple smaller batches and track each one, adding complexity.
With Curator, you only need to toggle a single flag to save $$$, with none of the headache!
Using batch mode
Let's look at a simple example of reannotating instructions from the WildChat dataset with new responses from gpt-4o-mini.
First, we load the WildChat dataset from the Hugging Face Hub:
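Here is a minimal sketch using the `datasets` library (the `allenai/WildChat` dataset id and the 3,000-row subset are assumptions; adjust both to your needs):

```python
from datasets import load_dataset

# Load WildChat from the Hugging Face Hub and take a small slice
# so the batch stays cheap while experimenting.
dataset = load_dataset("allenai/WildChat", split="train")
dataset = dataset.select(range(3_000))
```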
We then create a new `LLM` class and apply it to the dataset. All you need to do to enable batching is set `batch=True` when initializing your `LLM` object, and you're done!
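Below is a minimal sketch of what that can look like. The `WildChatReannotator` class, its `prompt`/`parse` methods, and the `"conversation"` column layout are illustrative assumptions; the essential piece is the `batch=True` flag:

```python
from bespokelabs import curator

class WildChatReannotator(curator.LLM):
    """Asks gpt-4o-mini for a fresh response to each WildChat instruction."""

    def prompt(self, input: dict) -> str:
        # Assumption: each row stores the chat under a "conversation" column;
        # we reuse the first user turn as the new prompt.
        return input["conversation"][0]["content"]

    def parse(self, input: dict, response: str) -> dict:
        # Pair the original instruction with the newly generated response.
        return {
            "instruction": input["conversation"][0]["content"],
            "new_response": response,
        }

# batch=True is the only switch needed: requests go through the provider's
# batch API instead of the synchronous endpoint.
reannotator = WildChatReannotator(model_name="gpt-4o-mini", batch=True)
new_dataset = reannotator(dataset)
```

Behind the scenes, Curator takes care of splitting the dataset into appropriately sized batches, uploading the batch files, and polling until the results are ready.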
Supported Models
Currently, batch mode is supported only for OpenAI and Anthropic models. Feel free to tell us which providers you'd like us to support next, or send a PR if you want to contribute!