Using LiteLLM with curator

This guide demonstrates how to use LiteLLM as a backend for curator to generate synthetic data using various LLM providers. We'll walk through an example of generating synthetic recipes, but this approach can be adapted for any synthetic data generation task.

Prerequisites

  • Python 3.10+

  • Curator (pip install bespokelabs-curator)

  • Access to an LLM provider (e.g., Gemini API key)

Steps

1. Create a curator.LLM Subclass

First, create a class that inherits from curator.LLM. You'll need to implement two key methods:

  • prompt(): Generates the prompt for the LLM

  • parse(): Processes the LLM's response into your desired format

"""Generate synthetic recipes for different cuisines using curator."""

from datasets import Dataset

from bespokelabs import curator


class RecipeGenerator(curator.LLM):
    """A recipe generator that generates recipes for different cuisines."""

    def prompt(self, input: dict) -> str:
        """Generate a prompt using the template and cuisine."""
        return f"Generate a random {input['cuisine']} recipe. Be creative but keep it realistic."

    def parse(self, input: dict, response: str) -> dict:
        """Parse the model response along with the input to the model into the desired output format.."""
        return {
            "recipe": response,
            "cuisine": input["cuisine"],
        }

2. Set Up Your Seed Dataset

Create a dataset of inputs using the HuggingFace Dataset class:

3. Configure LiteLLM Backend

Initialise your generator with LiteLLM configuration:
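A sketch of the initialisation, assuming the RecipeGenerator class from step 1. The model name is an example; any `<provider>/<model>` string that LiteLLM supports should work here.

```python
# backend="litellm" routes requests through LiteLLM, so the model string
# follows LiteLLM's "<provider>/<model>" naming convention.
recipe_generator = RecipeGenerator(
    model_name="gemini/gemini-1.5-flash",
    backend="litellm",
)
```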

4. Generate Data

Generate your synthetic data:
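Calling the generator on the seed dataset issues one request per row and returns a HuggingFace Dataset containing the fields produced by `parse()`. This sketch assumes the `recipe_generator` and `cuisines` objects from the previous steps:

```python
# One LLM request per input row; results are collected into a Dataset
# with the "recipe" and "cuisine" columns returned by parse().
recipes = recipe_generator(cuisines)
print(recipes.to_pandas())
```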

LiteLLM Configuration

API Keys and Environment Variables

For Gemini:
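LiteLLM reads the Gemini key from the `GEMINI_API_KEY` environment variable:

```shell
export GEMINI_API_KEY="your-api-key-here"
```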

Curator Configuration

Rate Limits

Configure rate limit with backend parameters:
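A sketch using curator's `backend_params`; the values below are placeholders to tune against your provider's quota:

```python
from bespokelabs import curator

llm = curator.LLM(
    model_name="gemini/gemini-1.5-flash",
    backend="litellm",
    backend_params={
        "max_requests_per_minute": 2_000,    # cap on request rate
        "max_tokens_per_minute": 4_000_000,  # cap on token throughput
    },
)
```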

Providers and Models

Here is a list of providers. It is not exhaustive; please refer to the litellm provider documentation.

Together

Other common models (more info):
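With LiteLLM, Together models are addressed via the `together_ai/` prefix and authenticated with the `TOGETHERAI_API_KEY` environment variable. The model shown below is an example, not a recommendation:

```python
from bespokelabs import curator

# Requires TOGETHERAI_API_KEY in the environment.
llm = curator.LLM(
    model_name="together_ai/meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo",
    backend="litellm",
)
```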

DeepInfra

Other common models (all models are available with the deepinfra prefix):
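Similarly, DeepInfra models use the `deepinfra/` prefix and the `DEEPINFRA_API_KEY` environment variable. The model name below is an example:

```python
from bespokelabs import curator

# Requires DEEPINFRA_API_KEY in the environment.
llm = curator.LLM(
    model_name="deepinfra/meta-llama/Meta-Llama-3.1-8B-Instruct",
    backend="litellm",
)
```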
