Using LiteLLM for diverse providers

This guide demonstrates how to use LiteLLM as a backend for Curator to generate synthetic data with a wide range of LLM providers. We'll walk through an example of generating synthetic recipes, but the approach adapts to any synthetic data generation task.

Prerequisites

  • Python 3.10+

  • Curator (pip install bespokelabs-curator)

  • Access to an LLM provider (e.g., Gemini API key)

Steps

1. Create a curator.LLM Subclass

First, create a class that inherits from curator.LLM. You'll need to implement two key methods:

  • prompt(): Generates the prompt for the LLM

  • parse(): Processes the LLM's response into your desired format

"""Generate synthetic recipes for different cuisines using curator."""

from datasets import Dataset

from bespokelabs import curator


class RecipeGenerator(curator.LLM):
    """A recipe generator that generates recipes for different cuisines."""

    def prompt(self, input: dict) -> str:
        """Generate a prompt using the template and cuisine."""
        return f"Generate a random {input['cuisine']} recipe. Be creative but keep it realistic."

    def parse(self, input: dict, response: str) -> dict:
        """Parse the model response along with the input to the model into the desired output format.."""
        return {
            "recipe": response,
            "cuisine": input["cuisine"],
        }
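
Because prompt() and parse() are plain methods, you can sanity-check them before making any API calls. A minimal check (the cuisine value and stub response below are placeholders, and we assume the constructor itself does not contact the provider):

# Instantiate the generator; no request should be sent at construction time
generator = RecipeGenerator(model_name="gemini/gemini-1.5-flash", backend="litellm")

# Inspect the prompt that would be sent for one input row
print(generator.prompt({"cuisine": "Peruvian"}))
# Generate a random Peruvian recipe. Be creative but keep it realistic.

# Check how a (stubbed) model response would be parsed
print(generator.parse({"cuisine": "Peruvian"}, "A stub recipe goes here."))
# {'recipe': 'A stub recipe goes here.', 'cuisine': 'Peruvian'}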

2. Set Up Your Seed Dataset

Create a dataset of inputs using the HuggingFace Dataset class:

# List of cuisines to generate recipes for
cuisines = [
    {"cuisine": cuisine}
    for cuisine in [
        "Chinese",
        "Italian",
        "Mexican",
        "French",
        "Japanese",
        "Indian",
        "Thai",
        "Korean",
        "Vietnamese",
        "Brazilian",
    ]
]
cuisines = Dataset.from_list(cuisines)
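
Any HuggingFace Dataset with a matching column works; for instance, Dataset.from_dict builds an equivalent dataset from a columnar dict (the column name must match the key that prompt() reads):

# Equivalent construction from a columnar dict
cuisines = Dataset.from_dict(
    {"cuisine": ["Chinese", "Italian", "Mexican", "French", "Japanese"]}
)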

3. Configure LiteLLM Backend

Initialise your generator with the LiteLLM backend configuration:

recipe_generator = RecipeGenerator(
    model_name="gemini/gemini-1.5-flash",  # LiteLLM model identifier
    backend="litellm",                      # Specify LiteLLM backend
    backend_params={
        "max_requests_per_minute": 2_000,   # Rate limit for requests
        "max_tokens_per_minute": 4_000_000  # Token usage limit
    },
)
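
Because LiteLLM abstracts over providers, switching models is typically just a matter of changing model_name and exporting the matching API key. The identifier below is illustrative; see LiteLLM's provider list for exact names:

# Same generator, different provider -- only the model identifier changes
recipe_generator = RecipeGenerator(
    model_name="openai/gpt-4o-mini",  # requires OPENAI_API_KEY in the environment
    backend="litellm",
)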

4. Generate Data

Generate your synthetic data:

recipes = recipe_generator(cuisines)
print(recipes.to_pandas())
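
As the to_pandas() call above implies, the result behaves like a HuggingFace Dataset, so the usual persistence methods apply (file names here are placeholders):

# Persist the generated data for later use
recipes.save_to_disk("recipes_dataset")                 # Arrow format; reload with datasets.load_from_disk
recipes.to_pandas().to_csv("recipes.csv", index=False)  # flat CSV export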

LiteLLM Configuration

API Keys and Environment Variables

For Gemini:

export GEMINI_API_KEY='your-api-key-here'  # Get from https://aistudio.google.com/app/apikey
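
Other providers follow the same pattern; LiteLLM reads each provider's standard environment variable, for example:

export OPENAI_API_KEY='your-api-key-here'     # for openai/... models
export ANTHROPIC_API_KEY='your-api-key-here'  # for anthropic/... models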

Curator Configuration

Rate Limits

Configure rate limits with backend_params:

# Custom RPM/TPM configuration
# By default, this is set to:
# - max_requests_per_minute: 10
# - max_tokens_per_minute: 100_000
backend_params={
    "max_requests_per_minute": 2_000,     # 2K requests/minute
    "max_tokens_per_minute": 4_000_000    # 4M tokens/minute
}
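
Set these at or below your actual provider quota; otherwise you will hit rate-limit errors from the provider rather than throttling inside Curator. A conservative sketch for a low-quota key (the numbers are illustrative, not recommendations):

recipe_generator = RecipeGenerator(
    model_name="gemini/gemini-1.5-flash",
    backend="litellm",
    backend_params={
        "max_requests_per_minute": 15,     # match your account's RPM quota
        "max_tokens_per_minute": 250_000,  # match your account's TPM quota
    },
)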
