Using kluster.ai for batch inference


You can use kluster.ai for batch inference in Curator to generate synthetic data. In this example, we will generate answers for the GSM8K dataset, but the approach can be adapted to any data generation task. The following models are supported, with pricing for the different completion windows:

| Model ID | Realtime | 24h | 48h | 72h |
|---|---|---|---|---|
| meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8 | $0.20/$0.80 | $0.25 | $0.20 | $0.15 |
| meta-llama/Llama-4-Scout-17B-16E-Instruct | $0.08/$0.45 | $0.15 | $0.12 | $0.10 |
| deepseek-ai/DeepSeek-V3-0324 | $0.70/$1.40 | $0.63 | $0.50 | $0.35 |
| google/gemma-3-27b-it | $0.35 | $0.30 | $0.25 | $0.20 |
| deepseek-ai/DeepSeek-V3 | $1.25 | $0.63 | $0.50 | $0.35 |
| deepseek-ai/DeepSeek-R1 | $3.00/$5.00 | $3.50 | $3.00 | $2.50 |
| Qwen/Qwen2.5-VL-7B-Instruct | $0.30 | $0.15 | $0.10 | $0.05 |
| klusterai/Meta-Llama-3.1-405B-Instruct-Turbo | $3.50 | $0.99 | $0.89 | $0.79 |
| klusterai/Meta-Llama-3.3-70B-Instruct-Turbo | $0.70 | $0.20 | $0.18 | $0.15 |
| klusterai/Meta-Llama-3.1-8B-Instruct-Turbo | $0.18 | $0.05 | $0.04 | $0.03 |

Note: Prices are shown in $ per 1M tokens. For Realtime, some models have separate input and output prices, shown as input/output. You can find the up-to-date model list at https://api.kluster.ai/v1/models.
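To see what the completion-window discount means in practice, here is a back-of-the-envelope cost comparison using the DeepSeek-R1 rates from the table above. It assumes the flat batch rates apply uniformly to input and output tokens; the token counts are made up for illustration.

# Back-of-the-envelope cost estimate for DeepSeek-R1 (rates in $ per 1M tokens,
# taken from the pricing table above)
tokens_in, tokens_out = 2_000_000, 1_000_000

realtime = tokens_in / 1e6 * 3.00 + tokens_out / 1e6 * 5.00  # input/output priced separately
batch_72h = (tokens_in + tokens_out) / 1e6 * 2.50            # flat 72h batch rate

print(f"Realtime: ${realtime:.2f}")    # Realtime: $11.00
print(f"72h batch: ${batch_72h:.2f}")  # 72h batch: $7.50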

Prerequisites

  • Python 3.10+

  • Curator: Install via pip install bespokelabs-curator

  • kluster.ai API key: Get your key from https://www.kluster.ai/

Steps

1. Set up environment variables

export KLUSTERAI_API_KEY=<your_api_key>
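If you prefer to set the key from Python (for example, in a notebook), the equivalent is:

import os

# Equivalent to the shell export above; run this before creating the Curator class
os.environ["KLUSTERAI_API_KEY"] = "<your_api_key>"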

2. Create a curator.LLM subclass

Create a class that inherits from curator.LLM. Implement two key methods:

  • prompt(): Generates the prompt for the LLM.

  • parse(): Processes the LLM's response into your desired format.

Here’s the implementation:

"""Example of reannotating the WildChat dataset using curator."""

import logging
from bespokelabs import curator

# To see more detail about how batches are being processed
logger = logging.getLogger("bespokelabs.curator")
logger.setLevel(logging.INFO)

class Reasoner(curator.LLM):
    """Curator class for processing GSM8K dataset."""

    def prompt(self, input):
        """Create a prompt for the LLM to reason about the problem."""
        return f"Answer the following question: {input['question']}"

    def parse(self, input, response):
        """Parse the LLM response to extract reasoning and solution.

        The response format is expected to be '<think>reasoning</think>answer'
        """
        full_response = response

        # Extract reasoning and answer using regex
        import re

        reasoning_pattern = r"<think>(.*?)</think>"
        reasoning_match = re.search(reasoning_pattern, full_response, re.DOTALL)

        reasoning = reasoning_match.group(1).strip() if reasoning_match else ""
        # Answer is everything after </think>
        answer = re.sub(reasoning_pattern, "", full_response, flags=re.DOTALL).strip()

        return [
            {
                "question": input["question"],
                "reasoning": reasoning,
                "deepseek_solution": answer,
                "gold_answer": input["answer"],
            }
        ]
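Before running a full batch, you can sanity-check the extraction logic in parse() on a hand-written response string in the expected '<think>reasoning</think>answer' format. This is a standalone sketch of the same regexes; the sample text is made up.

import re

# A hand-written response in the '<think>reasoning</think>answer' format
sample = "<think>Half of 10 is 5, so 5 apples remain.</think>The answer is 5."

reasoning = re.search(r"<think>(.*?)</think>", sample, re.DOTALL).group(1).strip()
answer = re.sub(r"<think>.*?</think>", "", sample, flags=re.DOTALL).strip()

print(reasoning)  # Half of 10 is 5, so 5 apples remain.
print(answer)     # The answer is 5.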

3. Configure Reasoner to use DeepSeek-R1 through kluster.ai

reasoner = Reasoner(model_name="deepseek-ai/DeepSeek-R1", 
                    backend="klusterai", 
                    batch=True, 
                    backend_params={"max_retries": 1, "completion_window": "1h"})

4. Generate Data

Generate the data by running the reasoner over a small slice of the GSM8K training set:

from datasets import load_dataset

dataset = load_dataset("openai/gsm8k", name="main")
dataset_to_use = dataset["train"].take(3)
output = reasoner(dataset_to_use)
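If you want the results as a pandas DataFrame, the output can typically be converted like a Hugging Face Dataset. This is a sketch, assuming the returned object exposes to_pandas(); check the API Reference if your Curator version returns a different wrapper type.

# Assumes `output` behaves like a Hugging Face Dataset
df = output.to_pandas()
print(df[["question", "deepseek_solution", "gold_answer"]].head())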

Example Output

Using the above example, the output might look like this:

from IPython.display import Markdown, display

which = 0
question = output[which]['question']
gold_answer = output[which]['gold_answer']
model_answer = output[which]['deepseek_solution']
thought = output[which]['reasoning']

# Convert newlines to <br> so multi-line text renders inside the HTML blocks
to_display_input = question.replace("\n", "<br>")
to_display_output = model_answer.replace("\n", "<br>")

display(Markdown(
    "<h1>Question</h1>"
    f"<h3>{to_display_input}</h3>"
))
display(Markdown(
    "<h1>Model answer</h1>"
    f"<p>{to_display_output}</p>"
))
display(Markdown(
    "<h1>Gold answer</h1>"
    f"<p>{gold_answer}</p>"
))
display(Markdown(
    "<h1>Model Thought</h1>"
    f"<p>{thought}</p>"
))

Batch Configuration

Check out the complete batch configuration guide for the full set of options.
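As a starting point, the snippet below mirrors the configuration from step 3 but picks a completion window that appears in the pricing table. Only max_retries and completion_window are demonstrated in this guide; treat any other backend_params as something to verify against the batch configuration docs.

# Same configuration as step 3, with a completion window from the pricing table
reasoner = Reasoner(
    model_name="deepseek-ai/DeepSeek-R1",
    backend="klusterai",
    batch=True,
    backend_params={"max_retries": 1, "completion_window": "24h"},
)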