# Using Anthropic for batch inference
## Prerequisites
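* A Python environment with curator installed (`pip install bespokelabs-curator`)
* An Anthropic API key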
## Steps
### 1. Set up environment variables

```bash
export ANTHROPIC_API_KEY=<your_api_key>
```

### 2. Define the reannotator

Subclass `curator.LLM` to specify how each row of the dataset is turned into a prompt and how the model's response is parsed:

```python
"""Example of reannotating the WildChat dataset using curator."""

import logging

from bespokelabs import curator

# To see more detail about how batches are being processed
logger = logging.getLogger("bespokelabs.curator")
logger.setLevel(logging.INFO)


class WildChatReannotator(curator.LLM):
    """A reannotator for the WildChat dataset."""

    def prompt(self, input: dict) -> str:
        """Extract the first message from a conversation to use as the prompt."""
        return input["conversation"][0]["content"]

    def parse(self, input: dict, response: str) -> dict:
        """Parse the model response along with the input into the desired output format."""
        instruction = input["conversation"][0]["content"]
        return {"instruction": instruction, "new_response": response}
```
### 3. Configure the Anthropic model
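The sketch below shows one way to instantiate the reannotator against an Anthropic model. The model id `claude-3-5-sonnet-20241022` is an assumption (any Claude model id works), and `batch=True` is curator's flag for routing requests through the provider's batch API; verify both against your curator version.

```python
# Instantiate the reannotator against an Anthropic model.
# batch=True routes requests through Anthropic's Message Batches API
# instead of sending them one at a time.
distiller = WildChatReannotator(
    model_name="claude-3-5-sonnet-20241022",  # assumed model id
    batch=True,
)
```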
### 4. Generate Data
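With the model configured, run generation over the dataset. A minimal sketch, assuming the `allenai/WildChat` dataset on Hugging Face and a curator version where calling the LLM object on a dataset returns a Hugging Face `Dataset`:

```python
from datasets import load_dataset

# Load a small slice of WildChat for a quick test run
# (assumption: dataset id and split; WildChat is gated on Hugging Face).
dataset = load_dataset("allenai/WildChat", split="train").select(range(100))

# Submits the prompts as a batch and blocks until results come back;
# the output dataset has the columns produced by parse().
new_dataset = distiller(dataset)
print(new_dataset)
```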
## Example Output

The output dataset contains two columns, matching the dictionary returned by `parse`: `instruction` (the first user message from the original conversation) and `new_response` (the model's newly generated reply).
## Batch Configuration
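Batch behavior can be tuned when instantiating the LLM. A hedged sketch, assuming curator's `backend_params` accepts a `batch_size` key that controls how many requests are grouped into a single batch submission; check the curator documentation for the keys supported by your version.

```python
# Assumed configuration knob: batch_size caps how many requests are
# grouped into one Anthropic batch submission.
distiller = WildChatReannotator(
    model_name="claude-3-5-sonnet-20241022",
    batch=True,
    backend_params={"batch_size": 100},  # assumption: supported key
)
```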