
Integrations



Guardrails

Bespoke MiniCheck is available as a Guardrails validator at https://hub.guardrailsai.com/validator/bespokelabs/bespoke_minicheck.

Example usage:

# Import Guard and Validator
from guardrails.hub import BespokeMiniCheck
from guardrails import Guard

# Setup Guard
guard = Guard().use(
    BespokeMiniCheck,
    split_sentences=True,
    threshold=0.5,
    on_fail="fix"
)

# Validator passes
guard.validate("Alex likes cats.",
               metadata={"context": "Alex likes cats and dogs"})  
# Validator fails
guard.validate("Alex likes cats.",
               metadata={"context": "Alex likes dogs, but not cats."})  

Ollama

Once you have Ollama installed, using the model is straightforward. Note that Ollama doesn't yet support returning logits from the model, so it simply outputs "yes" or "no".

Two examples are available as part of Ollama: fact checking and a RAG use case.

Bespoke-MiniCheck-7B is available from Ollama.

More information can be found in their blog post.
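A minimal sketch of calling the model through the Ollama Python client. The model name `bespoke-minicheck` and the "Document:/Claim:" prompt shape are assumptions here; check the Ollama model card for the exact name and prompt format before relying on this.

```python
# Sketch: fact checking a claim with Bespoke-MiniCheck via Ollama.
# ASSUMPTIONS: the model is published on Ollama as "bespoke-minicheck"
# and accepts a "Document: ... / Claim: ..." style prompt.

def build_prompt(document: str, claim: str) -> str:
    """Pair a source document with the claim to be checked against it."""
    return f"Document: {document}\nClaim: {claim}"

def check_claim(document: str, claim: str) -> str:
    """Returns the model's verdict text ("yes"/"no") -- not a probability,
    since Ollama does not expose logits."""
    import ollama  # pip install ollama; requires a running Ollama server
    response = ollama.generate(
        model="bespoke-minicheck",
        prompt=build_prompt(document, claim),
    )
    return response["response"].strip()

if __name__ == "__main__":
    # Requires `ollama serve` running and the model pulled locally.
    print(check_claim("Alex likes cats and dogs.", "Alex likes cats."))
```

Because only a "yes"/"no" string comes back, thresholding on a support probability (as in the Guardrails example above) isn't possible through Ollama.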
