Llama Guard

Guardrails & Safety · Open Source · Verified

Meta's open-source, LLM-based safety classifier for moderating human-AI conversations, fine-tuned on a taxonomy of safety risks and able to classify both user prompts and model responses.
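Llama Guard generates a plain-text verdict rather than a score: the completion is either `safe`, or `unsafe` followed on the next line by a comma-separated list of violated category codes (e.g. `S1,S10` in the MLCommons-aligned taxonomy used by Llama Guard 2 and 3). A minimal sketch of how a caller might parse that verdict — the helper name here is illustrative, not part of any official SDK:

```python
def parse_llama_guard_verdict(output: str) -> tuple[bool, list[str]]:
    """Parse a Llama Guard completion into (is_safe, violated_categories).

    The model replies 'safe', or 'unsafe' followed on the next line by a
    comma-separated list of category codes such as 'S1,S10'.
    """
    lines = [ln.strip() for ln in output.strip().splitlines() if ln.strip()]
    if not lines:
        raise ValueError("empty moderation output")
    if lines[0].lower() == "safe":
        return True, []
    # 'unsafe' verdict: category codes, when present, are on the next line
    categories = lines[1].split(",") if len(lines) > 1 else []
    return False, [c.strip() for c in categories]
```

For example, `parse_llama_guard_verdict("unsafe\nS1,S10")` returns `(False, ["S1", "S10"])`, which a calling application can map to a block, redact, or escalate action.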

Price

Free ($0)