LLM Guard
Guardrails & SafetyOpen SourceVerified
Open-source security toolkit by ProtectAI providing a suite of scanners to detect prompt injection, PII leakage, toxicity, and sensitive data in LLM inputs and outputs.
Price
$0 – $0
Open-source security toolkit by ProtectAI providing a suite of scanners to detect prompt injection, PII leakage, toxicity, and sensitive data in LLM inputs and outputs.
$0 – $0