Rebuff

Guardrails & Safety · Open Source · Verified

Rebuff is a self-hardening prompt injection detection framework that layers multiple techniques to identify and block injection attacks against LLM applications. Its defenses combine heuristic analysis, LLM-based detection, a vector database of known attack embeddings, and canary tokens for detecting prompt leakage. Best suited for developers building LLM-powered apps who need protection against prompt injection without building custom detection systems.
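To illustrate the first of those layers, here is a minimal sketch of a heuristic check of the kind such a framework might run before the more expensive LLM and vector-database layers. The pattern list, function names, and threshold are all illustrative assumptions, not Rebuff's actual API.

```python
import re

# Hypothetical patterns for a heuristic detection layer; a real deployment
# would use a much larger, curated set. These are illustrative only.
SUSPICIOUS_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"disregard .* system prompt",
    r"you are now",
]

def heuristic_score(user_input: str) -> float:
    """Return a score in [0, 1]: the fraction of known patterns matched."""
    text = user_input.lower()
    hits = sum(1 for p in SUSPICIOUS_PATTERNS if re.search(p, text))
    return hits / len(SUSPICIOUS_PATTERNS)

def is_injection(user_input: str, threshold: float = 0.3) -> bool:
    """Flag the input if its heuristic score meets the threshold."""
    return heuristic_score(user_input) >= threshold
```

In a layered design like Rebuff's, a cheap check such as this filters obvious attacks early, while subtler inputs fall through to LLM-based classification and similarity search against previously seen attacks.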

Price

From $0

License: Apache-2.0