Rebuff
Self-hardening prompt injection detector.
Overview
Rebuff is a self-hardening prompt injection detector that protects AI applications from prompt injection (PI) attacks through a multi-layered defense. It is designed to fortify itself each time it is challenged by an attack, providing a unique solution that adapts and evolves. Rebuff offers a user-friendly Playground API for hands-on experimentation and testing.
✨ Key Features
- Self-Hardening Technology
- Heuristics-based filtering
- LLM-based detection
- VectorDB for storing attack embeddings
- Canary tokens for detecting leakages
- Playground API
🎯 Key Differentiators
- Self-hardening mechanism that learns from attacks
- Multi-layered defense approach
- Open-source
Unique Value: Offers a dynamic and evolving security solution against prompt injection attacks that strengthens with each encountered threat.
🎯 Use Cases (2)
✅ Best For
- Integrated into LangChain for prompt injection detection.
💡 Check With Vendor
Verify these considerations match your specific requirements:
- Cannot provide 100% protection against prompt injection attacks as it is still a prototype.
💻 Platforms
✅ Offline Mode Available
🔌 Integrations
🛟 Support Options
- ✓ Email Support
- ✓ Dedicated Support (NA tier)
💰 Pricing
Free tier: Open-source and free to use.
🔄 Similar Tools in AI Jailbreak Prevention
Lakera Guard
An AI security platform that protects large language models and AI applications from prompt injectio...
CalypsoAI
An AI security platform that protects organizations from data breaches and malicious attacks by scan...
Giskard
An open-source AI testing framework for evaluating and securing large language models by identifying...
Credo AI
An AI governance platform that helps enterprises streamline AI adoption by implementing and automati...
Promptfoo
An open-source framework for testing, evaluating, and securing large language model (LLM) applicatio...
Lasso Security
Evaluates LLM applications for security vulnerabilities that surface during real-world use....