Rebuff

Self-hardening prompt injection detector.

Visit Website →

Overview

Rebuff is a self-hardening prompt injection detector that protects AI applications from prompt injection (PI) attacks through a multi-layered defense. It is designed to fortify itself each time it is challenged by an attack, providing a unique solution that adapts and evolves. Rebuff offers a user-friendly Playground API for hands-on experimentation and testing.

✨ Key Features

  • Self-Hardening Technology
  • Heuristics-based filtering
  • LLM-based detection
  • VectorDB for storing attack embeddings
  • Canary tokens for detecting leakages
  • Playground API

🎯 Key Differentiators

  • Self-hardening mechanism that learns from attacks
  • Multi-layered defense approach
  • Open-source

Unique Value: Offers a dynamic and evolving security solution against prompt injection attacks that strengthens with each encountered threat.

🎯 Use Cases (2)

Detecting and defending against prompt injection attacks Securing AI systems from malicious input

✅ Best For

  • Integrated into LangChain for prompt injection detection.

💡 Check With Vendor

Verify these considerations match your specific requirements:

  • Cannot provide 100% protection against prompt injection attacks as it is still a prototype.

💻 Platforms

Web API

✅ Offline Mode Available

🔌 Integrations

LangChain API

🛟 Support Options

  • ✓ Email Support
  • ✓ Dedicated Support (NA tier)

💰 Pricing

Contact for pricing
Free Tier Available

Free tier: Open-source and free to use.

Visit Rebuff Website →