Garak

The LLM Vulnerability Scanner

Visit Website →

Overview

Garak, which stands for Generative AI Red-Teaming and Assessment Kit, systematically identifies the vulnerabilities of LLMs by using a combination of static, dynamic, and adaptive probes. It is ideal for security researchers, developers, and AI ethics professionals to assess the risks of generative systems by probing for issues such as hallucinations, data leakage, prompt injections, toxicity, and misinformation.

✨ Key Features

  • LLM Vulnerability Scanning
  • Automated Scanning
  • Broad LLM Support (OpenAI, Hugging Face, Cohere, Replicate, etc.)
  • Structured Reporting
  • Focused Security Coverage (prompt injection, jailbreaks, data leakage, toxicity, etc.)
  • Modularity & Extensibility

🎯 Key Differentiators

  • Open-source and free to use
  • Direct focus on LLM security vulnerabilities
  • Extensive and research-informed library of probes

Unique Value: Provides a powerful, systematic, and focused open-source tool for proactively identifying vulnerabilities in LLMs.

🎯 Use Cases (3)

LLM security testing AI red teaming Assessing ethical risks of generative AI

💡 Check With Vendor

Verify these considerations match your specific requirements:

  • Not suitable for rapid CI/CD checks due to the potential for long scan durations.

🏆 Alternatives

Adversa AI Other open-source red teaming tools

Offers a free and extensible solution for in-depth security audits of LLMs, which is a more accessible alternative to commercial platforms.

💻 Platforms

Desktop (Linux, OSX)

✅ Offline Mode Available

🛟 Support Options

  • ✓ Email Support
  • ✓ Live Chat

💰 Pricing

Contact for pricing
Free Tier Available

Free tier: Open source and free to use.

Visit Garak Website →