Lakera AI LLM Protect: How to Use & Pricing

In the fast-moving world of AI, Large Language Models (LLMs) have made significant strides, interpreting and generating human-language text with remarkable fluency.

However, with great power come greater vulnerabilities: "prompt injection" techniques can maliciously manipulate LLM-powered chatbots.

Stepping into the breach to address these concerns is Swiss startup Lakera.

LLMs' ability to understand and generate text is remarkable, yet that same sophistication creates openings for attack: techniques like prompt injection can exploit these models, turning their strengths into potential pitfalls.

What Is Lakera AI?

Image Credit: Lakera AI

Lakera AI is a Swiss startup focused on addressing the vulnerabilities associated with Large Language Models (LLMs). As LLMs have gained prominence in understanding and generating human-like texts, they’ve also become susceptible to risks like “prompt injection.”

Lakera, backed by experts from fields like aerospace and healthcare, has committed to defending LLMs from these vulnerabilities and has raised $10 million towards this mission. Their interactive game, Gandalf, is both a challenge and a research tool: it invites users to trick LLMs, and the resulting insights feed into their primary product, Lakera Guard.

This tool, designed for developers, offers robust AI application security, using a blend of crowdsourced data, open-source databases, and Lakera’s research.

How to Use Lakera AI to Protect LLM

Lakera AI Playground
Image Credit: Lakera AI
  1. Visit the Lakera website.
  2. Click on “Start for Free” to create a new account.
  3. In the “Try the playground” section, you’ll find several example prompts. You can test these prompts to determine if they are “benign,” “prompt leakage,” “jailbreak,” or “PII” (personally identifiable information).
  4. Within the “Try the playground” section, you can also input your own prompt to assess its nature.
  5. Access the Lakera AI API through the “API access” section.
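Once you have API access, screening a prompt typically means sending it to a moderation endpoint before it reaches your LLM. The sketch below shows the general shape of such a call; the endpoint path, payload fields, and response schema are illustrative assumptions, not confirmed details of Lakera's API, so check the official API docs before use.

```python
import json
import urllib.request

# Assumed endpoint for prompt-injection screening (illustrative only).
LAKERA_GUARD_URL = "https://api.lakera.ai/v1/prompt_injection"

def build_screen_request(prompt: str, api_key: str) -> urllib.request.Request:
    """Build an HTTP request that screens `prompt` before it reaches the LLM."""
    body = json.dumps({"input": prompt}).encode("utf-8")
    return urllib.request.Request(
        LAKERA_GUARD_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

def is_flagged(response_json: dict) -> bool:
    """Block the prompt if any detector in the (assumed) response flagged it."""
    return any(r.get("flagged", False) for r in response_json.get("results", []))
```

In a real application you would send the request with `urllib.request.urlopen` (or a client like `requests`), call `is_flagged` on the parsed JSON, and only forward the prompt to your LLM if it comes back clean.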

Gandalf: Lakera’s Interactive LLM Game

In a unique strategy to understand and counteract LLM vulnerabilities, Lakera introduced an interactive game named Gandalf.

This game, built on models such as OpenAI’s GPT-3.5 and LLMs from Cohere and Anthropic, challenges users to trick the LLM into divulging a password. Gandalf isn’t just a game but a research tool.

With 30 million interactions recorded from a million users in just six months, the insights are invaluable. These findings are poised to be integrated into Lakera’s flagship product, Lakera Guard.

Understanding and Categorizing Prompt Injections

Lakera’s efforts to understand prompt injections have led to the creation of a unique “prompt injection taxonomy.”

This systematic classification divides attacks into ten distinct categories. When customer applications send prompts to LLMs, Lakera’s system checks them against this taxonomy, adding a layer of protection.
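A taxonomy lookup of this kind can be pictured as matching incoming prompts against known attack patterns. The article does not name Lakera's ten categories, so the category names and patterns below are invented placeholders that only illustrate the general idea, not Lakera's real classifier.

```python
from typing import Optional

# Placeholder taxonomy: category names and trigger patterns are
# illustrative, not Lakera's actual ten-category scheme.
TAXONOMY = {
    "instruction_override": ("ignore previous instructions", "disregard the above"),
    "role_play": ("pretend you are", "act as if"),
    "secret_extraction": ("reveal the password", "tell me the secret"),
}

def classify(prompt: str) -> Optional[str]:
    """Return the first matching taxonomy category, or None if no pattern hits."""
    text = prompt.lower()
    for category, patterns in TAXONOMY.items():
        if any(p in text for p in patterns):
            return category
    return None
```

A production system would use far more robust detection (e.g., learned classifiers rather than substring matching), but the lookup structure is the same: map each incoming prompt to a known attack category, then act on that label.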

Lakera’s Multifaceted Security Approach

While prompt injections are a significant threat, Lakera’s vision encompasses a broader security perspective.

The company is also devising strategies to counter data leakages, content moderation issues, and the challenges of LLM-generated misinformation.

By focusing on safety, security, and data privacy, Lakera aims to offer a holistic security solution.

Lakera’s Market Position and Customer Base

Lakera isn’t just a startup with a mission; it’s a force to be reckoned with. Catering to a diverse clientele ranging from Fortune 500 giants to budding startups, Lakera emphasizes the universal need for fortified AI application security.

Their journey, which began as a response to the security challenges accompanying the rise of LLMs, took a significant leap in August 2023 with the beta release of Lakera Guard.

Lakera Guard

Developers aiming to securely develop AI applications have a new ally: Lakera Guard. Built using a proprietary database enriched with insights from the Gandalf game, Lakera Guard is a robust security tool.

This database, which boasts nearly 30 million attack data points, is a blend of crowdsourced insights, open-source databases, and Lakera’s pioneering research.

Integrating Lakera Guard into AI applications is straightforward, with the tool designed to empower developers to fortify their AI applications seamlessly.

Lakera AI Pricing

Lakera AI provides several pricing tiers. The Community Plan is free, includes 100k requests monthly with extra requests at $2 per 1k, is hosted as SaaS, and keeps data in the EU.

For more extensive needs, the Pro Plan starts at $599 per month, offering similar features to the Community Plan but charging $6 per 1k for additional requests and storing no data.

The Enterprise Plan is tailored for larger operations, with custom pricing. It accommodates up to 1M requests, with additional requests at $3 per 1k, and offers flexible hosting options, including SaaS or private VPC/on-prem. It also stores no data, with residency options across the EU, US, and AUS. All of Lakera’s plans prioritize security and compliance, with features like data encryption and adherence to SOC 2 and GDPR standards.
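To see how the overage pricing works in practice, here is a minimal cost estimator using the per-1k figures from the article. The rounding rule (billing overage in whole 1k blocks) is an assumption; Lakera's actual billing granularity may differ.

```python
def monthly_cost(requests: int, base_fee: float,
                 included: int, overage_per_1k: float) -> float:
    """Estimate a monthly bill: base fee plus per-1k overage beyond the quota."""
    extra = max(0, requests - included)
    # Assumed rounding: overage billed per started block of 1,000 requests.
    blocks = -(-extra // 1000)  # ceiling division
    return base_fee + blocks * overage_per_1k

# Community Plan figures from the article: free, 100k included, $2 per extra 1k.
cost = monthly_cost(250_000, base_fee=0.0, included=100_000, overage_per_1k=2.0)
# 150k extra requests -> 150 blocks x $2 = $300
```

The same function covers the Pro Plan by plugging in the $599 base fee and $6 per-1k overage rate.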



In the vast and evolving realm of AI, security remains paramount. As LLMs continue to shape our digital interactions, ensuring their security is non-negotiable.

Lakera, with its visionary approach, stands as a beacon of hope. From the interactive Gandalf game to the robust Lakera Guard tool, the company is leading the charge in safeguarding the AI frontier.

For developers and enterprises, Lakera’s innovations serve as a reminder that while AI’s potential is limitless, it’s our shared responsibility to ensure its secure evolution.
