
Lakera

Lakera is the world’s leading real-time GenAI security company. Customers rely on the Lakera AI Security Platform for security that doesn’t slow down their AI applications. To accelerate secure adoption of AI, the company created Gandalf, an educational platform, where more than one million users have learned about AI security. Lakera uses AI to continuously evolve defenses, so customers can stay ahead of emerging threats. Join us to shape the future of intelligent computing: www.lakera.ai/careers


Founded

2021

Location

San Francisco, CA

Employees

80

Funding

$20M Series A

Lakera: AI-Native Security for LLMs, Agents, and GenAI Apps

Lakera provides an AI-first security platform purpose-built for large language models (LLMs) and agents. Its stack combines real-time runtime protection with proactive red teaming, so teams can ship GenAI features with confidence.

  • Core value: stop prompt injection, jailbreaks, data leakage, and harmful content—inline, with low latency.
  • Differentiator: defenses are continuously informed by the world’s largest public AI red team via Lakera’s Gandalf game.
  • Learn more on the [homepage](https://www.lakera.ai).

    ---

    Key Products

  • **Lakera Guard (Runtime Protection)**
      • Inline screening of prompts and model outputs to block prompt injection, jailbreaks, sensitive data exposure, and policy violations.
      • Sub-50 ms target latency; centralized policy management; JSON verdicts to gate model calls.
      • API-first with a single `/guard` endpoint, plus quickstarts and policy templates.
      • Explore the [Guard API](https://docs.lakera.ai/docs/api/guard), [Policies](https://docs.lakera.ai/docs/policies), and [Quickstart](https://docs.lakera.ai/docs/quickstart).
  • **Lakera Red (Proactive Red Teaming & Pen Testing)**
      • AI-native testing for LLM apps and agents that simulates direct and indirect attacks.
      • Prioritizes risks and provides remediation guidance before go-live.
      • See [Lakera Red](https://www.lakera.ai/lakera-red).
  • **Gandalf & Agent Breaker (Learning + Threat Intel)**
      • Public CTF-style games used by 1M+ players to surface real-world attack tactics.
      • Intelligence feeds back into runtime defenses and policies.
      • Play [Gandalf](https://gandalf.lakera.ai) and learn more about [Gandalf/Agent Breaker](https://www.lakera.ai/lakera-gandalf).
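    The "JSON verdicts to gate model calls" pattern above can be sketched in a few lines. The verdict shape and the `fake_screen` stub below are illustrative assumptions for an offline example, not the documented Guard response schema:

```python
# Decide whether to gate a model call on a guard-style JSON verdict.
# NOTE: the verdict structure here is an illustrative assumption; consult
# the Guard API docs for the real response schema.

def should_block(verdict: dict) -> bool:
    """True if any detector in the verdict flagged the screened text."""
    return any(r.get("detected", False) for r in verdict.get("results", []))

def guarded_call(prompt: str, screen, model):
    """Screen the prompt first; call the model only if it passes."""
    verdict = screen(prompt)
    if should_block(verdict):
        return {"blocked": True, "verdict": verdict}
    return {"blocked": False, "answer": model(prompt)}

# Stand-in for a real screening API call, so the sketch runs offline.
def fake_screen(prompt: str) -> dict:
    flagged = "ignore previous instructions" in prompt.lower()
    return {"results": [{"detector": "prompt_injection", "detected": flagged}]}
```

    In a real deployment, `screen` would be an HTTP call to the guard endpoint and the model call would be skipped (or replaced by a refusal) whenever the verdict flags the input.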
    ---

    How Lakera Works

  • **Inline Layer for Any LLM/Agent**
      • Sits in front of your models and tools to enforce consistent policies across apps and teams.
      • Model-agnostic, API-first integration pattern.
      • See the [API overview](https://docs.lakera.ai/docs/api).
  • **Centralized Policies, Consistent Outcomes**
      • Configure org-wide content safety, DLP/PII controls, and jailbreak/prompt-injection checks.
      • Apply the same policy set across chatbots, RAG apps, and autonomous agents.
      • Review [policy docs](https://docs.lakera.ai/docs/policies).
  • **Low-Latency, Real-Time Decisions**
      • Designed for sub-50 ms inline protection, enabling safe, responsive user experiences.
      • Read the product perspective on [LLM security](https://www.lakera.ai/blog/llm-security) and [LLM security tools](https://www.lakera.ai/blog/llm-security-tools).
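    The centralized-policy idea reduces to running one shared detector set over every prompt and output, regardless of which app produced it. The policy names, schema, and stub detectors below are assumptions for illustration, not Lakera's actual policy format:

```python
# One org-wide policy set, enforced identically for every app and team.
# Policy names and detector plumbing are illustrative assumptions.

ORG_POLICY = {
    "prompt_injection": True,
    "pii": True,
    "content_safety": True,
}

def enforce(text: str, detectors: dict, policy: dict = ORG_POLICY) -> list:
    """Run every policy-enabled detector on `text`; return those that fire."""
    return [name for name, enabled in policy.items()
            if enabled and detectors[name](text)]

# Stub detectors so the sketch runs offline; a chatbot, a RAG app, and an
# autonomous agent would all call `enforce` with the same policy set.
DETECTORS = {
    "prompt_injection": lambda t: "ignore previous instructions" in t.lower(),
    "pii": lambda t: "ssn:" in t.lower(),
    "content_safety": lambda t: False,
}
```

    Because the policy lives in one place, tightening a control (say, enabling an extra PII check) takes effect everywhere at once instead of per-application.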
    ---

    AI Agent Security Angle

    Lakera maps controls directly to the threats agents face in tool-enabled workflows:

  • Block indirect prompt injection and malicious prompt chaining.
  • Enforce DLP/PII controls and prevent exfiltration through tool outputs.
  • Apply allow/deny logic before tools execute.

    Dive deeper:

  • Guide to [Prompt Injection](https://www.lakera.ai/blog/guide-to-prompt-injection)
  • [Data Loss Prevention](https://www.lakera.ai/blog/data-loss-prevention) and [PII Handling](https://www.lakera.ai/blog/personally-identifiable-information)
  • [Data Exfiltration](https://www.lakera.ai/blog/data-exfiltration)
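    The allow/deny-before-execution idea can be sketched as a gate around agent tool calls. The allowlist, tool names, and screening stub here are hypothetical examples, not part of Lakera's API:

```python
# Gate agent tool use: deny unlisted tools, and screen tool output before
# it re-enters the model context (where indirect injection would land).
# Tool names, the allowlist, and the screening stub are illustrative.

ALLOWED_TOOLS = {"search_docs"}

def run_tool(name: str, args: dict, tool_impls: dict, screen_text) -> str:
    # 1. Static allow/deny: refuse any tool not on the allowlist.
    if name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool {name!r} is not allowed")
    output = tool_impls[name](**args)
    # 2. Retrieved content is a classic carrier for indirect prompt
    #    injection, so screen it before the agent ever sees it.
    if screen_text(output):
        return "[tool output withheld: flagged by guard]"
    return output
```

    The key design point is that screening happens on the *output* path too: a clean user prompt can still pull poisoned content in through a tool.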
    ---

    Use Cases

  • **Secure customer-facing chatbots and RAG apps** against jailbreaks and prompt injection. See the [prompt injection guide](https://www.lakera.ai/blog/guide-to-prompt-injection).
  • **Data loss prevention** around PII and sensitive data in LLM interactions. Explore [DLP](https://www.lakera.ai/blog/data-loss-prevention) and [PII best practices](https://www.lakera.ai/blog/personally-identifiable-information).
  • **Agent security for autonomous workflows**—protect tool use from indirect injections and malicious chaining. Read about [LLM security](https://www.lakera.ai/blog/llm-security) and [tools](https://www.lakera.ai/blog/llm-security-tools).
  • **Content safety and compliance** with consistent policy enforcement across inputs and outputs. See recent [product updates](https://www.lakera.ai/product-updates).
    ---

    Integrations

  • **Model-agnostic, API-first** guard in front of any LLM or agent. Check the [Guard API](https://docs.lakera.ai/docs/api/guard).
  • **Proxy guardrails via LiteLLM**, useful for multi-model agent stacks. View the [LiteLLM integration](https://docs.litellm.ai/docs/proxy/guardrails/lakera_ai).
  • **Quickstarts and SDK-style guides** for fast setup and policy management. Start with the [Guard overview](https://docs.lakera.ai/guard) and [Quickstart](https://docs.lakera.ai/docs/quickstart).
    ---

    Pricing and Free Plan

  • **Community plan**: $0 with request limits—ideal for development and evaluation.
  • **Enterprise plan**: custom pricing and SLAs.
  • See the official [pricing page](https://platform.lakera.ai/pricing) and a third-party overview of [free plan limits](https://www.eesel.ai/blog/lakera-pricing).
    ---

    Company Snapshot

  • **Name**: Lakera — [Website](https://www.lakera.ai)
  • **Founded**: 2021
  • **Founders**: David Haber (CEO), Matthias Kraft (CTO), Mateo Rojas-Carulla (CSO) — see [About](https://www.lakera.ai/about)
  • **HQ**: San Francisco, US, and Zurich, CH — see [About](https://www.lakera.ai/about)
  • **Funding**: Raised a $20M Series A — [announcement](https://www.lakera.ai/news/lakera-raises-20m-series-a-to-deliver-real-time-genai-security)
  • **Corporate news**: Announced acquisition by Check Point Software — [LinkedIn post](https://www.linkedin.com/posts/lakeraai_exciting-announcement-lakera-is-set-to-activity-7373708191844470784-xYRP), [press coverage](https://www.venturelab.swiss/Lakera-acquired-by-Checkpoint-in-USD-300-million-deal), [investor note](https://atomico.com/insights/lakera-acquisition-making-generative-ai-safer-for-the-world)
  • **Team**: ~80 employees; 15k+ followers on LinkedIn — see [LinkedIn](https://www.linkedin.com/company/lakeraai)
    ---

    Traction and Community

  • 1M+ players on Gandalf contribute real-world attack intelligence to improve defenses — learn about [Gandalf](https://www.lakera.ai/lakera-gandalf).
  • Ongoing educational content on LLM and agent security — browse the [blog](https://www.lakera.ai/blog/llm-security-tools).
    ---

    User Sentiment

  • Pros
      • Helps teams ship LLM features safely with coverage for prompt attacks, DLP, and content safety — see [G2 reviews](https://www.g2.com/products/lakera-guard/reviews).
      • Practitioners cite Lakera Guard as a pragmatic layer for jailbreak and prompt injection protection in production — examples on [r/devsecops](https://www.reddit.com/r/devsecops/comments/1o95hh6/our_ai_project_failed_because_we_ignored_prompt/) and [r/devops](https://www.reddit.com/r/devops/comments/1nudj4x/how_the_hell_are_you_all_handling_ai_jailbreak/).
      • Community learning via Gandalf is praised for hands-on exposure — see [discussion](https://www.reddit.com/r/WebGames/comments/13rxjr1/gandalf_lakera_try_to_manipulate_chatgpt_into/) and the [Gandalf page](https://www.lakera.ai/lakera-gandalf).
  • Cons
      • Pricing and customization concerns for some users — see [G2 pros/cons](https://www.g2.com/products/lakera-guard/reviews?qs=pros-and-cons).
      • General skepticism toward review platforms can affect perceived trust in ratings — examples on [r/marketing](https://www.reddit.com/r/marketing/comments/1ntmeod/are_g2_reviews_even_legit_anymore/) and [r/PPC](https://www.reddit.com/r/PPC/comments/1hs29za/capterra_and_other_gartner_ppc_concerns/).
      • Advanced attackers may still bypass some guardrails in staged challenges, underscoring a moving threat landscape — discussion on [r/ChatGPTJailbreak](https://www.reddit.com/r/ChatGPTJailbreak/comments/1emzp1i/whats_difficult_right_now/).

    ---

    Who It’s For

  • Security, platform, and AI engineering teams deploying LLM apps and agents to production.
  • Enterprises in regulated industries needing policy enforcement, privacy controls, and incident visibility.
  • Teams seeking an inline, centralized layer that screens every prompt and output with consistent policies.
    ---

    Why Lakera

  • **Purpose-built for GenAI**: Traditional security controls miss LLM-specific risks; Lakera addresses them head-on.
  • **Real-time, low-latency protection**: Keep experiences fast while blocking high-impact threats.
  • **Threat intelligence flywheel**: Gandalf community activity continuously strengthens defenses across the platform.
  • Read positioning and updates on the [homepage](https://www.lakera.ai) and [product updates](https://www.lakera.ai/product-updates).

    ---

    Getting Started

    1. Review the [Quickstart](https://docs.lakera.ai/docs/quickstart).

    2. Implement the [Guard API](https://docs.lakera.ai/docs/api/guard) and apply [policies](https://docs.lakera.ai/docs/policies).

    3. Optionally enforce guardrails via [LiteLLM proxy integration](https://docs.litellm.ai/docs/proxy/guardrails/lakera_ai).

    4. Pilot with the [Community plan](https://platform.lakera.ai/pricing), then scale to Enterprise as needed.
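    Steps 1–2 above boil down to a round trip that screens both directions. In this sketch, `screen` is an offline stand-in for a real Guard call (an assumption for illustration, not Lakera's SDK):

```python
# Pilot round trip: screen the user prompt, call the model, then screen
# the completion before returning it. `screen` is an offline stand-in for
# a real guard call, not Lakera's SDK.

def pilot_roundtrip(prompt: str, model, screen) -> str:
    if screen(prompt):                 # inbound check: user prompts
        return "[request blocked by policy]"
    completion = model(prompt)
    if screen(completion):             # outbound check: model output
        return "[response withheld by policy]"
    return completion
```

    Screening the completion as well as the prompt matters: a benign-looking prompt can still elicit a response that leaks sensitive data or violates content policy.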

    ---

    Selected Links

  • [Lakera Homepage](https://www.lakera.ai)
  • [Lakera Guard Docs](https://docs.lakera.ai/docs/api/guard)
  • [Policies](https://docs.lakera.ai/docs/policies)
  • [Quickstart](https://docs.lakera.ai/docs/quickstart)
  • [Lakera Red](https://www.lakera.ai/lakera-red)
  • [Gandalf](https://gandalf.lakera.ai)
  • [Gandalf/Agent Breaker Overview](https://www.lakera.ai/lakera-gandalf)
  • [Founders & Company](https://www.lakera.ai/about)
  • [Series A Announcement](https://www.lakera.ai/news/lakera-raises-20m-series-a-to-deliver-real-time-genai-security)
  • [Acquisition News (LinkedIn)](https://www.linkedin.com/posts/lakeraai_exciting-announcement-lakera-is-set-to-activity-7373708191844470784-xYRP)
  • [Acquisition Coverage](https://www.venturelab.swiss/Lakera-acquired-by-Checkpoint-in-USD-300-million-deal)
  • [Pricing](https://platform.lakera.ai/pricing)
  • [G2 Listing](https://www.g2.com/products/lakera-guard/reviews)
  • [Capterra Listing](https://www.capterra.com/p/10023187/Lakera-Guard/)
    ---

    Related Companies


    CalypsoAI

    CalypsoAI is an adaptive AI security platform that empowers enterprises to innovate safely—staying ahead of evolving threats to deliver unmatched protection and performance. As a trusted global leader, CalypsoAI partners with organizations of all sizes to responsibly unlock AI’s full potential. Founded in Silicon Valley in 2018 by the most talented minds in AI, data science and machine learning, CalypsoAI has established key partnerships with some of the world’s largest companies and secured backing from investors including Paladin Capital Group, Lockheed Martin Ventures, Lightspeed Venture Partners, 8VC, Hakluyt Capital and Empros Capital. The company has raised $38.2 million to date.


    Dropzone AI

    Dropzone AI is the first AI SOC analyst that autonomously investigates alerts 24/7. It integrates with existing tools, adapts to your environment, and generates decision-ready reports. You can focus on real threats and 10X your team without adding headcount. No playbooks, code, or prompts required.


    HiddenLayer

    HiddenLayer, a Gartner-recognized Cool Vendor for AI Security, is the leading provider of Security for AI. Its AISec Platform unifies supply chain security, runtime defense, posture management, and automated red teaming to protect agentic, generative and predictive AI applications. The platform enables organizations across the private and public sectors to reduce risk, ensure compliance, and adopt AI with confidence. Founded by a team of cybersecurity and machine learning veterans, HiddenLayer combines patented technology with industry-leading research to defend against prompt injection, adversarial manipulation, model theft, and supply chain compromise. The company is backed by strategic investors including M12 (Microsoft’s Venture Fund), Moore Strategic Ventures, Booz Allen Ventures, IBM Ventures, and Capital One Ventures.


    Mindgard

    Mindgard is the leading provider of AI security solutions. Spun out from over a decade of AI security research at Lancaster University and headquartered in Boston and London, Mindgard helps enterprises secure their AI models, agents, and applications across the AI lifecycle. AI introduces risks that traditional security tools cannot detect, leaving organizations unable to find, measure, or secure their AI. Security teams struggle with a lack of visibility into AI activity and its attack surfaces; difficulty reproducing agentic AI behavior creates uncertainty and compliance challenges; and ultimately, an inability to enforce AI controls heightens the risk of compromise. Mindgard delivers AI detection and response through attack-driven defense, giving enterprises the ability to map their AI attack surface, measure and validate AI risk, and actively defend their AI:

  • Visibility into AI inventory and activity reveals what attackers can find out about your AI.
  • Continuous and automated AI red teaming assesses how attackers can exploit your AI.
  • Enforcement controls and policies at runtime stop attackers from breaching your AI.

    Mindgard stands out for its:

  • Flexibility: test AI models directly or via apps using CI/CD, the web UI, or tools like Burp Suite.
  • Usability: the only non-open-source AI red teaming platform, fast and easy to set up, test, and report with.
  • R&D pipeline: backed by a decade of university research and active PhD-level innovation and publishing.

    Mindgard works with the AI models and guardrails you build, buy, and use, with extensive coverage beyond LLMs, including image, audio, and multi-modal. Whether you are using open-source, internally developed, third-party purchased, or popular LLMs like OpenAI, Claude, or Bard, you're covered. Trusted by leading organizations in finance, healthcare, and technology, Mindgard is backed by investors including .406 Ventures, IQ Capital, Atlantic Bridge, and Lakestar. For more information, visit mindgard.ai


    Nexusflow

    Nexusflow's solution enables generative AI agents that surpass GPT-4 in your workflow and are continuously and automatically updated with security guardrails.


    ProtectAI

    Prisma AIRS is the world's most comprehensive AI security platform. It is natively integrated and uses best-in-class security to secure the entire AI attack lifecycle for every AI app, agent, model, and dataset your business uses or builds. It empowers organizations to deploy AI bravely, knowing that whatever they build is secure.