Portkey

AI Gateway, Guardrails, and Governance. Processing 14 Billion+ LLM tokens every day. Backed by Lightspeed.

Founded: 2023
Location: San Francisco, CA
Employees: 26
Funding: $12M+

Portkey AI — AI Gateway for Production LLM Apps

Portkey is an AI Gateway that unifies how teams build, secure, and scale production LLM apps. It provides a single API for 200+ models with intelligent routing, caching, guardrails, governance, and full-stack observability. Founded in 2023 by Rohit Agarwal and Ayush Garg, the company has raised a $3M seed led by Lightspeed, as reported by [Forbes](https://www.forbes.com/sites/davidprosser/2023/08/23/portkeyai-raises-3-million-to-help-clients-build-with-generative-ai/), [Lightspeed](https://lsvp.com/stories/our-investment-in-portkey-ai/), and [VentureBurn](https://ventureburn.com/2023/08/portkey-ai-raises-3m-to-accelerate-generative-ai-apps/).

  • HQ: San Francisco, US
  • Scale: Processes 14B+ LLM tokens/day (per LinkedIn company page)
  • Product category: AI Gateway / LLM Infrastructure
  • Open-source components: [Gateway on GitHub](https://github.com/Portkey-AI/gateway)

What Portkey Does

Portkey abstracts model access and reliability so teams can ship LLM features faster and more safely.

  • OpenAI-compatible endpoint/SDK — drop-in with minimal changes
  • Unified access to 200+ models with failover and latency-aware routing
  • Prompt/response caching and cost controls to reduce spend
  • Guardrails and policy checks for safety and compliance
  • Deep observability (40+ metrics), traces, dashboards, alerts, and audit logs
  • Centralized governance: RBAC, org policies, usage quotas, enterprise auditability
  • Explore features: [AI Gateway](https://portkey.ai/features/ai-gateway), [Observability](https://portkey.ai/features/observability), [Docs hub](https://portkey.ai/docs)
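
Because the gateway is OpenAI-compatible, a request differs from a direct OpenAI call only in its URL and a few extra headers. A minimal sketch of the request shape follows; the `x-portkey-*` header names and endpoint path are assumed from Portkey's docs and should be verified against the current API reference:

```python
import json

# Sketch of an OpenAI-compatible request routed through the gateway.
# Header names follow Portkey's documented "x-portkey-*" convention;
# treat the exact keys and endpoint path as assumptions to verify.
GATEWAY_URL = "https://api.portkey.ai/v1/chat/completions"

headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer $PROVIDER_API_KEY",  # your model provider key
    "x-portkey-api-key": "$PORTKEY_API_KEY",      # authenticates to the gateway
    "x-portkey-provider": "openai",               # upstream provider to route to
}

# The body is standard OpenAI chat-completions JSON, unchanged by the gateway.
body = {
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Say hello."}],
}

print(json.dumps({"url": GATEWAY_URL, "headers": headers, "body": body}, indent=2))
```

The drop-in quality comes from this shape: existing OpenAI client code only needs a new base URL and the extra headers.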

Core Capabilities

  • Multi-provider routing and resiliency: conditional and latency-aware routing with automatic retries and rate-limit handling; provider failover across OpenAI, Anthropic, Google Gemini, Azure OpenAI, AWS Bedrock, GitHub Models, and more
  • Caching and cost management: prompt/response caching, plus spend tracking and attribution per team/project
  • Guardrails and safety: content checks, policy enforcement, and configurable filters at the gateway layer
  • Full-stack LLM observability: 40+ request metrics, traces, dashboards, alerts, and OpenTelemetry export
  • Governance and enterprise controls: RBAC, org-level policies, quotas, and audit logs
  • Developer experience: OpenAI-compatible Python/Node SDKs, plus integrations with popular AI frameworks and tracing stacks
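
The routing and resiliency behavior above is typically expressed as a declarative gateway config. A minimal sketch of a fallback-with-retries config; the field names (`strategy`, `targets`, `retry`) follow the style of Portkey's config schema but should be checked against the current docs before use:

```python
import json

# Sketch of a gateway routing config: try a primary provider, retry on
# transient errors, and fail over to a secondary. Field names mirror the
# style of Portkey's documented config schema (an assumption to verify).
config = {
    "strategy": {"mode": "fallback"},  # try targets in listed order
    "retry": {"attempts": 3},          # automatic retries per target
    "targets": [
        {"provider": "openai", "override_params": {"model": "gpt-4o-mini"}},
        {"provider": "anthropic", "override_params": {"model": "claude-3-5-sonnet-20240620"}},
    ],
}

# The config is typically attached to a request (e.g. via a header or a
# saved config ID) rather than embedded in application logic.
print(json.dumps(config))
```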

Integrations & Ecosystem

  • Model providers: [OpenAI, Anthropic, Google Gemini, Azure OpenAI, AWS Bedrock, GitHub Models (and more)](https://portkey.ai/docs/integrations/llms)
  • SDKs and agents: [Python SDK](https://github.com/Portkey-AI/portkey-python-sdk), [Node SDK](https://github.com/Portkey-AI/portkey-node-sdk), [OpenAI-compatible APIs](https://portkey.ai/docs/api-reference/sdk/python), [Vercel AI SDK integration](https://vercel.com/integrations/portkey), [OpenAI Agents SDK](https://portkey.ai/docs/integrations/agents/openai-agents), and MCP support
  • Frameworks and tracing: works alongside LangChain and LlamaIndex; tracing providers include [Langfuse](https://portkey.ai/docs/integrations/tracing-providers/langfuse), [HoneyHive](https://portkey.ai/docs/integrations/tracing-providers/honeyhive), [Phoenix](https://portkey.ai/docs), [Weights & Biases](https://portkey.ai/docs), and [Traceloop](https://portkey.ai/docs), plus [OpenTelemetry](https://portkey.ai/docs/product/observability/opentelemetry)

Deployment Options

  • Fully hosted by Portkey
  • Hybrid: Portkey control plane + data plane in your VPC
  • On‑prem data plane with centralized control
  • Compare options and security posture: [Enterprise & Security](https://portkey.ai/docs/enterprise/security), [Feature Comparison](https://portkey.ai/docs/product/enterprise-offering/security-portkey)

Security & Compliance

  • SOC 2 Type II, ISO 27001, GDPR, HIPAA readiness
  • PII anonymization, zero data retention options, KMS support
  • SSO/SAML and enterprise auditability
  • Details: [Security Overview](https://portkey.ai/docs/enterprise/security), [Enterprise Offering](https://portkey.ai/docs/product/enterprise-offering)

Who It’s For

  • Product/platform teams shipping LLM features to production
  • AI infrastructure engineers needing reliability, compliance, and cost control across multiple providers
  • Enterprises with auditability, SSO, PII controls, and hybrid/on‑prem requirements

Common Use Cases

  • Unified model access with failover and latency-aware routing across OpenAI, Anthropic, Google, Azure, and Bedrock
  • Prompt/response caching to reduce costs and improve response times
  • Safety guardrails and policy enforcement for privacy/compliance
  • Full-stack LLM observability, tracing, alerting, and OpenTelemetry export
  • Centralized governance: RBAC, audit logs, quotas, and cost attribution
  • Agent workloads via OpenAI Agents SDK and MCP connectors
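
The caching use case can be sketched as a config plus a back-of-envelope savings estimate. The `cache` field names ("simple" for exact-match, "semantic" for similar prompts, `max_age` as a TTL) are assumed from Portkey's docs, and the estimator is purely illustrative:

```python
# Sketch of a cache-enabled gateway config. Field names and the TTL unit
# (seconds) are assumptions based on Portkey's documented cache modes.
config = {
    "cache": {
        "mode": "semantic",  # or "simple" for exact-match caching
        "max_age": 3600,     # cache TTL (assumed to be seconds)
    },
    "targets": [{"provider": "openai"}],
}

def estimate_savings(requests: int, hit_rate: float, cost_per_request: float) -> float:
    """Back-of-envelope spend reduction from cached responses (illustrative only)."""
    return requests * hit_rate * cost_per_request

# e.g. 100k requests/day at a 30% cache hit rate and ~$0.002 per request
print(round(estimate_savings(100_000, 0.30, 0.002), 2))
```

Actual savings depend on workload repetitiveness, so teams should measure their own hit rates rather than rely on estimates like this.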

Proof Points

  • Scale: 14B+ tokens processed daily (per LinkedIn)
  • Open-source credibility: [Portkey Gateway on GitHub](https://github.com/Portkey-AI/gateway)
  • Active community and founders: [CTO AMA](https://www.reddit.com/r/developersIndia/comments/1cusrv0/hi_im_ayush_garg_cofounder_cto_portkey_ai_ama/)

Customer Sentiment

  • Pros:
    • Easy drop‑in gateway and unified API; quick setup for monitoring and routing
    • Strong cost tracking and caching to reduce spend (per G2 reviews and case mentions)
    • Reliable multi‑provider routing and outage handling
    • Well suited to user‑facing apps where retries and rate limits matter
  • Cons:
    • Less flexible for highly bespoke workflows than in‑house or lower‑level routers (per the Reddit threads linked under Key Resources)
    • Some third‑party benchmarks show better raw throughput/latency for alternatives in specific scenarios; teams should benchmark their own workloads
    • Fewer public case studies than mature DevOps tools; proof‑of‑concept evaluations are recommended

Pricing & Trial

  • Free tier available
  • Business from $99/month (as listed on third‑party trackers)
  • Enterprise: custom pricing; also available on marketplaces
  • [Pricing page](https://portkey.ai/pricing)
  • [AWS Marketplace enterprise listing](https://aws.amazon.com/marketplace/pp/prodview-o2leb4xcrkdqa)
  • [Microsoft Marketplace listing](https://marketplace.microsoft.com/en-us/product/saas/portkey.enterprise-saas)

Why Teams Choose Portkey

  • Faster time‑to‑production with an OpenAI‑compatible drop‑in gateway
  • Reliability via multi‑provider routing, retries, and robust rate‑limit handling
  • Cost efficiency with built‑in caching and granular spend tracking
  • Enterprise‑grade governance, auditing, and compliance
  • Deep, actionable observability with 40+ metrics and OpenTelemetry
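
Weighted load balancing is one way a gateway spreads cost and risk across providers. A sketch assuming Portkey's "loadbalance" strategy with per-target weights (the schema is an assumption to verify against the docs); the `pick_target` function only illustrates what weighted routing means and is not Portkey's implementation:

```python
import random

# Sketch of weighted load balancing across two providers, in the style of
# a "loadbalance" gateway strategy. Field names ("strategy", per-target
# "weight") are assumptions based on Portkey's documented config format.
config = {
    "strategy": {"mode": "loadbalance"},
    "targets": [
        {"provider": "openai", "weight": 0.7},
        {"provider": "anthropic", "weight": 0.3},
    ],
}

def pick_target(cfg, rng=random.random):
    """Choose a target with probability proportional to its weight (illustrative)."""
    targets = cfg["targets"]
    r = rng() * sum(t["weight"] for t in targets)
    for t in targets:
        r -= t["weight"]
        if r <= 0:
            return t["provider"]
    return targets[-1]["provider"]

print(pick_target(config))
```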

Getting Started

  • Read the [AI Gateway overview](https://portkey.ai/features/ai-gateway)
  • Connect providers via the [providers guide](https://portkey.ai/docs/integrations/llms)
  • Instrument tracing with [observability and OpenTelemetry](https://portkey.ai/features/observability)
  • Build with the [Python SDK](https://portkey.ai/docs/api-reference/sdk/python) or [Node SDK](https://portkey.ai/docs/api-reference/sdk/node)
  • Use Portkey inside the [Vercel AI SDK](https://vercel.com/integrations/portkey) or [OpenAI Agents SDK](https://portkey.ai/docs/integrations/agents/openai-agents)

Key Resources

  • Website and docs: [Portkey.ai](https://portkey.ai), [Features](https://portkey.ai/features/ai-gateway), [Docs hub](https://portkey.ai/docs)
  • Security and enterprise: [Enterprise Offering](https://portkey.ai/docs/product/enterprise-offering), [Security Overview](https://portkey.ai/docs/enterprise/security), [Security & Comparison](https://portkey.ai/docs/product/enterprise-offering/security-portkey)
  • Providers and SDKs: [Providers](https://portkey.ai/docs/integrations/llms), [Python SDK](https://portkey.ai/docs/api-reference/sdk/python), [Node SDK](https://portkey.ai/docs/api-reference/sdk/node), [Vercel AI SDK](https://vercel.com/integrations/portkey), [OpenAI Agents SDK](https://portkey.ai/docs/integrations/agents/openai-agents)
  • Open-source: [Portkey Gateway (GitHub)](https://github.com/Portkey-AI/gateway)
  • Funding: [Forbes](https://www.forbes.com/sites/davidprosser/2023/08/23/portkeyai-raises-3-million-to-help-clients-build-with-generative-ai/), [Lightspeed](https://lsvp.com/stories/our-investment-in-portkey-ai/), [VentureBurn](https://ventureburn.com/2023/08/portkey-ai-raises-3m-to-accelerate-generative-ai-apps/)
  • Sentiment: [G2 reviews](https://www.g2.com/products/portkey/reviews), [LLMDevs thread](https://www.reddit.com/r/LLMDevs/comments/1fdii62/best_llm_gateway/), [LocalLLaMA thread](https://www.reddit.com/r/LocalLLaMA/comments/1mh9r0z/best_llm_gateway/), [CTO AMA](https://www.reddit.com/r/developersIndia/comments/1cusrv0/hi_im_ayush_garg_cofounder_cto_portkey_ai_ama/)
  • Comparisons/benchmarks: [Kong benchmark](https://konghq.com/blog/engineering/ai-gateway-benchmark-kong-ai-gateway-portkey-litellm), [AntStack caching comparison](https://www.antstack.com/blog/comparison-of-llm-prompt-caching-cloudflare-ai-gateway-portkey-and-amazon-bedrock/)

Related Companies

    Galileo

    Galileo is the leading platform for enterprise GenAI evaluation and observability. Our comprehensive suite of products supports builders across the new AI development workflow, from fine-tuning LLMs to developing, testing, monitoring, and securing their AI applications. Each product is powered by our research-backed evaluation metrics. Today, Galileo is used by hundreds of AI teams, from startups to Fortune 50 enterprises including Twilio, Comcast, and HP.

    HoneyHive

    HoneyHive is the leading AI observability and evals platform, trusted by teams from next-gen AI startups to Fortune 100 enterprises. We make it easy and repeatable for modern AI teams to debug, evaluate, and monitor AI agents, and to deploy them to production with confidence. HoneyHive’s founding team brings AI and infrastructure expertise from Microsoft, OpenAI, Amazon, Amplitude, New Relic, and Sisu. The company is based in New York and San Francisco.

    Humanloop

    Humanloop is the LLM evals platform for enterprises. Teams at Gusto, Vanta and Duolingo use Humanloop to ship reliable AI products. We enable you to adopt best practices for prompt management, evaluation and observability.

    Langfuse

    Langfuse is the most popular open source LLMOps platform. It helps teams collaboratively develop, monitor, evaluate, and debug AI applications. Langfuse can be self-hosted in minutes and is battle-tested and used in production by thousands of users, from YC startups to large companies like Khan Academy or Twilio. Langfuse builds on a proven track record of reliability and performance. Developers can trace any large language model or framework using our SDKs for Python and JS/TS, our open API, or our native integrations (OpenAI, Langchain, Llama-Index, Vercel AI SDK). Beyond tracing, developers use Langfuse Prompt Management, its open APIs, and testing and evaluation pipelines to improve the quality of their applications. Product managers can analyze, evaluate, and debug AI products by accessing detailed metrics on costs, latencies, and user feedback in the Langfuse Dashboard. They can bring humans in the loop by setting up annotation workflows for human labelers to score their application. Langfuse can also be used to monitor security risks through security frameworks and evaluation pipelines. Langfuse enables non-technical team members to iterate on prompts and model configurations directly within the Langfuse UI, or to use the Langfuse Playground for fast prompt testing. Langfuse is open source and we are proud to have a fantastic community on GitHub and Discord that provides help and feedback. Do get in touch with us!

    LangSmith

    LangChain provides the agent engineering platform and open source frameworks developers need to ship reliable agents fast.

    Phoenix (Arize AI)

    Ship agents that work. Arize AI’s Agent Engineering Platform: one place for development, observability, and evaluation.