# Portkey AI — AI Gateway for Production LLM Apps
Portkey is an AI Gateway that unifies how teams build, secure, and scale production LLM apps. It provides a single API for 200+ models with intelligent routing, caching, guardrails, governance, and full-stack observability. Founded in 2023 by Rohit Agarwal and Ayush Garg, the company has raised a $3M seed led by Lightspeed, as reported by [Forbes](https://www.forbes.com/sites/davidprosser/2023/08/23/portkeyai-raises-3-million-to-help-clients-build-with-generative-ai/), [Lightspeed](https://lsvp.com/stories/our-investment-in-portkey-ai/), and [VentureBurn](https://ventureburn.com/2023/08/portkey-ai-raises-3m-to-accelerate-generative-ai-apps/).
- HQ: San Francisco, US
- Scale: processes 14B+ LLM tokens/day (per the company's LinkedIn page)
- Product category: AI Gateway / LLM infrastructure
- Open-source components: [Gateway on GitHub](https://github.com/Portkey-AI/gateway)

## What Portkey Does
Portkey abstracts model access and reliability so you can ship LLM features faster and safer.
- OpenAI-compatible endpoint/SDK — drop-in with minimal changes
- Unified access to 200+ models with failover and latency-aware routing
- Prompt/response caching and cost controls to reduce spend
- Guardrails and policy checks for safety and compliance
- Deep observability (40+ metrics), traces, dashboards, alerts, and audit logs
- Centralized governance: RBAC, org policies, usage quotas, enterprise auditability

Explore features: [AI Gateway](https://portkey.ai/features/ai-gateway), [Observability](https://portkey.ai/features/observability), [Docs hub](https://portkey.ai/docs)
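The drop-in pattern can be sketched as follows. This is an illustrative sketch, not verified against current docs: the `x-portkey-*` header names and the config fields (`strategy`, `retry`, `cache`, `targets`) follow Portkey's documented conventions, but the exact schema, base URL, and model names should be checked against the docs before use, and the API key is a placeholder.

```python
import json

# Gateway endpoint that mirrors the OpenAI API surface (assumed value;
# confirm the current base URL in Portkey's docs).
PORTKEY_BASE_URL = "https://api.portkey.ai/v1"

# A routing config combining provider fallback, retries, and caching.
# Field names follow Portkey's JSON config convention but are unverified here.
routing_config = {
    "strategy": {"mode": "fallback"},  # try targets in order until one succeeds
    "retry": {"attempts": 3},          # retry transient provider errors
    "cache": {"mode": "semantic", "max_age": 3600},  # reuse similar responses for 1h
    "targets": [
        {"provider": "openai", "override_params": {"model": "gpt-4o"}},
        {"provider": "anthropic", "override_params": {"model": "claude-3-5-sonnet-latest"}},
    ],
}

def portkey_headers(api_key: str, config: dict) -> dict:
    """Headers that authenticate to the gateway and attach the routing config."""
    return {
        "x-portkey-api-key": api_key,            # Portkey API key (placeholder)
        "x-portkey-config": json.dumps(config),  # inline config, JSON-encoded
    }

headers = portkey_headers("PORTKEY_API_KEY_PLACEHOLDER", routing_config)

# Drop-in with the official openai package (not executed here):
#   from openai import OpenAI
#   client = OpenAI(base_url=PORTKEY_BASE_URL, default_headers=headers)
#   client.chat.completions.create(model="gpt-4o", messages=[...])
```

Because the gateway speaks the OpenAI wire format, swapping the base URL and attaching the gateway headers is typically the only client-side change; routing, retries, and caching then live in the config rather than in application code.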
## Core Capabilities
- **Multi-provider routing and resiliency:** conditional and latency-aware routing with automatic retries and rate-limit handling; provider failover across OpenAI, Anthropic, Google Gemini, Azure OpenAI, AWS Bedrock, GitHub Models, and more
- **Caching and cost management:** prompt/response caching, plus spend tracking and attribution per team/project
- **Guardrails and safety:** content checks, policy enforcement, and configurable filters at the gateway layer
- **Full-stack LLM observability:** 40+ request metrics, traces, dashboards, alerts, and OpenTelemetry export
- **Governance and enterprise controls:** RBAC, org-level policies, quotas, and audit logs for enterprise auditability
- **Developer experience:** OpenAI-compatible Python/Node SDKs, plus integrations with popular AI frameworks and tracing stacks

## Integrations & Ecosystem
- Model providers: [OpenAI, Anthropic, Google Gemini, Azure OpenAI, AWS Bedrock, GitHub Models (and more)](https://portkey.ai/docs/integrations/llms)
- SDKs and agents:
  - [Python SDK](https://github.com/Portkey-AI/portkey-python-sdk), [Node SDK](https://github.com/Portkey-AI/portkey-node-sdk)
  - [OpenAI-compatible APIs](https://portkey.ai/docs/api-reference/sdk/python)
  - [Vercel AI SDK integration](https://vercel.com/integrations/portkey)
  - [OpenAI Agents SDK](https://portkey.ai/docs/integrations/agents/openai-agents)
  - MCP support
- Frameworks and tracing:
  - Works alongside LangChain and LlamaIndex
  - Tracing providers: [Langfuse](https://portkey.ai/docs/integrations/tracing-providers/langfuse), [HoneyHive](https://portkey.ai/docs/integrations/tracing-providers/honeyhive), [Phoenix](https://portkey.ai/docs), [Weights & Biases](https://portkey.ai/docs), [Traceloop](https://portkey.ai/docs), plus [OpenTelemetry](https://portkey.ai/docs/product/observability/opentelemetry)

## Deployment Options
- Fully hosted by Portkey
- Hybrid: Portkey control plane + data plane in your VPC
- On-prem data plane with centralized control

Compare options and security posture: [Enterprise & Security](https://portkey.ai/docs/enterprise/security), [Feature Comparison](https://portkey.ai/docs/product/enterprise-offering/security-portkey)

## Security & Compliance
- SOC 2 Type II, ISO 27001, GDPR, and HIPAA readiness
- PII anonymization, zero-data-retention options, and KMS support
- SSO/SAML and enterprise auditability

Details: [Security Overview](https://portkey.ai/docs/enterprise/security), [Enterprise Offering](https://portkey.ai/docs/product/enterprise-offering)

## Who It’s For
- Product/platform teams shipping LLM features to production
- AI infrastructure engineers needing reliability, compliance, and cost control across multiple providers
- Enterprises with auditability, SSO, PII controls, and hybrid/on-prem requirements

## Common Use Cases
- Unified model access with failover and latency-aware routing across OpenAI, Anthropic, Google, Azure, and Bedrock
- Prompt/response caching to reduce costs and improve response times
- Safety guardrails and policy enforcement for privacy/compliance
- Full-stack LLM observability, tracing, alerting, and OpenTelemetry export
- Centralized governance: RBAC, audit logs, quotas, and cost attribution
- Agent workloads via the OpenAI Agents SDK and MCP connectors

## Proof Points
- Scale: 14B+ tokens processed daily (per LinkedIn)
- Open-source credibility: [Portkey Gateway on GitHub](https://github.com/Portkey-AI/gateway)
- Active community and founders: [CTO AMA](https://www.reddit.com/r/developersIndia/comments/1cusrv0/hi_im_ayush_garg_cofounder_cto_portkey_ai_ama/)

## Customer Sentiment
**Pros**

- Easy drop-in gateway and unified API; quick setup for monitoring and routing
- Strong cost tracking and caching to reduce spend (per G2 reviews and case mentions)
- Reliable multi-provider routing and outage handling
- Well suited to user-facing apps where retries and rate limits matter

**Cons**

- Less flexible for highly bespoke workflows than in-house stacks or lower-level routers (see the Reddit threads under Key Resources)
- Some third-party benchmarks show better raw throughput/latency for alternatives in specific scenarios; teams should benchmark their own workloads
- Fewer public case studies than mature DevOps tools; proof-of-concept trials are recommended

## Pricing & Trial
- Free tier available
- Business plans from $99/month (as listed on third-party trackers)
- Enterprise: custom pricing; also available on cloud marketplaces

Links: [Pricing page](https://portkey.ai/pricing), [AWS Marketplace enterprise listing](https://aws.amazon.com/marketplace/pp/prodview-o2leb4xcrkdqa), [Microsoft Marketplace listing](https://marketplace.microsoft.com/en-us/product/saas/portkey.enterprise-saas)

## Why Teams Choose Portkey
- Faster time-to-production with an OpenAI-compatible drop-in gateway
- Reliability via multi-provider routing, retries, and robust rate-limit handling
- Cost efficiency with built-in caching and granular spend tracking
- Enterprise-grade governance, auditing, and compliance
- Deep, actionable observability with 40+ metrics and OpenTelemetry export

## Getting Started
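Before walking through the setup steps, here is roughly what a first request looks like. This is a sketch under assumptions: it assumes the `portkey_ai` Python SDK's OpenAI-compatible `chat.completions` interface, and the virtual-key name, model, and environment variable are placeholders, not values from Portkey's docs.

```python
import os  # used by the commented SDK example below

def build_chat_request(model: str, prompt: str) -> dict:
    """Assemble an OpenAI-style chat payload; the gateway forwards it unchanged."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_chat_request("gpt-4o", "Summarize our on-call runbook in one line.")

# With the SDK (not executed here; requires a real key in PORTKEY_API_KEY):
#   from portkey_ai import Portkey
#   client = Portkey(
#       api_key=os.environ["PORTKEY_API_KEY"],
#       virtual_key="openai-prod",  # placeholder: maps to stored provider creds
#   )
#   reply = client.chat.completions.create(**payload)
#   print(reply.choices[0].message.content)
```

Because the payload is plain OpenAI chat format, the same request body works whether you call the gateway through the Portkey SDK, the OpenAI SDK with a swapped base URL, or raw HTTP.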
1. Read the [AI Gateway overview](https://portkey.ai/features/ai-gateway)
2. Connect providers via the [providers guide](https://portkey.ai/docs/integrations/llms)
3. Instrument tracing with [observability and OpenTelemetry](https://portkey.ai/features/observability)
4. Build with the [Python SDK](https://portkey.ai/docs/api-reference/sdk/python) or [Node SDK](https://portkey.ai/docs/api-reference/sdk/node)
5. Use Portkey inside the [Vercel AI SDK](https://vercel.com/integrations/portkey) or [OpenAI Agents SDK](https://portkey.ai/docs/integrations/agents/openai-agents)

## Key Resources
- Website and docs: [Portkey.ai](https://portkey.ai), [Features](https://portkey.ai/features/ai-gateway), [Docs hub](https://portkey.ai/docs)
- Security and enterprise: [Enterprise Offering](https://portkey.ai/docs/product/enterprise-offering), [Security Overview](https://portkey.ai/docs/enterprise/security), [Security & Comparison](https://portkey.ai/docs/product/enterprise-offering/security-portkey)
- Providers and SDKs: [Providers](https://portkey.ai/docs/integrations/llms), [Python SDK](https://portkey.ai/docs/api-reference/sdk/python), [Node SDK](https://portkey.ai/docs/api-reference/sdk/node), [Vercel AI SDK](https://vercel.com/integrations/portkey), [OpenAI Agents SDK](https://portkey.ai/docs/integrations/agents/openai-agents)
- Open source: [Portkey Gateway (GitHub)](https://github.com/Portkey-AI/gateway)
- Funding: [Forbes](https://www.forbes.com/sites/davidprosser/2023/08/23/portkeyai-raises-3-million-to-help-clients-build-with-generative-ai/), [Lightspeed](https://lsvp.com/stories/our-investment-in-portkey-ai/), [VentureBurn](https://ventureburn.com/2023/08/portkey-ai-raises-3m-to-accelerate-generative-ai-apps/)
- Sentiment: [G2 reviews](https://www.g2.com/products/portkey/reviews), [LLMDevs thread](https://www.reddit.com/r/LLMDevs/comments/1fdii62/best_llm_gateway/), [LocalLLaMA thread](https://www.reddit.com/r/LocalLLaMA/comments/1mh9r0z/best_llm_gateway/), [CTO AMA](https://www.reddit.com/r/developersIndia/comments/1cusrv0/hi_im_ayush_garg_cofounder_cto_portkey_ai_ama/)
- Comparisons/benchmarks: [Kong benchmark](https://konghq.com/blog/engineering/ai-gateway-benchmark-kong-ai-gateway-portkey-litellm), [AntStack caching comparison](https://www.antstack.com/blog/comparison-of-llm-prompt-caching-cloudflare-ai-gateway-portkey-and-amazon-bedrock/)