

AI Gateway

#### Route to 250+ LLMs with 1 fast & friendly API

Portkey AI Gateway Demo showing LLM routing capabilities

Docs | Enterprise | Hosted Gateway | Changelog | API Reference



The AI Gateway is designed for fast, reliable & secure routing to 1600+ language, vision, audio, and image models. It is a lightweight, open-source, and enterprise-ready solution that allows you to integrate with any language model in under 2 minutes.


#### What can you do with the AI Gateway?



> [!TIP]
> Starring this repo helps more developers discover the AI Gateway 🙏🏻


Quickstart (2 mins)

1. Setup your AI Gateway

```sh
# Run the gateway locally (needs Node.js and npm)
npx @portkey-ai/gateway
```

The Gateway is running on http://localhost:8787/v1
The Gateway Console is running on http://localhost:8787/public/

Deployment guides:   Portkey Cloud (Recommended)   Docker   Node.js   Cloudflare   Replit   Others...

2. Make your first request

```python
# pip install -qU portkey-ai

from portkey_ai import Portkey

# OpenAI-compatible client
client = Portkey(
    provider="openai",      # or 'anthropic', 'bedrock', 'groq', etc.
    Authorization="sk-*"    # the provider API key
)

# Make a request through your AI Gateway
client.chat.completions.create(
    messages=[{"role": "user", "content": "What's the weather like?"}],
    model="gpt-4o-mini"
)
```

Supported Libraries:   JS   Python   REST   OpenAI SDKs   Langchain   LlamaIndex   Autogen   CrewAI   More..

On the Gateway Console (http://localhost:8787/public/) you can see all of your local logs in one place.
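The same request can also be sent to the gateway as a raw REST call. A minimal standard-library sketch, assuming the gateway from step 1 is running locally; the `x-portkey-provider` header selects the upstream provider, and the placeholder key must be replaced with your own:

```python
import json
import urllib.request

GATEWAY_URL = "http://localhost:8787/v1/chat/completions"

headers = {
    "Content-Type": "application/json",
    "x-portkey-provider": "openai",   # or 'anthropic', 'bedrock', 'groq', etc.
    "Authorization": "Bearer sk-*",   # the provider API key (placeholder)
}

payload = {
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "What's the weather like?"}],
}

def send(url: str = GATEWAY_URL) -> dict:
    """POST the chat-completion payload to the gateway and return the JSON reply."""
    req = urllib.request.Request(url, data=json.dumps(payload).encode(), headers=headers)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Call send() with the gateway running to get an OpenAI-format response.
```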

3. Routing & Guardrails

Configs in the LLM gateway allow you to create routing rules, add reliability, and set up guardrails.
```python
config = {
    "retry": {"attempts": 5},
    "output_guardrails": [{
        "default.contains": {"operator": "none", "words": ["Apple"]},
        "deny": True
    }]
}

# Attach the config to the client
client = client.with_options(config=config)

client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Reply randomly with Apple or Bat"}]
)
```

This will always respond with "Bat", since the output guardrail denies any reply containing "Apple". The retry config retries up to 5 times before giving up.

Request flow through Portkey's AI gateway with retries and guardrails

You can do a lot more stuff with configs in your AI gateway. Jump to examples →
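Beyond retries and guardrails, configs can also describe routing strategies such as falling back across providers. A minimal sketch, following the `strategy`/`targets` shape from Portkey's config schema; the provider names and models below are illustrative:

```python
# Try OpenAI first and fall back to Anthropic if the request fails.
# Providers and models here are illustrative placeholders.
fallback_config = {
    "strategy": {"mode": "fallback"},
    "targets": [
        {"provider": "openai", "override_params": {"model": "gpt-4o-mini"}},
        {"provider": "anthropic", "override_params": {"model": "claude-3-haiku-20240307"}},
    ],
    "retry": {"attempts": 3},
}

# Attach it exactly like any other config:
# client = client.with_options(config=fallback_config)
```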


Enterprise Version (Private deployments)

AWS   Azure   GCP   OpenShift   Kubernetes

The LLM Gateway's enterprise version offers advanced capabilities for org management, governance, security and more out of the box. View Feature Comparison →

The enterprise deployment architecture for supported platforms is available here - Enterprise Private Cloud Deployments

Book an enterprise AI gateway demo



AI Engineering Hours

Join our weekly community calls every Friday (8 AM PT) to kickstart your AI Gateway implementation!

Minutes of Meetings published here.


LLMs in Prod'25

Insights from analyzing 2 trillion+ tokens across 90+ regions and 650+ teams in production.

Get the Report


Core Features

Reliable Routing

Security & Accuracy

Cost Management

Collaboration & Workflows



* Available in hosted and enterprise versions


Cookbooks

☄️ Trending

🚨 Latest

View all cookbooks →

Supported Providers

Explore Gateway integrations with 45+ providers and 8+ agent frameworks.

| Provider | Support | Stream |
| -------- | ------- | ------ |
| OpenAI | ✅ | ✅ |
| Azure OpenAI | ✅ | ✅ |
| Anyscale | ✅ | ✅ |
| Google Gemini | ✅ | ✅ |
| Anthropic | ✅ | ✅ |
| Cohere | ✅ | ✅ |
| Together AI | ✅ | ✅ |
| Perplexity | ✅ | ✅ |
| Mistral | ✅ | ✅ |
| Nomic | ✅ | ✅ |
| AI21 | ✅ | ✅ |
| Stability AI | ✅ | ✅ |
| DeepInfra | ✅ | ✅ |
| Ollama | ✅ | ✅ |
| Novita AI | ✅ | ✅ (`/chat/completions`, `/completions`) |

View the complete list of 200+ supported models here


Agents

Gateway seamlessly integrates with popular agent frameworks. Read the documentation here.

| Framework | Call 200+ LLMs | Advanced Routing | Caching | Logging & Tracing | Observability | Prompt Management* |
| --------- | -------------- | ---------------- | ------- | ----------------- | ------------- | ------------------ |
| Autogen | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| CrewAI | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| LangChain | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| Phidata | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| Llama Index | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| Control Flow | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| Build Your Own Agents | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |


*Available on the hosted app. For detailed documentation click here.

Gateway Enterprise Version

Make your AI app more reliable and forward compatible, while ensuring complete data security and privacy.

✅  Secure Key Management - for role-based access control and tracking
✅  Simple & Semantic Caching - to serve repeat queries faster & save costs
✅  Access Control & Inbound Rules - to control which IPs and Geos can connect to your deployments
✅  PII Redaction - to automatically remove sensitive data from your requests and prevent inadvertent exposure
✅  SOC2, ISO, HIPAA, GDPR Compliance - for best security practices
✅  Professional Support - along with feature prioritization

Schedule a call to discuss enterprise deployments


Contributing

The easiest way to contribute is to pick an issue with the good first issue tag 💪. Read the contribution guidelines here.

Bug Report? File here | Feature Request? File here

Getting Started with the Community

Join our weekly AI Engineering Hours every Friday (8 AM PT). Join the next session → | Meeting notes


Community

Join our growing community around the world, for help, ideas, and discussions on AI.
