Provide an example for integrating AIBrix with Envoy AI Gateway (using Gateway API + InferencePool) #1732

@googs1025

Description

🚀 Feature Description and Motivation

I’m currently trying to integrate a mock AIBrix-deployed model (e.g., llama2-7b) with Envoy AI Gateway using the Kubernetes Gateway API and the InferencePool / AIGatewayRoute pattern (similar to the official envoyproxy/ai-gateway examples).

Use Case

We want to expose multiple AIBrix-managed LLMs (e.g., llama2-7b, qwen-14b) through a single, secure, OpenAI-compatible API endpoint powered by Envoy AI Gateway and gateway-api-inference-extension.
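
For reference, a minimal sketch of what such an integration might look like. The resource names, ports, label selectors, and the `x-ai-eg-model` header convention below are assumptions and should be checked against the envoyproxy/ai-gateway and gateway-api-inference-extension documentation:

```yaml
# Sketch only: API versions and field names are assumptions based on
# upstream examples, not a verified AIBrix configuration.
apiVersion: inference.networking.x-k8s.io/v1alpha2
kind: InferencePool
metadata:
  name: llama2-7b-pool
spec:
  targetPortNumber: 8000
  selector:
    app: llama2-7b          # hypothetical label on the AIBrix model Pods
  extensionRef:
    name: llama2-7b-epp     # hypothetical endpoint-picker service
---
apiVersion: aigateway.envoyproxy.io/v1alpha1
kind: AIGatewayRoute
metadata:
  name: aibrix-models
spec:
  targetRefs:
    - name: ai-gateway      # existing Gateway, installed externally
      kind: Gateway
      group: gateway.networking.k8s.io
  schema:
    name: OpenAI            # expose an OpenAI-compatible API surface
  rules:
    - matches:
        - headers:
            - type: Exact
              name: x-ai-eg-model   # model name extracted from the request
              value: llama2-7b
      backendRefs:
        - name: llama2-7b-pool
          kind: InferencePool
          group: inference.networking.x-k8s.io
```

A second rule matching `x-ai-eg-model: qwen-14b` with its own InferencePool would route the other model through the same endpooint-free single Gateway listener.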

Proposed Solution

  • The Helm chart should not install the gateway itself; Envoy Gateway and AI Gateway are installed externally.
  • Add an example and README.md.
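
With that layout, the example would ship only the routing manifests and assume an externally-created Gateway. A minimal sketch of that Gateway follows; the GatewayClass name and `controllerName` are assumptions and depend on how Envoy Gateway / AI Gateway were installed:

```yaml
# Sketch only: names here must match the external Envoy Gateway /
# AI Gateway installation, not values shipped by this example.
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: ai-gateway-class
spec:
  controllerName: gateway.envoyproxy.io/gatewayclass-controller
---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: ai-gateway
spec:
  gatewayClassName: ai-gateway-class
  listeners:
    - name: http
      protocol: HTTP
      port: 80
```

The AIBrix example's AIGatewayRoute resources would then reference this Gateway by name via `targetRefs`.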
