# Getting started with Gateway API Inference Extension
**Experimental:** This project is still in an alpha state and breaking changes may occur in the future.
This quickstart guide is intended for engineers familiar with k8s and model servers (vLLM in this instance). The goal of this guide is to get an Inference Gateway up and running!
## Prerequisites
- A cluster with:
    - Support for services of type `LoadBalancer`. For kind clusters, follow this guide to get services of type `LoadBalancer` working.
    - Support for sidecar containers (enabled by default since Kubernetes v1.29) to run the model server deployment.
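A quick way to sanity-check the second prerequisite (a sketch; it assumes `kubectl` is already pointed at the target cluster):

```bash
# Sidecar containers are enabled by default since Kubernetes v1.29, so the
# server version reported here is the main thing to verify.
kubectl version
```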
## Steps
### Deploy Sample Model Server
Two options are supported for running the model server:

1. **GPU-based model server.** Requirements: a Hugging Face access token that grants access to the model `meta-llama/Llama-3.1-8B-Instruct`.
2. **CPU-based model server** (not using GPUs). The sample uses the model `Qwen/Qwen2.5-1.5B-Instruct`.
Choose one of these options and follow the steps below. Please do not deploy both, as the deployments have the same name and will overwrite each other.
#### GPU-Based Model Server

For this setup, you will need 3 GPUs to run the sample model server. Adjust the number of replicas in `./config/manifests/vllm/gpu-deployment.yaml` as needed.
Create a Hugging Face secret to download the model `meta-llama/Llama-3.1-8B-Instruct`. Ensure that the token grants access to this model. Then deploy a sample vLLM deployment with the proper protocol to work with the LLM Instance Gateway.

```bash
kubectl create secret generic hf-token --from-literal=token=$HF_TOKEN # Your Hugging Face token with access to the set of Llama models
kubectl apply -f https://github.com/kubernetes-sigs/gateway-api-inference-extension/raw/main/config/manifests/vllm/gpu-deployment.yaml
```
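The model download can take several minutes the first time the deployment starts; one way to watch progress:

```bash
# Wait for the vLLM pods to become Ready before moving on.
kubectl get pods -w
```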
#### CPU-Based Model Server

This setup uses the official `vllm-cpu` image, which, according to the documentation, can run vLLM on an x86 CPU platform. Each replica uses approximately 9.5GB of memory and 12 CPUs.

While it is possible to deploy the model server with fewer resources, this is not recommended. For example, in our tests, loading the model with 8GB of memory and 1 CPU was possible but took almost 3.5 minutes, and inference requests took an unreasonably long time. In general, there is a tradeoff between the memory and CPU allocated to the pods and the resulting performance: the more memory and CPU we allocate, the better performance we can get.

After testing multiple configurations of these values, we settled on 9.5GB of memory and 12 CPUs per replica for this sample, which gives reasonable response times. You can increase these numbers and may get even better response times. To modify the allocated resources, adjust the numbers in `cpu-deployment.yaml` as needed.
Deploy a sample vLLM deployment with the proper protocol to work with the LLM Instance Gateway.

```bash
kubectl apply -f https://github.com/kubernetes-sigs/gateway-api-inference-extension/raw/main/config/manifests/vllm/cpu-deployment.yaml
```
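If you want to experiment with the memory/CPU tradeoff without re-editing and re-applying the manifest, one option is to patch the running deployment. This is a sketch: `<model-server-deployment>` is a placeholder for whatever name `cpu-deployment.yaml` creates in your cluster, not a name from the manifest.

```bash
# Adjust the CPU/memory of the running model server in place.
# <model-server-deployment> is a placeholder; substitute the real name.
kubectl set resources deployment/<model-server-deployment> \
  --requests=cpu=12,memory=9.5Gi \
  --limits=cpu=12,memory=9.5Gi
```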
### Install the Inference Extension CRDs
To install the CRDs from the latest release:

```bash
VERSION=v0.2.0
kubectl apply -f https://github.com/kubernetes-sigs/gateway-api-inference-extension/releases/download/$VERSION/manifests.yaml
```

Alternatively, to install the CRDs from the `main` branch:

```bash
kubectl apply -k https://github.com/kubernetes-sigs/gateway-api-inference-extension/config/crd
```
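Either way, you can confirm that the CRDs registered with the API server:

```bash
# The Inference Extension CRDs should show up here once installed.
kubectl get crd | grep inference
```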
### Deploy InferenceModel
Deploy the sample InferenceModel, which is configured to load balance traffic between the `food-review-0` and `food-review-1` LoRA adapters of the sample model server.
```bash
kubectl apply -f https://github.com/kubernetes-sigs/gateway-api-inference-extension/raw/main/config/manifests/inferencemodel.yaml
```
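An optional sanity check (the object names come from `inferencemodel.yaml`):

```bash
# List the InferenceModel resources created by the manifest.
kubectl get inferencemodels -o wide
```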
### Deploy the InferencePool and Extension
```bash
kubectl apply -f https://github.com/kubernetes-sigs/gateway-api-inference-extension/raw/main/config/manifests/inferencepool-resources.yaml
```
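Optionally confirm the resources came up:

```bash
# The InferencePool should be listed, and the extension (the Endpoint
# Picker, referenced later in this guide) runs as a regular pod.
kubectl get inferencepools
kubectl get pods
```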
### Deploy Inference Gateway
Choose one of the following options to deploy an Inference Gateway.
#### GKE

1. Enable the Gateway API and configure proxy-only subnets when necessary. See Deploy Gateways for detailed instructions.
2. Deploy the Gateway and HealthCheckPolicy resources:

    ```bash
    kubectl apply -f https://github.com/kubernetes-sigs/gateway-api-inference-extension/raw/main/config/manifests/gateway/gke/gateway.yaml
    kubectl apply -f https://github.com/kubernetes-sigs/gateway-api-inference-extension/raw/main/config/manifests/gateway/gke/healthcheck.yaml
    ```

    Confirm that the Gateway was assigned an IP address and reports a `Programmed=True` status:

    ```bash
    $ kubectl get gateway inference-gateway
    NAME                CLASS               ADDRESS        PROGRAMMED   AGE
    inference-gateway   inference-gateway   <MY_ADDRESS>   True         22s
    ```
3. Deploy the HTTPRoute:

    ```bash
    kubectl apply -f https://github.com/kubernetes-sigs/gateway-api-inference-extension/raw/main/config/manifests/gateway/gke/httproute.yaml
    ```
4. Confirm that the HTTPRoute status conditions include `Accepted=True` and `ResolvedRefs=True`:

    ```bash
    kubectl get httproute llm-route -o yaml
    ```
5. Given that the default connection timeout may be insufficient for most inference workloads, it is recommended to configure a timeout appropriate for your intended use case:

    ```bash
    kubectl apply -f https://github.com/kubernetes-sigs/gateway-api-inference-extension/raw/main/config/manifests/gateway/gke/gcp-backend-policy.yaml
    ```
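    To read back the applied timeout, you can query the policy object. This is a sketch: it assumes the manifest creates a `GCPBackendPolicy`, the GKE Gateway policy kind suggested by the file name.

    ```bash
    # Inspect the backend policy, including its configured timeout.
    kubectl get gcpbackendpolicies -o yaml
    ```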
#### Istio

Please note that this feature is currently in an experimental phase and is not intended for production use. The implementation and user experience are subject to change as we continue to iterate on this project.
1. Requirements
    - Gateway API CRDs installed.
2. Install Istio:

    ```bash
    TAG=1.26-alpha.80c74f7f43482c226f4f4b10b4dda6261b67a71f
    # on Linux
    wget https://storage.googleapis.com/istio-build/dev/$TAG/istioctl-$TAG-linux-amd64.tar.gz
    tar -xvf istioctl-$TAG-linux-amd64.tar.gz
    # on macOS
    wget https://storage.googleapis.com/istio-build/dev/$TAG/istioctl-$TAG-osx.tar.gz
    tar -xvf istioctl-$TAG-osx.tar.gz
    # on Windows
    wget https://storage.googleapis.com/istio-build/dev/$TAG/istioctl-$TAG-win.zip
    unzip istioctl-$TAG-win.zip

    ./istioctl install --set tag=$TAG --set hub=gcr.io/istio-testing
    ```
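    A quick sanity check that the control plane came up before continuing:

    ```bash
    # istiod should be running in the istio-system namespace.
    kubectl get pods -n istio-system
    ./istioctl version
    ```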
3. If you run the Endpoint Picker (EPP) with the `--secureServing` flag set to `true` (the default mode), it currently uses a self-signed certificate. As a security measure, Istio does not trust self-signed certificates by default. As a temporary workaround, you can apply the destination rule to bypass TLS verification for the EPP. A more secure TLS implementation in the EPP is being discussed in Issue 582.

    ```bash
    kubectl apply -f https://github.com/kubernetes-sigs/gateway-api-inference-extension/raw/main/config/manifests/gateway/istio/destination-rule.yaml
    ```
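    Optionally confirm the rule exists (`DestinationRule` is a standard Istio resource):

    ```bash
    # The rule created by destination-rule.yaml should be listed here.
    kubectl get destinationrules -A
    ```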
4. Deploy the Gateway:

    ```bash
    kubectl apply -f https://github.com/kubernetes-sigs/gateway-api-inference-extension/raw/main/config/manifests/gateway/istio/gateway.yaml
    ```
5. Label the gateway:

    ```bash
    kubectl label gateway llm-gateway istio.io/enable-inference-extproc=true
    ```

    Confirm that the Gateway was assigned an IP address and reports a `Programmed=True` status:

    ```bash
    $ kubectl get gateway inference-gateway
    NAME                CLASS               ADDRESS        PROGRAMMED   AGE
    inference-gateway   inference-gateway   <MY_ADDRESS>   True         22s
    ```
6. Deploy the HTTPRoute:

    ```bash
    kubectl apply -f https://github.com/kubernetes-sigs/gateway-api-inference-extension/raw/main/config/manifests/gateway/istio/httproute.yaml
    ```
7. Confirm that the HTTPRoute status conditions include `Accepted=True` and `ResolvedRefs=True`:

    ```bash
    kubectl get httproute llm-route -o yaml
    ```
#### Kgateway

Kgateway recently added support for the inference extension as a technical preview, so do not run Kgateway with the inference extension in production environments. Refer to Issue 10411 for the list of caveats, supported features, etc.
1. Requirements
2. Set the Kgateway version and install the Kgateway CRDs:

    ```bash
    KGTW_VERSION=v2.0.0-rc.2
    helm upgrade -i --create-namespace --namespace kgateway-system --version $KGTW_VERSION kgateway-crds oci://cr.kgateway.dev/kgateway-dev/charts/kgateway-crds
    ```
3. Install Kgateway:

    ```bash
    helm upgrade -i --namespace kgateway-system --version $KGTW_VERSION kgateway oci://cr.kgateway.dev/kgateway-dev/charts/kgateway --set inferenceExtension.enabled=true
    ```
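    Before creating Gateways, optionally confirm the control plane is running:

    ```bash
    # Both the CRD chart and the kgateway chart should be listed.
    helm list -n kgateway-system
    kubectl get pods -n kgateway-system
    ```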
4. Deploy the Gateway:

    ```bash
    kubectl apply -f https://github.com/kubernetes-sigs/gateway-api-inference-extension/raw/main/config/manifests/gateway/kgateway/gateway.yaml
    ```

    Confirm that the Gateway was assigned an IP address and reports a `Programmed=True` status:

    ```bash
    $ kubectl get gateway inference-gateway
    NAME                CLASS      ADDRESS        PROGRAMMED   AGE
    inference-gateway   kgateway   <MY_ADDRESS>   True         22s
    ```
5. Deploy the HTTPRoute:

    ```bash
    kubectl apply -f https://github.com/kubernetes-sigs/gateway-api-inference-extension/raw/main/config/manifests/gateway/kgateway/httproute.yaml
    ```
6. Confirm that the HTTPRoute status conditions include `Accepted=True` and `ResolvedRefs=True`:

    ```bash
    kubectl get httproute llm-route -o yaml
    ```
## Try it out
Wait until the gateway is ready.
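One way to block until that happens is `kubectl wait` on the standard `Programmed` condition:

```bash
# Returns once the Gateway reports Programmed=True (or times out).
kubectl wait gateway/inference-gateway --for=condition=Programmed --timeout=300s
```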
If you deployed the GPU-based model server, send a request to the `food-review` model:

```bash
IP=$(kubectl get gateway/inference-gateway -o jsonpath='{.status.addresses[0].value}')
PORT=80

curl -i ${IP}:${PORT}/v1/completions -H 'Content-Type: application/json' -d '{
  "model": "food-review",
  "prompt": "Write as if you were a critic: San Francisco",
  "max_tokens": 100,
  "temperature": 0
}'
```
If you deployed the CPU-based model server, send a request to the `Qwen/Qwen2.5-1.5B-Instruct` model:

```bash
IP=$(kubectl get gateway/inference-gateway -o jsonpath='{.status.addresses[0].value}')
PORT=80

curl -i ${IP}:${PORT}/v1/completions -H 'Content-Type: application/json' -d '{
  "model": "Qwen/Qwen2.5-1.5B-Instruct",
  "prompt": "Write as if you were a critic: San Francisco",
  "max_tokens": 100,
  "temperature": 0
}'
```
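Since the endpoint speaks the OpenAI-style completions API, you can also extract just the generated text, assuming `jq` is installed (swap in `Qwen/Qwen2.5-1.5B-Instruct` as the model name if you deployed the CPU variant):

```bash
# Print only the completion text from the JSON response.
curl -s ${IP}:${PORT}/v1/completions -H 'Content-Type: application/json' -d '{
  "model": "food-review",
  "prompt": "Write as if you were a critic: San Francisco",
  "max_tokens": 100,
  "temperature": 0
}' | jq -r '.choices[0].text'
```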
## Cleanup
The following cleanup assumes you would like to clean ALL resources that were created in this quickstart guide.
Please be careful not to delete resources you'd like to keep.
1. Uninstall the InferencePool, InferenceModel, and model server resources:

    ```bash
    kubectl delete -f https://github.com/kubernetes-sigs/gateway-api-inference-extension/raw/main/config/manifests/inferencepool-resources.yaml --ignore-not-found
    kubectl delete -f https://github.com/kubernetes-sigs/gateway-api-inference-extension/raw/main/config/manifests/inferencemodel.yaml --ignore-not-found
    kubectl delete -f https://github.com/kubernetes-sigs/gateway-api-inference-extension/raw/main/config/manifests/vllm/cpu-deployment.yaml --ignore-not-found
    kubectl delete -f https://github.com/kubernetes-sigs/gateway-api-inference-extension/raw/main/config/manifests/vllm/gpu-deployment.yaml --ignore-not-found
    kubectl delete secret hf-token --ignore-not-found
    ```
2. Uninstall the Gateway resources:

    ```bash
    kubectl delete -f https://github.com/kubernetes-sigs/gateway-api-inference-extension/raw/main/config/manifests/gateway/gke/gateway.yaml --ignore-not-found
    kubectl delete -f https://github.com/kubernetes-sigs/gateway-api-inference-extension/raw/main/config/manifests/gateway/gke/healthcheck.yaml --ignore-not-found
    kubectl delete -f https://github.com/kubernetes-sigs/gateway-api-inference-extension/raw/main/config/manifests/gateway/gke/gcp-backend-policy.yaml --ignore-not-found
    kubectl delete -f https://github.com/kubernetes-sigs/gateway-api-inference-extension/raw/main/config/manifests/gateway/gke/httproute.yaml --ignore-not-found
    kubectl delete -f https://github.com/kubernetes-sigs/gateway-api-inference-extension/raw/main/config/manifests/gateway/istio/gateway.yaml --ignore-not-found
    kubectl delete -f https://github.com/kubernetes-sigs/gateway-api-inference-extension/raw/main/config/manifests/gateway/istio/destination-rule.yaml --ignore-not-found
    kubectl delete -f https://github.com/kubernetes-sigs/gateway-api-inference-extension/raw/main/config/manifests/gateway/istio/httproute.yaml --ignore-not-found
    kubectl delete -f https://github.com/kubernetes-sigs/gateway-api-inference-extension/raw/main/config/manifests/gateway/kgateway/gateway.yaml --ignore-not-found
    kubectl delete -f https://github.com/kubernetes-sigs/gateway-api-inference-extension/raw/main/config/manifests/gateway/kgateway/httproute.yaml --ignore-not-found
    ```
3. Uninstall the CRDs:

    ```bash
    kubectl delete -k https://github.com/kubernetes-sigs/gateway-api-inference-extension/config/crd --ignore-not-found
    ```
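Optionally verify that the quickstart resources are gone; once the CRDs are removed, the grep should return nothing:

```bash
kubectl get gateways,httproutes -A
kubectl get crd | grep inference
```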