Installing Helix on Kubernetes with Helm
This page describes how to install Helix on Kubernetes.
Requirements
The Control Plane comprises the Helix API, web interface, and Postgres database, and requires:
- Linux, macOS or Windows
- Docker
- 4 CPUs, 8GB RAM and 50GB+ free disk space
Inference Provider requires ONE OF:
- An NVIDIA GPU if you want to use private Helix Runners (example), or
- Ollama running locally on macOS, Linux or Windows (example), or
- An OpenAI-compatible API provider, such as TogetherAI (example) - we like TogetherAI because you can run the same open source models via their API that you can run locally using Helix GPU Runners, but any OpenAI-compatible API will work (e.g. vLLM, Azure OpenAI, Gemini, etc.)
Private Helix Runners require:
- As much system memory as you have GPU memory
- Min 8GB GPU for small models (Llama3-8B, Phi3-Mini), 24GB for Mixtral/SDXL, 40GB for Llama3-70B
- Min 24GB GPU for fine-tuning (text or image)
- Recommend 2x24GB GPUs for e.g. text & image inference in parallel
- NVIDIA 3090s, A6000s are typically good price/performance
- 150GB+ of free disk space
- A fast internet connection (small runner image is 23GB)
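If you plan to run private Helix Runners on Kubernetes, you can also check that your nodes actually advertise NVIDIA GPUs to the scheduler. This is a minimal sketch assuming the NVIDIA device plugin is installed, which is what exposes the nvidia.com/gpu resource:
kubectl get nodes -o custom-columns='NODE:.metadata.name,GPU:.status.allocatable.nvidia\.com/gpu'
Nodes that show no value (or 0) in the GPU column will not be able to schedule runner pods that request a GPU.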
Deploying the Control Plane
This section details how to install the Helix control plane.
There is an example script in the repository that demonstrates deploying the control plane to a kind cluster.
1. Install Keycloak
Helix uses Keycloak for authentication. If you already have a Keycloak instance, you can skip this step. Otherwise, you can install one through Helm (chart info, repo).
For example:
helm upgrade --install keycloak oci://registry-1.docker.io/bitnamicharts/keycloak \
--set auth.adminUser=admin \
--set auth.adminPassword=oh-hallo-insecure-password \
--set httpRelativePath="/auth/" \
--set image.tag="23"
By default the chart only creates a ClusterIP service. To expose Keycloak, you can either port-forward to it (example further below) or, if you are on k3s or minikube, create a LoadBalancer service:
kubectl expose pod keycloak-0 --port 8888 --target-port 8080 --name keycloak-ext --type=LoadBalancer
Alternatively, if you run on k3s:
helm upgrade --install keycloak oci://registry-1.docker.io/bitnamicharts/keycloak \
--set auth.adminUser=admin \
--set auth.adminPassword=oh-hallo-insecure-password \
--set httpRelativePath="/auth/" \
--set service.type=LoadBalancer \
--set service.ports.http=8888
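If you only need temporary local access rather than a LoadBalancer, a plain port-forward also works. This is a minimal sketch assuming the Bitnami chart's default service name (keycloak) and HTTP service port (80) - adjust if you changed them:
kubectl port-forward svc/keycloak 8888:80
Keycloak should then be reachable at http://localhost:8888/auth/ (matching the httpRelativePath set above).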
2. Install the Helm Repository
helm repo add helix https://charts.helix.ml
helm repo update
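To confirm the repository was added and see which charts it provides, you can search it:
helm search repo helix
The output should include helix/helix-controlplane and, for later, helix/helix-runner.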
3. Apply the Chart
Copy values-example.yaml from the repository to configure the Helix control plane. You can look at the configuration documentation to learn more about what the settings do.
curl -o values-example.yaml https://raw.githubusercontent.com/helixml/helix/main/charts/helix-controlplane/values-example.yaml
You must edit the provider configuration in this file so that Helix can run. Specifying a remote provider (e.g. openai or togetherai) is the easiest, but you must provide API keys to do that. A helix provider ensures local operation, but then you must also add a runner.
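The provider settings live in values-example.yaml itself, so the simplest approach is to locate the relevant section in the file you just downloaded and edit it there. For example (the grep pattern is only a guess at how the section is named - adjust as needed):
grep -n -i -A 5 'provider' values-example.yaml
${EDITOR:-vi} values-example.yaml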
Now you’re ready to install the control plane helm chart with the latest images.
export LATEST_RELEASE=$(curl -s https://get.helix.ml/latest.txt)
helm upgrade --install my-helix-controlplane helix/helix-controlplane \
-f values-example.yaml \
--set image.tag="${LATEST_RELEASE}"
Ensure all the pods start. If they do not, inspect the logs.
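For example, assuming the release name my-helix-controlplane used above and the standard Helm instance label (an assumption about how the chart labels its pods):
kubectl get pods
kubectl logs -l app.kubernetes.io/instance=my-helix-controlplane --tail=100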
Once they are all running, access the control plane via port-forwarding (default) or according to your configuration.
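A minimal port-forwarding sketch - the service name and port here are assumptions, so check kubectl get svc for the real values:
kubectl get svc
kubectl port-forward svc/my-helix-controlplane 8080:80
Then open http://localhost:8080 in your browser.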
You can configure the Kubernetes deployment by overriding the settings in the values.yaml.
Deploying a Runner
This section describes how to install a Helix runner on Kubernetes.
1. Install the Helm Repository
helm repo add helix https://charts.helix.ml
helm repo update
2. Apply the Chart
Then, install the runner:
export LATEST_RELEASE=$(curl -s https://get.helix.ml/latest.txt)
helm upgrade --install my-helix-runner helix/helix-runner \
--set runner.host="my-helix-controlplane" \
--set runner.token="oh-hallo-insecure-token" \
--set runner.memory=24GB \
--set replicaCount=1 \
--set image.tag="${LATEST_RELEASE}-small"
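Once the chart is installed, check that the runner pod starts and connects to the control plane by watching its logs. This again assumes the standard Helm instance label on the runner pods:
kubectl get pods -l app.kubernetes.io/instance=my-helix-runner
kubectl logs -l app.kubernetes.io/instance=my-helix-runner -f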
More Help
If you get stuck, please get in touch. But here are some extra links to help you get deployed: