Installing Helix: Quick Start

tl;dr

Linux | macOS | Windows (WSL2)

curl -sL -O https://get.helix.ml/install.sh && bash install.sh

Follow the instructions to install Helix on your local machine. The installer will prompt you before making any changes.

View source | Manual instructions | Kubernetes | Discord

Why install Helix?

Install Helix to get:

  • Helix Apps, version controlled configuration for LLM-based applications
  • Knowledge, continuously updated RAG from a URL
  • API integrations so your app can call an API to get up-to-date information when needed
  • New Helix App Editor UI

Requirements

  • Control Plane (the Helix API, web interface, and Postgres database) requires:

    • Linux, macOS or Windows
    • Docker
    • 4 CPUs, 8GB RAM and 50GB+ free disk space
  • Inference Provider requires ONE OF:

    • An NVIDIA GPU if you want to use private Helix Runners (example), or
    • Ollama running locally on macOS, Linux or Windows (example), or
    • An OpenAI-compatible API provider, such as TogetherAI (example). We like TogetherAI because you can run the same open source models via their API that you can run locally using Helix GPU Runners, but any OpenAI-compatible API works (e.g. vLLM, Azure OpenAI, Gemini, etc.)
  • Private Helix Runners require:

    • At least as much system memory as you have GPU memory
    • Min 8GB GPU for small models (Llama3-8B, Phi3-Mini), 24GB for Mixtral/SDXL, 40GB for Llama3-70B
    • Min 24GB GPU for fine-tuning (text or image)
    • 2x 24GB GPUs are recommended if you want to run e.g. text and image inference in parallel
    • NVIDIA 3090s and A6000s typically offer good price/performance
    • 150GB+ of free disk space
    • A fast internet connection (small runner image is 23GB)
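Before installing, you can sanity-check the Control Plane requirements with standard Linux tools. This is a quick sketch; the thresholds mirror the list above:

```shell
# Free disk space on / in GB (the control plane wants 50GB+)
df -BG --output=avail / | tail -n 1

# CPU count (4+) and total memory (8GB+)
nproc
awk '/MemTotal/ {printf "%.0f GB\n", $2/1048576}' /proc/meminfo

# For a private runner, also confirm GPU memory (requires NVIDIA drivers):
# nvidia-smi --query-gpu=name,memory.total --format=csv
```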

Download the Helix Installer

Use the installer script to get started with Helix quickly. Run the following commands to download and make the installer executable:

curl -sL -O https://get.helix.ml/install.sh
chmod +x install.sh

Now run the installer with sudo ./install.sh and follow the instructions. You will be prompted before any changes are made to your system.

You can also run ./install.sh --help to see what options are available, or read on for common configuration options.

Installer Examples

Just install the CLI

curl -sL -O https://get.helix.ml/install.sh
chmod +x install.sh
./install.sh --cli

This installs only the CLI, which is useful if you want to connect to a Helix deployment from another machine.

Local Helix on Linux or Windows (WSL2) with an NVIDIA GPU

This will set up the CLI, the controlplane and a runner on localhost if an NVIDIA GPU is available:

curl -sL -O https://get.helix.ml/install.sh
chmod +x install.sh
sudo ./install.sh

  • If you have an older GPU (e.g. NVIDIA 1060 or 1080 series), specify --older-gpu. This disables image inference and text/image fine-tuning, which only work on newer GPUs (e.g. 3090 onwards).
  • Text fine-tuning also needs a newer GPU (e.g. 3090 onwards) and a valid Hugging Face token passed via --hf-token <YOUR_TOKEN>. To get a token, first accept sharing your contact information with Mistral here, then fetch an access token from here.

This will create a HelixML directory with the following files:

  • docker-compose.yaml - the compose file for the control plane
  • .env - appropriately configured secrets and configuration for the control plane
  • runner.sh - script to start the runner assuming a local GPU

It will print out instructions on how to start everything.

Install alongside Ollama on macOS, Linux or Windows

Install Helix locally alongside an already-running Ollama:

curl -sL -O https://get.helix.ml/install.sh
chmod +x install.sh
./install.sh --openai-api-key ollama --openai-base-url http://host.docker.internal:11434/v1

This assumes you have downloaded some models with Ollama, for example by running:

ollama pull llama3:instruct

These models will then show up in the Helix UI. You can then reference them in the assistant model field of your helix.yaml, e.g. model: llama3:instruct.
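For instance, a helix.yaml entry referencing an Ollama model might look like the sketch below. The model field follows the paragraph above; the surrounding structure and the assistant name are illustrative, so check the Helix Apps schema for the exact layout:

```yaml
# Sketch only: field names other than `model` are assumptions.
assistants:
  - name: my-assistant        # illustrative name
    model: llama3:instruct    # must match a model you pulled with Ollama
```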

Using an external LLM won’t work with image inference or text/image fine-tuning. Connect a full GPU to enable those features.

Install Control Plane pointing at TogetherAI

Install CLI and controlplane locally with external TogetherAI API key:

curl -sL -O https://get.helix.ml/install.sh
chmod +x install.sh
./install.sh --cli --controlplane --together-api-key YOUR_TOGETHER_API_KEY

This won’t work with image inference or text/image fine-tuning. Connect a full GPU to enable those features.

Set up Control Plane with a DNS name

If you want to make your Helix deployment available to other people, you should get a domain name or subdomain and create a DNS A record pointing to the IP address of your Control Plane server.

Then, you can install the CLI and Control Plane on the server, specifying the DNS name, and the installer will automatically set up TLS with Caddy:

curl -sL -O https://get.helix.ml/install.sh
chmod +x install.sh
./install.sh --cli --controlplane --api-host https://helix.mycompany.com

The automatic Caddy installation currently only works on Ubuntu.

See Manual Install for full instructions on other platforms.

Attach a Runner to an existing Control Plane

Install just the runner, pointing at a control plane with a DNS name (find the runner token in /opt/HelixML/.env on the control plane node):

curl -sL -O https://get.helix.ml/install.sh
chmod +x install.sh
./install.sh --runner --api-host https://helix.mycompany.com --runner-token YOUR_RUNNER_TOKEN
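As a sketch of pulling the token out of the control plane's .env: the file path above is real, but the variable name RUNNER_TOKEN and the file contents below are assumptions for illustration (check your actual .env):

```shell
# Illustrative only: a fake .env standing in for /opt/HelixML/.env
cat > example.env <<'EOF'
RUNNER_TOKEN=abc123
EOF

# Extract just the token value (variable name is an assumption)
grep '^RUNNER_TOKEN=' example.env | cut -d= -f2
```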

Install Control Plane pointing at any OpenAI-compatible API

Install the CLI and controlplane locally with OpenAI-compatible API key and base URL:

curl -sL -O https://get.helix.ml/install.sh
chmod +x install.sh
./install.sh --cli --controlplane --openai-api-key YOUR_OPENAI_API_KEY --openai-base-url YOUR_OPENAI_BASE_URL

This won’t work with image inference or text/image fine-tuning. Connect a full GPU to enable those features.

Upgrading

Just run the installer again. It will reuse the secrets in your existing .env file and back the file up in case you need to copy over any changes.

(Optional) Enabling Helix Apps

Helix Apps are a fun new way to define LLM applications as code (LLMGitOps!?). Your users can create helix.yaml configuration files that tell Helix what tools it has access to (e.g. APIs) and what scripts it can run (e.g. GPTScript).

To enable this, you need to provide some extra configuration and create a GitHub OAuth App so Helix can request access to a user's repository.

1. Create a GitHub OAuth App

  1. Browse to your GitHub organization’s Settings page, then at the bottom of the left navigation bar click Developer Settings -> OAuth Apps. This is an example URL for the helixml org: https://github.com/organizations/helixml/settings/applications. If you can’t see the settings page, you probably don’t have permission; you can try creating a personal OAuth App instead.

  2. Choose an informative name and set the homepage URL to your domain. Finally, set the Authorization callback URL to: https://YOUR_DOMAIN/api/v1/github/callback. This URL must be publicly accessible from GitHub’s servers.

    You can test whether it is publicly accessible with: curl https://YOUR_DOMAIN/api/v1/github/callback -i. You should see a 401 error. If you get a DNS error, a timeout, or a 404, your control plane has not been set up correctly.

  3. Now that the app has been created, click the Create a new client secret button. Make a note of the Client ID and Client secret.

2. Enable GitHub Apps in Helix Configuration

  1. Browse to your Helix installation directory and edit the .env file.

  2. Add the following lines to your .env file:

GITHUB_INTEGRATION_ENABLED=true
GITHUB_INTEGRATION_CLIENT_ID=XXX
GITHUB_INTEGRATION_CLIENT_SECRET=XXX
GITHUB_INTEGRATION_WEBHOOK_URL=https://YOUR_DOMAIN/api/v1/github/webhook

3. Restart the Helix Control Plane

Restart Helix with docker compose up -d. This recreates the control plane container.

4. Test Helix Apps

Now go ahead and browse to https://YOUR_DOMAIN/apps and click on NEW APP at the top right. You should be able to connect to and add a repository that you are a maintainer/owner of.

(Optional) Securing Helix

By default, new registrations are enabled to make it easy for you to create an account. Also by default, all accounts are admin accounts.

After creating your own accounts, you can choose to disable new registrations. Go to http(s)://<YOUR_CONTROLPLANE_HOSTNAME>/auth and click “Administration Console”. Log in with the username admin and the KEYCLOAK_ADMIN_PASSWORD from your .env file. Click the “master” dropdown and switch to the helix realm. Under “Realm settings” -> “Login”, you can untick “User registration”. You can also set up OAuth, email validation, etc. here.

To restrict admin rights to a specific set of users, go to Users in Keycloak and find the users you want to be admins. Copy their IDs into .env as a comma-separated list in the ADMIN_USER_IDS variable, then run docker compose up -d to update the stack.
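For example, the resulting .env entry might look like this sketch (the IDs are placeholders; use the real user IDs copied from Keycloak):

```shell
# .env on the control plane: comma-separated Keycloak user IDs (placeholders shown)
ADMIN_USER_IDS=<USER_ID_1>,<USER_ID_2>
```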

You may also wish to review all available configuration options in Environment Variables.

More Configuration

For further configuration options you can put in your .env file, such as connecting GitHub for easy git push deployment of Helix Apps, check the manual install docs.
