Text RAG
The default “Learn” mode for text uses retrieval augmented generation (RAG). Helix ingests your data into a database for fast retrieval; at inference time, the most relevant pieces are retrieved and included in the language model’s context.
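For intuition, here is a minimal, self-contained sketch of that flow. The word-overlap scoring and in-memory list are simplified stand-ins for the embedding model and vector database a real RAG pipeline uses, so the names and structure are illustrative only, not Helix’s implementation.

```python
# Illustrative sketch of the RAG flow (not Helix's actual implementation).
# A toy word-overlap score stands in for a real embedding model, and an
# in-memory list stands in for the vector database.
from dataclasses import dataclass


@dataclass
class Chunk:
    text: str
    words: set[str]


def ingest(documents: list[str], store: list[Chunk]) -> None:
    """Split documents into chunks and index them for retrieval."""
    for doc in documents:
        for paragraph in doc.split("\n\n"):
            store.append(Chunk(paragraph, set(paragraph.lower().split())))


def retrieve(query: str, store: list[Chunk], top_k: int = 3) -> list[str]:
    """Return the top_k chunks most similar to the query (word overlap)."""
    q = set(query.lower().split())
    scored = sorted(
        store,
        key=lambda c: len(q & c.words) / max(len(q | c.words), 1),
        reverse=True,
    )
    return [c.text for c in scored[:top_k]]


def build_prompt(query: str, store: list[Chunk]) -> str:
    """At inference time, prepend retrieved chunks to the user's question."""
    context = "\n\n".join(retrieve(query, store))
    return f"Use the following context to answer.\n\n{context}\n\nQuestion: {query}"


store: list[Chunk] = []
ingest(["Helix ingests documents into a store.\n\nRetrieved chunks are added to the prompt."], store)
print(build_prompt("How does Helix use retrieved chunks?", store))
```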
RAG Tutorial
- Click “New Session”, slide the toggle to “Learn” and select “Text”.
- Now pick a recent paper from https://arxiv.org/ on a subject that’s interesting to you (click the “recent” link to find something the base model definitely won’t know about).
- Paste the PDF link into the “Links” field and click the “+” button. You can also paste in plain text or drag and drop documents (pdf, docx) into the file upload form.
- Click “Continue” and Helix will download and ingest the content. A rough sketch of what this step involves follows the list.
- Now chat with the chat bot and ask questions about the paper.
- Share this chat bot with your friends by clicking the “Share” button.
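Behind the “Continue” step, ingestion boils down to downloading the document, extracting its text, and preparing it for chunking and embedding. Here is a rough sketch of that step, assuming the `requests` and `pypdf` Python libraries; Helix’s own pipeline may differ.

```python
# Rough sketch of the download-and-extract step, assuming the
# `requests` and `pypdf` libraries; Helix's own pipeline may differ.
import io

import requests
from pypdf import PdfReader


def download_pdf_text(url: str) -> str:
    """Download a PDF and extract its plain text, one page at a time."""
    response = requests.get(url, timeout=60)
    response.raise_for_status()
    reader = PdfReader(io.BytesIO(response.content))
    return "\n".join(page.extract_text() or "" for page in reader.pages)


pdf_url = "https://arxiv.org/pdf/<paper-id>"  # paste your paper's PDF link here
text = download_pdf_text(pdf_url)
print(f"Extracted {len(text)} characters, ready for chunking and embedding")
```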
Configuring RAG
To configure the RAG settings used by Helix, click the hammer and spanner icon at the top right.
You can configure the following settings:
- Rag Distance Function: the distance metric used to measure similarity between the query and stored chunks.
- Rag Threshold: the relevancy cutoff a chunk must meet to be included in the results. This requires tuning for your specific use case.
- Rag Results Count: the maximum number of retrieved chunks included in the language model context.
- Rag Chunk Size: the number of characters in each chunk.
- Rag Chunk Overlap: the number of characters by which consecutive chunks overlap. The chunking and retrieval settings are illustrated in the sketch below.
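To make these settings concrete, here is a hedged sketch of how chunk size, chunk overlap, the relevancy threshold, and the results count might interact. The function names and numeric values are illustrative examples, not Helix’s defaults or implementation.

```python
# Illustrative sketch of how the chunking and retrieval settings interact.
# The values below are examples, not Helix defaults.
import math


def chunk_text(text: str, chunk_size: int = 512, chunk_overlap: int = 64) -> list[str]:
    """Split text into chunks of `chunk_size` characters, each sharing
    `chunk_overlap` characters with the previous chunk."""
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]


def cosine_similarity(a: list[float], b: list[float]) -> float:
    """One common choice of distance function: cosine similarity between embeddings."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))


def select_results(scored_chunks: list[tuple[str, float]],
                   threshold: float = 0.3,
                   results_count: int = 3) -> list[str]:
    """Keep chunks above the relevancy threshold, then take the top N
    for the language model context."""
    relevant = [(c, s) for c, s in scored_chunks if s >= threshold]
    relevant.sort(key=lambda pair: pair[1], reverse=True)
    return [c for c, _ in relevant[:results_count]]


chunks = chunk_text("some long document text " * 100)
print(f"{len(chunks)} chunks of up to 512 characters, overlapping by 64")
print(select_results([("chunk A", 0.9), ("chunk B", 0.2), ("chunk C", 0.5)]))
```

As a rule of thumb, larger chunks give the model more surrounding context per result, while more overlap reduces the chance that an answer is split across a chunk boundary; both increase the number of characters spent on each retrieved result.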