A guide to Retrieval-Augmented Generation design choices.
Building Retrieval-Augmented Generation systems, or RAGs, is easy. With tools like LlamaIndex or LangChain, you can get your RAG-based Large Language Model up and running in no time. Sure, some engineering effort is needed to ensure the system is efficient and scales well, but in principle, building the RAG is the easy part. What's much more difficult is designing it well.
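To see just how little code the happy path takes, here is a minimal sketch in the style of LlamaIndex's quickstart (assuming llama-index >= 0.10, an OpenAI API key in the environment for the default models, and a placeholder ./data folder of documents; the query is made up):

```python
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

# Load documents from a local folder ("./data" is a placeholder path).
documents = SimpleDirectoryReader("data").load_data()

# Chunk, embed, and index them with the library defaults.
index = VectorStoreIndex.from_documents(documents)

# Ask a question; retrieval and answer synthesis happen under the hood.
response = index.as_query_engine().query("What does the warranty cover?")
print(response)
```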
Having recently gone through the process myself, I discovered how many design choices, big and small, go into a Retrieval-Augmented Generation system. Each of them can impact the performance, behavior, and cost of your RAG-based LLM, sometimes in non-obvious ways.
Without further ado, let me present this (by no means exhaustive, yet hopefully useful) list of RAG design choices. Let it guide your design efforts.
Retrieval-Augmented Generation gives a chatbot access to some external data so that it can answer users' questions based on this data rather than general knowledge or its own dreamed-up hallucinations.
As such, RAG systems can become complex: we need to get the data, parse it into a chatbot-friendly format, make it available to and searchable by the LLM, and finally ensure the chatbot makes correct use of the data it has been given access to.
I like to think about RAG systems in terms of the components they are made of. There are five main pieces to the puzzle (see the code sketch after this list for where each one shows up):
Indexing: Embedding external data into a vector representation.
Storing: Persisting the indexed embeddings in a database.
Retrieval: Finding relevant pieces in the stored data.
Synthesis: Generating answers to users' queries.
Evaluation: Quantifying how good the RAG system is.
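To make this division of labor concrete, here is a sketch of the same kind of pipeline with the components pulled apart, again using LlamaIndex; the chunk size, top-k value, storage path, and query are illustrative assumptions, not recommendations:

```python
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
from llama_index.core.evaluation import FaithfulnessEvaluator
from llama_index.core.node_parser import SentenceSplitter

# Indexing: split documents into chunks ("nodes") and embed them.
documents = SimpleDirectoryReader("data").load_data()
nodes = SentenceSplitter(chunk_size=512).get_nodes_from_documents(documents)
index = VectorStoreIndex(nodes)

# Storing: persist the embeddings and metadata (here, to local disk).
index.storage_context.persist(persist_dir="./storage")

# Retrieval: fetch the top-k chunks most similar to the query.
retriever = index.as_retriever(similarity_top_k=3)
retrieved_nodes = retriever.retrieve("What does the warranty cover?")
print(f"Retrieved {len(retrieved_nodes)} chunks")

# Synthesis: have the LLM answer the query from the retrieved chunks.
response = index.as_query_engine(similarity_top_k=3).query(
    "What does the warranty cover?"
)
print(response)

# Evaluation: e.g., check whether the answer is grounded in the context.
evaluation = FaithfulnessEvaluator().evaluate_response(response=response)
print(f"Faithful: {evaluation.passing}")
```

Almost every line above hides a design choice: how to chunk the data, which embedding model and vector store to use, how many chunks to retrieve, how to prompt the LLM with them, and how to judge the resulting answers.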
In the remainder of this article, we will go through the five RAG components one by one, discussing the design choices, their implications and trade-offs, and some useful resources to help you make each decision.