Splitting text into appropriately sized chunks is crucial when preparing data for embedding and retrieval in a RAG system. Two main factors guide this process: Model Constraints and Retrieval Effectiveness.
Model Constraints:
– Embedding models have a maximum token length for input, and anything beyond this limit gets truncated. It’s important to be aware of the limitations of your chosen model and ensure that each data chunk doesn’t exceed this maximum token length.
– Multilingual models often have shorter sequence limits than English-only models. For example, the paraphrase-multilingual-MiniLM-L12-v2 model has a maximum context window of 128 tokens (a sketch for checking this programmatically follows this list).
– Consider the text length the model was trained on. Some models technically accept longer inputs but were trained on shorter chunks, which can hurt performance on longer texts; SBERT's multi-QA base model is one example.
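As a quick illustration of checking these limits, the sketch below (assuming the sentence-transformers library and the multilingual model named above) reads the model's maximum sequence length before settling on a chunk size:

```python
from sentence_transformers import SentenceTransformer

# Assumption: using the multilingual model mentioned above; substitute your own embedding model.
model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

# max_seq_length is the longest input (in tokens) the model will embed without truncation.
print(model.max_seq_length)  # 128 for this model
```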
Retrieval Effectiveness:
– While chunking data to the model’s maximum length seems logical, it does not always lead to the best retrieval outcomes. Larger chunks offer more context but can obscure key details, making it harder to retrieve precise matches. Smaller chunks improve match accuracy but may lack the context needed for complete answers. Hybrid approaches search over smaller chunks but include surrounding context at query time to balance the two (a sketch follows this list).
– These chunk-size considerations apply equally to multilingual and English-only projects. There is no definitive answer on the ideal chunk size, so it is worth exploring further resources on the topic.
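One way to realize the hybrid idea is sketched below. It is a minimal, library-agnostic example built around a hypothetical `search` function: the query is matched against small chunks, and the neighboring chunks are appended at query time to restore context.

```python
def retrieve_with_context(query, chunks, search, window=1):
    """Return the best-matching small chunk plus its neighbors for extra context.

    `chunks` is an ordered list of small text chunks; `search` is any function
    (hypothetical here) that returns the index of the chunk best matching `query`.
    """
    best = search(query, chunks)
    start = max(0, best - window)               # include `window` chunks before the match...
    end = min(len(chunks), best + window + 1)   # ...and after it
    return " ".join(chunks[start:end])
```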
Methods for splitting text:
Text can be split using rule-based or machine-learning-based approaches. Rule-based methods rely on character-level analysis, while ML-based approaches, such as the NLTK and spaCy sentence tokenizers or more advanced transformer models, depend on language-specific training data, which is primarily English. ML-based sentence splitters currently perform poorly for most non-English languages and are computationally intensive, so starting with a simple rule-based splitter is recommended. A common and effective choice is the recursive character text splitter used in LangChain and LlamaIndex, which shortens sections by splitting on the nearest character from a prioritized list of separators.
An example of using LangChain’s recursive character splitter is shown below:
– Define the tokenizer to match the intended embedding model, since different models tokenize and count text differently.
– Set a small chunk size and chunk overlap.
– Define a length function that counts the tokens using the tokenizer.
– Specify the separators in a prioritized order.
– Apply the text splitter to the formatted document.
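Putting those steps together, a minimal sketch (assuming the transformers and langchain packages, and the multilingual MiniLM model from earlier as the tokenizer) might look like this:

```python
from transformers import AutoTokenizer
from langchain.text_splitter import RecursiveCharacterTextSplitter

# Step 1: load the tokenizer of the intended embedding model (assumption: multilingual MiniLM).
tokenizer = AutoTokenizer.from_pretrained(
    "sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2"
)

# Step 3: length function that counts tokens the way the embedding model will.
def token_length(text):
    return len(tokenizer.encode(text, add_special_tokens=False))

# Steps 2 and 4: small chunk size/overlap (measured in tokens) and prioritized separators.
text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=100,       # well under the model's 128-token window
    chunk_overlap=20,
    length_function=token_length,
    separators=["\n\n", "\n", ". ", " ", ""],
)

# Step 5: apply the splitter to the formatted document.
document_text = "Your formatted document text goes here."  # placeholder
chunks = text_splitter.split_text(document_text)
```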
After splitting the text, the next step is to embed these chunks for storage.
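As a final sketch (again assuming sentence-transformers and the `chunks` list produced above), embedding the chunks is a single call, with the resulting vectors ready to store in whichever vector database you use:

```python
from sentence_transformers import SentenceTransformer

# Assumption: same multilingual model as above; `chunks` comes from the splitter.
model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
embeddings = model.encode(chunks)  # one vector per chunk, ready to index
```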