# LLM FOR BEGINNERS
Useful for answering complex queries on internal documents in a step-by-step manner with ReAct and OpenAI Tools agents.
The basic RAG chatbots I have built in the past using standard LangChain components such as vector stores, retrievers, etc., have worked out well for me. Depending on the internal dataset I feed in, they can handle straightforward questions such as “What is the parental leave policy in India?” (source dataset: HR policy documents), “What are the main concerns regarding the flavor of our product?” (source dataset: social media/Tweets), or “What are the themes in Monet paintings?” (source dataset: art journals). More recently, the complexity of the queries being fed to them has increased, for instance, “Has there been an increase in the concerns regarding flavor in the past 1 month?” Unless the internal documents happen to contain a section that makes exactly this comparison, it is highly unlikely the chatbot would return the correct answer. The reason is that the correct answer requires the following steps to be planned and executed systematically (sketched in code after the list):
STEP 1: calculate the start date and end date based on “past 1 month” and today’s date
STEP 2: fetch the queries mentioning flavor issues for the start date
STEP 3: count the queries from Step 2
STEP 4: fetch the queries mentioning flavor issues for the end date
STEP 5: count the queries from Step 4
STEP 6: calculate the percentage increase/decrease using counts from Step 3 and Step 5.
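To make the plan concrete, here is a minimal sketch of those six steps as plain Python functions. Everything in it is hypothetical: the `MENTIONS` list stands in for whatever retrieval layer (vector store plus metadata filtering) sits over the internal documents, and the helper names are mine, not LangChain's. The point is only that each step becomes a small, deterministic operation once the plan is laid out.

```python
from datetime import date, timedelta

# Hypothetical stand-in for the retrieval layer over the internal documents:
# each record is (post_date, text). In practice this would be a vector store
# query with a metadata filter on the date.
MENTIONS = [
    (date(2024, 5, 2), "The flavor is way too bitter now"),
    (date(2024, 6, 1), "New batch has a weird, artificial flavor"),
    (date(2024, 6, 1), "Flavor is off compared to last year"),
]

def date_range_for_past_month(today: date) -> tuple[date, date]:
    """STEP 1: derive the start and end dates from 'past 1 month' and today's date."""
    return today - timedelta(days=30), today

def fetch_flavor_mentions(on: date) -> list[str]:
    """STEPS 2 & 4: fetch the posts mentioning flavor issues for a given date."""
    return [text for posted, text in MENTIONS if posted == on and "flavor" in text.lower()]

def percent_change(old: int, new: int) -> float:
    """STEP 6: percentage increase/decrease between the two counts."""
    return float("inf") if old == 0 else (new - old) / old * 100

today = date(2024, 6, 1)                         # fixed date so the example is reproducible
start, end = date_range_for_past_month(today)    # STEP 1
count_start = len(fetch_flavor_mentions(start))  # STEPS 2-3: 1 mention on 2024-05-02
count_end = len(fetch_flavor_mentions(end))      # STEPS 4-5: 2 mentions on 2024-06-01
print(f"Flavor concerns changed by {percent_change(count_start, count_end):+.0f}%")  # +100%
```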
Luckily for us, LLMs are very good at exactly this kind of planning! And LangChain agents are what orchestrate that planning for us.
The core idea of agents is to use a language model to choose a sequence of actions to take. In agents, a language model is used as a reasoning engine to determine which actions to take and in which order. [Source]
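To show how an agent puts that idea into practice, below is a minimal ReAct sketch, assuming a recent LangChain release where `create_react_agent`, `AgentExecutor`, the `@tool` decorator, and the public `hwchase17/react` prompt on LangChain Hub are available (the Hub pull also needs the `langchainhub` package). The tool names, their placeholder bodies, and the model name are my assumptions rather than part of the original setup; in a real build the tools would wrap the retriever and counting logic from the earlier sketch.

```python
from datetime import date, timedelta

from langchain import hub
from langchain.agents import AgentExecutor, create_react_agent
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI

@tool
def get_date_range(period: str) -> str:
    """Return start and end dates (ISO format) for a period such as 'past 1 month'."""
    today = date.today()
    return f"start={today - timedelta(days=30)}, end={today}"

@tool
def count_flavor_mentions(day: str) -> int:
    """Count internal documents mentioning flavor issues on the given ISO date."""
    # Placeholder: in practice this would run a metadata-filtered retriever query.
    return 0

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
tools = [get_date_range, count_flavor_mentions]

# Standard ReAct prompt published on LangChain Hub.
prompt = hub.pull("hwchase17/react")

agent = create_react_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

result = agent_executor.invoke(
    {"input": "Has there been an increase in the concerns regarding flavor in the past 1 month?"}
)
print(result["output"])
```

The OpenAI Tools variant is wired the same way: swap `create_react_agent` for `create_openai_tools_agent` and use a chat prompt that includes an `agent_scratchpad` placeholder, so the agent calls the same tools through OpenAI's native tool-calling interface.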