Almost a year ago, people were handed a human-mimicking technology they never asked for. Since the launch of OpenAI's chatbot, no one has been spared the pressure of learning how tools like ChatGPT can improve their personal and professional lives. At the foundation of these tools are large language models (LLMs) built on millions of data points and billions of dollars. But the chatbots from big tech companies have yet to reap results.
“Essentially, we’re against the narrative of OpenAI, Anthropic and Cohere. We’re much more aligned with the open source side, which tends to lean towards smaller, specialised models as opposed to one model to rule them all,” said Mark McQuade, co-founder of Arcee.ai, an AI startup that develops domain-specific LLMs.
“Although large foundational models certainly have their place, companies are trying to shove 20 use cases into one model. Each use case should have its own small language model in order for that to be scalable and efficient,” he added.
For example, a customer support language model does not need to write poetry. It is like having a thousand-piece toolset when all you need is a single screwdriver. Smaller, specialised models can also be trained far more efficiently.
Moreover, the larger the model, the greater the possibility of hallucinations, because it carries a mass of unneeded data that dilutes the importance of the data that actually matters.
But that’s just the foundation, and not how you get the best return on investment from an LLM, noted McQuade, who previously served as the ML success & business development lead at Hugging Face, the open source platform.
Seeing the opportunity, McQuade and his team built an End2End RAG system that sits on top of the main LLM. The way to get the most ROI is to pair an in-domain specialised model with this system, he said.
A report published by The Wall Street Journal two weeks ago brought to light that big tech companies have not yet been able to generate profits from their generative AI products. It stated that Microsoft has lost money on the first of these products, according to a person with knowledge of the figures. Microsoft and Google are now launching AI-backed upgrades to their software with higher price tags, while Zoom Video Communications has tried to mitigate costs by sometimes using a simpler AI it developed in-house.
McQuade further explained that the team has built its own form of retrieval-augmented generation (RAG) that sits on top of the model.
“The RAG that you see today is really glorified prompt engineering. The most common standard RAG flow you see today is completely unaware of the context of your data. Without any real understanding of your data, it sends the lookup plus the original query to GPT,” he added.
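To make that contrast concrete, a minimal sketch of the standard flow McQuade is describing might look like the following. The embedding model, vector store and LLM call here are hypothetical placeholders, not Arcee's implementation.

```python
# A minimal sketch of the "glorified prompt engineering" RAG flow:
# embed the query, pull the nearest chunks from a vector store,
# and stuff them into a prompt for a frozen, general-purpose LLM.
# `embed`, `vector_store` and `call_llm` are hypothetical stand-ins.

def naive_rag_answer(query: str, vector_store, call_llm, embed, top_k: int = 5) -> str:
    query_vector = embed(query)                          # encode the question
    chunks = vector_store.search(query_vector, k=top_k)  # nearest-neighbour lookup
    context = "\n\n".join(chunk.text for chunk in chunks)

    # The retriever and the LLM never share gradients; the model is
    # "unaware of the context of your data" beyond this pasted-in prompt.
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}\nAnswer:"
    )
    return call_llm(prompt)
```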
His team has built an end-to-end RAG system in which the entire RAG architecture is first trained on the data provided: the retriever and generator models are trained as one system, simultaneously, so they feed off each other and become much more contextually aware of the data. After the system is tuned, users can hit it for inference and add more data to their knowledge base, as they would in a typical RAG system.
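A toy illustration of what joint training of that kind can look like, in the spirit of the original RAG paper rather than Arcee's actual code, is sketched below: the retriever's document scores weight the generator's loss, so one backward pass updates both components.

```python
import torch
import torch.nn.functional as F

# Toy sketch (not Arcee's system) of training a retriever and generator jointly:
# document scores from the retriever weight the generator's loss, so gradients
# flow into both components at once. Dimensions and models are tiny placeholders.

EMB_DIM, VOCAB = 64, 1000

query_encoder = torch.nn.Linear(VOCAB, EMB_DIM)      # stand-in dense query encoder
doc_encoder = torch.nn.Linear(VOCAB, EMB_DIM)        # stand-in dense document encoder
generator = torch.nn.Linear(VOCAB + EMB_DIM, VOCAB)  # stand-in generator head

optimizer = torch.optim.Adam(
    list(query_encoder.parameters())
    + list(doc_encoder.parameters())
    + list(generator.parameters()),
    lr=1e-3,
)

def train_step(query_bow, doc_bows, target_token):
    """One joint update: retrieval scores and generation loss trained together."""
    q = query_encoder(query_bow)                  # (EMB_DIM,)
    d = doc_encoder(doc_bows)                     # (num_docs, EMB_DIM)
    doc_log_probs = F.log_softmax(d @ q, dim=0)   # retriever's distribution over docs

    # Condition the generator on each retrieved document and marginalise:
    # p(y|x) = sum_z p(z|x) * p(y|x, z)
    inputs = torch.cat([query_bow.expand(d.size(0), -1), d], dim=-1)
    token_log_probs = F.log_softmax(generator(inputs), dim=-1)[:, target_token]
    loss = -torch.logsumexp(doc_log_probs + token_log_probs, dim=0)

    optimizer.zero_grad()
    loss.backward()    # gradients reach the generator AND both encoders
    optimizer.step()
    return loss.item()

# Dummy data: a bag-of-words query, three candidate documents, one target token.
loss = train_step(torch.rand(VOCAB), torch.rand(3, VOCAB), target_token=42)
```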
“The smaller system greatly reduces hallucinations,” he stated. Apart from reduced hallucinations, there is also a drastic difference from a cost perspective. Two billion tokens hitting GPT-4 cost about $360,000; two billion tokens hitting our system, if it runs inside a virtual private cloud (VPC), cost about $30,000 in compute, McQuade said.
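For a back-of-the-envelope sense of that gap, the per-token rates below are simply implied by the totals McQuade cites, not quoted prices from either vendor.

```python
# Back-of-the-envelope check on the figures McQuade cites:
# the per-1K-token rates are implied by his totals, not quoted prices.
tokens = 2_000_000_000

gpt4_total = 360_000   # USD, per McQuade
vpc_total = 30_000     # USD compute cost inside a VPC, per McQuade

print(gpt4_total / (tokens / 1_000))   # ~ $0.18 per 1K tokens
print(vpc_total / (tokens / 1_000))    # ~ $0.015 per 1K tokens
print(gpt4_total / vpc_total)          # ~ 12x cost difference
```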
Interestingly, the team’s lead NLP researcher is an author of a 2021 paper on fine-tuning the entire RAG architecture, and spent four years of his PhD thesis attacking domain adaptation of LLMs.
The Experimental Phase
When the cloud entered the market in 2006, people started playing with it. It was a slow adoption curve, but everyone is on the cloud today. McQuade believes generative AI will follow the same path.
“People need to test it and that’s what they’re doing. We firmly believe in a world of millions, if not billions, of models — essentially a model per task. On the closed source side, you’re gonna get bigger multimodal models, and on the open source side, they’re gonna get smaller and more efficient. That’ll be a great battle,” he said. This explains why Microsoft, AWS and Google are all backing Meta’s Llama or integrating it into their offerings.
He is betting that the bigger model will not win. From a technology standpoint, the next really big thing is obviously multimodal, and beyond that it is going to be agents and synthetic dataset generation. Agents will allow you not only to get responses from LLMs but to complete tasks. “We are taking the focus on synthetic dataset generation, and language models are only as good as the data,” McQuade shared.
Data is the hardest piece of any model, whether it is training or fine-tuning. There has been a big push recently towards generating high-quality synthetic data, and that will be one of the biggest waves in the next three to six months, as teams won’t need to rely solely on messy unstructured data, McQuade concluded.
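As a rough illustration of what synthetic dataset generation can look like in practice, the sketch below prompts an existing LLM to turn raw documents into question-answer training pairs. The prompt and the `generate` callable are hypothetical placeholders, not Arcee's pipeline.

```python
import json

# Hypothetical sketch of synthetic dataset generation: prompt an existing LLM
# to turn raw, unstructured documents into structured Q&A pairs that can later
# train or fine-tune a small domain-specific model. `generate` is a placeholder
# for whatever LLM call a team actually uses.

PROMPT_TEMPLATE = (
    "Read the passage below and write one question a customer might ask, "
    "followed by the answer, as JSON with keys 'question' and 'answer'.\n\n"
    "Passage:\n{passage}"
)

def synthesize_pairs(documents, generate):
    pairs = []
    for doc in documents:
        raw = generate(PROMPT_TEMPLATE.format(passage=doc))
        try:
            pairs.append(json.loads(raw))  # keep only well-formed examples
        except json.JSONDecodeError:
            continue                        # discard malformed generations
    return pairs
```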