At the ongoing VivaTech, the annual technology conference for startups in Paris, Meta AI chief Yann LeCun advised students looking to work in the AI ecosystem not to work on LLMs.
“If you are a student interested in building the next generation of AI systems, don’t work on LLMs. This is in the hands of large companies, there’s nothing you can bring to the table,” said LeCun at the conference.
Instead, he said, they should develop next-generation AI systems that overcome the limitations of large language models.
Moving Away from LLMs
Interestingly, the discussion on alternatives to LLM-based models has been ongoing for a while now. Recently, Mufeed VH, the young creator of Devika, a Devin alternative, spoke about how people should move away from Transformer models and start building new architectures.
“Everyone’s doing the same thing, but if we focus on different architectures, like RWKV [an RNN architecture], it would be really good,” said Mufeed, who went on to highlight that architecture’s unlimited context window and efficient inference.
He also believes that with this approach, it is even possible to build something nearly as impressive as GPT-4.
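To see why an RNN-style architecture like RWKV can claim constant-cost inference over an effectively unlimited context, consider a toy recurrence in the spirit of its token-mixing step. The sketch below is illustrative only, not the actual RWKV implementation: the function name, the single scalar decay, and the NumPy setup are assumptions made for clarity.

```python
import numpy as np

def rwkv_style_mixing(keys, values, decay=0.95):
    """Toy linear-attention-style recurrence inspired by RWKV's token mixing.

    The running state is just two fixed-size accumulators, so memory stays
    constant however long the sequence grows, unlike a transformer's
    key/value cache, which grows linearly with context length.
    """
    num = np.zeros_like(values[0])   # decayed weighted sum of past values
    den = np.zeros_like(values[0])   # matching normaliser
    outputs = []
    for k, v in zip(keys, values):
        w = np.exp(k)                # positive weight for the current token
        num = decay * num + w * v    # old information decays, new is folded in
        den = decay * den + w
        outputs.append(num / (den + 1e-8))
    return np.stack(outputs)

# Example: a 1,000-token sequence with 16-dim values; the recurrent state
# remains two 16-dim vectors no matter how many tokens are processed.
rng = np.random.default_rng(0)
out = rwkv_style_mixing(rng.normal(size=(1000, 16)),
                        rng.normal(size=(1000, 16)))
print(out.shape)  # (1000, 16)
```

Each token is processed in constant time and space, which is the property Mufeed points to when contrasting such architectures with attention-based Transformers.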
LeCun has been a prominent advocate of moving away from LLMs and of taking control out of the hands of a few large companies, which is also one reason he pushes for open source.
“Eventually all our interactions with the digital world will be mediated by AI assistants,” he said, warning that a small number of AI assistants should not be allowed to control the entire digital world.
“This will be extremely dangerous for diversity of thought, for democracy, for just about everything,” he said.
But LLMs are Only Advancing
While LeCun might be against LLMs, transformer-based models continue to evolve. Dan Hou, an AI/ML advisor, spoke about GPT-4o and emphasised its training approach.
Whereas text was long believed to be the basis for all sophisticated models, GPT-4o was designed to understand video and audio natively. This expands the volume of data that future versions can be trained on.
“How much smarter can AI get? With a natively multi-modal architecture, I suspect the answer is much, much better,” said Hou.
Furthermore, in a recent interview, Sam Altman also said that data would no longer be a problem, addressing concerns about training LLMs.