GlobalFoundries, which manufactures chips for clients including AMD and General Motors, previously announced a partnership with Lightmatter. Harris says his company is also working with major semiconductor companies and with hyperscalers, the largest cloud providers such as Microsoft, Amazon, and Google.
If Lightmatter or another company can revolutionize the infrastructure of large AI projects, a significant bottleneck in the advancement of smarter algorithms could be eliminated. The increased use of computation has been crucial for developments like ChatGPT, and many AI researchers believe that scaling up hardware is essential for future progress in the field, including the aspiration of achieving artificial general intelligence (AGI).
Connecting a million chips with light could make possible algorithms several generations ahead of today's state of the art, according to Lightmatter CEO Nick Harris, who is confident that Passage could enable the algorithms needed for AGI.
The data centers used to train large AI algorithms typically consist of racks filled with tens of thousands of computers running specialized silicon chips, linked by a snarl of electrical wiring. Coordinating a training run across so many wired systems is a major engineering challenge, and converting signals between electronic and optical form further limits how much computation the chips can deliver.
Lightmatter’s approach aims to simplify the internal communication within AI data centers. Harris explains that in traditional setups, communication between GPUs involves traversing multiple layers of switches, whereas in a Passage-connected data center, every GPU would have direct high-speed connections to all other chips.
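Harris's point about switch traversal can be sketched with a toy hop-count model. The topology parameters and hop counts below are illustrative assumptions for a generic three-tier switched cluster versus a hypothetical all-to-all optical fabric; they are not Lightmatter or Nvidia specifics:

```python
# Illustrative sketch (not Lightmatter's actual design): count the network
# hops a message makes between two GPUs in a conventional three-tier
# switched cluster versus a hypothetical fabric where every GPU has a
# direct optical link to every other GPU.

RACK_SIZE = 8  # assumed number of GPUs per top-of-rack switch

def switched_hops(gpu_a: int, gpu_b: int) -> int:
    """Hops in a simple rack / aggregation / core switched topology."""
    if gpu_a == gpu_b:
        return 0
    if gpu_a // RACK_SIZE == gpu_b // RACK_SIZE:
        return 2  # up to the shared rack switch and back down
    # rack -> aggregation -> core -> aggregation -> rack -> GPU
    return 6

def direct_hops(gpu_a: int, gpu_b: int) -> int:
    """With a direct link between every pair of GPUs, one hop suffices."""
    return 0 if gpu_a == gpu_b else 1

print(switched_hops(0, 3))    # same rack -> 2
print(switched_hops(0, 100))  # different racks -> 6
print(direct_hops(0, 100))    # direct optical link -> 1
```

The gap between the two models widens as clusters grow: in the switched case, cross-rack traffic always pays for extra switch traversals, which is the overhead a direct all-to-all interconnect is meant to remove.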
Lightmatter’s work on Passage exemplifies how the recent AI boom has prompted companies of all sizes to rethink the key hardware behind innovations like OpenAI’s ChatGPT. Nvidia, the leading GPU supplier for AI projects, recently unveiled its latest chip for AI training, the Blackwell GPU, at its annual conference. Nvidia’s superchip pairs two Blackwell GPUs with a conventional CPU, interconnected using NVLink-C2C, the company’s new high-speed communications technology.
The chip industry is famous for squeezing more computing power out of chips without making them larger, but Nvidia chose to buck that trend: doubling the power of its Blackwell GPUs by combining two dies means higher power consumption. That trade-off, along with Nvidia’s emphasis on high-speed links between chips, suggests that upgrades to the other key components of AI supercomputers, like those Lightmatter proposes, could become increasingly significant.