In a significant development for AI technology, Multiverse Computing has raised €189 million in Series B funding. The company has drawn attention with CompactifAI, its quantum-inspired AI model compression technology, which it says can shrink large language models (LLMs) by up to 95% without compromising performance. As LLMs become central to tech ecosystems, compression tools like CompactifAI are increasingly important for containing the infrastructure costs of running them across sectors.
Previously, AI model compression relied mainly on techniques such as quantisation and pruning, which often produced models that fell short of the original LLM's performance. CompactifAI, in contrast, aims to preserve the efficacy of the original model. Earlier compression efforts frequently struggled to balance size reduction with effectiveness; CompactifAI's results suggest a promising shift in that trade-off.
What Sets CompactifAI Apart?
CompactifAI leverages tensor networks, a method pioneered by Multiverse co-founder Román Orús, to simplify the structure of neural networks. Unlike traditional approaches, it produces highly compressed AI models that the company says match the performance of their original counterparts. The compressed models are also faster and more cost-effective at inference, making them practical to deploy across diverse hardware, from PCs and mobile devices to vehicles.
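CompactifAI's actual pipeline is proprietary, but the core idea behind tensor-network compression, replacing a large weight matrix with a product of much smaller factors, can be sketched with an ordinary truncated SVD, the simplest relative of these factorisations. The matrix size and rank below are illustrative assumptions, not CompactifAI parameters:

```python
import numpy as np

# Stand-in for one dense layer's weight matrix (1024 x 1024 ~ 1M parameters).
rng = np.random.default_rng(0)
W = rng.standard_normal((1024, 1024))

# Truncated SVD: keep only the top-k singular components, so W is
# approximated by the product of two thin factors A (1024 x k) and B (k x 1024).
rank = 25
U, s, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :rank] * s[:rank]   # absorb singular values into the left factor
B = Vt[:rank, :]

# Parameter count drops from 1024*1024 to 2*25*1024.
original_params = W.size
compressed_params = A.size + B.size
print(f"compression: {1 - compressed_params / original_params:.0%}")  # prints "compression: 95%"
```

In practice, tensor-network methods go further by reshaping weights into higher-order tensors and factorising them into chains of small cores, then fine-tuning briefly to recover accuracy; this sketch only illustrates why factorised layers need far fewer parameters.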
Who Are the Key Players?
The latest funding round was spearheaded by Bullhound Capital, with participation from renowned investors including HP Tech Ventures and Toshiba. This collective interest underscores the growing recognition of Multiverse’s contributions to AI model efficiency.
“Roman Orus has convinced us that he and his team of engineers are developing truly world-class solutions in this highly complex and compute-intensive field,”
said Per Roman, representing lead investor Bullhound Capital.
Enrique Lizaso Olmos, Multiverse’s CEO, remarked on the positive reception of CompactifAI, emphasizing its pivotal role in redefining AI resource requirements.
“What started as a breakthrough in model compression quickly proved transformative — unlocking new efficiencies in AI deployment and earning rapid adoption for its ability to radically reduce the hardware requirements for running AI models.”
The collaboration between Multiverse Computing and influential backers such as HP is part of an industry-wide push towards more efficient AI systems.
“At HP, we are dedicated to leading the future of work by providing solutions that drive business growth and enhance professional fulfilment,”
noted Tuan Tran, HP’s President of Technology and Innovation, highlighting the strategic alignment of HP with Multiverse’s mission.
The funding will accelerate Multiverse's deployment of compressed LLMs, which is critical to lowering the prohibitive costs currently associated with their widespread adoption. The raise also reflects the industry's broader push toward more scalable AI solutions for varied applications.
CompactifAI marks a notable departure from earlier compression methods that struggled to balance size reduction with model quality. Such advances promise to improve AI deployment efficiency and could reshape global AI infrastructure strategies. Stakeholders across industries will be watching as Multiverse Computing continues to push the technology forward.