Fractile, a UK-based AI chip company founded by Walter Goodwin in 2022, emerged from stealth mode today, announcing a substantial $15 million in seed funding. The company aims to disrupt the AI hardware market with a novel AI chip designed to run model inference up to 100 times faster and at 10 times lower cost than current solutions. This advancement addresses the limitations of existing AI chips, which hinder the efficiency and scalability of AI models.
Fractile’s approach to AI chip design diverges significantly from historical methods. Previous AI hardware advancements have often focused on incremental improvements and specialization for specific workloads. In contrast, Fractile aims to fundamentally change how computational operations are performed, leveraging in-memory compute to eliminate the need to shuttle data between memory and processors. This novel method positions Fractile as a pioneer in the AI hardware industry, setting new performance benchmarks.
In earlier reports, AI chip development faced challenges due to the rapid evolution of model architectures juxtaposed with the lengthy chip design process. Fractile’s solution, which integrates computational operations directly into memory, addresses these challenges head-on. This innovative approach ensures compatibility with existing silicon foundry processes, facilitating seamless integration into current AI infrastructures. Notably, this marks a significant departure from previous hardware solutions that struggled to keep pace with AI advancements.
Unique Computational Approach
Fractile’s AI chip leverages novel circuits to perform 99.99% of the operations required for model inference, targeting performance that is 100 times faster and 10 times cheaper. This shift to in-memory compute allows computational operations to be conducted directly within the memory, eliminating the need to transfer model parameters between memory and processor chips. The approach not only boosts speed and reduces costs but also remains compatible with the unmodified silicon foundry processes used to manufacture today’s leading AI chips.
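To see why eliminating parameter transfer matters, consider that autoregressive inference on conventional hardware must stream every model weight from memory for each generated token, so memory bandwidth, not arithmetic, often sets the speed limit. The sketch below is a rough illustration with hypothetical numbers (model size, bandwidth, and the `decode_time_per_token_s` helper are all assumptions for illustration, not Fractile's figures):

```python
# Illustrative sketch (hypothetical numbers): on conventional hardware,
# decoding must move every weight from memory to the processor once per
# token, so memory bandwidth lower-bounds the time per token. In-memory
# compute aims to remove this transfer entirely.

def decode_time_per_token_s(model_bytes: float, mem_bandwidth_bytes_s: float) -> float:
    """Lower bound on per-token decode time: bytes of weights streamed
    divided by memory bandwidth."""
    return model_bytes / mem_bandwidth_bytes_s

# Hypothetical 70B-parameter model stored as 16-bit (2-byte) weights.
model_bytes = 70e9 * 2
# Hypothetical 3 TB/s of memory bandwidth (order of magnitude for a
# modern high-end accelerator).
bandwidth = 3e12

t = decode_time_per_token_s(model_bytes, bandwidth)
print(f"~{t * 1e3:.1f} ms per token just to move weights")  # ~46.7 ms
```

Even before any arithmetic is counted, the data movement alone caps throughput at roughly 20 tokens per second in this toy scenario, which is the bottleneck in-memory compute targets.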
“Fractile’s approach supercharges inference, delivering astonishing improvements in terms of speed and cost. This is more than just a speed-up – changing the performance point for inference allows us to explore completely new ways to use today’s leading AI models to solve the world’s most complex problems,” said Dr. Walter Goodwin, CEO and Founder of Fractile.
Energy Efficiency and Scalability
The new chip design also promises significant reductions in power consumption, a critical factor in scaling AI compute performance. Fractile’s system aims to achieve 20 times the tera operations per second per watt (TOPS/W) of other currently available systems. This energy efficiency allows the system to serve more users simultaneously while reducing operational costs, thus enabling broader AI application deployment.
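The practical meaning of a 20x TOPS/W advantage can be shown with simple arithmetic. The numbers below are purely illustrative assumptions (the baseline efficiency and throughput target are invented for the example, not drawn from any vendor spec):

```python
# Illustrative arithmetic (hypothetical numbers): what a 20x TOPS/W
# advantage implies for power draw at a fixed inference throughput.

def watts_for_throughput(target_tops: float, tops_per_watt: float) -> float:
    """Power needed to sustain a given throughput at a given efficiency."""
    return target_tops / tops_per_watt

baseline_tops_per_watt = 2.0                            # hypothetical conventional accelerator
improved_tops_per_watt = 20 * baseline_tops_per_watt    # the claimed 20x improvement

target = 1000.0  # hypothetical target of 1,000 TOPS sustained

print(watts_for_throughput(target, baseline_tops_per_watt))  # 500.0 W
print(watts_for_throughput(target, improved_tops_per_watt))  # 25.0 W
```

At the same throughput, power drops in direct proportion to the efficiency gain, which is what allows a single system to serve more users within the same power and cooling budget.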
“AI is evolving so rapidly that building hardware for it is akin to shooting at a moving target in the dark. Because Fractile’s team has a deep background in AI, the company has the depth of knowledge to understand how AI models are likely to evolve, and how to build hardware for the requirements of not just the next two years, but 5-10 years into the future,” said John Cassidy, Partner at Kindred Capital.
This funding round, led by Kindred Capital, NATO Innovation Fund, and Oxford Science Enterprises, with participation from Cocoa and Inovia Capital, will enable Fractile to grow its team and accelerate product development. The company has already attracted top talent from industry leaders like NVIDIA, ARM, and Imagination, and has begun securing key patents for its technology. Fractile is in active discussions with potential partners and anticipates commercializing its first AI accelerator hardware soon.
Fractile’s innovative approach offers substantial performance and cost advantages, crucial for the future scalability of AI. By integrating computational operations within memory, Fractile addresses current hardware limitations, enabling more efficient and cost-effective AI model deployment. These advancements are expected to open new possibilities for AI applications across various fields, including drug discovery, climate modeling, and more.