Farang, a small but ambitious AI research lab in Stockholm, has drawn significant attention by securing €1.5 million in seed funding to develop large language models (LLMs) built on a new architectural design. The lab aims to rethink how these models work compared with prevailing systems such as ChatGPT, Claude, and Gemini: its approach processes an entire response internally before converting it into text, akin to visualizing a painting before putting it on canvas. This stands in contrast to conventional word-by-word text generation and promises a potentially more efficient way of producing output.
To date, AI models have largely been built on the transformer architecture, which remains dominant. Although transformer models have added mechanisms that simulate “thinking” time, they still fundamentally rely on sequential word-by-word prediction. Farang’s proposed architecture departs from this established method, emphasizing internal reasoning mechanisms that could reduce computational demands. The company’s pursuit of this novel avenue has already garnered support, including past educational partnerships and trials in niche applications.
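The contrast between sequential prediction and whole-response generation can be sketched conceptually. This toy example is illustrative only and does not reflect Farang’s unreleased architecture; the `build_plan` and `decode_plan` steps are hypothetical stand-ins for an internal reasoning pass:

```python
# Conceptual contrast between two decoding strategies.
# Toy illustration only -- not Farang's actual (unpublished) architecture.

def autoregressive_generate(prompt, predict_next, max_tokens=50):
    """Transformer-style decoding: one token at a time, each prediction
    conditioned on everything generated so far (one pass per token)."""
    tokens = list(prompt)
    for _ in range(max_tokens):
        nxt = predict_next(tokens)   # one forward pass per token
        if nxt is None:              # end-of-sequence signal
            break
        tokens.append(nxt)
    return tokens

def plan_then_decode(prompt, build_plan, decode_plan):
    """Hypothetical 'paint the whole picture first' decoding: form an
    internal representation of the full response, then render it as text."""
    plan = build_plan(prompt)        # single internal reasoning pass
    return decode_plan(plan)         # convert the complete plan to text

# Toy stand-ins so the sketch runs end to end.
def predict_next(tokens):
    return "word" if len(tokens) < 5 else None

print(autoregressive_generate(["hello"], predict_next))
# ['hello', 'word', 'word', 'word', 'word']
```

The point of the contrast: the first function pays one model invocation per output token, while the second pays for a single planning pass plus one rendering step, which is where a new architecture could save compute.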
What Distinguishes Farang’s Large Language Models?
Farang’s development is distinguished by its focus on specialized applications where existing AI models fall short, such as particular programming environments and specific medical fields. Currently, Farang is concentrating on React development, aiming to generate more optimized, iterative code than conventional LLMs produce. The technology’s ability to handle such niche fields opens opportunities in largely unexplored areas.
Why is Data Privacy a Key Concern?
Farang’s technology lets organizations deploy specialized AI models on their own private infrastructure, which could address many data-privacy concerns in fields such as healthcare and law. With on-premises models and built-in privacy controls, sensitive data can be managed internally rather than exposed to external AI services. This privacy-centric design is especially important for companies that must guarantee data sovereignty when handling sensitive information.
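In practice, on-premises deployment usually means pointing client code at an inference endpoint inside the organization’s own network rather than an external provider, so prompts containing sensitive data never cross the network boundary. A minimal sketch follows; the endpoint URL and payload shape are assumptions for illustration, not a documented Farang API:

```python
import json
import urllib.request

# Hypothetical on-premises inference endpoint: the model runs inside the
# organization's network, so prompts with sensitive data stay internal.
# The URL and payload shape are illustrative assumptions.
INTERNAL_ENDPOINT = "http://llm.internal.example:8080/v1/generate"

def build_request(prompt: str) -> urllib.request.Request:
    """Build a POST request to the internal model endpoint."""
    payload = json.dumps({"prompt": prompt}).encode("utf-8")
    return urllib.request.Request(
        INTERNAL_ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
    )

req = build_request("Summarise this patient record")
print(req.full_url)  # http://llm.internal.example:8080/v1/generate
```

Because the hostname resolves only inside the corporate network, the same client code enforces data sovereignty by construction: there is no external API key and no third-party service in the request path.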
Romanus, the founder of Farang, explains that the team is not building on top of existing models but starting from scratch with a new foundational architecture aimed initially at specific domains:
We’re not building another application layer on top of existing models. We’ve developed a completely new foundational architecture that enables us to create specialised AI assistants that outperform current solutions in specific domains like programming and medicine, while using twenty-five times less computational resources.
His vision extends to having companies operate these AI systems integrated with their own infrastructure to ensure complete privacy and control.
The funding round, backed by Voima Ventures and the Amadeus APEX Technology Fund along with several angel investors, underscores market confidence in Farang’s direction. Voima Ventures’ Inka Mero remarked on the contribution Farang’s technology could make to Europe’s position in the global AI landscape.
Ion Hauer of APEX Ventures likewise highlighted the potential of Farang’s methodology, expressing a broader belief that AI development is due for an architectural shift. The focus now is on scaling proof-of-concept models and investing in computational power for specialized sectors such as programming and medicine.
Farang’s emergence introduces a noteworthy shift in the artificial intelligence sector. By departing from typical transformer models, it presents an architectural advance that could redefine AI efficiency and capability. Its intent to serve specialized markets while ensuring privacy protection could attract clients from sectors traditionally reluctant to deploy AI over data-sensitivity concerns. As development unfolds, real-world performance will reveal whether Farang’s ambitious roadmap is viable.
