Resemble AI, a company specializing in AI-powered deepfake detection, has secured $13 million in funding. The investment will drive further development of tools that protect organizations from a growing threat: deepfake cyberattacks. With AI-related fraud on the rise, Resemble AI aims to shield businesses from increasingly sophisticated digital deceptions as the technology advances and the risks grow more severe.
Earlier reports raised concerns about AI-generated fraud but addressed emerging digital threats broadly, without singling out specifics such as deepfakes. As AI technology evolves, so does the complexity of cyber fraud, and attention has now shifted to deepfakes because of their growing impact and the difficulty of detecting them. The shift reflects a broader trend in cybersecurity: where earlier focus was diffuse, today's challenges are sharply defined.
What Are Resemble AI’s Plans With the New Funding?
Resemble AI plans to use the new capital to enhance its AI platform, which comprises two primary solutions. The first, DETECT-3B Omni, identifies deepfakes across formats including audio, imagery, video, and text. It is already in use by several prominent customers, including government bodies, leading entertainment firms, and Fortune 500 telecommunications companies.
How Does Resemble AI’s Intelligence Model Assist Organizations?
The second solution, the Intelligence model, delivers clear, factual context for generated content. It helps organizations not only validate whether content is authentic but also understand why. This deeper analysis gives companies better tools to combat digital threats by exposing the underlying mechanisms of deceptive AI-generated content.
In the current environment, AI is increasingly leveraged for malicious intent, making robust cybersecurity essential. According to Zac Cohen from Trulioo, “The sophisticated tooling to defraud a system … is now available to a much wider swath of bad actors.”
Echoing that sentiment, William Fitzgerald from WEX notes, “The barrier to entry into becoming a fraudster at scale is essentially gone.” Such conditions underscore the urgency for companies to stay ahead of malicious actors.
Research further indicates that identity gaps affect about 3% of global revenue, roughly $95 billion annually. Alarmingly, nearly 60% of firms struggle to combat bot-related fraud despite expressing confidence in their ability to handle such threats. That disconnect highlights how far many organizations are from truly understanding and addressing these vulnerabilities.
Continuous adaptation and innovation in cybersecurity infrastructure are necessary to counter sophisticated threats like deepfakes. Investing in cutting-edge solutions and regularly revisiting security strategies can more effectively safeguard businesses against these evolving challenges. Such measures will shape the future of digital safety and can significantly reduce the risks associated with AI-driven fraud.
