Safe Superintelligence Inc. (SSI), a newly established US company co-founded by former OpenAI chief scientist Ilya Sutskever, has raised $1 billion in funding. The company, whose sole focus is creating a safe superintelligent AI, plans to use the funds to acquire computing power and to hire researchers and engineers in Palo Alto and Tel Aviv. SSI frames its mission as solving what it calls “the most important technical problem of our time”: developing superintelligence that is both highly capable and inherently safe.
SSI’s funding round was led by NFDG, an investment partnership formed by Nat Friedman and SSI CEO Daniel Gross. Prominent venture capital firms including a16z (Andreessen Horowitz), Sequoia Capital, DST Global, and SV Angel also participated. SSI’s approach treats safety and capability as goals to be advanced in tandem, setting it apart from firms that may prioritize development speed over safety.
Unique Approach to Superintelligence
Ilya Sutskever, along with co-founders Daniel Gross and Daniel Levy, has structured SSI as a conventional for-profit company with a rigorous focus on safety. Gross is responsible for computing and fundraising, while Levy serves as principal scientist. The team hires on talent and character rather than formal credentials, a strategy intended to attract people genuinely dedicated to the mission. Unlike other AI labs, SSI aims to “scale in peace,” insulated from management overhead and product cycles.
Strategic Partnerships and Future Plans
The round values SSI at $5 billion. The company plans to partner with cloud providers and chip companies to meet its computing needs, though it has not yet disclosed which firms it will work with. Securing that infrastructure is essential to its goal of building a safe superintelligence. Sutskever has also signaled that SSI will approach scaling differently from its peers, questioning the prevailing “scaling hypothesis” and suggesting the company will chart its own path.
From its founding statements onward, SSI has emphasized this singular mission. While many AI labs concentrate on rapid development and market penetration, SSI has consistently foregrounded safety and ethical implications. This funding round, and the plans that follow from it, support a long-term strategy of ensuring that AI advances remain secure and responsible, a stance the company has reiterated in its public statements.
SSI’s strategic choices and the team’s expertise reinforce that commitment. By pursuing safety and capability together, the company aims to lead the AI sector in a responsible and innovative manner. Its planned partnerships with cloud providers and chip companies will supply the infrastructure the mission requires, and its pitch of revolutionary engineering and scientific breakthroughs in an ethically focused lab is designed to appeal to top candidates.
The alignment of SSI’s mission, name, and product roadmap underscores its focus. Unlike many tech companies that juggle multiple objectives, SSI is dedicated to a single goal: building a superintelligence that benefits society. If its approach of balancing rapid capability development with stringent safety measures succeeds, it could have a transformative impact on the AI landscape. As SSI grows and attracts talent, its progress and methods may well push the broader AI community toward safer, more responsible development practices.