Eric Schmidt, former CEO of Google (NASDAQ:GOOGL), is making a significant investment in artificial intelligence (A.I.) safety research through a new $10 million initiative. The funding will support academic studies of the scientific foundations of A.I. safety rather than merely cataloging potential risks. The program is run through Schmidt Sciences, a nonprofit Schmidt co-founded with his wife, Wendy, to advance scientific discovery. It stands apart from his previous investments in A.I. startups such as Stability AI, Inflection AI, and Mistral AI in that it specifically targets safety challenges in the rapidly evolving A.I. landscape.
Previous discussions of A.I. safety have often centered on regulatory concerns and ethical implications, with limited direct funding for deep technical research. Where earlier initiatives focused on policy recommendations, Schmidt's program prioritizes advancing safety science through academic research. That shift could yield more concrete solutions to A.I. safety challenges, addressing systemic risks at the technical level rather than solely through regulation.
What does the new program aim to accomplish?
The initiative will provide grants of up to $500,000 to more than two dozen researchers, along with computational resources and access to A.I. models. The goal is to investigate systemic safety risks in the A.I. systems people use today rather than in outdated models such as OpenAI's GPT-2.
“We want to be solving for problems that are of the current systems that people use today,”
said Michael Belinsky, who heads the program. The focus is meant to keep the research relevant as A.I. technologies continue to evolve rapidly.
Who are the key researchers involved?
Several prominent figures in the A.I. research community are among the funding recipients. Yoshua Bengio, known for his pioneering work in deep learning, will develop risk-mitigation strategies for A.I. systems. Zico Kolter, a professor at Carnegie Mellon University and an OpenAI board member, will study adversarial transfer, a phenomenon in which attacks that exploit vulnerabilities in one A.I. model can also compromise other models. Daniel Kang of the University of Illinois Urbana-Champaign will investigate whether A.I. can autonomously carry out cyberattacks.
Kang emphasized the broader implications of his research: if A.I. can execute cyberattacks on its own, it might also be capable of replicating itself across the internet.
“If A.I. can autonomously perform cyberattacks, you could also imagine this being the first step of A.I. potentially escaping control of a lab and being able to replicate itself on the wider internet,”
Kang said. Such concerns highlight the urgency of developing robust safety mechanisms.
The initiative arrives at a time when industry leaders and policymakers are increasingly focusing on the economic benefits of A.I., sometimes at the expense of safety discussions. A recent A.I. summit in Paris, for instance, removed “safety” from its title, emphasizing optimism about A.I.’s economic impact. Some researchers worry that competitive pressures might deprioritize safety measures.
“I really, really hope that the major labs take their responsibility very seriously and use some of the work that has been coming out of my lab and other labs to accurately test their models,”
Kang stated.
While efforts to regulate A.I. have often stalled, this initiative directly funds technical research that could yield practical safety tools. By addressing systemic risks in existing models, it could complement regulatory efforts and help bridge the gap between academic research and industry practice. The program's effectiveness will ultimately depend on whether its findings are incorporated into the development of next-generation A.I. systems.