OpenAI Establishes Team to Investigate 'Catastrophic' AI Risks, Including Nuclear Threats

OpenAI has established the Preparedness team, led by Aleksander Madry, to address potentially catastrophic risks associated with AI. The team will track, forecast, and protect against a range of AI risks, from models' capacity to manipulate humans to threats in the "chemical, biological, radiological, and nuclear" domains. OpenAI is also soliciting risk-study ideas from the community through a competition. In addition, the Preparedness team will draft a risk-informed development policy covering both the pre- and post-deployment phases of model development. OpenAI says advanced AI models could benefit all of humanity, but stresses that the infrastructure and understanding needed to manage their increasing risks must be in place first.

OpenAI has unveiled a new initiative aimed at studying and mitigating the potential dangers posed by AI, particularly the far-reaching, "catastrophic" risks it might present. The move underscores OpenAI's commitment to the responsible development of artificial intelligence.

OpenAI, known for its pioneering work in the AI domain, recently introduced the "Preparedness" team, led by Aleksander Madry, director of MIT's Center for Deployable Machine Learning. Madry joined OpenAI as "head of Preparedness" in May and is tasked with overseeing the effort.

The core mission of the Preparedness team encompasses monitoring, forecasting, and safeguarding against a range of potential threats arising from AI systems. These threats span from AI's capacity to deceive and manipulate humans, as seen in phishing attacks, to its potential for generating malicious code. While some of these risks may appear speculative, OpenAI, in a blog post, highlights areas such as "chemical, biological, radiological, and nuclear" threats as top concerns in relation to AI models.

OpenAI's CEO, Sam Altman, has frequently voiced concerns about AI's potential negative consequences for humanity. Even so, the commitment to dedicate resources to studying these somewhat futuristic scenarios represents a significant step forward.

In addition to exploring "less obvious," more immediate AI risks, OpenAI is actively seeking input from the wider community. The company has announced a competition soliciting ideas for risk studies, offering a $25,000 prize and the opportunity to join the Preparedness team for the ten most promising submissions.

One of the competition questions poses a scenario: "Imagine we gave you unrestricted access to OpenAI's Whisper (transcription), Voice (text-to-speech), GPT-4V, and DALL·E 3 models, and you were a malicious actor. Consider the most unique, while still being probable, potentially catastrophic misuse of the model."

OpenAI says the Preparedness team will also formulate a "risk-informed development policy." This policy will outline OpenAI's approach to evaluating AI models, monitoring tools, risk-mitigation strategies, and governance structures for oversight throughout the model development process. The goal is to complement OpenAI's ongoing AI safety work, covering both the pre- and post-deployment phases.

In a blog post, OpenAI states, "We believe that AI models, which will exceed the capabilities currently present in the most advanced existing models, have the potential to benefit all of humanity. But they also pose increasingly severe risks. We need to ensure we have the understanding and infrastructure needed for the safety of highly capable AI systems."

The announcement of the Preparedness initiative coincides with a significant event: a prominent U.K. government summit on AI safety. OpenAI's proactive stance on safety and risk mitigation reflects the growing recognition of both AI's transformative potential and its dangers.

OpenAI's leaders, CEO Sam Altman and chief scientist and co-founder Ilya Sutskever, share the belief that AI with intelligence surpassing that of humans may become a reality within the decade. They emphasize the need for research into ways to limit and control such "superintelligent" AI to ensure its responsible and beneficial use.