The government of Singapore released a blueprint today for global collaboration on artificial intelligence safety following a meeting of AI researchers from the US, China, and Europe. The document lays out a shared vision for working on AI safety through international cooperation rather than competition.
“Singapore is one of the few countries on the planet that gets along well with both East and West,” says Max Tegmark, a scientist at MIT who helped convene the meeting of AI luminaries last month. “They know that they’re not going to build [artificial general intelligence] themselves—they will have it done to them—so it is very much in their interests to have the countries that are going to build it talk to each other.”
The countries thought most likely to build AGI are, of course, the US and China, and yet these nations seem more intent on outmaneuvering each other than working together. In January, after Chinese startup DeepSeek released a cutting-edge model, President Trump called it “a wakeup call for our industries” and said the US needed to be “laser-focused on competing to win.”
The Singapore Consensus on Global AI Safety Research Priorities calls for researchers to collaborate in three key areas: studying the risks posed by frontier AI models, exploring safer ways to build those models, and developing methods for controlling the behavior of the most advanced AI systems.
The consensus was developed at a meeting held on April 26 alongside the International Conference on Learning Representations (ICLR), a premier AI event held in Singapore this year.
Researchers from OpenAI, Anthropic, Google DeepMind, xAI, and Meta all attended the AI safety event, as did academics from institutions including MIT, Stanford, Tsinghua, and the Chinese Academy of Sciences. Experts from AI safety institutes in the US, UK, France, Canada, China, Japan, and Korea also participated.
“In an era of geopolitical fragmentation, this comprehensive synthesis of cutting-edge research on AI safety is a promising sign that the global community is coming together with a shared commitment to shaping a safer AI future,” Xue Lan, dean of Tsinghua University, said in a statement.
The development of increasingly capable AI models, some of which have surprising abilities, has caused researchers to worry about a range of risks. While some focus on near-term harms, including problems caused by biased AI systems or the potential for criminals to harness the technology, a significant number believe that AI may pose an existential threat to humanity as it begins to outsmart humans across more domains. These researchers, sometimes known as “AI doomers,” worry that models may deceive and manipulate humans in order to pursue their own goals.
The potential of AI has also stoked talk of an arms race between the US, China, and other powerful nations. The technology is seen in policy circles as critical to economic prosperity and military dominance, and many governments have sought to stake out their own visions and regulations governing how it should be developed.