Singapore’s Vision for AI Safety Bridges the US-China Divide

The government of Singapore released a blueprint today for global collaboration on artificial intelligence safety following a meeting of AI researchers from the US, China, and Europe. The document lays out a shared vision for working on AI safety through international cooperation rather than competition.

“Singapore is one of the few countries on the planet that gets along well with both East and West,” says Max Tegmark, a scientist at MIT who helped convene the meeting of AI luminaries last month. “They know that they’re not going to build [artificial general intelligence] themselves—they will have it done to them—so it is very much in their interests to have the countries that are going to build it talk to each other.”

The countries thought most likely to build AGI are, of course, the US and China—and yet those nations seem more intent on outmaneuvering each other than working together. In January, after Chinese startup DeepSeek released a cutting-edge model, President Trump called it “a wakeup call for our industries” and said the US needed to be “laser-focused on competing to win.”

The Singapore Consensus on Global AI Safety Research Priorities calls for researchers to collaborate in three key areas: studying the risks posed by frontier AI models, exploring safer ways to build those models, and developing methods for controlling the behavior of the most advanced AI systems.

The consensus was developed at a meeting held on April 26 alongside the International Conference on Learning Representations (ICLR), a premier AI event held in Singapore this year.

Researchers from OpenAI, Anthropic, Google DeepMind, xAI, and Meta all attended the AI safety event, as did academics from institutions including MIT, Stanford, Tsinghua, and the Chinese Academy of Sciences. Experts from AI safety institutes in the US, UK, France, Canada, China, Japan, and Korea also participated.

“In an era of geopolitical fragmentation, this comprehensive synthesis of cutting-edge research on AI safety is a promising sign that the global community is coming together with a shared commitment to shaping a safer AI future,” Xue Lan, dean of Tsinghua University, said in a statement.

The development of increasingly capable AI models, some of which have surprising abilities, has caused researchers to worry about a range of risks. While some focus on near-term harms including problems caused by biased AI systems or the potential for criminals to harness the technology, a significant number believe that AI may pose an existential threat to humanity as it begins to outsmart humans across more domains. These researchers, sometimes referred to as “AI doomers,” worry that models may deceive and manipulate humans in order to pursue their own goals.

The potential of AI has also stoked talk of an arms race between the US, China, and other powerful nations. The technology is viewed in policy circles as critical to economic prosperity and military dominance, and many governments have sought to stake out their own visions and regulations governing how it should be developed.


