
Biden administration to host international meeting on AI safety in San Francisco after election

Government scientists and artificial intelligence experts from at least nine countries and the European Union will meet in San Francisco after the US elections to discuss the safe development of AI technology and how to avert its dangers.

On Wednesday, President Joe Biden's administration announced a two-day international meeting on AI safety for November 20 and 21. It comes nearly a year after delegates at an AI safety summit in the United Kingdom pledged to work together to mitigate the potentially catastrophic risks posed by advances in AI.

U.S. Commerce Secretary Gina Raimondo told The Associated Press it would be the “first hands-on meeting” after the summit in Britain and a follow-up meeting in May in South Korea that created a network of government-backed AI safety institutes to advance research and testing of the technology.

Among the most pressing issues experts are expected to grapple with is the steady rise of AI-generated fakes, along with the thorny question of how to tell when an AI system is so powerful or dangerous that it needs guardrails.

“We will think about how we can work with countries to set standards around the risks of synthetic content and the risks of AI being used maliciously by bad actors,” Raimondo said in an interview. “Because if we keep the risks under control, it's incredible what we could achieve.”

The San Francisco meetings will take place in a city that has become a center of the current wave of generative AI technology. Designed as technical collaboration on safety measures ahead of a broader AI summit in Paris in February, they come about two weeks after the presidential election between Vice President Kamala Harris – who helped shape the U.S. stance on AI risks – and former President Donald Trump, who has vowed to reverse Biden's AI policies.

Raimondo and Secretary of State Antony Blinken announced that their agencies will co-host the conference, which will involve a network of newly formed national AI safety institutes in the United States and the United Kingdom, as well as Australia, Canada, France, Japan, Kenya, South Korea, Singapore and the 27-country European Union.

Missing from the list of participants is China, a major AI powerhouse that is not part of the network. Raimondo said, however: “We are still trying to figure out who else could participate in terms of scientists.”

“I think there are certain risks that we all want to avoid together, like AI being used for nuclear weapons or AI being used in bioterrorism,” she said. “All countries in the world should agree that these are bad things, and we should be able to work together to prevent them.”

Many governments have committed to the safe development of AI, but they are taking different approaches. The EU, for example, was the first to pass a comprehensive AI law, imposing strict restrictions on the riskiest applications.

Biden signed an executive order on AI last October that requires developers of the most powerful AI systems to share security test results and other information with the government and tasked the Commerce Department with creating standards to ensure AI tools are safe and secure before release.

San Francisco-based OpenAI, maker of ChatGPT, announced last week that it had granted early access to the U.S. and U.K. national AI safety institutes ahead of the release of its latest model, called o1. The new model goes beyond the company's famous chatbot in its ability to “reason through complex tasks” and generate a “long internal chain of thought” when answering a query, and it represents a “medium risk” in the weapons-of-mass-destruction category, the company said.

Since generative AI tools began captivating the world in late 2022, the Biden administration has been pushing AI companies to commit to testing their most sophisticated models before unleashing them on the world.

“This is the right model,” Raimondo said. “But right now, everything is voluntary. I think we probably need to go beyond a voluntary system. And we need Congress to take action.”

Tech companies largely agree in principle that AI needs to be regulated, but some bristle at proposals they say could hamper innovation. In California, Governor Gavin Newsom signed three landmark bills on Tuesday to crack down on political deepfakes ahead of the 2024 election. But he has not yet signed or vetoed a more controversial bill that would regulate extremely powerful AI models that don't yet exist but whose development could pose major risks.

Copyright 2024 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed without permission.
