AI poses existential risk, warn tech leaders in open letter
Several artificial intelligence (AI) experts, including chief executives, policymakers, and scientists, have signed an open letter calling attention to the "risk of extinction" from AI. The letter, which was published by the San Francisco-based Center for AI Safety, warns that AI could pose an existential threat to humanity if it is not developed and used responsibly.
OpenAI CEO Sam Altman, Google DeepMind CEO Demis Hassabis, "godfather of AI" Geoffrey Hinton, Canadian computer scientist Yoshua Bengio, Microsoft chief technology officer Kevin Scott, and Stability AI CEO Emad Mostaque are among the AI leaders who signed the open letter.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the statement read.
In a separate blog post, the nonprofit Center for AI Safety outlines eight examples of such risks, including the weaponization of AI for destructive purposes, the spread of misinformation, enfeeblement through overdependence on machines, and the concentration of power in the hands of a few.
In March, the Future of Life Institute issued an open letter calling for a six-month pause on the training of powerful AI systems. The two letters share a few signatories, including Yoshua Bengio and Emad Mostaque. Both have drawn considerable criticism: some argued that they exaggerate the risk, while others disagreed with the remedies they propose.
Earlier this month, Sam Altman testified before a US Senate subcommittee about the need to regulate AI. He said that AI could go "quite wrong" and that OpenAI wants to work with the government to prevent that from happening.
European Union lawmakers have recently agreed on an updated draft of the EU AI Act, first proposed in 2021. Ahead of the Act's passage, the European Commission is working with Google to develop an AI Pact, and EU industry chief Thierry Breton has already met Google CEO Sundar Pichai to discuss it.