OpenAI eyes Wikipedia-like collaborative approach for decisions on AI
US-based artificial intelligence (AI) firm and creator of ChatGPT, OpenAI, said on Tuesday that the organisation is seeking ways to gather diverse input on decisions that impact its AI systems.
During this year’s AI Forward conference held in San Francisco, organised by Goldman Sachs Group and SV Angel, OpenAI president Greg Brockman outlined the general approach OpenAI is adopting to shape AI regulations globally.
Among the insights he shared was an initiative inspired by the Wikipedia model, which strives to create an inclusive framework fostering transparency, accountability, and broad stakeholder participation on any topic. Such a collaborative approach points to the need for inclusive and comprehensive discussion of AI regulation, with Brockman saying that “OpenAI is actively considering democratic decision-making processes to involve a broader range of stakeholders in shaping the future of AI”.
Further, in a blog published on May 22, OpenAI chief executive officer Sam Altman proposed a collaborative international initiative in which governments globally could participate to ensure safe development of AI. He wrote about the idea of establishing an organisation similar to the International Atomic Energy Agency (IAEA) to impose restrictions on AI deployment, ensure compliance with safety standards, and monitor the utilisation of computing power.
“Major governments around the world could set up a project that many current efforts become part of, or we could collectively agree that the rate of growth in AI capability at the frontier is limited to a certain rate per year,” he wrote, adding that “individual companies should be held to an extremely high standard of acting responsibly”.
Since its launch on November 30 last year, ChatGPT, a generative AI technology capable of producing remarkably authoritative text based on prompts, has gained immense popularity, becoming the fastest-growing application of all time. However, many have raised concerns about AI’s potential to generate deepfake images and spread misinformation, making its regulation a pressing point of attention.
Last week, Altman also proposed various ideas to US lawmakers for setting guardrails on AI, among them requiring licences to develop the most sophisticated AI models and establishing a related governance regime.
Earlier this month, members of the European Parliament reached a preliminary deal on a new draft of the European Union (EU)’s Artificial Intelligence Act, first drafted two years ago. China has also developed draft rules to manage the development of generative AI tools by research companies.
In India, IT minister Ashwini Vaishnaw earlier this week emphasised the need to develop such a framework. Notably, on Tuesday, seven working groups constituted under India’s National Programme for AI (INDIAai) informed the Economic Times that they are likely to submit recommendations for a comprehensive framework governing AI in the next two weeks. The programme has tasked the seven working groups with creating a data governance framework for AI and examining the regulatory aspects of AI.
The report further said that the working groups comprise representatives from industry and academia, tasked with fine-tuning the policy to make it more realistic.