Google, OpenAI, Microsoft, and Anthropic will be the founding members of the Frontier Model Forum, an umbrella group for the generative AI industry. The group plans to focus on safety research, as well as on identifying best practices, informing public policy, and finding use cases for the rapidly advancing technology that can benefit society as a whole.
According to a statement issued by the four companies Wednesday, the Forum will offer membership to organizations that design and develop large-scale generative AI tools and platforms that push the boundaries of what’s currently possible in the field. The group says organizations working on those “frontier” models must “demonstrate a strong commitment to frontier model safety” and be “willing to contribute to advancing the Forum’s efforts by participating in joint initiatives.”
The statement outlines three main goals for the immediate future. The first is to identify best practices for mitigating the potential risks of generative AI, including reliance on erroneous results, the creation of information designed to serve illegal or harmful ends, and more. The second is to coordinate scientific research into AI safety measures, including mechanistic interpretability, scalable oversight, and anomaly detection. Finally, the Forum hopes to promote communication between corporations and governments to bolster safety and development.
“Companies creating AI technology have a responsibility to ensure that it is safe, secure, and remains under human control,” Brad Smith, Microsoft vice chair and president, said in a statement. “This initiative is a vital step to bring the tech sector together in advancing AI responsibly and tackling the challenges so that it benefits all of humanity.”
“It is vital that AI companies — especially those working on the most powerful models — align on common ground and advance thoughtful and adaptable safety practices to ensure powerful AI tools have the broadest benefit possible,” said Anna Makanju, OpenAI’s vice president of global affairs, in a statement. “This is urgent work and this forum is well-positioned to act quickly to advance the state of AI safety.”
The wildfire-like popularity of generative AI has prompted widespread debate about how the technology should be governed and regulated, with the EU close to becoming the first western government body to pass an AI Act. That law has come under fire from business and tech leaders who decried a “bureaucratic” approach to regulation in an open letter.
That helped lead to the advent of the Frontier Model Forum, which represents the industry’s attempt to regulate itself rather than simply await government edicts. The pace of change in generative AI tools has been rapid, and so has the rise of legal issues around them. OpenAI, in particular, has been the target of legal action in the US, notably in civil court over its possible use of copyrighted material for model training.
One such suit, whose plaintiffs include comedian and author Sarah Silverman, alleges that AI companies use copyrighted works for training without credit, compensation, or consent. Nor do civil suits represent the limit of generative AI providers’ legal troubles in the US. The FTC has reportedly opened an investigation into OpenAI over its handling of customer data and has asked the company to provide a detailed explanation of its business practices.