On Tuesday, OpenAI CEO Sam Altman and other leaders in the AI industry testified in support of additional regulation at a congressional hearing. Altman emphasized the potential dangers of AI and argued that government intervention was "critical" to prevent negative impacts. IBM Chief Privacy & Trust Officer Christina Montgomery and New York University Professor Emeritus Gary Marcus also testified, with Marcus warning of risks such as political manipulation and hyper-targeted advertising. Montgomery suggested a tiered approach in which different rules apply to different risks, with the strongest regulation reserved for uses that pose the greatest danger to society. Altman called for the creation of a new federal agency to issue licenses for AI technology, which could be revoked if companies fail to comply with safety standards.
Sen. Dick Durbin (D-Ill.) called the industry leaders' requests for regulation "historic," noting that it was rare for large corporations to plead for regulation. AI technology has received considerable scrutiny from government officials and scientists, who have expressed concerns about privacy, job loss, and potential impacts on elections. Altman co-founded OpenAI in 2015, and the company has since released several AI models, including GPT-4 and DALL-E. OpenAI's ChatGPT was estimated to have reached over 100 million monthly active users in January, making it the fastest-growing consumer application in history, according to UBS.
OpenAI is not the first company to make a pro-regulation argument to Congress, even as some tech companies, such as Apple, Amazon, and Meta, have fought against regulatory intervention. In 2020, Facebook CEO Mark Zuckerberg called for an updated, more accountable version of Section 230. The industry remains divided, with some AI leaders calling for additional regulation to prevent negative impacts while others continue to resist government intervention. The hearing is seen as an important step toward establishing guidelines for future AI development.