OpenAI disbands long-term AI risk team less than a year after forming it

OpenAI, a Microsoft-backed startup focused on artificial intelligence research, has disbanded its team dedicated to long-term AI risks just one year after its inception. The decision follows the recent departures of team leaders Ilya Sutskever and Jan Leike, who cited concerns about the company's priorities shifting away from safety and towards product development.

The Superalignment team, formed last year to work on steering and controlling AI systems smarter than humans, was a major initiative for OpenAI. Its dissolution signals a change in direction for the company, which is reallocating the team's members to other projects.

Leike said he had disagreed with OpenAI's leadership over the company's core priorities, advocating a stronger focus on security, monitoring, preparedness, safety, and societal impact. He emphasized the importance of becoming a "safety-first AGI company" given the inherent dangers of developing superintelligent machines.

The departures and team dissolution come amid a leadership crisis at OpenAI, which saw co-founder and CEO Sam Altman ousted by the board last year. Altman's return to the company, along with other board changes, sparked internal and external upheaval.

Despite these challenges, OpenAI recently launched a new AI model and a desktop version of its popular chatbot, ChatGPT. The GPT-4o model offers improved text, video, and audio capabilities, with plans to introduce video chat features in the future.

OpenAI's decision to disband the Superalignment team and the departure of key employees highlight the complex dynamics within the company and the broader AI research community. As OpenAI navigates these changes, its commitment to safety, ethics, and responsible AI development remains a critical focus.
