AI safety researchers leave OpenAI over prioritization concerns


The entire OpenAI team focused on the existential dangers of AI has either resigned or reportedly been absorbed into other research groups.

Days after Ilya Sutskever, OpenAI’s chief scientist and one of the company’s co-founders, announced his departure, Jan Leike, a former DeepMind researcher and the other co-lead of OpenAI’s Superalignment team, posted on X that he had resigned.

According to Leike, he left the company over concerns about its priorities, which he believes favor product development over AI safety.


In a series of posts, Leike stated that OpenAI’s leadership had chosen the wrong core priorities and should emphasize safety and preparedness as artificial general intelligence (AGI) development moves forward.

AGI is a term for a hypothetical artificial intelligence that can perform as well as or better than humans on a wide range of tasks.

After spending three years at OpenAI, Leike criticized the company for prioritizing flashy products over nurturing a robust AI safety culture and processes. He emphasized the urgent need to allocate resources, particularly computing power, to his team’s safety research, which he said was being overlooked.

“…I have been disagreeing with OpenAI leadership about the company’s core priorities for quite some time until we finally reached a breaking point. Over the past few months, my team has been sailing against the wind…”

OpenAI formed a new research team in July last year to prepare for the emergence of extremely intelligent artificial intelligence that could outsmart and overpower its creators. OpenAI’s chief scientist and co-founder, Ilya Sutskever, was appointed co-lead of this new team, which received 20% of OpenAI’s computational resources.

Related: Reddit shares jump after-hours on OpenAI data-sharing deal

Following the recent resignations, OpenAI has opted to dissolve the Superalignment team and fold its functions into other research projects within the organization. This decision is reportedly a consequence of the ongoing internal restructuring initiated in response to the governance crisis of November 2023.

Sutskever was part of the effort that saw OpenAI’s board briefly push out Sam Altman as CEO in November last year; Altman was later reinstated after backlash from employees.

According to The Information, Sutskever told employees that the board’s decision to remove Altman fulfilled its responsibility to ensure that OpenAI develops AGI that benefits all of humanity. As one of the six board members, Sutskever emphasized the board’s commitment to aligning OpenAI’s goals with the greater good.

Magazine: How to get better crypto predictions from ChatGPT, Humane AI pin slammed: AI Eye