Friday, June 14, 2024

OpenAI Disbands Long-Term AI Risk Team Less Than a Year After Its Inception

Less than a year after it was formed, OpenAI has dissolved its team devoted to the long-term hazards of artificial intelligence, a person familiar with the matter confirmed to CNBC on Friday. The individual, who requested anonymity, said that several team members are being reassigned to other teams inside the organization.

News of the resignations of the team's leaders, OpenAI co-founder Ilya Sutskever and Jan Leike, from the Microsoft-backed startup came a few days ago. Leike said on Friday that OpenAI's safety culture and processes have taken a back seat to flashy new products.

The goal of OpenAI's Superalignment team, announced last year, was to "steer and control AI systems much smarter than us" through scientific and technical breakthroughs. OpenAI said at the time that it would devote 20% of its computing power to the effort over the following four years. Rather than respond to a request for comment, OpenAI pointed CNBC to a recent post on X by co-founder and CEO Sam Altman, in which he expressed sadness about Leike's departure and said the company still had more work to do.

OpenAI co-founder Greg Brockman said on X on Saturday that the company has raised awareness of the opportunities and risks of artificial general intelligence (AGI) so that the world can better prepare for it.

Wired first reported the team's dissolution.

Leike and Sutskever announced their exits from the company on the social media site X on Tuesday, a few hours apart. On Friday, however, Leike gave more detail about his reasons for leaving.

"I joined because I felt OpenAI would be the best place in the world to do this research," Leike wrote on X. "Yet we ultimately reached a breaking point, because I have been at odds with OpenAI leadership over the company's fundamental priorities for quite some time."

Leike stated in his letter that he thinks the organization should devote a lot more of its resources to security, monitoring, readiness, safety, and societal impact.

"I worry that we aren't on a trajectory to get there," he wrote. "These challenges are pretty hard to get through. My team has been sailing against the wind for the past few months. We struggled at times, and it became ever more difficult to complete this important research."

According to Leike, OpenAI needs to transform into a “safety-first AGI company.”

Building machines that are more intelligent than humans is perilous work, he wrote. "OpenAI is taking on a great deal of responsibility on behalf of all of humanity. But in recent years, showy products have displaced safety culture and processes."

Leike did not immediately respond to a request for comment.

The high-profile exits come months after Altman weathered a leadership crisis at OpenAI.

OpenAI's board dismissed Altman in November, saying in a statement that he had not been "consistently candid in his communications with the board."

The situation seemed to grow more complicated by the day, with The Wall Street Journal and other media outlets reporting that Sutskever had focused on ensuring that artificial intelligence would not harm humans, while others, including Altman, were more eager to push ahead with delivering new technology.

Following Altman's dismissal, nearly every OpenAI employee signed an open letter threatening to resign, and investors, including Microsoft, voiced their outrage. Within a week, Altman returned to the company, and the board members who had voted to remove him, Helen Toner, Tasha McCauley, and Ilya Sutskever, were out. Sutskever stayed on at the company at the time but no longer sat on the board. Adam D'Angelo, who had also voted to oust Altman, remained on the board.

During a March Zoom call with reporters, Altman said he had no updates to share on Sutskever's status.

In its latest effort to broaden the use of its popular chatbot, OpenAI recently unveiled a new AI model, a desktop version of ChatGPT, and an overhauled user interface. Days later came the news of Sutskever's and Leike's departures and the dissolution of the Superalignment team.

The new model brings GPT-4-level capability to all users, including those who use OpenAI's products for free, technology chief Mira Murati said during a livestreamed event on Monday.
