OpenAI has announced the formation of a Safety and Security Committee led by Bret Taylor (Chair of the OpenAI Board), Adam D’Angelo, Nicole Seligman, and Sam Altman (OpenAI CEO). The committee is responsible for advising the company’s board on critical safety and security decisions for OpenAI projects and operations.
The company has also begun training its next-generation AI model, which it expects will bring it closer to Artificial General Intelligence (AGI). As these capabilities advance, OpenAI says a more thorough evaluation of safety considerations is necessary.
The committee’s first task is to evaluate and further develop OpenAI’s safety processes and safeguards over the next 90 days. It will then present its recommendations to the board, and OpenAI plans to publicly share an update on the adopted recommendations.
OpenAI has also appointed technical and policy experts to the committee, including Aleksander Madry (Head of Preparedness), Lilian Weng (Head of Safety Systems), John Schulman (Head of Alignment Science), Matt Knight (Head of Security), and Jakub Pachocki (Chief Scientist), supported by the company’s cybersecurity team.
Source: OpenAI
TLDR: OpenAI has established a Safety and Security Committee to advise its board on safety and security decisions and strengthen internal safety processes as the company works toward Artificial General Intelligence.