Windows Central
Sam Altman re-prioritizes safety processes at OpenAI after it seemingly took a backseat for 'shiny products'
By Kevin Okemwa
2024-08-01
What you need to know
OpenAI CEO Sam Altman recently highlighted new safety updates for the company.
The ChatGPT maker will allocate up to 20% of its computing resources to safety processes.
The company will give the US AI Institute early access to its next-gen model "to push forward the science of AI evaluations."
OpenAI CEO Sam Altman highlighted new updates to the company's safety policies. The top executive indicated that the ChatGPT maker is living up to its promises and will allocate up to 20% of its computing resources to safety processes across its tech stack.
Additionally, Altman disclosed that OpenAI has been working closely with the US AI Safety Institute and has agreed to grant the institute early access to its next-gen model "to push forward the science of AI evaluations."
Finally, the top executive asked all OpenAI employees, current and former, to openly raise concerns about the company's trajectory and product development.
Will generative AI lead to the end of humanity? Is AI safe and private? These are some of the questions lingering in concerned users' minds as the technology becomes more prevalent and advanced, with companies like OpenAI, Microsoft, and Google at the forefront.
Days after launching its GPT-4o model, OpenAI lost several members of its safety and superalignment team. A former staffer disclosed that he left the ChatGPT maker after repeatedly disagreeing with top management over core priorities for next-gen models, including safety, preparedness, and monitoring.
However, the revelations were short-lived. A report disclosed that OpenAI employees are bound by nondisclosure and non-disparagement agreements that prevent them from criticizing the company or how it runs its operations, even after leaving. Even admitting that such agreements exist is considered a violation of the NDA.
This seemingly caused employees to remain tight-lipped about the company's operations or risk losing their vested equity, with a former employee indicating that working for OpenAI felt like the Titanic of AI.
Sam Altman admitted the clause was part of OpenAI's non-disparagement terms but said it has since been voided. He called on current and former employees to raise concerns about the company's trajectory "and feel comfortable doing so," as their vested equity will remain untouched.