Anxiety about the tech’s rapid ability to self-improve is alarming workers at OpenAI, Anthropic, and several other companies. (Image: Jernej Furman, CC BY 2.0)
AI researchers and executives at OpenAI, Anthropic, and several other companies are sounding the alarm about the dangers they see emerging from artificial intelligence.
Some industry experts have left their organizations after issuing urgent warnings about what they see as flashing red lights.
What are they saying?
Most of the high-profile departures are researchers involved in AI safety. A researcher at Anthropic, maker of the Claude chatbot, resigned this week and posted a letter highlighting how AI could distort humanity. It has been viewed 14.5 million times.
At OpenAI, three people either quit or were asked to leave. One was concerned that ChatGPT’s new ad strategy could manipulate users, while an engineer warned of widespread job displacement. Another person was terminated following a dispute over the release of AI erotica on ChatGPT.
Anxiety about how quickly the technology can improve itself is alarming employees. OpenAI’s latest model helped train itself, while Anthropic’s Cowork tool essentially built its own framework without human intervention.
The CEO of Hyperwrite, an AI assistant maker, posted an essay about the numerous jobs that would be lost to the technology. It went viral on X, garnering 79 million views in three days.
While tech optimists remain bullish, internal reports are beginning to acknowledge darker possibilities. Anthropic’s recent “sabotage report” noted risks, including the potential for AI to assist in creating chemical weapons.
Despite these grave concerns, OpenAI reportedly disbanded its mission alignment team, which was tasked with ensuring AI benefits all of humanity. It is also backing a super PAC that promotes rapid AI advancement.
Meanwhile, Anthropic has pledged $20 million to support pro-safety congressional candidates.