Silicon Valley leaders, including White House AI and cryptocurrency czar David Sacks and OpenAI chief strategy officer Jason Kwon, caused a stir online this week with their comments about groups promoting AI safety. In separate instances, they alleged that certain AI safety advocates are not as virtuous as they appear, acting instead in their own interests or on behalf of billionaire puppet masters behind the scenes.
AI safety groups that spoke to TechCrunch say the allegations from Sacks and OpenAI are Silicon Valley’s latest attempt to intimidate its critics, but certainly not the first. In 2024, some venture capital firms spread rumors that a California AI safety bill, SB 1047, would send startup founders to jail. The Brookings Institution called the rumor one of many “misrepresentations” over the bill, but Gov. Gavin Newsom ultimately vetoed it anyway.
Regardless of whether Sacks and OpenAI intended to intimidate critics, their actions have succeeded in frightening several AI safety advocates. Many nonprofit leaders TechCrunch contacted last week asked to speak on condition of anonymity to avoid retaliation against their groups.
The controversy underscores the growing tension in Silicon Valley between developing AI responsibly and turning it into a mass consumer product, a topic my colleagues Kirsten Korosec, Anthony Ha, and I explore on this week’s Equity podcast. We also dive into a new AI safety law passed in California to regulate chatbots and OpenAI’s approach to erotica on ChatGPT.
On Tuesday, Sacks wrote a post on X alleging that Anthropic, which has raised concerns about AI’s potential to contribute to unemployment, cyberattacks, and catastrophic harm to society, is simply fear-mongering to get laws passed that benefit it and drown smaller startups in red tape. Anthropic was the only major AI lab to support California Senate Bill 53 (SB 53), a bill establishing safety reporting requirements for large AI companies, which became law last month.
Sacks was responding to a viral essay from Anthropic co-founder Jack Clark about his fears regarding AI. Clark had delivered the essay as a speech at the Curve AI safety conference in Berkeley weeks earlier. To those sitting in the audience, it certainly seemed like a genuine account of a technologist’s reservations about his own products, but Sacks didn’t see it that way.
Sacks said Anthropic is running a “sophisticated regulatory capture strategy,” though it’s worth noting that a truly sophisticated strategy probably wouldn’t involve turning the federal government into an enemy. In a follow-up post on X, Sacks noted that Anthropic has “consistently positioned itself as an enemy of the Trump administration.”
Also this week, OpenAI Chief Strategy Officer Jason Kwon wrote a post on X explaining why the company was sending subpoenas to AI safety nonprofits, such as Encode, a nonprofit that advocates for responsible AI policy. (A subpoena is a legal order demanding documents or testimony.) Kwon said that after Elon Musk sued OpenAI, over concerns that the ChatGPT maker had strayed from its nonprofit mission, OpenAI found it suspicious that several organizations also expressed opposition to its restructuring. Encode filed an amicus brief in support of Musk’s lawsuit, and other nonprofits have spoken out publicly against OpenAI’s restructuring.
“This raised questions about transparency about who was funding them and whether there was any coordination,” Kwon said.
NBC News reported this week that OpenAI sent broad subpoenas to Encode and six other nonprofit organizations that criticized the company, requesting their communications related to two of OpenAI’s biggest opponents, Musk and Meta CEO Mark Zuckerberg. OpenAI also requested communications from Encode related to its support of SB 53.
A prominent AI safety leader told TechCrunch that there is a growing divide between OpenAI’s government affairs team and its research organization. While OpenAI safety researchers frequently publish reports revealing the risks of AI systems, OpenAI’s policy unit lobbied against SB 53, saying it would prefer uniform rules at the federal level.
OpenAI’s head of mission alignment, Joshua Achiam, discussed his company’s sending of subpoenas to nonprofits in a post on X this week.
“Even though it’s possibly a risk for my entire career, I will say: This doesn’t sound great,” Achiam said.
Brendan Steinhauser, executive director of the nonprofit Alliance for Secure AI (which has not received a subpoena from OpenAI), told TechCrunch that OpenAI seems convinced its critics are part of a conspiracy led by Musk. However, he maintains that this is not the case and that much of the AI safety community is quite critical of xAI’s safety practices, or lack thereof.
“On OpenAI’s part, this is intended to silence critics, intimidate them, and deter other nonprofits from doing the same,” Steinhauser said. “For Sacks, I think he’s concerned that the (AI safety) movement is growing and that people want to hold these companies accountable.”
Sriram Krishnan, senior White House policy advisor for AI and former general partner at a16z, weighed in on this week’s conversation with a social media post of his own, calling AI safety advocates out of touch. He urged AI safety organizations to talk to “real-world people who use, sell and adopt AI in their homes and organizations.”
A recent Pew study found that about half of Americans are more worried than excited about AI, but it’s unclear what exactly they’re worried about. Another recent study went into more detail and found that American voters care more about job losses and deepfakes than about the catastrophic risks caused by AI, which the AI safety movement largely focuses on.
Addressing these safety concerns could come at the expense of the AI industry’s rapid growth, a trade-off that worries many in Silicon Valley. With investment in AI underpinning much of the US economy, fear of overregulation is understandable.
But after years of unregulated advances in AI, the AI safety movement appears to be gaining real momentum heading into 2026. Silicon Valley’s attempts to fight back against safety-focused groups may be a sign that those groups’ efforts are working.