Anthropic CEO pushes back after Trump officials accuse company of fear-mongering with AI

Dario Amodei, CEO of Anthropic, published a statement on Tuesday to “set the record straight” about the company’s alignment with the Trump administration’s artificial intelligence policy, in response to what he called “a recent increase in inaccurate claims about Anthropic’s political stances.”

“Anthropic is founded on a simple principle: AI must be a force for human progress, not a danger,” Amodei wrote. “That means creating products that are truly useful, talking honestly about the risks and benefits, and working with anyone who is serious about doing it right.”

Amodei’s response comes after last week’s criticism of Anthropic from AI leaders and senior members of the Trump administration, including AI czar David Sacks and the White House’s senior AI policy advisor, Sriram Krishnan, both accusing the AI giant of fear-mongering in ways that damage the industry.

The first blow came from Sacks after Anthropic co-founder Jack Clark shared his hopes and “appropriate fears” about AI, describing it as a powerful, mysterious, and “somewhat unpredictable” creature rather than a reliable machine that can easily be mastered and put to work.

Sacks responded: “Anthropic is executing a sophisticated regulatory capture strategy based on generating fear. It is primarily responsible for the state regulatory frenzy that is damaging the startup ecosystem.”

California State Senator Scott Wiener, author of the AI safety bill SB 53, defended Anthropic, denouncing President Trump’s “effort to prohibit states from acting on AI without advancing federal protections.” Sacks then doubled down, claiming that Anthropic was working with Wiener to “impose the left’s view on AI regulation.”

More criticism followed, with anti-regulation advocates such as Groq COO Sunny Madra saying that Anthropic was “causing industry-wide chaos” by advocating for even minimal AI safety measures rather than unfettered innovation.

In his statement, Amodei said managing the societal impacts of AI should be a matter of policy rather than politics, and that he believes everyone wants to ensure the United States maintains its leadership in AI development while also building technology that benefits the American people. He defended Anthropic’s alignment with the Trump administration in key areas of AI policy and cited examples of times when he personally agreed with the president.

For example, Amodei pointed to Anthropic’s work with the federal government, including the company’s offer of Claude to the federal government and Anthropic’s $200 million deal with the Department of Defense (which Amodei called “the War Department,” echoing Trump’s preferred terminology, although the name change requires congressional approval). He also noted that Anthropic publicly praised Trump’s AI Action Plan and has supported Trump’s efforts to expand energy supplies to “win the AI race.”

Despite these displays of cooperation, Anthropic has received criticism from industry peers for straying from the Silicon Valley consensus on certain policy issues.

The company first drew the ire of officials with ties to Silicon Valley when it opposed a proposed 10-year ban on AI regulation at the state level, a provision that faced widespread bipartisan pushback.

Many in Silicon Valley, including leaders at OpenAI, have claimed that state regulation of AI would slow down the industry and give China the lead. Amodei responded that the real risk is that the United States will continue to fill China’s data centers with powerful Nvidia AI chips, adding that Anthropic restricts sales of its artificial intelligence services to Chinese-controlled companies despite the lost revenue.

“There are products we won’t build and risks we won’t take, even if they made money,” Amodei said.

Anthropic also fell out of favor with certain powerful players when it supported California’s SB 53, a lightweight AI safety bill that requires major AI developers to publicly disclose safety protocols for frontier models. Amodei noted that the bill has an exception for companies with annual gross revenue of less than $500 million, which exempts most startups from any undue burden.

“Some have suggested that we are somehow interested in damaging the startup ecosystem,” Amodei wrote, referring to Sacks’ post. “Startups are among our most important customers. We work with tens of thousands of startups and partner with hundreds of accelerators and venture capitalists. Claude is powering a whole new generation of AI-native companies. Damaging that ecosystem makes no sense to us.”

In his statement, Amodei said the company has grown from a $1 billion run rate to $7 billion over the past nine months while managing to implement “AI carefully and responsibly.”

“Anthropic is committed to constructive engagement on public policy issues. When we agree, we say so. When we don’t, we propose an alternative for your consideration,” Amodei wrote. “We will continue to be honest and forthright, and we will defend the policies we believe are right. The risks of this technology are too great for us to do otherwise.”
