While the 2024 US election focused on traditional issues like the economy and immigration, its quiet impact on AI policy could prove even more transformative. Without a single debate question or major campaign promise about AI, voters inadvertently tipped the scales in favor of accelerationists: those who advocate rapid AI development with minimal regulatory hurdles. The implications of this acceleration are profound, heralding a new era of AI policy that prioritizes innovation over caution and signaling a decisive shift in the debate over AI's potential risks and rewards.
President-elect Donald Trump's pro-business stance leads many to assume that his administration will favor those who develop and commercialize AI and other advanced technologies. His party's platform has little to say about AI. However, it does emphasize a policy approach focused on repealing AI regulations, particularly targeting what it described as "radical left-wing ideas" in the outgoing administration's existing executive orders. Instead, the platform supported AI development aimed at fostering free expression and "human flourishing," calling for policies that enable AI innovation while opposing measures perceived as hindering technological progress.
Early indications based on appointments to senior government positions underline this direction. However, a larger story is unfolding: the resolution of the intense debate over the future of AI.
Since ChatGPT appeared in November 2022, there has been a heated debate between those in the AI field who want to accelerate AI development and those who want to slow it down.
Famously, in March 2023, the latter group proposed a six-month pause in the development of the most advanced AI systems, warning in an open letter that AI tools present "profound risks to society and humanity." The letter, spearheaded by the Future of Life Institute, was prompted by OpenAI's release of the GPT-4 large language model (LLM), several months after the launch of ChatGPT.
The letter was initially signed by more than 1,000 technology leaders and researchers, including Elon Musk, Apple co-founder Steve Wozniak, 2020 presidential candidate Andrew Yang, podcaster Lex Fridman, and AI pioneers Yoshua Bengio and Stuart Russell. The number of signatories eventually rose to more than 33,000. Collectively, they became known as "doomers," a term reflecting their concerns about the potential existential risks of AI.
Not everyone agreed. OpenAI CEO Sam Altman did not sign. Neither did Bill Gates and many others. Their reasons for declining varied, though many expressed concern about the potential harms of AI. This led to many conversations about the possibility of AI going rogue and causing a disaster. For many in the field, it became fashionable to share their assessment of the probability of such a catastrophe, often referred to as p(doom). Even so, work on AI development did not stop.
For the record, my p(doom) in June 2023 was 5%. That might seem low, but it was not zero. I felt that the leading AI labs were sincere in their efforts to rigorously test new models before release and to provide important guardrails for their use.
Many observers concerned about the dangers of AI have rated the existential risks higher than 5%, and some have gone much higher. AI safety researcher Roman Yampolskiy has put the probability of AI ending humanity at more than 99%. That said, a study published earlier this year, well before the election and representing the views of more than 2,700 AI researchers, found that "the average prediction for extremely bad outcomes, such as human extinction, was 5%." Would you get on a plane if there was a 5% chance it would crash? This is the dilemma facing AI researchers and policymakers.
Others have openly dismissed concerns about AI, pointing instead to what they see as the technology's enormous upside. These include Andrew Ng (who founded and led the Google Brain project) and Pedro Domingos (professor of computer science and engineering at the University of Washington and author of "The Master Algorithm"). They argued that AI is part of the solution. As Ng pointed out, there are indeed existential dangers, such as climate change and future pandemics, and AI can be part of how they are addressed and mitigated.
Ng argued that AI development should not be paused but accelerated. This utopian view of technology has been echoed by others known collectively as "effective accelerationists," or "e/acc" for short. They argue that technology, and especially AI, is not the problem but the solution to most, if not all, of the world's problems. Garry Tan, CEO of the startup accelerator Y Combinator, along with other prominent Silicon Valley leaders, added "e/acc" to their X usernames to signal alignment with the vision. New York Times journalist Kevin Roose captured the essence of these accelerationists, saying they have an "all gas, no brakes" approach.
A Substack post from a couple of years ago described the principles underlying effective accelerationism. Here is the summary it offers at the end of the article, along with a comment from OpenAI CEO Sam Altman.
The outcome of the 2024 election can be seen as a turning point, putting the accelerationist view in a position to shape US AI policy for years to come. For example, the president-elect recently named tech entrepreneur and venture capitalist David Sacks as “AI czar.”
Sacks, an outspoken critic of AI regulation and an advocate of market-driven innovation, brings his experience as a technology investor to the role. He is one of the leading voices in the AI industry, and much of what he has said about AI aligns with the accelerationist views expressed in the incoming party's platform.
In response to the Biden administration's 2023 executive order on AI, Sacks tweeted: "The United States' political and fiscal situation is hopelessly broken, but we have an unparalleled asset as a country: cutting-edge innovation in AI driven by a completely free and unregulated market for software development. That just ended." While the influence Sacks will have on AI policy remains to be seen, his appointment signals a shift toward policies that favor industry self-regulation and rapid innovation.
I doubt that most of the voting public gave much thought to AI policy when casting their votes. Nevertheless, in a very tangible way, the accelerationists have won as a result of the election, potentially sidelining those who advocate a more cautious federal approach to mitigating AI's long-term risks.
As accelerationists chart the path forward, the stakes could not be higher. It remains to be seen whether this era marks the beginning of unprecedented progress or unintended catastrophe. As AI development accelerates, the need for informed public discourse and vigilant oversight becomes increasingly paramount. How we navigate this era will define not only technological progress but also our collective future.
As a counterbalance to inaction at the federal level, it is possible that one or more states will adopt regulations of their own, as has already happened to some extent in California and Colorado. For example, California's AI safety bills focus on transparency requirements, while Colorado addresses AI discrimination in hiring practices, offering models for state-level governance. For now, all eyes will be on the voluntary testing and self-imposed guardrails of Anthropic, Google, OpenAI and other AI model developers.
In short, the accelerationist victory means fewer restrictions on AI innovation. That greater speed may well deliver faster progress, but it also raises the risk of unintended consequences. I am now revising my p(doom) upward to 10%. What is yours?
Gary Grossman is executive vice president of the technology practice at Edelman and global leader of the Edelman AI Center of Excellence.