2024 showed that it really is possible to control AI

Almost all of the big AI news this year was about how quickly the technology is progressing, the damage it is causing, and speculation about how soon it will grow beyond the point where humans can control it. But in 2024, governments also made significant progress in regulating algorithmic systems. Below is a breakdown of last year’s most significant AI legislation and regulatory efforts at the state, federal, and international levels.

State

US state lawmakers took the lead on AI regulation in 2024, introducing hundreds of bills—some had modest goals, such as creating study committees, while others would have imposed serious civil liability on AI developers should their creations cause catastrophic harm to society. The vast majority of bills failed to pass, but several states enacted significant laws that could serve as a model for other states or Congress (assuming Congress ever functions again).

As AI flooded social media ahead of the election, politicians from both parties backed anti-deepfake laws. More than 20 states now ban misleading AI-generated political ads in the weeks immediately before an election. Bills aimed at curbing AI-generated pornography, particularly images of minors, also received strong bipartisan support in states including Alabama, California, Indiana, North Carolina, and South Dakota.

Unsurprisingly, given that it’s the tech industry’s backyard, some of the most ambitious AI proposals emerged from California. One high-profile bill would have forced AI developers to take safety precautions and held companies liable for catastrophic damage caused by their systems. That bill passed both chambers of the legislature amid a fierce lobbying effort, but was ultimately vetoed by Gov. Gavin Newsom.

Newsom, however, signed more than a dozen other bills aimed at less apocalyptic but more immediate AI harms. One new California law requires health insurers to ensure that the artificial intelligence systems they use to make coverage determinations are fair and equitable. Another requires generative AI developers to create tools that label content as AI-generated. And a pair of bills prohibit the distribution of AI-generated likenesses of dead people without prior consent and require that agreements for AI-generated likenesses of living people clearly specify how the content will be used.

Colorado passed a law, the first of its kind in the US, requiring companies that develop and use artificial intelligence systems to take reasonable steps to ensure the tools are not discriminatory. Consumer advocates called the legislation an important baseline. Similar bills are likely to be hotly debated in other states in 2025.

And, in a middle-finger gesture to our future robot overlords and the planet, Utah enacted a law that prohibits any government entity from granting legal personhood to artificial intelligence, inanimate objects, bodies of water, atmospheric gases, weather, plants, and other non-human things.

Federal

Congress talked a lot about AI in 2024, and the House ended the year by releasing a 273-page bipartisan report outlining guiding principles and recommendations for future regulations. But when it came time to pass legislation, federal lawmakers did very little.

Federal agencies, on the other hand, were busy all year trying to meet the goals set out in President Joe Biden’s 2023 executive order on AI. And several regulators, notably the Federal Trade Commission and the Department of Justice, cracked down on deceptive and harmful AI systems.

The work agencies did to comply with the AI executive order was not particularly sexy or headline-grabbing, but it laid important foundations for the future governance of public and private AI systems. For example, federal agencies went on a hiring spree for AI talent and created standards for responsible model development and harm mitigation.

And, in a big step toward greater public understanding of how the government uses AI, the Office of Management and Budget required (most) federal agencies to disclose critical information about the AI systems they use that may affect people’s rights and safety.

On the enforcement side, the FTC’s Operation AI Comply targeted companies that use AI in deceptive ways, such as writing fake reviews or providing legal advice, and sanctioned AI weapons-detection company Evolv for making misleading claims about what its product could do. The agency also reached a settlement with facial recognition company IntelliVision, which it accused of falsely claiming its technology was free of racial and gender bias, and banned pharmacy chain Rite Aid from using facial recognition for five years after an investigation determined the company was using the tools to discriminate against shoppers.

Meanwhile, the Justice Department joined state attorneys general in a lawsuit accusing real estate software company RealPage of a massive algorithmic price-fixing scheme that raised rents across the country. It also won several antitrust lawsuits against Google, including one over the company’s monopoly on Internet search that could significantly change the balance of power in the burgeoning AI search industry.

Global

In August, the European Union’s AI Act came into force. The law, which is already serving as a model for other jurisdictions, requires that artificial intelligence systems performing high-risk functions, such as assisting with hiring or medical decisions, undergo risk mitigation and meet certain standards around training data quality and human oversight. It also outright bans certain other AI systems, such as algorithms that could be used to assign social scores to a country’s residents that are then used to deny rights and privileges.

In September, China released a major AI safety governance framework. Like similar frameworks published by the US National Institute of Standards and Technology, it is non-binding, but it creates a common set of standards for AI developers to follow when identifying and mitigating risks in their systems.

One of the most interesting pieces of AI legislation comes from Brazil. In late 2024, the country’s Senate passed a comprehensive AI safety bill. It faces a difficult path forward, but if enacted, it would create an unprecedented set of protections for the kinds of copyrighted material commonly used to train generative AI systems. Developers would have to disclose what copyrighted material was included in their training data, and creators would have the power to prohibit the use of their work to train AI systems or to negotiate compensation agreements based, in part, on the size of the AI developer and how the material would be used.

Like the EU’s AI Act, the proposed Brazilian law would also require high-risk AI systems to follow certain safety protocols.
