
Prime News delivers timely, accurate news and insights on global events, politics, business, and technology

AI labels must be the new norm in 2025


I’m an AI reporter and next year I want to be really bored. I don’t want to hear about increasing rates of AI-powered scams, boardroom power struggles, or people abusing AI programs to create harmful, misleading, or intentionally inflammatory photos and videos.

It’s a big ask, and I know I probably won’t get my wish. There are simply too many companies developing AI and too little guidance or regulation. But if I could ask for one thing this holiday season, it’s this: 2025 must be the year we get meaningful AI content labels, especially for images and videos.

(Illustration: Zooey Liao/CNET)

AI-generated images and videos have come a long way, especially over the past year. But the evolution of AI image generators is a double-edged sword. As the models improve, their output contains fewer hallucinations and oddities. Yet those quirks, like people with 12 fingers or objects that melt into nothing, were among the few clues people could point to when guessing whether an image was made by a human or by AI. As the generators improve and those telltale signs disappear, identifying AI content will become a major problem for all of us.

Legal power struggles and ethical debates over AI images will no doubt continue next year. But for now, AI image generators and editing services are legal and easy to use. That means AI content will continue to flood our online experiences, and identifying the origins of an image will be more difficult (and more important) than ever. There is no magic solution that works for everyone. But I’m sure widespread adoption of AI content labels would go a long way to helping.

The complicated history of AI art

From talking refrigerators to iPhones, our experts are here to help make the world a little less complicated.

If there’s one surefire way to infuriate an artist, it’s to bring up AI image generators. The technology, powered by generative AI, can create entire images from a few simple words in a prompt. I’ve used and reviewed several of these tools for CNET, and I’m still amazed at how detailed and clear the images can be. (Not every result is a winner, but they can be pretty good.)

As my former CNET colleague Stephen Shankland succinctly put it: “AI can let you lie with photos. But you don’t want a photo that hasn’t been touched by digital processing.” Striking a balance between retouching and editing that distorts the truth is something photojournalists, editors and creators have been grappling with for years. Generative AI and AI-powered editing only make it more complicated.

Take Adobe, for example. This fall, Adobe introduced a ton of new features, many of which are powered by generative AI. Photoshop can now remove distracting wires and cables from images, and Premiere Pro users can extend existing video clips with generative AI. Generative fill is one of the most popular Photoshop tools, on par with the crop tool, Adobe’s Deepa Subramaniam told me. Adobe has made it clear that it sees generative editing as the new norm and the future. And since Adobe is the industry standard, that puts creators in a bind: embrace AI or be left behind.

Although Adobe promises never to train its AI models on its users’ work (one of the biggest concerns around generative AI), not all companies make that promise, or even reveal how their AI models are built. Creators who share their work online already have to deal with “art theft and plagiarism,” digital artist René Ramos told me earlier this year, pointing out how image-generating tools grant access to styles that artists have spent their lives perfecting.


What AI labels can do

AI labels are digital notices that indicate when an image may have been created or significantly altered by AI. Some companies automatically add a digital watermark to their generations (like Meta AI’s Imagine), but many offer the ability to remove it by upgrading to a paid tier (like OpenAI’s Dall-E 3). Or users can simply crop the image to cut out the identifying mark.

A lot of work has been done this past year to assist in this effort. Adobe’s Content Authenticity Initiative launched a new app this year called Content Credentials that allows anyone to attach invisible digital signatures to their work. Creators can also use these credentials to disclose and track the use of AI in their work. Adobe also has a Google Chrome extension that helps identify these credentials in web content.
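For the curious, credentials like these typically live in a file’s metadata (an XMP packet or a C2PA manifest) rather than in the pixels themselves, which is why re-saving or cropping can strip them. Here is a rough sketch of what scanning a file for those well-known byte signatures might look like; it is a heuristic only, and the marker names checked below are real published identifiers, but a real verifier (such as the open-source C2PA tooling) validates cryptographic signatures, which this does not:

```python
# Heuristic provenance scan: looks for byte signatures that Content
# Credentials (C2PA) and XMP metadata typically leave inside a file.
# NOT a real verifier -- nothing cryptographic is checked, and these
# markers can be stripped by re-saving, cropping, or screenshotting.

def find_provenance_markers(data: bytes) -> dict:
    """Return which well-known provenance signatures appear in raw file bytes."""
    return {
        # Adobe XMP metadata wrapper element
        "xmp_packet": b"<x:xmpmeta" in data,
        # C2PA manifest label used in JUMBF boxes
        "c2pa_manifest": b"c2pa" in data,
        # IPTC digital source type value for fully AI-generated media
        "ai_source_type": b"trainedAlgorithmicMedia" in data,
    }

if __name__ == "__main__":
    labeled = b"\xff\xd8 ... <x:xmpmeta> ... c2pa ... trainedAlgorithmicMedia ..."
    stripped = b"\xff\xd8 just pixels, metadata removed"
    print(find_provenance_markers(labeled))
    print(find_provenance_markers(stripped))
```

The asymmetry this illustrates is the core problem: it is trivial to check for a label that is present, but the absence of a label proves nothing, because the metadata travels with the file only as long as nothing along the way discards it.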

Google adopted a new standard for content credentials for images and ads in Google Search as part of the Coalition for Content Provenance and Authenticity, which Adobe co-founded. It also added a new section to image information in Google Search that highlights any AI editing for “greater transparency.” Google’s program for flagging and identifying AI content, called SynthID, took a step forward and was released as open source for developers this year.

Social media companies have also been working on labeling AI content. People are twice as likely to encounter fake or misleading images on social media than on any other channel, according to a report from Poynter’s MediaWise initiative. Meta, the parent company of Facebook and Instagram, implemented automatic “Made with AI” tags for social posts, and the tags quickly and mistakenly flagged human-shot photos as AI-generated. Meta later clarified that the labels apply when it “detects industry-standard AI image indicators,” and changed the label to read “AI info” to avoid the implication that an image was generated entirely by a computer program. Other social media platforms, such as Pinterest and TikTok, have AI labels with varying degrees of success; in my experience, Pinterest has been overwhelmingly inundated with AI, and TikTok’s AI labels are ubiquitous but easy to overlook.

Adam Mosseri, head of Instagram, recently shared a series of posts on the topic, saying: “Our role as internet platforms is to label AI-generated content to the best of our ability. But some content will inevitably fall through the cracks, and not all misrepresentations will be AI-generated, so we must also provide context about who is sharing, so you can evaluate for yourself how much you want to trust their content.”

If Mosseri has any practical advice beyond “consider the source,” which most of us are taught in high school English class, I’d love to hear it. More optimistically, he could be hinting at future product developments that give people more context, like Community Notes on Twitter/X. These features, like AI labels, will be even more important if Meta decides to continue its experiment with adding AI-generated suggested posts to our feeds.

What we need in 2025

This is all great, but we need more. We need consistent, glaringly obvious labels in every corner of the internet. Not buried in a photo’s metadata, but stamped clearly across it, or above or below it. The more obvious, the better.

There is no easy solution to this. That kind of online infrastructure would require a lot of work and collaboration among tech companies, social media platforms, and probably government and civil society groups. But that kind of investment in distinguishing untouched images from those generated entirely by AI, and everything in between, is essential. Teaching people to identify AI content is great, but as AI improves, accurately evaluating images will become more difficult even for experts like me. So why not make it incredibly obvious? Give people the information they need about the origins of an image, or at least help them confirm their suspicions when they see something strange.

My concern is that this topic is currently at the bottom of many AI companies’ to-do lists, especially now that the tide appears to be turning towards AI video development. But for the sake of my sanity and everyone else’s, 2025 has to be the year we establish a better system for identifying and labeling AI images.




