This new Adobe tool will help you spot manipulated images

In the photo, Beyoncé looks beatific, with a closed-lip Mona Lisa smile. But it’s easy enough to give her a toothy grin. Just dial up her “Happiness” to the maximum level using Adobe Photoshop’s Smart Portrait tool, and her face gets a Cheshire cat-like smile, white teeth appearing out of thin air.

Smart Portrait, released in beta last year, is one of Adobe’s AI-powered “neural filters,” which can age faces, change expressions, and alter the background of a photo so it appears to have been taken at a different time of year. These tools may seem innocuous, but they provide increasingly powerful ways to manipulate photos in an era when altered media spreads across social media in dangerous ways.

For Adobe, this is both a big business and a big liability. The company—which brought in $12.9 billion in 2020, with more than $7.8 billion tied to Creative Cloud products aimed at helping creators design, edit, and customize images and video—is committed to offering users the latest technologies, which keeps Adobe ahead of its competition. This includes both neural filters and older AI-powered tools, such as 2015’s Face-Aware Liquify, which lets people manually alter someone’s face.

Adobe executives are aware of the perils of such products at a time when fake information spreads on Twitter six times faster than the truth. But instead of limiting the development of its tools, Adobe is focused on the other side of the equation: giving people the ability to verify where images were taken and see how they’ve been edited. Step one: a new Photoshop tool and website that offer unprecedented transparency into how images are manipulated.

Adobe has been exploring the edge of acceptable media editing for a while now. During the company’s annual Max conference in 2016, it offered a sneak peek of a tool that allowed users to change words in a voice-over simply by typing new ones. It was a thrilling—and terrifying—demonstration of how artificial intelligence could literally put words into someone’s mouth. A backlash erupted around how it might embolden deepfakes, and the company shelved the tool.

Two years later, when Adobe again used Max to preview cutting-edge AI technologies—including a feature that turns still photos into videos and a host of tools for video editing—Dana Rao, its new general counsel, was watching closely. After the presentation, he sought out chief product officer Scott Belsky to discuss the repercussions of releasing these capabilities into the world. They decided to take action.

Rao, who now leads the company’s AI ethics committee, teamed up with Gavin Miller, the head of Adobe Research, to find a technical solution. Initially, they pursued ways to identify when one of Adobe’s AI tools had been used on an image, but they soon realized that these kinds of detection algorithms would never be able to catch up with the latest manipulation technologies. Instead, they sought out a way to show when and where images were taken—and turn editing history into metadata that could be attached to images.

The result is the new Content Credentials feature, which went into public beta this October. Users can turn on the feature and embed their images with their identification information and a simplified record of edits that notes which of the company’s tools have been used. Once an image is exported out of Photoshop, it maintains this metadata, all of which can be viewed by anyone online through a new Adobe website called Verify. Simply upload any JPEG, and if it’s been edited with Content Credentials turned on, Verify will show you its metadata and editing history, as well as before-and-after images.

Content Credentials is part of a larger effort by both tech and media companies to combat the spread of fake information by providing more transparency around where images come from online. An industry consortium called the Coalition for Content Provenance and Authenticity (C2PA), which includes Adobe, Microsoft, Twitter, and the BBC, recently created a set of standards for how to establish content authenticity, which are reflected in Adobe’s new tool. Members of the group are also backing a bill in the U.S. Senate that would create a Deepfake Task Force under the purview of the Secretary of Homeland Security.

But while Adobe has thrown its weight behind this fledgling ecosystem of companies championing image provenance technologies, it also continues to release features that make it increasingly easy to alter reality. It’s the accessibility of such tools that troubles researchers. “Until recently . . . you needed to be someone like Steven Spielberg” to make convincing fake media, says University of Michigan assistant professor Andrew Owens, who has collaborated with Adobe on trying to detect fake images. “What’s most worrisome about recent advances in computer vision is that they’re commoditizing the process.”

For content provenance technologies to become widely accepted, they need buy-in from camera-app makers, editing-software companies, and social media platforms.
For Hany Farid, a professor at the University of California, Berkeley, who has studied image manipulation for two decades, Adobe and its partners have taken the first steps, but now it’s up to platforms like Facebook to prioritize content that has C2PA-standardized metadata attached. “You don’t want to get in the business of [saying] ‘This is true or false,’” Farid says. “The best [Adobe] can do is to arm people—the average citizen, investigators—with information. And we use that as a launchpad for what comes next: to regain some trust online.”

Three other efforts to authenticate images before they’re released into the wild:

Truepic: Content provenance company Truepic recently announced an SDK that will allow any app with a camera to embed images and videos with verified metadata.

Starling Lab: A project between Stanford and the USC Shoah Foundation, this lab uses cryptography and decentralized web protocols to capture, store, and verify images and video.

Project Origin: Microsoft and the BBC joined forces in 2020 to help people understand whether images and videos have been manipulated.
