This new Adobe tool will help you spot manipulated images

In the photo, Beyoncé looks beatific, with a closed-lip Mona Lisa smile. But it’s easy enough to give her a toothy grin. Just dial up her “Happiness” to the maximum level using Adobe Photoshop’s Smart Portrait tool, and her face gets a Cheshire cat-like smile, white teeth appearing out of thin air. Smart Portrait, released in beta last year, is one of Adobe’s AI-powered “neural filters,” which can age faces, change expressions, and alter the background of a photo so it appears to have been taken at a different time of year.

These tools may seem innocuous, but they provide increasingly powerful ways to manipulate photos in an era when altered media spreads across social media in dangerous ways. For Adobe, this is both a big business and a big liability. The company—which brought in $12.9 billion in 2020, with more than $7.8 billion tied to Creative Cloud products aimed at helping creators design, edit, and customize images and video—is committed to offering users the latest technologies, which keeps Adobe ahead of its competition. This includes both neural filters and older AI-powered tools, such as 2015’s Face-Aware Liquify, which lets people manually alter someone’s face.

Adobe executives are aware of the perils of such products at a time when fake information spreads on Twitter six times faster than the truth. But instead of limiting the development of its tools, Adobe is focused on the other side of the equation: giving people the ability to verify where images were taken and to see how they’ve been edited. Step one: a new Photoshop tool and website that offer unprecedented transparency into how images are manipulated.

Adobe has been exploring the edge of acceptable media editing for a while now. During the company’s annual Max conference in 2016, it offered a sneak peek of a tool that allowed users to change words in a voice-over simply by typing new ones. It was a thrilling—and terrifying—demonstration of how artificial intelligence could literally put words into someone’s mouth. A backlash erupted around how it might embolden deepfakes, and the company shelved the tool.

Two years later, when Adobe again used Max to preview cutting-edge AI technologies—including a feature that turns still photos into videos and a host of tools for video editing—Dana Rao, its new general counsel, was watching closely. After the presentation, he sought out chief product officer Scott Belsky to discuss the repercussions of releasing these capabilities into the world. They decided to take action. Rao, who now leads the company’s AI ethics committee, teamed up with Gavin Miller, the head of Adobe Research, to find a technical solution. Initially, they pursued ways to identify when one of Adobe’s AI tools had been used on an image, but they soon realized that detection algorithms would never be able to catch up with the latest manipulation technologies. Instead, they sought a way to show when and where images were taken—and to turn editing history into metadata that could be attached to images.

The result is the new Content Credentials feature, which went into public beta this October. Users can turn on the feature and embed their images with their identification information and a simplified record of edits that notes which of the company’s tools have been used. Once an image is exported out of Photoshop, it retains this metadata, all of which can be viewed by anyone online through a new Adobe website called Verify. Simply upload any JPEG, and if it’s been edited with Content Credentials turned on, Verify will show you its metadata and editing history, as well as before-and-after images.

Content Credentials is part of a larger effort by both tech and media companies to combat the spread of fake information by providing more transparency around where images come from online. An industry consortium called the Coalition for Content Provenance and Authenticity (C2PA), which includes Adobe, Microsoft, Twitter, and the BBC, recently created a set of standards for establishing content authenticity, which are reflected in Adobe’s new tool. Members of the group are also backing a bill in the U.S. Senate that would create a Deepfake Task Force under the purview of the Secretary of Homeland Security.

But while Adobe has thrown its weight behind this fledgling ecosystem of companies championing image provenance technologies, it also continues to release features that make it increasingly easy to alter reality. It’s the accessibility of such tools that troubles researchers. “Until recently . . . you needed to be someone like Steven Spielberg” to make convincing fake media, says University of Michigan assistant professor Andrew Owens, who has collaborated with Adobe on trying to detect fake images. “What’s most worrisome about recent advances in computer vision is that they’re commoditizing the process.”

For content provenance technologies to become widely accepted, they need buy-in from camera-app makers, editing-software companies, and social media platforms. For Hany Farid, a professor at the University of California, Berkeley, who has studied image manipulation for two decades, Adobe and its partners have taken the first steps, but now it’s up to platforms like Facebook to prioritize content that has C2PA-standardized metadata attached. “You don’t want to get in the business of [saying] ‘This is true or false,’” Farid says. “The best [Adobe] can do is to arm people—the average citizen, investigators—with information. And we use that as a launchpad for what comes next: to regain some trust online.”

Three other efforts to authenticate images before they’re released into the wild:

Truepic: Content provenance company Truepic recently announced an SDK that will allow any app with a camera to embed images and videos with verified metadata.

Starling Lab: A project between Stanford and the USC Shoah Foundation, this lab uses cryptography and decentralized web protocols to capture, store, and verify images and video.

Project Origin: Microsoft and the BBC joined forces in 2020 to help people understand whether images and videos have been manipulated.
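For readers curious how an edit record can stay tethered to a specific image, here is a minimal, purely illustrative Python sketch. It is not Adobe’s or C2PA’s actual format—real Content Credentials are cryptographically signed manifests defined by the C2PA specification—and every field and function name below is hypothetical:

```python
import hashlib

def make_manifest(image_bytes: bytes, author: str, edits: list) -> dict:
    """Toy provenance record: binds an ordered edit log to the exact
    exported bytes via a content hash. (Real C2PA manifests are also
    cryptographically signed, which this sketch omits.)"""
    return {
        "author": author,
        "edits": edits,  # e.g., which tools were applied, in order
        "content_hash": hashlib.sha256(image_bytes).hexdigest(),
    }

def verify_manifest(image_bytes: bytes, manifest: dict) -> bool:
    """A Verify-style check: do these exact bytes match the record?"""
    return manifest["content_hash"] == hashlib.sha256(image_bytes).hexdigest()

# Usage: any change to the image without updating the record fails verification.
original = b"\x89PNG...pretend-image-data"
m = make_manifest(original, "photographer@example.com", ["crop", "smart_portrait"])
print(verify_manifest(original, m))         # bytes match the record
print(verify_manifest(original + b"x", m))  # altered bytes break the binding
```

The design point the sketch illustrates is that the edit history is bound to the pixels themselves, so the metadata cannot silently outlive a later, unrecorded manipulation.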

https://www.fastcompany.com/90686493/adobe-is-releasing-a-set-of-tools-to-identify-manipulated-images?partner=rss&utm_source=rss&utm_medium=feed&utm_campaign=rss+fastcompany&utm_content=rss

Created 4y | Oct 28, 2021, 15:21:23


