Welcome to AI Decoded, Fast Company’s weekly LinkedIn newsletter that breaks down the most important news in the world of AI. If a friend or colleague shared this newsletter with you, you can sign up to receive it every week here.
AI image generators are trained on explicit photos of children, Stanford Internet Observatory says
A new report reveals some disturbing news from the world of AI image generation: A Stanford-based watchdog group has discovered thousands of images of child sexual abuse in a popular open-source image data set used to train AI systems.
The Stanford Internet Observatory found more than 3,200 explicit images in the AI database LAION (specifically the LAION-5B repository, so named because it contains over 5 billion image-text pairs), which was used to train the popular image generator Stable Diffusion, among other tools. As the Associated Press reports, the Stanford study runs counter to the conventional belief that AI tools create images of child sexual abuse only by merging adult pornography with photos of children. Now we know why it can be even easier for AI systems trained on the LAION database to produce such illegal material: the abusive images were in the training data itself.
“We find that having possession of a LAION-5B data set populated even in late 2023 implies the possession of thousands of illegal images,” write study authors David Thiel and Jeffrey Hancock, “not including all of the intimate imagery published and gathered non-consensually, the legality of which is more variable by jurisdiction.”
In response to the Stanford study, LAION announced it was temporarily removing its data sets, and Stability AI—the maker of Stable Diffusion—said it has “taken proactive steps to mitigate the risk of misuse,” namely by enforcing stricter filters on its AI tool. However, an older version of Stable Diffusion, called 1.5, is still “the most popular model for generating explicit imagery,” according to the Stanford report.
The study also recommended that anyone who built a tool using the LAION database delete their copies of the data set or scrub them of the illegal material, and it encouraged improved transparency around image-training data sets. “Models based on Stable Diffusion 1.5 that have not had safety measures applied to them should be deprecated and distribution ceased where feasible,” Thiel and Hancock write.
The FTC proposes banning Rite Aid from using facial-recognition tech in its stores
The Federal Trade Commission on Tuesday proposed banning Rite Aid from using facial-recognition software in its stores for five years as part of a settlement.
The FTC alleged in a complaint that Rite Aid had used facial-recognition software in hundreds of its stores between 2012 and 2020 to identify customers suspected of shoplifting or other criminal activity. But the technology generated a number of “false positives,” the FTC says, and led to instances of heightened surveillance, unwarranted bans from stores, verbal harassment from store employees, and baseless calls to the police. “Rite Aid’s failures caused and were likely to cause substantial injury to consumers, and especially to Black, Asian, Latino, and women consumers,” the complaint reads.
The complaint did not specify which facial-recognition vendors Rite Aid used in its stores. However, it did say that the pharmacy giant kept a database of “at least tens of thousands of individuals” that included security camera footage of persons of interest alongside IDs and “information related to criminal or ‘dishonest’ behavior in which individuals had allegedly engaged.” Rite Aid workers would receive phone alerts “indicating that individuals who had entered Rite Aid stores were matches for entries in Rite Aid’s watchlist database.”
In addition to the five-year ban on facial-recognition technology, the proposed settlement requires Rite Aid to delete any images already collected by its facial-recognition system and to direct any third parties to do the same. The FTC also called on Rite Aid to create safeguards to prevent further harm to customers.
Rite Aid, for its part, said in a statement that it used the facial-recognition technology only in “a limited number of stores” and added that it “fundamentally disagree[s] with the facial recognition allegations in the agency’s complaint.” Nonetheless, the drugstore chain said it welcomed the proposed settlement. “We are pleased to reach an agreement with the FTC and put this matter behind us,” it said.
How RAND helped shape Biden’s executive order on AI
The influential think tank RAND Corporation had a hand in creating President Joe Biden’s executive order on AI, Politico reported late last week. That revelation, which Politico learned of through a recording of an internal RAND meeting, further cements the link between the AI sector and the people tasked with regulating it.
RAND lobbied hard for the executive order to include a set of reporting requirements for powerful AI systems—a push that aligns with the agenda of Open Philanthropy, a group that gave RAND $15 million this year alone.
Open Philanthropy is steeped in the “effective altruism” ideology, which advocates a more metrics-driven approach to charity and counted FTX founder Sam Bankman-Fried among its most prominent champions. Open Philanthropy is funded by Facebook cofounder and Asana CEO Dustin Moskovitz and his wife, Cari Tuna. Effective altruists have long been active in the AI world, but the Politico story shows how the movement is shaping policy via RAND.
Not everyone at RAND is apparently pleased with the think tank’s ties to Open Philanthropy. At the internal RAND meeting, an unidentified person said the Open Philanthropy connection “seems at odds” with the organization’s mission of “rigorous and objective analysis” and asked whether the “push for the effective altruism agenda, with testimony and policy memos under RAND’s brand, is appropriate.”
RAND CEO Jason Matheny countered that it would be “irresponsible . . . not to address” concerns around AI safety, “especially when policymakers are asking us for them.”
More AI coverage from Fast Company:
- What Grok’s recent OpenAI snafu teaches us about LLM model collapse
- Sports Illustrated’s AI scandal highlights the need for authenticity in the LLM era
- These AI tools can help you create something in under a minute
- Pope Francis, once a victim of AI-generated imagery, calls for a treaty to regulate artificial intelligence
From around the web:
- Seeking a big edge in AI, South Korean firms think smaller (The New York Times)
- Generative AI music app Suno comes out of stealth (Axios)
- You can create your own AI songs with this new Copilot extension (The Verge)
- OpenAI releases a plan to prevent a robot apocalypse (The Daily Beast)