Three years after Meta shut down facial recognition software on Facebook amid a groundswell of pushback from privacy advocates and regulators, the social media giant said on Tuesday it is testing the service again as part of a crackdown on “celeb bait” scams.
Meta said it will enroll about 50,000 public figures in a trial which involves automatically comparing their Facebook profile photos with images used in suspected scam advertisements. If the images match and Meta believes the ads are scams, it will block them.
The celebrities will be notified of their enrollment and can opt out if they do not want to participate, the company said.
The company plans to roll out the trial globally from December, excluding some large jurisdictions where it does not have regulatory clearance, such as Britain, the European Union, South Korea and the U.S. states of Texas and Illinois, it added.
Monika Bickert, Meta’s vice president of content policy, said in a briefing with journalists that the company was targeting public figures whose likenesses it had identified as having been used in scam ads.
“The idea here is: roll out as much protection as we can for them. They can opt out of it if they want to, but we want to be able to make this protection available to them and easy for them,” Bickert said.
The trial shows a company trying to thread a needle: using potentially invasive technology to address regulators’ concerns about a rising number of scams, while minimising the complaints about its handling of user data that have dogged social media companies for years.
When Meta shuttered its facial recognition system in 2021, deleting the face scan data of one billion users, it cited “growing societal concerns”. In August this year, the company was ordered to pay Texas $1.4 billion to settle a state lawsuit accusing it of collecting biometric data illegally.
At the same time, Meta faces lawsuits accusing it of failing to do enough to stop celeb bait scams, which use images of famous people, often generated by artificial intelligence, to trick users into giving money to non-existent investment schemes.
Under the new trial, the company said it will immediately delete any face data generated by comparisons with suspected advertisements regardless of whether it detected a scam.
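The process Meta describes follows a familiar embedding-comparison pattern: derive a face representation from the public figure’s profile photo, compare it against faces detected in a suspected scam ad, block the ad only if there is a match and the ad is judged to be a scam, and then discard the comparison data either way. The sketch below is a hypothetical illustration of that general pattern only, not Meta’s implementation; the face embeddings, the `looks_like_scam` flag and the 0.8 similarity threshold are assumptions made for illustration.

```python
# Hypothetical sketch of the matching flow described above -- not Meta's code.
# Assumes face embeddings (fixed-length vectors) are produced elsewhere by some
# face-recognition model; here they are simply passed in as NumPy arrays.
from dataclasses import dataclass

import numpy as np


@dataclass
class AdReview:
    ad_id: str
    blocked: bool


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity in [-1, 1] between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def review_suspected_ad(
    ad_id: str,
    profile_embedding: np.ndarray,         # from the public figure's profile photo
    ad_face_embeddings: list[np.ndarray],  # faces detected in the suspect ad
    looks_like_scam: bool,                 # output of a separate scam classifier (assumed)
    match_threshold: float = 0.8,          # illustrative value, not a real threshold
) -> AdReview:
    try:
        # Block only if a face in the ad matches the protected profile photo
        # AND the ad is independently judged to be a scam.
        matched = any(
            cosine_similarity(profile_embedding, face) >= match_threshold
            for face in ad_face_embeddings
        )
        return AdReview(ad_id=ad_id, blocked=matched and looks_like_scam)
    finally:
        # Mirrors the stated deletion policy: the embeddings generated for this
        # comparison are dropped here whether or not a scam was detected.
        ad_face_embeddings.clear()


if __name__ == "__main__":
    # Illustrative run with random vectors standing in for real face embeddings.
    rng = np.random.default_rng(0)
    profile = rng.normal(size=128)
    result = review_suspected_ad(
        ad_id="ad-123",
        profile_embedding=profile,
        ad_face_embeddings=[profile + rng.normal(scale=0.01, size=128)],
        looks_like_scam=True,
    )
    print(result)  # AdReview(ad_id='ad-123', blocked=True)
```

The `finally` block is the point of the sketch: the comparison data is removed on every code path, match or no match, which is the behaviour the company says the trial will enforce.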
The tool being tested went through Meta’s “robust privacy and risk review process” internally and was discussed with regulators, policymakers and privacy experts before tests began, Bickert said.
Meta said it also plans to test using facial recognition data to let non-celebrity users of Facebook and Instagram, another of its platforms, regain access to accounts that have been compromised by a hacker or locked because of a forgotten password.
(This story has been refiled to fix a typo in paragraph 2)
—Byron Kaye and Katie Paul, Reuters