Scam calls are turning the world on its head. The Global Anti-Scam Alliance estimates that scammers stole a staggering $1.03 trillion globally in 2023, including losses from online fraud and scam calls. Robocalls and phone scams have long been a frustrating—and often dangerous—problem for consumers. Now, artificial intelligence is elevating the threat, making scams more deceptive, efficient, and harder to detect.
While Eric Priezkalns, an analyst and editor at Commsrisk, believes the impact of AI on scam calls is currently exaggerated, he notes that scammers’ use of AI is focused on producing fake content that looks real, or on varying the content of messages designed to lure potential victims into malicious conversations. “Varying the content makes it much more difficult to identify and block scams using traditional anti-scam controls,” he tells Fast Company.
From AI-generated deepfake voices that mimic loved ones to large-scale fraud operations that use machine learning to evade detection, bad actors are exploiting AI to supercharge these scam calls. The big question is: How can the telecom industry combat this problem head-on before fraudsters wreak even more havoc?
SCAMMERS ARE UPGRADING THEIR PLAYBOOK WITH AI
Until recently, phone scams mostly relied on crude robocalls—prerecorded messages warning recipients about an urgent financial issue or a supposed problem with their Social Security number. These tactics, while persistent, were often easy to recognize. But today’s AI-powered scams are far more convincing.
One of the most alarming developments is the use of AI-generated voices, which make scams feel disturbingly personal. In a chilling case from April 2023, a mother in Arizona received a desperate call from what sounded exactly like her daughter, sobbing and pleading for help. A scammer, posing as a kidnapper, demanded ransom money. In reality, the daughter was safe—the criminals had used AI to clone her voice from a social media video.
These scams, known as “voice cloning fraud,” have surged in recent months. With just a few seconds of audio, AI tools can now create an eerily realistic digital clone of a person’s voice, enabling fraudsters to impersonate friends, family members, or even executives in corporate scams.
Scammers are also using AI to analyze vast amounts of data and fine-tune their schemes with chilling precision. Machine learning algorithms can sift through public information—social media posts, online forums, and data breaches—to craft hyper-personalized scam calls. Instead of a generic IRS or tech support hoax, fraudsters can now target victims with specific details about their purchases, travel history, or even medical conditions.
AI is also enhancing caller ID spoofing, allowing scammers to manipulate phone numbers to appear as if they are coming from local businesses, government agencies, or even a victim’s own contacts. This increases the likelihood that people will pick up, making scam calls harder to ignore.
TELECOM’S COUNTEROFFENSIVE: AI VS. AI
As fraudsters sharpen their AI tools, telecom companies and regulators are fighting back with artificial intelligence of their own—deploying advanced systems to detect, trace, and block malicious calls before they ever reach consumers.
1. Call authentication and AI-based fraud detection
To combat spoofing, telecom carriers are leveraging AI-powered voice analysis and authentication technologies. In the U.S., the STIR/SHAKEN framework uses cryptographic signatures to verify that calls originate from legitimate sources. But as scammers quickly adapt, AI-driven fraud detection is becoming essential.
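Under the hood, STIR/SHAKEN attaches a signed PASSporT token (a JSON Web Token defined in RFC 8225) to a call’s SIP signaling, which the terminating carrier checks against the originating carrier’s certificate. The Python sketch below, using the PyJWT library, shows roughly what that verification step involves; it assumes the signer’s public key has already been fetched from the certificate URL in the token header, and it simplifies the spec’s claim checks considerably.

```python
# A minimal sketch of verifying a PASSporT (RFC 8225), the signed token at the
# heart of STIR/SHAKEN. Assumes the signer's public key was already retrieved
# from the "x5u" URL in the token header; the SIP plumbing is omitted.
import time

import jwt  # PyJWT, with the cryptography extra for ES256 support

MAX_TOKEN_AGE_SECONDS = 60  # reject stale tokens to limit replay


def verify_passport(identity_token: str, public_key_pem: str,
                    expected_orig: str, expected_dest: str) -> bool:
    """Return True if the token is validly signed and matches this call."""
    try:
        claims = jwt.decode(
            identity_token,
            public_key_pem,
            algorithms=["ES256"],  # PASSporT mandates ES256 signatures
            options={"require": ["iat", "orig", "dest"]},
        )
    except jwt.InvalidTokenError:
        return False  # bad signature, wrong algorithm, or missing claims

    # Freshness check: an old but validly signed token proves nothing.
    if time.time() - claims["iat"] > MAX_TOKEN_AGE_SECONDS:
        return False

    # The signed numbers must match what the call actually claims.
    return (claims["orig"].get("tn") == expected_orig
            and expected_dest in claims["dest"].get("tn", []))
```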
Machine learning models trained on billions of call patterns can analyze real-time metadata to flag anomalies—such as sudden spikes in calls from specific regions or numbers linked to known scams. These AI systems can even detect subtle acoustic markers typical of deepfake-generated voices, helping stop fraudulent calls before they connect.
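To make the metadata side concrete, here is a deliberately simplified sketch using scikit-learn’s IsolationForest: it learns what normal per-number traffic looks like and flags a robocall-like burst as an outlier. The feature columns and values are invented for illustration; production systems train on billions of records and far richer signals.

```python
# A toy anomaly detector over per-number call aggregates. The features and
# numbers are illustrative only, not any carrier's actual schema.
import numpy as np
from sklearn.ensemble import IsolationForest

# Columns: calls/min, answer rate, mean duration (s), distinct destinations/min
historical = np.array([
    [0.2, 0.85, 140.0, 0.2],  # typical subscriber behavior
    [0.1, 0.90, 210.0, 0.1],
    [0.3, 0.80, 95.0, 0.3],
    [0.2, 0.75, 180.0, 0.2],
])  # in practice: millions of rows

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(historical)

# A burst of short, rarely answered calls to many numbers: classic robocaller.
suspect = np.array([[45.0, 0.03, 6.0, 44.0]])
if model.predict(suspect)[0] == -1:  # -1 marks an outlier
    print("flag number for blocking or further screening")
```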
2. Carrier-level call filtering and blocking
Major telecom providers are embedding AI-powered call filtering directly into their networks. AT&T’s Call Protect, T-Mobile’s Scam Shield, and Verizon’s Call Filter all use AI to spot suspicious patterns and block high-risk calls before they reach users. The GSMA’s Call Check and International Revenue Share Fraud (IRSF) solutions also provide real-time call protection by verifying legitimacy and combating calling line identity spoofing.
For context, GSMA’s IRSF Prevention leverages first-party International Premium Rate Numbers (IPRN) data and an advanced OSINT (open-source intelligence) platform to deliver real-time, actionable fraud intelligence. It tracks over 20 million IPRNs, hijacked routes, and targeted networks—helping telecoms proactively combat IRSF and Wangiri fraud.
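Conceptually, this kind of screening reduces to checking each dialed number against a constantly refreshed set of premium-rate prefixes and hijacked ranges before the call is routed. The sketch below shows the longest-prefix-match idea in Python; the prefixes are made up for illustration, and the GSMA’s actual feed and matching logic are far more elaborate.

```python
# A toy IRSF screen: hold outbound calls whose destination matches a known
# premium-rate prefix. The prefixes below are invented for illustration; a
# real deployment would sync tens of millions of ranges from a fraud feed.
KNOWN_IPRN_PREFIXES = {"88213", "67972", "25290"}  # hypothetical entries


def longest_prefix_hit(dialed_number: str, prefixes: set) -> str | None:
    """Return the longest matching fraud prefix, or None if the number is clean."""
    digits = dialed_number.lstrip("+")
    for length in range(len(digits), 0, -1):  # try longest match first
        if digits[:length] in prefixes:
            return digits[:length]
    return None


hit = longest_prefix_hit("+8821344556677", KNOWN_IPRN_PREFIXES)
if hit:
    print(f"outbound call held: destination matches IPRN prefix {hit}")
```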
3. AI-powered voice biometrics for caller verification
Another promising line of defense against AI-generated fraud is voice biometrics. Some financial institutions and telecom providers are deploying voice authentication systems that analyze more than 1,000 unique vocal characteristics to verify a caller’s identity. Unlike basic voice recognition, these advanced systems can detect when an AI-generated voice is being used—effectively preventing fraudsters from impersonating legitimate customers.
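At its core, the verification step scores a caller’s vocal features against an enrolled voiceprint. The sketch below uses averaged MFCCs and cosine similarity purely as a stand-in for that comparison; commercial systems rely on learned speaker embeddings plus dedicated liveness and deepfake detection, far beyond this simple pipeline.

```python
# A bare-bones stand-in for the comparison step in speaker verification.
# MFCC averaging is a crude proxy for the learned embeddings real systems use.
import numpy as np
import librosa


def voiceprint(wav_path: str) -> np.ndarray:
    """Summarize a recording as a fixed-length vector of averaged MFCCs."""
    audio, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)


def same_speaker(enrolled: np.ndarray, candidate: np.ndarray,
                 threshold: float = 0.95) -> bool:
    """Accept the caller if cosine similarity clears a tuned threshold."""
    cosine = float(np.dot(enrolled, candidate) /
                   (np.linalg.norm(enrolled) * np.linalg.norm(candidate)))
    return cosine >= threshold


# Hypothetical usage with an enrollment recording and a live call sample:
# enrolled = voiceprint("customer_enrollment.wav")
# claimed = voiceprint("incoming_call.wav")
# print(same_speaker(enrolled, claimed))
```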
REGULATORS ARE CRACKING DOWN, BUT IS IT ENOUGH?
It’s one thing to tighten regulations and stiffen penalties—something many government agencies around the world are already doing—but effectively enforcing those regulations is a different ball game altogether. In the U.S., for example, the Federal Communications Commission (FCC) has ramped up penalties for illegal robocalls and is pushing carriers to adopt stricter AI-powered defenses. The Telephone Robocall Abuse Criminal Enforcement and Deterrence (TRACED) Act, signed into law in 2019, gives regulators more power to fine scammers and mandates stronger anti-spoofing measures.
Internationally, regulators in the U.K., Canada, and Australia are working on similar AI-driven frameworks to protect consumers from rising fraud. The European Union has introduced stricter data privacy laws, limiting how AI can be used to harvest personal data for scam operations.
However, enforcement struggles to keep pace with the speed of AI innovation. Scammers operate globally, often beyond the jurisdiction of any single regulator. Many fraud rings are based in countries where legal action is difficult, if not impossible.
Take, for example, countries like Myanmar, Cambodia, and Laos, where organized crime groups have established cyber scam centers that use AI-powered deepfakes to deceive victims worldwide. Operators in these scam centers frequently relocate or shift tactics to stay ahead of law enforcement. They also operate in regions with complex jurisdictional challenges, further complicating enforcement.
Scammers thrive on fragmentation and exploit vulnerabilities—whether that’s a lack of industry coordination or differing regulatory approaches across borders. These regulatory bottlenecks underscore why telecom providers must take a more proactive role in combating AI-driven fraud, rather than relying solely on traditional frameworks that, while helpful, are not always sufficient. That’s where the GSMA Call Check technology, developed by German telecom solutions provider Oculeus, could play a vital role.
“The GSMA’s Call Check services provide a simple, fast and low-cost mechanism for the exchange of information about scam phone calls as they occur. This technology is rooted in the cloud, making it future-proof and global in a way that other methods being contemplated by some nations will never be,” Commsrisk’s Priezkalns says.
FAR FROM OVER
Without question, the battle against AI-powered scams is far from over. As former FCC Chair Jessica Rosenworcel noted last year: “We know that AI technologies will make it cheap and easy to flood our networks with deepfakes used to mislead and betray trust.”
The good news is that the telecom industry isn’t backing down. While scammers are using AI to deceive unsuspecting individuals, the industry is also leveraging AI to protect customers and their sensitive data—through automated call screening, real-time fraud detection, and enhanced authentication measures.
But according to Priezkalns, technology alone isn’t enough to protect people. For him, deterrence—driven by the legal prosecution of scammers—is just as important as technological solutions. “It needs to be used in conjunction with law enforcement agencies that proactively arrest scammers and legal systems that ensure scammers are punished for their crimes,” he says.
One thing is certain: Scammers and scams aren’t going away anytime soon. As Priezkalns points out, people will continue to fall for scams even with high-intensity public awareness training. But as AI continues to evolve, the telecom industry must stay a step ahead—ensuring it becomes a force for protection, not deception. And with tools like the GSMA’s Call Check, that future is within reach.