AI vs AI: How New Technologies Are Combating Sophisticated Crypto Scams
The burgeoning digital frontier of cryptocurrency has unfortunately become fertile ground for illicit activity, with scams accounting for a significant share of crime in the sector. The losses are substantial: findings from the Federal Bureau of Investigation (FBI) put U.S. losses to crypto scams at an estimated $9.3 billion last year.
The advent of artificial intelligence (AI) has introduced a new layer of complexity, exacerbating the problem. Blockchain analytics firm TRM Labs reported a staggering 456% increase in AI-facilitated scams in 2024 compared to previous years.
As generative AI (GenAI) technologies advance, malicious actors are leveraging sophisticated chatbots, deepfake videos, voice cloning, and automated networks to deploy scam tokens at an unprecedented scale. This evolution has transformed crypto fraud from a predominantly human-driven operation into an algorithmic, rapid, adaptive, and increasingly convincing threat.
The Alarming Velocity of AI-Driven Scams
The speed and scale at which modern crypto scams operate are unparalleled, largely due to the capabilities of generative AI. According to Ari Redbord, global head of policy and government affairs at TRM Labs, generative models are enabling the simultaneous launch of thousands of scams. He highlighted that the criminal ecosystem is now “smarter, faster, and infinitely scalable.”
Redbord elaborated on how GenAI models can tailor their approach by tuning into a victim’s language, location, and digital footprint. In ransomware attacks, for instance, AI is employed to identify victims most likely to pay, draft compelling ransom demands, and automate negotiation chats. Similarly, social engineering tactics now incorporate deepfake voices and videos for convincing “executive impersonation” and “family emergency” scams. On-chain scams leverage AI tools to write scripts that can move funds across hundreds of wallets in mere seconds, facilitating money laundering at a pace far beyond human capability.
AI as a Defense: Countering Fraud with Advanced Technology
In response to the escalating threat of AI-powered fraud, the cryptocurrency industry is deploying its own AI-powered defenses. Blockchain analytics firms, cybersecurity companies, exchanges, and academic researchers are developing sophisticated machine learning systems designed to detect, flag, and mitigate fraudulent activities before significant losses occur.
TRM Labs, for example, integrates AI into every layer of its blockchain intelligence platform. The firm utilizes machine learning to process trillions of data points across more than 40 blockchain networks. This extensive analysis allows TRM Labs to map wallet networks, identify typical scam typologies, and pinpoint anomalous behavior indicative of illicit activity. Redbord noted that these systems do not merely detect patterns; they learn and adapt as data changes, responding to the dynamic nature of crypto markets. This adaptive capability enables TRM Labs to uncover complex fraud operations, such as thousands of small, seemingly unrelated transactions forming a larger scam or money laundering network, which human investigators might otherwise overlook.
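TRM Labs has not published the internals of its models, but the underlying idea of flagging wallets whose behavior deviates sharply from the bulk of on-chain activity can be illustrated with a small unsupervised example. The sketch below uses scikit-learn's IsolationForest over a handful of hypothetical per-wallet features; the feature names, values, and contamination setting are assumptions for illustration, not a description of TRM's system.

```python
# Minimal illustration (not TRM Labs' actual system): flag anomalous wallets
# with an unsupervised model over hypothetical per-wallet features.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical features per wallet:
# [tx_count_24h, avg_tx_value, distinct_counterparties, fraction_sent_to_new_wallets]
wallet_features = np.array([
    [12,  0.4,   9, 0.10],   # typical retail activity
    [15,  0.6,  11, 0.15],
    [900, 0.05, 650, 0.97],  # fan-out pattern: many small sends to fresh wallets
    [14,  0.5,  10, 0.12],
])

# Fit an Isolation Forest; negative decision scores indicate behavior
# unlike the bulk of observed wallets.
model = IsolationForest(contamination=0.25, random_state=42).fit(wallet_features)
scores = model.decision_function(wallet_features)

for features, score in zip(wallet_features, scores):
    status = "REVIEW" if score < 0 else "ok"
    print(f"{status:6s} score={score:+.3f} features={features.tolist()}")
```

In practice such models run over far richer graph features (counterparty clusters, cross-chain hops, timing), but the flow of scoring every wallet and routing outliers to human review is the same.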
Sardine, an AI risk platform founded in 2020 amidst the rise of prominent crypto scams, employs a multi-layered AI-fraud detection strategy. Alex Kushnir, Sardine’s head of commercial development, explained that their approach involves:
- Capturing deep signals from every user session on financial platforms, including device attributes, signs of app tampering, and user behavioral patterns.
- Integrating data from a wide network of trusted data providers.
- Leveraging consortium data, through which companies share information about known bad actors with one another, a practice that has proven crucial to collective fraud prevention.
Sardine utilizes a real-time risk engine to act on these indicators as scams unfold. Kushnir also noted that while agentic AI and large language models (LLMs) are primarily used for automation and efficiency—allowing users to define and deploy fraud detection rules more easily—machine learning remains the benchmark for predicting risk.
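Sardine's engine is proprietary, but the shape of such a layered check can be sketched in a few lines. The example below is a minimal illustration, assuming hypothetical signal names, weights, thresholds, and a toy consortium blocklist, of how session signals could feed a real-time allow / step-up / block decision on a withdrawal.

```python
# Hypothetical sketch (not Sardine's actual engine): combine device/session
# signals and a consortium blocklist lookup into a simple risk decision.
from dataclasses import dataclass

@dataclass
class SessionSignals:
    device_rooted: bool         # possible sign of app tampering
    new_device: bool            # first time this device is seen for the account
    typing_speed_zscore: float  # behavioral biometric vs. the user's own baseline
    destination_address: str

# Addresses reported by partner firms (illustrative placeholder values).
CONSORTIUM_BLOCKLIST = {"0xscamaddressreportedbypartners"}

def score_session(s: SessionSignals) -> float:
    score = 0.0
    if s.device_rooted:
        score += 0.4
    if s.new_device:
        score += 0.2
    if abs(s.typing_speed_zscore) > 3:   # behavior far outside the user's norm
        score += 0.2
    if s.destination_address in CONSORTIUM_BLOCKLIST:
        score += 0.6
    return min(score, 1.0)

def decide(s: SessionSignals) -> str:
    risk = score_session(s)
    if risk >= 0.8:
        return "block"
    if risk >= 0.4:
        return "step_up_verification"
    return "allow"

print(decide(SessionSignals(False, True, 0.5, "0xnormaldestination")))               # allow
print(decide(SessionSignals(True, True, 4.2, "0xscamaddressreportedbypartners")))    # block
```

A real deployment would replace the hand-set weights with a trained model, which is precisely the point Kushnir makes about machine learning remaining the benchmark for predicting risk.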
Real-World Applications: AI vs. AI in Action
The effectiveness of these AI-powered defenses is already being demonstrated through various use cases. Matt Vega, Sardine’s chief of staff, highlighted that once a pattern is detected, their AI can perform a deep analysis to recommend preventative measures against an attack vector in seconds—a task that would typically take a human a full day to complete. Sardine collaborates with leading crypto exchanges to flag unusual user behavior, running transactions through its decision platform where AI analysis determines transaction outcomes and provides exchanges with early warnings.
TRM Labs also shared an instance where they encountered a live deepfake during a video call with a suspected financial grooming scammer. The unnatural appearance of the individual’s hairline raised suspicion, and AI detection tools were used to confirm that the image was likely AI-generated. Despite this successful identification, this particular scam typology and similar ones have collectively defrauded unsuspecting victims of approximately $60 million.
Cybersecurity company Kidas is another player employing AI in the fight against scams. Ron Kerbs, founder and CEO of Kidas, explained that their proprietary models can analyze content, behavior, and audio-visual inconsistencies in real-time to identify deepfakes and LLM-crafted phishing attempts at the point of interaction. This capability facilitates instant risk scoring and real-time intervention, which is essential for countering automated, scaled fraud operations. Kerbs cited a recent success where Kidas’ tool intercepted two distinct crypto-scam attempts on Discord, showcasing its real-time behavioral analytics and preventing potential account compromises and financial losses.
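Kidas has not disclosed its models, and genuine deepfake detection requires far more than keyword rules, but the idea of instant risk scoring at the point of interaction can be sketched with a toy heuristic. Everything below, including the marker list, the weights, and the threshold, is an assumption for illustration only.

```python
# Hypothetical heuristic (not Kidas' proprietary models): score a chat message
# for common crypto-phishing markers at the point of interaction.
import re

PHISHING_MARKERS = [
    (r"seed phrase|recovery phrase|private key", 0.5),     # requests for wallet secrets
    (r"urgent|act now|within \d+ (minutes|hours)", 0.2),   # manufactured urgency
    (r"free (airdrop|mint|giveaway)", 0.2),
    (r"https?://\S*claim\S*", 0.3),                        # suspicious "claim" links
]

def message_risk(text: str) -> float:
    lowered = text.lower()
    return min(sum(w for pattern, w in PHISHING_MARKERS if re.search(pattern, lowered)), 1.0)

msg = "URGENT: verify your wallet within 10 minutes or lose your free airdrop https://example.com/claim-now"
risk = message_risk(msg)
print(f"risk={risk:.2f}", "-> intervene" if risk >= 0.5 else "-> allow")
```

The value of a production system lies in doing this kind of scoring on content, behavior, and audio-visual signals together, fast enough to intervene before the victim responds.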
Navigating the Future: Protecting Against Evolving AI-Powered Scams
While AI-driven tools are proving effective in detecting and preventing sophisticated scams, the threat landscape is expected to continue evolving. Kerbs anticipates that AI will lower the entry barrier for complex crimes, making scams highly scalable and personalized, thus increasing their prevalence. He suggests that semi-autonomous malicious AI agents could soon orchestrate entire attack campaigns with minimal human oversight, potentially employing untraceable voice-to-voice deepfake impersonations in live calls.
Despite these concerning projections, there are practical steps users can take to protect themselves. Vega advises vigilance against website spoofing, where attackers create fake sites that users might inadvertently visit. He recommends looking for subtle discrepancies, such as homoglyphs (look-alike characters, for example Greek letters substituted for Latin ones) used to mimic legitimate brand names in URLs, and exercising caution with sponsored links. Users should always inspect URLs carefully before clicking.
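As a concrete illustration of the homoglyph problem, the sketch below checks a domain for non-ASCII look-alike characters and for near-miss spellings of a small, assumed brand list. It is an illustrative example rather than a production-grade phishing filter.

```python
# Illustrative check (hypothetical brand list): flag domains that contain
# non-ASCII look-alike characters or that closely resemble a known brand.
import unicodedata
from difflib import SequenceMatcher

KNOWN_BRANDS = {"coinbase.com", "binance.com", "kraken.com"}  # assumption for the example

def suspicious(domain: str) -> list[str]:
    reasons = []
    # 1) Any non-ASCII character in a brand-like domain is a red flag
    #    (e.g. Greek omicron or Cyrillic letters standing in for Latin ones).
    non_ascii = [c for c in domain if ord(c) > 127]
    if non_ascii:
        names = ", ".join(unicodedata.name(c, "UNKNOWN") for c in non_ascii)
        reasons.append(f"non-ASCII characters: {names}")
    # 2) Near-miss spellings of a known brand (typosquatting).
    folded = unicodedata.normalize("NFKD", domain).encode("ascii", "ignore").decode()
    for brand in KNOWN_BRANDS:
        ratio = SequenceMatcher(None, folded, brand).ratio()
        if 0.85 <= ratio < 1.0:
            reasons.append(f"looks like {brand} (similarity {ratio:.2f})")
    return reasons

print(suspicious("cοinbase.com"))   # Greek omicron instead of Latin 'o': flagged
print(suspicious("coinbase.com"))   # exact legitimate domain: no reasons
```

Browsers and registrars apply far more sophisticated versions of these checks, but the example shows why a visually identical URL can still be caught programmatically.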
Beyond individual vigilance, companies like Sardine and TRM Labs are actively collaborating with regulators to establish guardrails that leverage AI to mitigate the risks posed by AI-powered scams. Redbord emphasized that they are developing systems that equip law enforcement and compliance professionals with the same speed, scale, and reach that criminals currently possess. This includes capabilities for detecting real-time anomalies and identifying coordinated cross-chain money laundering schemes. Ultimately, AI is enabling a shift from reactive risk management to a more predictive approach.
Conclusion
The battle against sophisticated crypto scams has entered a new era, characterized by a dynamic “AI vs. AI” confrontation. As malicious actors increasingly harness the power of artificial intelligence to create convincing and scalable fraud schemes, the industry is responding with equally advanced AI-driven defenses. From real-time anomaly detection and deepfake identification to predictive risk management and collaborative data sharing, AI is becoming an indispensable tool in safeguarding the integrity of the cryptocurrency ecosystem. While the challenges are significant and evolving, the continuous innovation in AI security offers a critical line of defense against the ever-present threat of digital deception.