The AI Data Broker Economy: How Underground Organizations Capitalize on Information in 2025
Welcome to the Wild West of AI data trading, where your personal information is the new gold rush.

According to Gartner’s 2024 AI Data Breach Report, 73% of enterprises have experienced at least one AI-related security incident in the past 12 months, with an average cost of $4.8 million per breach. Somewhere in the digital shadows, a criminal organization is processing someone’s morning coffee order data alongside their browsing history, crafting the perfect social engineering attack. Welcome to the age of AI-powered data brokerage, where the line between legitimate business intelligence and criminal enterprise has become as blurry and commonplace as a deepfake video.
The Perfect Storm: When Criminal Activity Meets AI

AI programs scrape as much data as they can from public sources to train their algorithms, and this insatiable hunger for information has created what experts are calling the “AI data renaissance.” But here’s the kicker: a multi-billion-dollar data-broker industry already assembles, analyzes, and sells data from mobile apps, cookies, and other sources to create detailed dossiers on millions of Americans.
Large language models like GPT-4 are estimated to be trained on roughly 13 trillion tokens, about 10 trillion words. To put that in perspective, that’s like reading all of Wikipedia a thousand times while simultaneously consuming every social media post, news article, and digital breadcrumb you’ve ever left online. Legitimate, commercial-grade AI companies are paying eight figures annually for historical and ongoing access to data, but what happens when criminal organizations decide they want a piece of the action?
Enter the Digital Mafia: Criminal AI Enterprise

Remember when hackers were lone wolves typing in dark basements? Those days are long gone. Today’s cybercriminal organizations are international-scale companies, complete with R&D departments, customer service (yes, really), and subscription models. The emergence of products like DarkGPT, WormGPT, FraudGPT, DarkBERT, and DarkBARD represents a new sector of AI-powered criminal enterprise (Source: Justice.gov, March 2025).
By mid-2023, law-enforcement bulletins had flagged names like WormGPT and FraudGPT as turnkey kits for crime. These aren’t just fancy chatbots with the safety guardrails removed; many of these “sophisticated criminal AI platforms” turn out to be elaborate scams or what security researchers now call “jailbreak-as-a-service.” These services offer:
• Indirect connections to commercially available LLMs (usually ChatGPT)
• False privacy guarantees: services that claim anonymity while often logging everything for resale to foreign adversaries
• Jailbroken prompts with low success rates: because OpenAI and other providers actively patch jailbreaks, these “guaranteed” hacks become useless within days to weeks
• Scam-on-scam action: a growing number of fraudulent listings that tout supposed capabilities without showing any demos or evidence
• Legal liability: users remain subject to law-enforcement tracking, regardless of what these services advertise

Many “Dark LLMs” try to trick potential customers into believing they are brand-new criminal models, and may go out of their way to provide demo videos on niche forums. Inspect the actual network traffic, however, and the truth is obvious: many of these products are nothing more than a user interface sending a patched jailbreaking prompt to OpenAI’s API.
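The wrapper architecture described above can be sketched in a few lines. Everything here is an illustrative assumption, not any real product’s code: the prefix is a placeholder (no actual jailbreak text), and the payload shape simply follows OpenAI’s public chat-completions format.

```python
# Illustrative sketch of the "dark LLM" wrapper pattern researchers describe:
# a static prompt prefix bolted onto the user's request, then relayed to a
# commercial API. STATIC_PREFIX is a harmless placeholder, not a jailbreak.
import json
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"  # the real service behind the "dark" UI
STATIC_PREFIX = "[static jailbreak prompt, patched by providers within days]"

def build_payload(user_request: str, model: str = "gpt-4") -> dict:
    """Assemble the forwarded request: the entire 'product' is this prefix."""
    return {
        "model": model,
        "messages": [
            {"role": "user", "content": f"{STATIC_PREFIX}\n{user_request}"},
        ],
    }

def forward(user_request: str, stolen_api_key: str) -> bytes:
    """Relay through the commercial API, typically via VPN with a stolen key.
    Note: the provider still logs everything server-side."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(user_request)).encode(),
        headers={
            "Authorization": f"Bearer {stolen_api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```

The point of the sketch: strip away the branded UI and there is no model at all, just a thin proxy whose one “feature” stops working the moment the upstream provider patches the prefix.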
The Underground Economy: How It Works

The Supply Chain of Stolen Information
Initial Access Brokers (IABs) are intermediaries who specialize in infiltrating corporate networks and then selling that access to ransomware groups. Increasingly, they use AI to clean, validate, and classify stolen information. Think of it as Facebook Marketplace for cybercriminals, complete with reviews and endorsements.

Information can sell for anywhere from hundreds to thousands of dollars, depending on the target’s size and value. When it comes to online fraud, what used to require a team of experienced hackers can now be executed by script kiddies with access to AI-powered tools and a modest subscription fee.
The Jailbreak Economy

In 2023, Kaspersky Digital Footprint Intelligence found 249 dark-web offers to sell or distribute jailbreak prompt sets. These “jailbreaks” are carefully crafted prompts designed to make commercially available AI systems misbehave. It’s like teaching a well-behaved AI to have a criminal alter ego.
Our hypothesis is that most of these operate as wrapper services, redirecting requests to legitimate ChatGPT, Mistral AI, or other off-the-shelf AI tools by combining stolen accounts, a VPN connection, and “jailbroken” user prompts.
The 2025 Reality Check: Numbers Don’t Lie


The statistics from this year’s threat reports paint a sobering picture:
• Nearly 70% of organizations view the rapid pace of AI development, particularly in generative AI, as the leading security concern (Source: Thales Data Threat Report, May 2025)
• 82% of financial institutions experienced attempted AI prompt injection attacks (Source: Treasury.gov, March 2025)
• 76% of penalties involved inadequate security measures around sensitive data used for AI training or inadequate controls on AI outputs (Source: Thomson Reuters Global Regulatory Intelligence Q2 Report, March 2025)
The Attack Vectors Targeting Generative AI

Data Poisoning: Attackers inject malicious content into web-scale training datasets by purchasing expired domains from training data URLs or submitting adversarial examples to data collection platforms, enabling backdoor triggers or targeted model misbehavior.
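A widely discussed mitigation for the expired-domain variant is snapshot hashing: record a cryptographic digest of each page when the URL list is assembled, then drop any page whose content has changed by training time. A minimal sketch, with hypothetical function names:

```python
# Defensive sketch (an assumption of this article, not a specific pipeline's
# code): verify crawl-time content hashes before training, so a page whose
# domain was re-registered and repopulated with poisoned content is dropped.
import hashlib

def content_digest(page_bytes: bytes) -> str:
    """SHA-256 digest recorded when the URL was first collected."""
    return hashlib.sha256(page_bytes).hexdigest()

def filter_poisoned(pages: dict[str, bytes], recorded: dict[str, str]) -> list[str]:
    """Keep only URLs whose current content still matches the crawl-time hash."""
    return [
        url
        for url, body in pages.items()
        if recorded.get(url) == content_digest(body)
    ]
```

The design choice is deliberate: a hash mismatch cannot prove malice (pages change legitimately), but for training data it is usually safer to drop a changed page than to ingest it.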
Model Poisoning: Adversaries distribute pre-trained foundation models containing hidden backdoors that persist even after downstream fine-tuning, allowing attackers to trigger malicious behavior in deployed applications that use these poisoned models.
Direct Prompting Attack: Attackers use techniques like role-playing (“Do Anything Now”), prefix injection, character transformations (ROT13, base64), and optimization-based methods to bypass safety guardrails and force models to generate harmful, restricted, or unintended outputs.
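As a defensive illustration (the author’s sketch, not any product’s implementation), the character-transformation trick can be blunted by normalizing likely encodings of an incoming prompt before safety screening, so a base64- or ROT13-wrapped instruction is checked in plain text too:

```python
# Defensive sketch: expand a prompt into plausible decoded variants so that
# moderation sees through simple ROT13/base64 obfuscation. The banned-phrase
# list and 16-character threshold are illustrative assumptions.
import base64
import codecs
import re

def candidate_decodings(prompt: str) -> list[str]:
    """Return the prompt plus plausible decoded variants for moderation."""
    variants = [prompt]
    variants.append(codecs.decode(prompt, "rot13"))  # ROT13 is its own inverse
    # Try base64 on long alphanumeric runs that look like encoded payloads.
    for token in re.findall(r"[A-Za-z0-9+/=]{16,}", prompt):
        try:
            variants.append(base64.b64decode(token, validate=True).decode("utf-8"))
        except (ValueError, UnicodeDecodeError):
            continue  # not valid base64 text; ignore this token
    return variants

def flag_prompt(prompt: str, banned_phrases: list[str]) -> bool:
    """True if any decoding of the prompt contains a banned phrase."""
    return any(
        phrase.lower() in variant.lower()
        for variant in candidate_decodings(prompt)
        for phrase in banned_phrases
    )
```

This only catches the two encodings named above; in practice attackers rotate through many transformations, which is why normalization is a layer, not a guarantee.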
Fine-tuning Circumvention: Malicious actors remove safety fine-tuning from aligned language models through targeted fine-tuning processes, restoring harmful capabilities that were previously suppressed during safety training.
Leaking Data From User Information: Attackers exploit GenAI systems to extract sensitive information by prompting models to repeat private data from context (like RAG databases), reveal system prompts, or incrementally extract copyrighted content using techniques like seeded text completion.
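One common countermeasure to the context-repetition leak described above can be sketched as an output filter: before returning a RAG-backed answer, scan it for long verbatim spans copied from the private context. The 8-word window and the whitespace tokenization are simplifying assumptions, not a production design.

```python
# Defensive sketch (author's illustration): flag responses that repeat a long
# verbatim run of words from the private retrieval context, the signature of
# a context-repetition leak.
def verbatim_leak(output: str, private_context: str, window: int = 8) -> bool:
    """True if `output` repeats any `window`-word run from the context.
    Naive whitespace tokenization; real systems would normalize punctuation."""
    out_words = output.lower().split()
    ctx = " ".join(private_context.lower().split())
    for i in range(len(out_words) - window + 1):
        span = " ".join(out_words[i : i + window])
        if span in ctx:
            return True
    return False
```

A filter like this catches only exact copying; paraphrased or incrementally extracted leaks require semantic comparison, which is why it complements rather than replaces access controls on the context itself.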
The Dark Side of Innovation

“Dark LLMs” have been advertised as an all-in-one solution for cyber-criminals, promising “exclusive tools, features and capabilities” and “no boundaries.” But here’s where it gets interesting: the numerous new tools this year promising a darker, “unchained” alternative to existing large language models are actually carefully crafted applications optimized to benefit specific shady criminal enterprises.

The reality? When it comes to “dark” generative AI tools designed to help criminals amass victims more quickly and easily, let the buyer beware. Many of these tools turned out to be sophisticated, multi-level scams. Criminals scamming criminals: it’s almost poetic.
Additional Threat: The Regulatory Wild West

The next frontier for criminals is autonomy. Just as corporations strive to automate tasks, data brokers are developing AI agents capable of operating completely independently. According to Grand View Research’s 2025 Data Broker Market Report, the data broker market was estimated at USD 277.97 billion in 2024 and is projected to reach USD 512.45 billion by 2033, a trajectory that creates strong financial incentives to keep regulatory privacy protections minimal. While legitimate companies pay premium prices for clean, well-sourced data, criminal organizations are building parallel economies around stolen information.

AI significantly lowers the barriers to finding and collecting personal data, making it easier for criminals to exploit. What once required sophisticated technical skills now requires little more than knowing how to copy and paste a jailbreak prompt.
Welcome to the Arms Race

The arms race won’t slow. Each side improves its prompts, trains sharper models, and hunts bigger datasets. The difference will hinge on discipline. The criminals have AI; we have AI. They have automation; we have automation. A strong defense isn’t about owning the most advanced technology: it’s about who can adapt faster and execute a coherent defense strategy.
This AI evolution transforms the shape and scale of criminal operations, making them faster, more efficient, and alarmingly difficult to detect or counteract.
But here’s the thing that should keep security researchers optimistic: the current crop of rogue models is sloppy. Most rely on outdated weights, stolen keys, or static prompts. Criminal innovation is often driven by opportunity rather than engineering excellence. They’re fast and adaptable, but they’re also prone to making mistakes that well-prepared defenders can exploit.
Your Move, Security Professional

As we navigate this brave new world of AI-powered threats, remember that every technology that empowers criminals also empowers defenders. The organizations that survive and thrive will be those that embrace AI not as a magic bullet, but as another tool in an ever-expanding arsenal.
Stay curious, stay vigilant, and remember: in the war between security researchers and criminal AI organizations, while criminals optimize for quick wins, the professionals aim for resilience, and that’s why defense always wins.
Are you a security professional wanting to dive deeper into the technical details? Check out the NIST AI 100-2e2025 “Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations” for a comprehensive breakdown of AI-centered attack vectors and defense strategies. If knowledge is half the battle, the remaining half is implementation.
Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the National Science Foundation.
Ready to fortify your cybersecurity posture? Consider enrolling in our cybersecurity certification programs at Claremont Graduate University, Center for Information Systems and Technology.