The State of Criminal AI: Market Consolidation and Operational Reality
They Don’t Build the Gun, They Sell the Bullets
An Update on the State of Criminal AI
The 2025 criminal AI battlefield shows how thoroughly this rapidly growing technology has been weaponized. From criminal large language models (LLMs) and AI-powered malware to deepfakes at scale, threats are advancing on every front. Looking back on the year is crucial for hardening defenses and preparing for what’s next.
By Vincenzo Ciancaglini and David Sancho (Forward-Looking Threat Research Team, TrendAI™ Research)
Key takeaways:
- In 2025, criminal AI schemes that involved AI-powered malware, criminal large language models (LLMs), and deepfakes proliferated in the landscape, highlighting their increased capability, accessibility, and scope. This report provides a panoramic view of the cybercriminal landscape through the lens of how AI was used.
- Criminals continue to rely on jailbreaking commercial LLMs rather than building their own, using prompt engineering and fine-tuning to bypass safeguards. The main challenge ahead is whether AI providers can outpace these evolving jailbreaking techniques, as the criminal ecosystem remains dependent on exploiting legitimate models.
- AI-driven, on-the-fly code generation in malware, while novel and potentially exciting for cybercriminals, is limited by API key revocation and output unpredictability. These practical hurdles make the technique less reliable than traditional methods and unlikely to become mainstream.
- Trust needs to be revisited in the era of deepfakes as synthetic media now enables a range of attacks from personal exploitation to complex corporate and financial fraud.
When TrendAI™ Research published its first assessment of AI in the cybercriminal underground in 2023, the question was whether generative AI would live up to its threatening potential. At the time of TrendAI™ Research’s second report in early 2024, we observed that scattered experiments had begun to coalesce into recognizable patterns. Our third update in mid-2024 documented a surge in both capability and adoption. The question now is no longer whether criminals will weaponize AI at scale; it’s how effectively they already are doing it.
This fourth installment from TrendAI™ Research on the criminal misuse of AI marks an inflection point where it has moved decisively from experimentation to industrialization: Cybercriminals don’t build the gun, they now sell the bullets. What began as jailbroken chatbots generating phishing emails and crude deepfakes has matured into a resilient underground ecosystem that increasingly generates real-world harm at an industrial scale.
Criminal actors no longer need nation-state resources to wield AI as a weapon; they rent it, hijack it, or simply prompt it. The barrier to entry has collapsed, the tooling has professionalized, and the attack surface has expanded across every domain. TrendAI™ Research does not expect criminal AI to explode in 2026, but it will keep getting better.
This report examines the state of that ecosystem at the close of 2025, drawing on technical analysis of underground services, malware samples, and documented attack campaigns. We investigate three parallel developments that define the current threat landscape:
- The consolidation of the “criminal LLM” market around durable jailbreak-as-a-service providers rather than genuine independent models: Despite recurring claims of “uncensored” alternatives, the underground continues to parasitically exploit commercial AI platforms through increasingly sophisticated prompt engineering and API abuse.
- The arrival of the first malware families that dynamically query or embed LLMs to generate malicious code on the fly: While still nascent and operationally limited, these samples represent a genuine technical evolution in adaptive malware design.
- The explosive democratization of deepfake technology, from free “nudifying” apps and virtual-kidnapping scams to state-sponsored corporate infiltration: Synthetic media has moved from novelty to commodity, with profound implications for identity verification, corporate security, and individual safety.
Taken together, these trends reveal a critical inflection point: While defenders currently maintain the upper hand, backed by AI-powered security information and event management (SIEM) platforms and sophisticated agentic hunting tools, the pace of AI innovation in the criminal underground threatens to tip this balance.
The real danger isn’t that attackers have already won, but that once these AI-enabled techniques escape into the broader cybercrime ecosystem, they never disappear — they only become more refined and accessible. The advantage still sits with defenders today, but without proactive investment in AI-driven defense capabilities and a willingness to match the underground’s pace of innovation, this window won’t remain open indefinitely.
The criminal LLM market
The underground market for criminal LLM services, whether jailbreaks or alleged self-hosted LLMs, continues to thrive, showing no signs of slowing down. While we’re still seeing new names emerge across various underground forums, the landscape has notably matured. Unlike the chaotic proliferation of short-lived offerings we witnessed in previous years, the market has begun to consolidate around a handful of established players that demonstrate actual staying power.
WormGPT: A brand hijacked by opportunists
The WormGPT name has been used and reused in the criminal AI space. Since the original service launched in March 2023, countless copycat operations have appropriated the brand, capitalizing on its notoriety. The likelihood that any of these newer services share DNA with the original WormGPT is vanishingly small.
Our research uncovered multiple Telegram channels and several chatbots on platforms like FlowGPT all trading on the WormGPT name. This proliferation is hardly surprising: When a brand achieves this level of recognition in underground circles, opportunistic actors inevitably emerge to exploit that popularity for their own gain.
One particularly ambitious WormGPT-adjacent operation has even established a web presence at wrmgpt[.]com, where its operators actively solicit investment for what they claim will be a “next-generation unconstrained LLM.” The site boldly asserts that over US$3 million has been raised from 147 investors. However, this figure warrants significant scrutiny, and we’re conducting further analysis to determine whether these claims hold water or merely present yet another layer of deception.
It’s worth noting that not every service sporting the -GPT suffix actually delivers what traditional criminal LLM offerings promise. Take DevilGPT, for instance: Despite its provocative branding and active Telegram presence, it primarily aggregates various AI-related tools rather than providing a genuinely unlocked LLM. It’s a reminder that in this space, branding often outpaces substance.
At the end of November 2025, a number of cybersecurity companies also reported on another criminal LLM on offer, KawaiiGPT. This API wrapper is freely available on GitHub and offers functionality similar to that of WormGPT and the rest. As in other cases we have seen, it relies on commercial AI models, such as DeepSeek, Gemini, and Kimi-K2.
Xanthorox: A case study in misdirection
When it comes to services claiming to offer truly unbounded LLMs, Xanthorox represents one of the more intriguing recent developments. This relatively new player claims to provide an uncensored LLM running on local infrastructure, with pricing set at a seemingly accessible US$300 per month.
The backstory adds an interesting wrinkle: According to an investigative post, the operator behind Xanthorox emerged from a failed partnership with one of the many groups attempting to capitalize on the WormGPT brand. The service offers two primary products: a chatbot interface, which we’ve analyzed extensively in a separate publication, and a coding agent designed to run on the customer’s own hardware.
Our in-depth analysis revealed a significant gap between Xanthorox’s marketing claims and operational reality. Despite emphatic assertions about local infrastructure hosting, the evidence strongly suggests the service is anything but locally hosted. Instead, our findings indicate that Xanthorox opportunistically leverages one or more mainstream LLMs from major providers like Google, routing requests through obfuscated channels to mask the true source.
It should be noted that several uncensored LLMs that can run on local infrastructure are offered on Hugging Face. These “unaligned” LLMs can be downloaded and run locally, but none is as powerful as the fine-tuned Gemini model that a service like Xanthorox appears to be built on.
Malicious agentic AI at large
Anthropic's recent security disclosure reveals a sophisticated espionage campaign where alleged state-sponsored threat actors leveraged Claude, Anthropic’s family of LLMs, to orchestrate multistage cyberattacks. The report details how an advanced persistent threat (APT) group with suspected ties to China systematically exploited Claude to conduct reconnaissance, craft spear-phishing campaigns, and develop malicious scripts targeting government entities and critical infrastructure.
This operation, which we commented on in our article, demonstrates the growing appetite among malicious actors for uncensored, unrestricted AI capabilities that can facilitate complex attack chains without ethical guardrails. However, it’s crucial to distinguish between APT operations and conventional cybercrime. APT campaigns are characterized by their targeted nature, substantial resources, and long-term strategic objectives — often backed by nation-state funding and focused on specific high-value targets. In contrast, typical criminal activities operate with broader, opportunistic motives and limited resources.
While this incident certainly signals demand for “criminal LLMs” among sophisticated threat actors, it doesn’t necessarily indicate that everyday cybercriminals will exhibit the same requirements or possess the technical sophistication to exploit AI in similar ways. The gap between state-sponsored cyberespionage and street-level cybercrime remains significant, even as AI tools become more accessible.
Takeaway: The evolution of criminal AI services
The criminal LLM marketplace has achieved a degree of stabilization: We’re no longer seeing the weekly parade of new brands that characterized earlier periods. Yet the fundamental approach to delivering “uncensored” capabilities remains largely unchanged, and the dominant model continues to be jailbreak-as-a-service.
Notably absent from the current landscape are genuinely homegrown models running on dedicated criminal infrastructure. Even in cases like Xanthorox, where we had the opportunity for detailed technical examination, the pattern holds: Threat actors are leveraging sophisticated jailbreaking techniques, custom system prompts, and targeted fine-tuning to unlock and uncensor commercial models from providers like Google Gemini and OpenAI’s ChatGPT.
This approach makes economic and technical sense. Building and training a competitive LLM from scratch requires enormous resources: computational power, training data, and specialized expertise. It’s far more efficient for criminal operators to exploit the billions of dollars in research and development (R&D) that legitimate companies have already invested, using clever prompt engineering and fine-tuning to circumvent safety guardrails.
The question moving forward isn’t whether we’ll see truly independent criminal LLMs emerge, but rather how long major AI providers can stay ahead of increasingly sophisticated jailbreaking techniques. As the cat-and-mouse game continues, the criminal AI ecosystem will likely keep evolving within these existing constraints, rather than breaking free of them entirely.
AI malware is already here
With the advent of AI chatbots with very advanced code generation capabilities, many programmers have started to rely on them to speed up development. Malware writers have started to do the same. The next level of sophistication is, of course, malware that automatically asks a chatbot for code implementing whatever feature is needed at the moment.
- The first of its kind was MalTerminal, discovered by SentinelLabs in September 2025. It uses an AI endpoint that was deprecated in November 2023, so it can be assumed to predate that date. MalTerminal calls on OpenAI’s ChatGPT to generate either ransomware code or a reverse shell, and it spreads as a compiled Python executable.
- In July 2025, the Computer Emergency Response Team of Ukraine (CERT-UA) reported a new attack against Ukrainian state agencies that originated from Russian sources. It involved a compiled Python executable that calls an AI model hosted on Hugging Face. The malware carries hard-coded static prompts that ask the model to generate code that searches for information on the infected computer and exfiltrates it from the network. CERT-UA named the malware “LameHug” and attributed it with “moderate” confidence to APT28, an APT group that possibly works closely with the Russian army. Google calls the malware “PROMPTSTEAL”.
- In August 2025, researchers at ESET uncovered a ransomware sample, which they dubbed PROMPTLOCK, developed in Go that builds its payload by prompting an AI model. In this case, the first dropper downloads the whole model from Hugging Face, prompts it for Lua source code with ransomware functionality, and then compiles it. The result is that the ransomware generated is different with every new infection, at the cost of having to download an 11-GB AI model for every single infection. Two days after the ESET announcement, researchers at the New York University Tandon School of Engineering published an academic paper describing a proof-of-concept (PoC) ransomware they had developed and named “Ransomware 3.0”. This academic PoC turned out to be the very same thing that ESET dubbed PROMPTLOCK, though it is unclear whether the PoC was used by threat actors as a reference to develop the ransomware that ESET obtained a sample of.
- In November 2025, Google released a report on two more pieces of malware it uncovered that have AI prompting capabilities to generate malicious code. The first of the two is PROMPTFLUX, whose Gemini-directed prompts Google first detected in June 2025. This malware is written in VBScript and, upon execution, asks Gemini for specific VBScript obfuscation and evasion techniques to facilitate “just-in-time” self-modification. According to Google, this malware family is still in its early stages: The code looks immature and is probably in a development or testing phase; some features, for example, are still commented out.
- The second piece of malware from Google’s report is QUIETVAULT, which, according to Google, queries the Google Gemini or Claude API in a similar way. This malware appears to be JavaScript code with infostealing capabilities. To support these features, it uses static prompts asking Gemini or Claude to generate code that searches for other potentially interesting data on the infected system and exfiltrates any found files to GitHub.
| Malware name | Language | Capabilities | AI provider |
|---|---|---|---|
| MalTerminal | Compiled Python | Reverse shell or ransomware | OpenAI |
| LameHug/PROMPTSTEAL | Compiled Python | Infostealer | Hugging Face-hosted model |
| Ransomware 3.0/PROMPTLOCK | Go | Ransomware | OpenAI model (run locally) |
| PROMPTFLUX | VBScript | Malware dropper | Google |
| QUIETVAULT | JavaScript | Infostealer | Google or Anthropic |
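For defenders, a practical upshot of the table above is that malware embedding an LLM client tends to ship with telltale static artifacts: hard-coded API endpoints, key material, and prompt text. The following is a minimal hunting sketch in Python; the patterns and the two-indicator threshold are purely illustrative assumptions, not verified indicators from the samples above. It simply flags files that combine more than one LLM-related string, such as an endpoint plus prompt-like text.

```python
import re
import sys
from pathlib import Path

# Illustrative patterns only (assumptions, not verified indicators): hard-coded
# LLM endpoints, API key prefixes, and prompt-like strings of the kind the
# samples above are reported to embed.
PATTERNS = {
    "openai_endpoint": re.compile(rb"api\.openai\.com"),
    "gemini_endpoint": re.compile(rb"generativelanguage\.googleapis\.com"),
    "anthropic_endpoint": re.compile(rb"api\.anthropic\.com"),
    "huggingface_host": re.compile(rb"huggingface\.co"),
    "openai_key_prefix": re.compile(rb"sk-[A-Za-z0-9_-]{20,}"),
    "prompt_marker": re.compile(rb"(?i)you are an? "),
}

def scan_file(path: Path) -> list[str]:
    """Return the names of all indicator patterns found in the file's raw bytes."""
    try:
        data = path.read_bytes()
    except OSError:
        return []
    return [name for name, rx in PATTERNS.items() if rx.search(data)]

if __name__ == "__main__":
    root = Path(sys.argv[1] if len(sys.argv) > 1 else ".")
    for candidate in (p for p in root.rglob("*") if p.is_file()):
        hits = scan_file(candidate)
        # Require at least two different indicators to reduce noise.
        if len(hits) >= 2:
            print(f"[suspect] {candidate}: {', '.join(hits)}")
```

In practice, a static sweep like this is only one noisy signal and would need to be correlated with behavioral and network telemetry before any verdict is reached.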
Takeaway: Will AI malware take off anytime soon?
This strategy of querying AI models to generate code on the fly is interesting and innovative, but it has one huge disadvantage: When the AI provider discovers the abuse, it can immediately revoke the API key that the malware is using. This is effectively a kill switch, so malware relying on this technique is unlikely to live very long.
The solution to this problem could be what the researchers from New York University did in their “Ransomware 3.0” PoC malware: to download the whole model locally. This, of course, has its own drawbacks. A 10-plus GB download upon infection is unlikely to stay undetected for long, and this will surely limit the success of such malware.
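That detection opportunity can be made concrete at the network layer. Below is a minimal sketch, assuming a generic CSV export of proxy or connection logs with hypothetical host, dest_domain, and bytes_in columns; the column names, domain list, and 5 GB threshold are all assumptions for illustration. It totals inbound bytes per internal host from model-hosting domains and flags anomalously large pulls like the 10-plus GB download described above.

```python
import csv
import sys
from collections import defaultdict

# Illustrative list of domains that serve large model files (an assumption, not
# an exhaustive or verified inventory).
MODEL_HOSTS = {"huggingface.co", "cdn-lfs.huggingface.co"}

# Flag hosts that pull more than ~5 GB from model-hosting domains in one log window.
THRESHOLD_BYTES = 5 * 1024**3

def flag_large_model_pulls(log_path: str) -> dict[str, int]:
    """Sum inbound bytes per internal host for connections to model-hosting domains."""
    totals: dict[str, int] = defaultdict(int)
    with open(log_path, newline="") as fh:
        for row in csv.DictReader(fh):
            if row["dest_domain"] in MODEL_HOSTS:
                totals[row["host"]] += int(row["bytes_in"])
    return {host: count for host, count in totals.items() if count > THRESHOLD_BYTES}

if __name__ == "__main__":
    for host, transferred in flag_large_model_pulls(sys.argv[1]).items():
        print(f"[review] {host} pulled {transferred / 1024**3:.1f} GB from model-hosting domains")
```

Any host flagged by a check like this that is not a known developer or data-science workstation would warrant a closer look.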
The term “AI malware” is a catchy one for the media, and it sounds disruptive. However, as cybercrime experts, we ultimately doubt that many malware groups are going to adopt this “AI generation on the fly” technique. It could be feasible in small, very targeted attacks and perhaps for very narrow uses, but in the broader context, AI-based code generation is just another malware feature that competes with many better-tested alternatives that have fewer shortcomings.
For example, if a malware author wanted to make this technique a lot more resilient, the API keys used by the malware could be updated dynamically. This, however, would only force the AI provider to look a bit harder at the kinds of queries hitting its models and, ultimately, blacklist them. As interesting as it is, we do not expect dynamic AI-based code generation to become mainstream in the malware toolkit until malware writers overcome this hurdle. The most likely real-world implementation of this feature would be a flexible wild-card option in a regular, non-AI malware. Such an “old school” botnet or infostealer, on top of its regular functionality, could accept a special AI command sent to each bot in the network, such as “API key:command”, where the botmaster uses one-time API keys to query a commercial chatbot.
The main problem with this kind of strategy is that the generated code will not be the same for every bot, and this unpredictability can be a real limiting factor. Say the botmaster wants to implement a new denial-of-service feature with a new method. They could issue the command “AI chatbot API key: give me source code for a denial-of-service attack that does blah”. Each bot would compile the code returned by the chatbot and run it.
However, of every 100 bots doing this, not all would do it right or even in the same way, and some would hit compilation errors. The attack might then fail or only partially work, adding an unpredictability factor that the attacker may find undesirable.
Deepfake update
The deepfake landscape has undergone a dramatic transformation, with barrier-to-entry costs plummeting to unprecedented lows — some services now even operate entirely free of charge. This democratization of synthetic media technology has effectively placed powerful manipulation tools in the hands of virtually anyone with an internet connection. Our comprehensive analysis reveals the full scope of this evolving threat landscape, examining how criminals are weaponizing AI across multiple attack vectors.
Our report details the proliferation of accessible deepfake creation platforms across the dark web and surface internet, the emergence of specialized criminal services offering “deepfake-as-a-service” business models, and the troubling trend of AI-generated content being used to bypass traditional security measures.
The report also highlights how threat actors are combining multiple AI technologies like voice cloning, face swapping, and text generation to create increasingly sophisticated attack chains.
The rise of nudification technology
Perhaps one of the most disturbing trends we’ve documented is the explosive growth of so-called “nudifying” applications, malicious tools that exploit deepfake technology to digitally remove clothing from photographs of unsuspecting subjects. These applications represent a particularly insidious evolution in image-based abuse, requiring nothing more than a standard social media photo to generate nonconsensual intimate imagery.
The accessibility of these tools has reached alarming levels, with platforms like DeepNude, the “Undress AI” app, and various Telegram bots offering their services either for free or at a minimal cost. This technological capability has fundamentally altered the threat landscape of image-based sexual abuse: We’re witnessing the emergence of novel sextortion schemes that traffic in this technology, where criminals can now threaten victims with fabricated intimate images rather than needing to obtain genuine compromising material.
This shift creates unprecedented vulnerabilities for demographics previously less targeted by such attacks, particularly teenagers active on social media platforms and elderly individuals who might be less aware of these technological capabilities. The psychological impact on victims remains devastating regardless of whether the images are authentic, while the legal and social frameworks for addressing this abuse continue to lag behind the technology’s rapid advancement.
Furthermore, the rise of AI-powered nudifying apps and deepfake technology has led to a significant increase in computer-generated child sexual abuse material (CSAM). This creates three major problems for protecting children:
- These tools have made it much easier for people to access this type of content. What once required technical skills can now be done with simple apps available online.
- The large volume of AI-generated material is pushing people toward more extreme content, raising the baseline of what’s considered “normal” on scales like COPINE that measure severity.
- Most importantly, the flood of fake images makes it much harder for law enforcement to identify real victims who need to be rescued. When investigators sift through evidence, they now face the challenge of separating AI-generated content from images of actual children being abused. As the International Criminal Police Organization (Interpol) and the United Nations Interregional Crime and Justice Research Institute (UNICRI) have noted in their research on AI for law enforcement, while automated systems can help sift through harmful materials, deepfake technology represents a new obstacle that makes it harder to find and save real victims who are currently suffering abuse.
Corporate infiltration: The new frontier for deepfake-enabled crimes
On the enterprise front, we’re observing a flourishing ecosystem of deepfake-enabled attacks that build upon tactics that have proven successful over the past few years. CEO fraud schemes, where attackers impersonate C-suite executives to authorize fraudulent wire transfers, have become increasingly sophisticated with the integration of voice and video deepfakes.
The latest development involves employment scams, where threat actors successfully pose as legitimate job candidates, pass through hiring processes at major technology companies, and gain insider access to corporate infrastructure.
A particularly illuminating case study involves North Korean state-sponsored actors who have refined this approach into a systematic operation. These operatives create elaborate false identities complete with fabricated work histories, stolen credentials, and AI-enhanced profile photos; they leverage remote work arrangements to avoid in-person verification while using virtual private networks (VPNs) and proxy services to mask their true locations; and once embedded within organizations, they exfiltrate sensitive data, steal intellectual property, and generate revenue that flows back to the North Korean regime. The scheme demonstrates remarkable sophistication, with some operatives maintaining employment for months or even years before detection, all while their salaries directly fund state programs.
Banking under siege: The KYC challenge
The financial services sector faces a new threat as deepfake technology increasingly targets Know Your Customer (KYC) protocols, the digital identity verification systems that banks rely upon when customers open new accounts remotely. The attacks exploit the fundamental tension between user convenience and security verification, using sophisticated face-swapping and liveness detection bypass techniques to circumvent safeguards designed to prevent identity fraud.
Our analysis shows the following:
- Criminals are successfully using deepfake technology to impersonate legitimate account holders during video verification calls, often combining stolen identity documents with real-time face manipulation.
- Specialized services offering “KYC bypass” capabilities have emerged on underground forums, complete with tutorials and customer support.
- Adversarial techniques are being developed specifically to defeat the AI-based liveness detection systems that banks deploy to distinguish real humans from synthetic media.
The arms race between defensive AI systems and offensive deepfake capabilities continues to escalate, with financial institutions investing heavily in next-generation biometric verification while attackers rapidly adapt their techniques. This cat-and-mouse dynamic threatens to undermine the entire foundation of digital banking identity verification.
Consumer-facing threats: Romance scams and virtual kidnapping
Individual consumers face their own unique set of deepfake-enabled threats, with romance scams and cryptocurrency fraud representing the most prevalent attack vectors. However, the emergence of “virtual kidnapping” schemes marks a particularly disturbing evolution in social engineering tactics, as we’ve detailed in a report.
These attacks leverage audio deepfake technology to clone the voices of the victims’ loved ones with startling accuracy, creating panic-inducing scenarios where criminals impersonate family members claiming to have been kidnapped or in urgent distress.
The mechanics of these scams have become frighteningly efficient: Attackers can generate convincing voice clones from as little as a few seconds of audio, easily obtained from social media videos, voicemail messages, or public recordings. During attacks, victims receive calls featuring what sounds unmistakably like their child, spouse, or parent in distress, pleading for immediate ransom payment to resolve a fabricated emergency. The emotional manipulation is crushing, with the synthetic voice creating a sense of urgency that bypasses rational decision-making.
This represents a technologically enhanced version of the classic “stranded traveler” scam, but with far greater psychological impact and the ability to target demographics well beyond the elderly population traditionally vulnerable to such schemes. The attacks exploit people’s deepest emotional bonds, making them particularly effective even against security-conscious individuals.
The Sora question: What’s next for video deepfakes?
As advanced video generation engines like OpenAI’s Sora enter the market, a critical question emerges: Are we approaching a new wave of video-based deepfake attacks?
The answer remains uncertain. While text-to-video generation tools represent a significant technological leap, their immediate weaponization faces practical barriers: computational requirements, usage restrictions, and watermarking systems implemented by major platforms. However, history suggests these barriers are temporary.
What makes tools like Sora particularly concerning is their ability to generate entirely synthetic scenarios from text prompts alone, eliminating the need for source footage. This capability could enable attack categories we haven’t yet conceptualized: fabricated evidence in legal disputes, synthetic “proof” of events that never occurred, or highly convincing fraud scenarios created with minimal technical expertise.
As these technologies inevitably become more accessible through leaked models and open-source alternatives, organizations and individuals must prepare for a future where video evidence requires the same skeptical scrutiny we now apply to suspicious emails and phone calls. The question isn’t whether video deepfakes will become a major threat vector; it’s simply a matter of when.
Takeaway: Revisiting trust and identity in the era of deepfakes
The deepfake threat landscape of 2025 represents a fundamental shift in how we must approach digital trust and verification. From nudifying apps targeting vulnerable individuals to sophisticated corporate infiltration schemes and banking fraud, synthetic media has become a versatile weapon in the cybercriminal arsenal.
As technology continues to advance, particularly with the emergence of powerful video generation tools, the gap between defensive capabilities and offensive innovations shows no signs of closing. The challenge ahead lies not just in developing better detection technologies, but also in fostering a culture of digital skepticism and implementing verification processes that can withstand the onslaught of increasingly convincing synthetic content.
Conclusion
Three years after the first “uncensored GPT” advertisements appeared on the BreachForums criminal discussion site, the criminal AI landscape has consolidated the way mature illegal markets always do. Imitation has given way to specialization. Fly-by-night scams have been crowded out by reliable service providers. The most successful actors no longer build the gun; they simply sell the bullets.
WormGPT clones, on-the-fly code-generating malware, and US$5 nudifying bots are not separate phenomena; they are the same economic logic applied at different points of the attack chain. The result is an underground AI stack that is cheaper, more resilient, and more accessible than most defenders expected. The criminal ecosystem has learned to extract maximum value from commercial AI platforms without bearing the cost of development, training, or infrastructure. They’ve turned the AI industry’s own R&D investments against it.
The 2026 outlook is not apocalyptic; it’s incremental. We will not see a sudden explosion of AI-driven chaos. Instead, we will witness the steady, professional refinement of the toolkit we already see today. Jailbreak-as-a-service providers will become more reliable. Deepfake quality will continue to improve while costs continue to fall. Malware authors will experiment with LLM integration, although widespread adoption remains constrained by practical limitations. The underground will continue doing what it does best: optimizing for profit, resilience, and scale.
For defenders, the implications are clear. If you work at a major AI provider and still believe “a few abused API keys” or “some nudifying apps on Telegram” are edge cases rather than core product risk, 2025 should have cured you of that illusion. Every dollar of safety R&D you proudly announce is being amortized across criminal campaigns within weeks. The underground does not respect your terms of service, your moderated endpoints, or your responsible-disclosure timelines.
The only remaining question is whether the legitimate industry will continue reacting quarter by quarter to the latest embarrassment or finally accept that offensive use is now a primary, not incidental, use case and engineer (and regulate) accordingly.
About the authors
TrendAI™’s Forward-Looking Threat Research Team specializes in scouting technology one to three years into the future, with a focus on three distinct aspects: technology evolution, its social impact, and criminal applications. As such, it has been keeping a close eye on AI and its potential misuses since 2020, when it authored a research paper on this very topic in collaboration with Europol and UNICRI.