By Josiah Hagen, Vladimir Kropotov, Robert McArdle, and Fyodor Yarochkin
AI systems, including large language models (LLMs), are taking on a larger role in business processes. They are used from content generation to customer-facing interactions. However, even though AI responses can sound objective and authoritative, our research shows they are not inherently reliable and need adequate validation.
AI is not neutral or deterministic. LLMs reflect the data they are trained on, including their gaps, biases, and outdated information. As a result, AI systems can:
- Reflect cultural, societal, or political bias
- Produce inconsistent or contradictory outputs
- Make confident mistakes without any hint of uncertainty
When organizations take AI outputs as reliable by default, technical limitations and biases can turn into enterprise risks. In this research, we test how AI bias and failures manifest in real-world use and examine how they can have a detrimental impact on enterprises.
From AI limitations to business risks
We ran thousands of repeated experiments across nearly 100 AI models, using a dataset of more than 800 deliberately provocative questions. In total, we analyzed over 60 million input tokens and more than 500 million output tokens.
Our tests highlight AI limitations that could translate into potential operational, reputational, and financial enterprise risks.
1. Failure to separate related and unrelated information
AI models often struggle to distinguish relevant from irrelevant details. Unrelated information included in a prompt led to skewed or incorrect outputs for most of the models we tested. Only 43% of the models gave the correct answer.
Business risk
This limitation can be exploited to manipulate outcomes, leading to incorrect financial calculations, misclassification of data, or flawed automated decisions.
2. Limited cultural, societal, and religious awareness
AI models trained in one region may generate outputs that conflict with cultural or religious norms elsewhere. This is especially risky for global organizations deploying AI at scale.
Business risk
Misaligned responses can trigger public backlash, alienate customer segments, violate local regulations, or cause lasting reputational damage.
3. Limited political context awareness
AI models often lack awareness of political timelines, legitimacy, or authority, particularly when time-sensitive or region-specific context is required.
Business risk
Incorrect or misleading political outputs can result in legal exposure, compliance failures, or reputational harm, especially when AI-generated content is published under an organization’s name.
4. Overfriendly model behavior
When users repeat or reframe questions, AI models tend to gradually adjust responses to appear more helpful even at the expense of accuracy.
Business risk
This behavior can be exploited in financial, legal, or government contexts, where repeated prompting may coax models into producing increasingly favorable but incorrect answers with real consequences.
5. Limited awareness of what is “current”
Many AI models operate with outdated or inconsistent assumptions about present-day facts, even when real-time data tools are available.
Business risk
Organizations relying on AI for pricing, currency conversion, market analysis, or decision support risk operational errors and loss of credibility if outdated information is presented as current.
6. Mistaken perception of geographic location
Some models attempt to infer user or system location despite lacking reliable or relevant data, producing convincing but entirely fabricated details.
Business risk
Using AI outputs for geolocation, compliance, or personalization without verified inputs can introduce errors that undermine trust and violate regulatory expectations.
Effects across sectors
Unchecked AI adoption does not affect all stakeholders equally, but there are significant consequences across sectors.
Enterprises
For organizations, AI-generated outputs can communicate positions the company does not endorse. Global corporations especially must ensure that AI outputs align with diverse cultures, languages, and religions.
Governments
AI outputs used by government entities can influence public messaging and policies. Any message published by a government body is often regarded as official, so unvetted AI integration can result in significant societal and political repercussions if outputs are biased or misaligned with the current policies, local culture, and traditions.
Individuals
As AI systems become increasingly part of daily life, users may place undue trust in AI responses or share personal information without fully understanding the underlying policies of these systems. Overreliance on AI can lead users to accept responses uncritically, share sensitive information, or receive inappropriate responses, exposing users to privacy, cognitive, and societal risks.
Responsible AI deployment
Our analysis revealed examples of AI bias in the context of regional, geofencing, data sovereignty, and censorship dynamics, all of which influence AI model behavior and outputs. This research challenges common assumptions about LLM capabilities and highlights the risks of relying on these models unilaterally.
Ensuring transparency and accountability in AI technologies is essential. AI is undoubtedly a major enabler of business innovation, but to reap its full potential it must be deployed alongside thorough validation and preemptive risk assessments.
The full report provides detailed examples of our findings, analysis of real responses from different models, as well as further recommendations for mitigating AI bias risks.
Like it? Add this infographic to your site:
1. Click on the box below. 2. Press Ctrl+A to select all. 3. Press Ctrl+C to copy. 4. Paste the code into your page (Ctrl+V).
Image will appear the same size as you see above.
Recent Posts
- How Unmanaged AI Adoption Puts Your Enterprise at Risk
- Estimating Future Risk Outbreaks at Scale in Real-World Deployments
- The Next Phase of Cybercrime: Agentic AI and the Shift to Autonomous Criminal Operations
- Reimagining Fraud Operations: The Rise of AI-Powered Scam Assembly Lines
- The Devil Reviews Xanthorox: A Criminal-Focused Analysis of the Latest Malicious LLM Offering
Complexity and Visibility Gaps in Power Automate
AI Security Starts Here: The Essentials for Every Organization
The AI-fication of Cyberthreats: Trend Micro Security Predictions for 2026
Stay Ahead of AI Threats: Secure LLM Applications With Trend Vision One