By Salvatore Gariuolo and Vincenzo Ciancaglini
Not every AI system claiming to be agentic is. As technology evolves at an unprecedented pace, distinguishing genuine agentic AI systems from those that merely capitalize on industry buzzwords has become increasingly challenging. Defining what makes AI truly agentic is not just an exercise in classification — it is a critical step toward understanding the broader security implications of these systems.
Unlike traditional AI, which typically follows predefined rules and static workflows, agentic systems are designed to make autonomous decisions, operate without supervision, and adapt dynamically over time. While this flexibility unlocks powerful new capabilities, it also introduces unique attack surfaces and cybersecurity challenges that demand urgent attention.
This article introduces a structured framework for understanding agentic AI systems, outlining their core characteristics, and examining their security implications. Specifically, we will provide:
- A definition of an agentic AI system that captures its essential characteristics;
- A detailed breakdown of the key features that set agentic systems apart from traditional AI;
- An overview of the security challenges these features introduce.
What is an agentic AI system?
Agentic AI is a software architecture that aims to solve complex tasks through autonomous agents. Each agent is typically designed to perform specific functions within a particular domain and can leverage tools (such as a web client) to interact with the outside world. These tools enable agents to gather information, act upon their environment, and communicate with other systems. While these agents are not necessarily AI-driven, they normally leverage AI, in which case they are referred to as AI agents.
Agents are managed by an orchestrator — the reasoning engine responsible for identifying goals, formulating a plan, and coordinating the agents' workflow to achieve such goals. Agents, in turn, serve as the fundamental units that perform actions within the agentic system.
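To make this architecture concrete, the sketch below shows a minimal orchestrator-and-agents skeleton in Python. All names are illustrative rather than drawn from any particular framework, and the hard-coded plan stands in for what would normally be a reasoning engine:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Agent:
    """A unit of execution: wraps a tool and performs one kind of task."""
    name: str
    tool: Callable[[str], str]   # e.g., a web client or an API wrapper

    def run(self, task: str) -> str:
        return self.tool(task)

@dataclass
class Orchestrator:
    """The reasoning engine: identifies steps and coordinates the agents."""
    agents: dict[str, Agent] = field(default_factory=dict)

    def plan(self, goal: str) -> list[tuple[str, str]]:
        # A real orchestrator would derive this plan dynamically (e.g., via
        # an LLM); it is hard-coded here purely for illustration.
        return [("research", f"gather data on: {goal}"),
                ("report", f"summarize findings on: {goal}")]

    def achieve(self, goal: str) -> list[str]:
        return [self.agents[name].run(task) for name, task in self.plan(goal)]

# Usage: two toy agents whose "tools" are plain functions.
orchestrator = Orchestrator(agents={
    "research": Agent("research", lambda t: f"[web results for '{t}']"),
    "report":   Agent("report",   lambda t: f"[report on '{t}']"),
})
print(orchestrator.achieve("supply-chain risk"))
```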
While agentic has become a popular term in AI discussions, not all systems labelled as such are truly agentic. In fact, many so-called agentic systems are simple agents that automate predefined tasks but lack several of the capabilities specific to agentic systems. To cut through this confusion, we introduce a reliable definition: one that is rigorous enough to separate genuine advancements from market hype, yet flexible enough to accommodate future innovations.
Today, for example, much of the discussion around agentic AI centres on systems powered by large language models (LLMs) to automate tasks such as workflow optimization and process automation. Looking ahead, however, we expect agentic systems to expand into cyber-physical domains such as autonomous vehicles, robotics, and industrial automation. In these applications, safety is a primary concern, and other forms of orchestrators will be needed; our definition of agentic AI covers these forms as well.
To formulate a comprehensive definition, we analysed how leading experts and organizations framed the concept of agentic AI. We examined definitions from academic researchers such as Andrew Ng, technology companies like Google, IBM, and NVIDIA, and respected media outlets covering AI advancements. Our goal was to identify the most frequently mentioned features across these sources, uncovering common trends in how these players conceptualized agentic AI. These features, along with the articles referencing them, are summarized in Table 1.
Feature | Definition |
---|---|
Goal-Oriented 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13 | Capable of autonomously pursuing specific objectives (high agency) while acting with minimal user input (high autonomy) |
Context-Aware 1, 2, 3, 4, 5, 6, 7, 10, 11, 12, 13 | Capable of perceiving and interpreting its environment to make informed decisions |
Multi-Step Reasoning 3, 6, 7, 8, 9, 11, 12, 13 | Capable of breaking down complex problems into smaller, more manageable steps, and coordinating specialized agents to execute those steps |
Action-Driven 1, 2, 3, 4, 5, 6, 7, 10, 11, 12 | Capable of executing actions such as calling software APIs, interacting with external systems, or controlling actuators in a cyber-physical environment |
Self-Improving 3, 4, 6, 10, 11, 12, 13 | Capable of continuously learning from past outcomes to improve its performance over time |
Table 1. The defining features of agentic AI
Accordingly, an AI system should meet the five features outlined above to be considered agentic. The absence of more than two of these features should raise red flags and put the use of the term “agentic” under scrutiny.
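As a rough illustration of how this rule of thumb could be applied, the toy function below scores a system against the five features; assessing whether a feature is genuinely present, of course, still requires human judgement:

```python
AGENTIC_FEATURES = ["goal-oriented", "context-aware", "multi-step reasoning",
                    "action-driven", "self-improving"]

def scrutinize(system_name: str, features_present: set[str]) -> str:
    """Apply the rule of thumb: missing more than two features is a red flag."""
    missing = [f for f in AGENTIC_FEATURES if f not in features_present]
    if not missing:
        return f"{system_name}: meets all five features of agentic AI."
    if len(missing) > 2:
        return f"{system_name}: missing {missing}; the 'agentic' label is suspect."
    return f"{system_name}: missing {missing}; use the term with caution."

print(scrutinize("ChatBot-X", {"goal-oriented", "context-aware"}))
```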
These features are essential not only for distinguishing agentic AI from traditional AI, but also for understanding the unique vulnerabilities of agentic systems. In the next section, we will explore these five features in more detail and examine how they give rise to new cybersecurity risks.
The Cybersecurity Risks of the Agentic Process
Agentic AI is not just about performing predefined tasks: it entails goal-oriented behaviour, adaptability to dynamic environments, multi-step reasoning, action-driven behaviour, and continuous self-improvement. These five features form what is often referred to as the agentic process, enabling agentic AI systems to function in complex settings.
The agentic process is well-described by NVIDIA's four-stage model: Perceive, Reason, Act, and Learn. In the next subsections, we delve into each of these stages and examine how they map to our defining features of agentic AI. We will then explore the unique security challenges these features introduce, offering a comprehensive understanding of how agentic AI reshapes the cybersecurity landscape.
Stage 0: Setting Up a Goal
Before an agentic AI system can begin its journey through the agentic process, a goal must first be established. This goal becomes the foundation that drives the system’s decisions, actions, and overall functionality. Without a well-defined objective, the system lacks direction and cannot operate effectively in its environment.
However, simply setting a goal does not make a system truly agentic. As we discussed earlier, for a system to be considered agentic, it must be goal-oriented — meaning it doesn’t just work toward a goal but aims to achieve that goal with minimal human intervention. Achieving this feature requires two key elements: agency and autonomy.
- Agency refers to the ability to make independent decisions in pursuit of a goal. In an agentic system, agency is enabled by the orchestrator, which formulates a plan and directs the steps necessary to achieve that goal.
- Autonomy is the ability to execute those decisions without human input. This is where the agents come in; agents are responsible for carrying out the orchestrator's plan and performing the tasks required to achieve the goal without requiring user instructions or external direction.
As an example, AI digital assistants like GPT-4 or Gemini, which we have described extensively elsewhere, are not agentic systems. These systems are reactive: they exhibit agency when performing tasks, such as deciding on the best way to draft an email, but they lack autonomy, as they cannot act without explicit user requests. An agentic system, by contrast, is proactive, as it autonomously executes actions as the situation demands.
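The contrast can be captured in a deliberately simplified sketch: a reactive assistant acts only when prompted, while an agentic system runs its own sense-decide-act loop. The callbacks here (ask_user, sense, decide, act) are hypothetical placeholders:

```python
import time

def reactive_assistant(ask_user, respond):
    """Agency without autonomy: waits for an explicit request, then acts."""
    while True:
        request = ask_user()      # blocks until the user asks for something
        respond(request)          # decides *how* to act, but only when asked

def agentic_system(sense, decide, act, interval_s=60):
    """Agency plus autonomy: observes, decides, and acts on its own."""
    while True:
        situation = sense()       # perceive the environment
        for action in decide(situation):
            act(action)           # executes without waiting for a user
        time.sleep(interval_s)
```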
As we have seen, goal-oriented behavior is not just an isolated characteristic of agentic systems; it serves as a guiding principle throughout every stage of the agentic process, from planning to execution. However, the autonomy that comes with it also introduces significant cybersecurity risks.
A major concern is goal manipulation, where attackers alter the system's goals and steer it toward unintended or malicious outcomes, for instance by exploiting weaknesses in the orchestrator.
Another important risk stems from the lack of user oversight: without supervision, the system might exceed its intended operational boundaries, performing unauthorized actions that violate ethical, legal, or business constraints. Simply restricting the agentic system from performing certain actions, like executing financial transactions or accessing sensitive data, is not enough; these boundaries, which act as guardrails for the system's decisions, can themselves be compromised, leading to unintended or even harmful behavior.
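One common mitigation is to wrap every action in an explicit policy check that fails closed, so boundary enforcement does not depend on the orchestrator policing itself. Below is a minimal sketch; the forbidden actions and spend limit are invented for illustration:

```python
class PolicyViolation(Exception):
    pass

# Hypothetical guardrails: actions the system may never take autonomously.
FORBIDDEN_ACTIONS = {"execute_payment", "read_customer_pii", "delete_records"}
SPEND_LIMIT_USD = 100

def guarded_execute(action: str, params: dict, execute):
    """Refuse out-of-bounds actions before they ever reach a tool."""
    if action in FORBIDDEN_ACTIONS:
        raise PolicyViolation(f"'{action}' requires human approval")
    if params.get("amount_usd", 0) > SPEND_LIMIT_USD:
        raise PolicyViolation(f"'{action}' exceeds the autonomous spend limit")
    return execute(action, params)   # only reached if every check passes

# Usage: the wrapped executor never sees the forbidden request.
try:
    guarded_execute("execute_payment", {"amount_usd": 9000}, print)
except PolicyViolation as e:
    print(e)
```

Crucially, such guardrails should be enforced outside the orchestrator's control, so that a compromised planner cannot simply rewrite its own limits.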
Stage 1: Perceiving the Environment
Stage 1 of the agentic process is Perceive, where agents gather and analyze data to guide the system’s decision-making. Context-awareness is central to this stage, as it enables the system to interpret its physical environment and access external data sources, ensuring that its decisions are driven by real-time information rather than static inputs. In other words, instead of executing predefined tasks, an agentic system monitors external factors, adapting dynamically as those factors change.
For instance, Walmart has leveraged agentic AI to transform its inventory management system. The system's primary goal is to restock shelves based on sales patterns. However, if the system detects a sudden change in weather or identifies a regional event, it can dynamically shift its priorities toward products that are likely to see an increase in demand due to these external factors. Unlike traditional inventory management systems, which rely on rigid pre-set rules, the agentic approach allows the system to adapt in real time, responding to shifting conditions with greater agility.
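A toy sketch of this kind of context-aware reprioritization is shown below. The signals and scoring are invented for illustration and do not describe Walmart's actual system:

```python
def restock_priority(product: dict, weather: str, regional_events: list[str]) -> float:
    """Score a product for restocking based on sales history plus live context."""
    score = product["weekly_sales"]
    if weather == "heatwave" and product["category"] == "beverages":
        score *= 2.0      # demand spike expected from the weather signal
    if "music_festival" in regional_events and product["category"] == "snacks":
        score *= 1.5      # regional event shifts demand
    return score

inventory = [
    {"name": "bottled water", "category": "beverages", "weekly_sales": 300},
    {"name": "umbrellas", "category": "outdoor", "weekly_sales": 120},
]
ranked = sorted(inventory,
                key=lambda p: restock_priority(p, "heatwave", []),
                reverse=True)
print([p["name"] for p in ranked])
```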
However, the ability to dynamically adapt to the environment introduces cybersecurity risks.
A significant concern is data manipulation. Since agentic systems rely on external data to make decisions, attackers could inject false data to mislead the system and cause it to make incorrect assessments. For example, by manipulating weather data or event information, malicious actors could cause Walmart’s system to prioritize incorrect products, leading to inefficiencies or operational failures.
Another risk is denial-of-service. If an attacker prevents agents from accessing the data they need to interpret their environment (for example, by compromising their tools), the orchestrator will be starved of the information it needs for decision-making. As a result, the agentic system may fail to respond to shifting conditions or even suffer a complete operational breakdown.
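Defences at this stage typically combine cross-source validation (against data manipulation) with failure tolerance and safe fallbacks (against denial-of-service). The sketch below illustrates both ideas under those assumptions:

```python
import statistics

def fetch_with_consensus(sources, fallback, max_spread=5.0):
    """Query several independent feeds; distrust them if they disagree."""
    readings = []
    for source in sources:
        try:
            readings.append(source())    # each feed may time out or fail
        except Exception:
            continue                     # a dead feed must not stall the system
    if len(readings) < 2:
        return fallback                  # degraded mode, not a breakdown
    if max(readings) - min(readings) > max_spread:
        return fallback                  # feeds disagree: possible manipulation
    return statistics.median(readings)   # robust to a single skewed feed

# Usage: three hypothetical temperature feeds.
feeds = [lambda: 21.0, lambda: 21.4, lambda: 21.9]
print(fetch_with_consensus(feeds, fallback=20.0))   # -> 21.4
# A single poisoned feed (e.g., lambda: 45.0) trips the spread check
# and drops the system into its safe fallback instead.
```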
Stage 2: Reasoning and Planning the Next Steps
In stage 2, or Reason, the orchestrator analyses the information gathered in the previous stage and formulates a strategy to achieve the intended goal. Multi-step reasoning is essential here, as it enables the orchestrator to break down complex tasks into smaller, more manageable steps. Rather than addressing the problem in a single step, the orchestrator tackles various aspects of the problem and adjusts its decisions as the situation evolves.
For example, a recommendation system based on agentic AI, such as that used by Netflix, does more than simply suggest content based on viewing history. Through multi-step reasoning, the system first analyses the user’s past interactions with the streaming service to identify individual preferences. Next, it incorporates broader trends, such as what is currently popular in the user’s region. Finally, it refines its recommendations based on recent engagement patterns and visual content features, ensuring that suggestions remain relevant over time.
Agentic AI could also be applied in the context of cybersecurity, where systems can use multi-step reasoning to autonomously investigate incidents and take action to mitigate their impact. For example, when the agentic system detects unusual patterns of traffic, it first correlates the originating IP with open-source intelligence to identify known vulnerabilities. It then assesses which data within the company’s infrastructure is being targeted. After that, it implements defensive measures, such as blocking the IP and scanning for signs of lateral movement. A similar example is Trend Cybertron, a system that ingests posture and threat data from the monitored environment, automatically infers threat scenarios, and acts accordingly to anticipate future security issues.
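The incident-response example maps naturally onto an explicit step sequence in which each stage feeds the next. The sketch below is schematic, and its helper functions are placeholder stubs rather than any real product's API:

```python
# Placeholder stubs standing in for real intel feeds and asset inventories.
def lookup_threat_intel(ip: str) -> dict:
    return {"known_malicious": ip.startswith("203.0.113.")}

def map_targeted_assets(hosts: list[str]) -> list[str]:
    return [h for h in hosts if h.endswith(".internal")]

def investigate(alert: dict) -> list[str]:
    """Multi-step reasoning: each step narrows the problem for the next."""
    actions = []
    # Step 1: enrich the source IP with open-source intelligence.
    intel = lookup_threat_intel(alert["source_ip"])
    # Step 2: assess which internal assets are being targeted.
    targets = map_targeted_assets(alert["destination_hosts"])
    # Step 3: plan defensive measures based on what steps 1 and 2 revealed.
    if intel["known_malicious"]:
        actions.append(f"block {alert['source_ip']} at the perimeter")
    actions += [f"scan {h} for lateral movement" for h in targets]
    return actions

print(investigate({"source_ip": "203.0.113.7",
                   "destination_hosts": ["db01.internal", "cdn.example.com"]}))
```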
While multi-step reasoning offers strong problem-solving capabilities, it also introduces cybersecurity risks in the agentic ecosystem. Attackers could attempt to influence or disrupt the orchestrator’s reasoning, causing it to make biased decisions. This is particularly significant, as the orchestrator is responsible for formulating plans and directing agents: any manipulation at this stage could later result in unintended or harmful actions.
A particular concern arises in agentic systems that use LLMs as orchestrators. These models rely on training data and probabilistic reasoning, making them vulnerable to adversarial attacks, where malicious actors craft inputs designed to fool the orchestrator. LLM orchestrators are also susceptible to backdoor attacks, where attackers embed triggers within the model. These triggers activate only under certain conditions, leading the orchestrator to make decisions it would not normally make.
That said, not all agentic systems will rely on LLMs. Autonomous vehicles, for instance, may use deterministic orchestrators capable of making predictable decisions in safety-critical environments. In these cyber-physical systems, the consequences of a compromised orchestrator would be much more severe: if the orchestrator is breached, the unintended actions of the agentic system could directly affect the physical environment, potentially leading to life-threatening consequences.
Stage 3: From Decisions to Actions
Stage 3 - Act - marks the transition from planning to execution. The agents that previously gathered and analysed contextual data now take on an operational role, executing tasks in their environments. This action-driven capability is achieved with tools like API calls, database queries, or actuators, which enable agents to transform the orchestrator’s strategy into tangible results in both digital and physical environments.
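In code, this action-driven capability often takes the form of a tool registry that maps the orchestrator's planned steps onto concrete calls. The sketch below uses invented tools standing in for real API clients and actuator drivers; note that unknown tools are rejected rather than improvised:

```python
from typing import Callable

TOOLS: dict[str, Callable[..., str]] = {}

def tool(name: str):
    """Register a function as a tool that agents are allowed to invoke."""
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register

@tool("query_sales_db")
def query_sales_db(region: str) -> str:
    return f"sales rows for {region}"            # stand-in for a DB query

@tool("set_thermostat")
def set_thermostat(room: str, celsius: float) -> str:
    return f"{room}: setpoint {celsius}C"        # stand-in for an actuator

def act(step: dict) -> str:
    """Execute one planned step; unknown tools are rejected, not guessed."""
    if step["tool"] not in TOOLS:
        raise ValueError(f"no such tool: {step['tool']}")
    return TOOLS[step["tool"]](**step["args"])

print(act({"tool": "set_thermostat",
           "args": {"room": "server-room", "celsius": 19.0}}))
```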
A large-scale example of this is Singapore's smart city infrastructure, where an agentic system coordinates traffic flow, energy distribution, and public safety. As data is collected and analysed in real time (e.g., detecting congestion, fluctuations in electricity demand, or security incidents), the orchestrator determines the optimal response. Agents then take action: adjusting traffic signals to ease bottlenecks, redistributing power to prevent outages, or deploying police vehicles to address potential safety threats. Each agent operates within its domain, using specialized tools to perform its assigned tasks, ensuring that the city remains functional, efficient, and safe.
Since agents are responsible for performing actions, they become valuable targets for attackers seeking to steal sensitive data or manipulate the system’s outcome. One potential risk is agent impersonation, where malicious actors deceive the system by masquerading as legitimate agents; by doing so, they can exfiltrate sensitive data about its decisions, potentially exposing critical information. Attackers could also manipulate agents to misuse their tools, leading to harmful actions. For example, an agent designed to manage traffic flow could be tricked into altering traffic light timings, causing dangerous congestion.
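A standard defence against impersonation is to require every inter-agent message to carry a message authentication code under a key the attacker does not hold. Below is a minimal sketch using Python's standard library; a real deployment would use per-agent keys provisioned out of band:

```python
import hashlib
import hmac
import json

SECRET_KEY = b"provisioned-out-of-band"   # simplified: one shared key

def sign_message(agent_id: str, payload: dict) -> dict:
    """Attach an HMAC tag binding the payload to the claimed agent identity."""
    body = json.dumps({"agent": agent_id, "payload": payload}, sort_keys=True)
    tag = hmac.new(SECRET_KEY, body.encode(), hashlib.sha256).hexdigest()
    return {"body": body, "tag": tag}

def verify_message(message: dict) -> dict:
    """Reject any message whose tag does not verify: likely impersonation."""
    expected = hmac.new(SECRET_KEY, message["body"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, message["tag"]):
        raise PermissionError("message rejected: possible agent impersonation")
    return json.loads(message["body"])

msg = sign_message("traffic-agent-01", {"action": "extend_green", "junction": "A4"})
print(verify_message(msg)["agent"])       # -> traffic-agent-01
```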
These risks apply whether the agentic system relies on a single agent or on multiple agents working independently. However, they are amplified when agents interact with one another through sequential or hierarchical workflows: a goal misalignment in one agent could cascade throughout the system, affecting the outcomes of other agents and triggering a chain reaction of unintended actions.
In the future, entirely new threat scenarios may also emerge. Currently, agentic systems rely on a fixed set of agents with predefined roles. However, we may soon see the emergence of a marketplace for agents, where these components can be dynamically integrated into existing systems as needed. While this introduces a new level of flexibility, it also brings risks. For example, market manipulation could occur — a novel form of supply chain attack where attackers influence ratings, reviews, or recommendations to elevate malicious agents or damage the reputation of legitimate ones.
Stage 4: Learning from Experience
Learn represents the final step of the agentic process, where the system evaluates past experiences to optimize its future actions. Self-improvement plays a key role in this stage, ensuring that the agentic system continuously evolves. By reflecting on previous decisions, successes, and failures, the orchestrator adjusts its strategies over time, leading to more accurate decisions and improved system performance.
Unlike traditional systems, which require explicit reprogramming or human intervention to improve, agentic AI systems adapt autonomously. However, true self-improvement is still emerging. In fact, while it is expected to become a key feature of agentic AI, many systems today either possess rudimentary self-learning capabilities or lack these capabilities altogether. Therefore, although the potential for continuous learning exists, it remains an area of development that will advance as the technology matures.
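At its simplest, such a feedback loop keeps a success statistic per strategy and biases future choices toward what has worked. The bandit-style sketch below illustrates the idea; it is not any specific product's learning mechanism:

```python
import random

class SelfImprovingPlanner:
    """Prefers strategies that have succeeded before (epsilon-greedy)."""
    def __init__(self, strategies, epsilon=0.1):
        self.stats = {s: {"wins": 0, "tries": 0} for s in strategies}
        self.epsilon = epsilon

    def choose(self) -> str:
        if random.random() < self.epsilon:    # keep exploring occasionally
            return random.choice(list(self.stats))
        return max(self.stats, key=lambda s:
                   self.stats[s]["wins"] / (self.stats[s]["tries"] or 1))

    def record(self, strategy: str, success: bool) -> None:
        self.stats[strategy]["tries"] += 1
        self.stats[strategy]["wins"] += int(success)

# Usage: over many episodes, choices drift toward the strategy that works.
planner = SelfImprovingPlanner(["cache-first", "recompute"])
planner.record("cache-first", success=True)
print(planner.choose())
```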
What is certain is that this capability introduces another layer of cybersecurity risks. For example, learning from biased or adversarial data could have unintended consequences: if the self-learning process reinforces harmful patterns, the orchestrator could start making wrong decisions.
Beyond the orchestrator, some agentic AI systems may also rely on self-adapting agents, which refine their behaviour based on their interactions with the environment. Attackers could inject hidden triggers into an agent's training data, leading to malicious behaviour when activated. These behaviours might go undetected, reinforcing malicious tendencies and compromising the agent's intended functionality.
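One partial mitigation is to treat the learning signal itself as untrusted input and filter anomalous feedback before it can steer the system. The sketch below uses a robust median-based outlier test; the threshold is illustrative:

```python
import statistics

def filter_feedback(samples: list[float], k: float = 3.0) -> list[float]:
    """Drop feedback values that deviate wildly from the median before
    learning on them (median/MAD is robust to a few poisoned points)."""
    med = statistics.median(samples)
    mad = statistics.median(abs(x - med) for x in samples) or 1e-9
    return [x for x in samples if abs(x - med) / mad <= k]

# A poisoned reward crafted to reinforce a hidden behaviour stands out
# statistically and is excluded from the update.
print(filter_feedback([0.8, 0.7, 0.9, 0.85, 42.0]))   # -> [0.8, 0.7, 0.9, 0.85]
```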
Conclusion
As the excitement surrounding agentic AI keeps growing, it's important to recognize that not all AI systems are truly agentic. To be considered agentic, an AI system must embody five features: it must be goal-oriented, context-aware, capable of multi-step reasoning, action-driven, and self-improving. These five features define a new technological paradigm, one that opens the door to groundbreaking possibilities but also introduces a new array of cybersecurity challenges.
In the next article of this series, we will examine the architecture of agentic AI systems, exploring their core components and the specific cybersecurity risks associated with each. We will also highlight how traditional cybersecurity challenges remain relevant in the context of agentic AI, and discuss the steps needed to safeguard these evolving technologies.
References
1. Craig, L. (2024). What is agentic AI? Complete guide. Search Enterprise AI. Available at: https://www.techtarget.com/searchenterpriseai/definition/agentic-AI.
2. Davies, G. & Bessa, A. (2024). What is Agentic AI? Definition, features, and governance considerations. KNIME. Available at: https://www.knime.com/blog/what-is-agentic-ai.
3. Stryker, C. (2024). Agentic AI. IBM. Available at: https://www.ibm.com/think/insights/agentic-ai.
4. Bottington, A. (2024). Practical Applications of Prompt Engineering. Integrail.ai. Available at: https://integrail.ai/blog/agentic-ai-examples.
5. Griffith, E. (2024). A.I. Isn’t Magic, but Can It Be ‘Agentic’? The New York Times. Available at: https://www.nytimes.com/2024/09/06/business/artificial-intelligence-agentic.html.
6. Vinky, G. (2024). What is Agentic AI? An In-Depth Exploration. RPATech. Available at: https://www.rpatech.ai/agentic-ai/.
7. Kapoor, S., Stroebl, B., Siegel, Z. S., Nadgir, N., & Narayanan, A. (2024). AI Agents That Matter. arXiv preprint arXiv:2407.01502.
8. LangChain Blog. (2024). What is an AI agent? LangChain. Available at: https://blog.langchain.dev/what-is-an-agent/.
9. Ng, A. (2024). Andrew Ng Explores The Rise Of AI Agents And Agentic. Snowflake Inc. Available at: https://www.youtube.com/watch?v=KrRD7r7y7NY.
10. Lisowski, E. (2024). AI Agents vs Agentic AI: What’s the Difference and Why Does It Matter? Medium. Available at: https://medium.com/@elisowski/ai-agents-vs-agentic-ai-whats-the-difference-and-why-does-it-matter-03159ee8c2b4.
11. Pounds, E. (2024). What Is Agentic AI? NVIDIA Blog. Available at: https://blogs.nvidia.com/blog/what-is-agentic-ai/.
12. Kaggle.com (2025). Agents. Google. Available at: https://www.kaggle.com/whitepaper-agents.
13. Kamal, A., Ansari, T., & Chapaneri, K. (2024). Agentic AI: The new frontier in GenAI. An executive playbook. PwC. Available at: https://www.pwc.com/m1/en/publications/documents/2024/agentic-ai-the-new-frontier-in-genai-an-executive-playbook.pdf.
14. Luck, M. & d'Inverno, M. (1995). A Formal Framework for Agency and Autonomy. ICMAS, Vol. 95.
15. Ciancaglini, V., Gariuolo, S., Hilt, S., McArdle R. & Vosseler, R. (2024). AI Assistants in the Future: Security Concerns and Risk Management. Trend Micro. Available at: https://www.trendmicro.com/vinfo/gb/security/news/security-technology/looking-into-the-future-risks-and-security-considerations-to-ai-digital-assistants.
16. Ji, J. (2025). Beyond the Chatbot: Agentic AI with Gemma. Google. Available at: https://developers.googleblog.com/en/beyond-the-chatbot-agentic-ai-with-gemma/.
17. Walmart.com (2024). Walmart’s Element: A machine learning platform like no other. Walmart. Available at: https://tech.walmart.com/content/walmart-global-tech/en_us/blog/post/walmarts-element-a-machine-learning-platform-like-no-other.html.
18. Muralidharan, G. (2023). How Netflix Uses Artificial Intelligence - Argoid. Available at: https://www.argoid.ai/blog/netflix-ai.
19. Morin, C. (2024). What is multi-step reasoning? Sysdig. Available at: https://sysdig.com/blog/what-is-multi-step-reasoning/.
20. Trend Micro Newsroom. (2025). Trend Micro Puts Industry Ahead of Cyberattacks with Industry’s First Proactive Cybersecurity AI. Trend Micro. Available at: https://newsroom.trendmicro.com/2025-02-25-Trend-Micro-Puts-Industry-Ahead-of-Cyberattacks-with-Industrys-First-Proactive-Cybersecurity-AI.
21. Do, M. (2024). Artificial Intelligence In Singapore: Transforming Singapore Into The World’s First Smart Nation. Savvycom Software. Available at: https://savvycomsoftware.com/blog/artificial-intelligence-in-singapore/.
22. Gautam, A. (2024). Singapore Introduces AI-Powered Traffic Management System: A Step Towards Smart City Success. Aleaitsolutions. Available at: https://www.aleaitsolutions.com/singapore-introduces-ai-powered-traffic-management-system-a-step-towards-smart-city-success/.
23. Anthropic.com. (2024). Building effective agents. Anthropic. Available at: https://www.anthropic.com/research/building-effective-agents.