Agentic AI: Is Your Business Ready For It?

Posted on:
June 11, 2025
Author
Keith Ward
Moderator and Editor

TL;DR - Article Summary

Generative AI is reactive, but agentic AI is a newer, more complex technology that can make independent decisions and pursue multi-step goals, as seen in self-driving cars. This autonomy creates significant challenges for IT, including security risks due to extensive network access, integration with existing systems, and the need for robust monitoring of unpredictable behavior. Organizations must also address employee concerns about job security, establish clear governance and ethical guidelines, and navigate a rapidly changing regulatory environment. A cautious, well-planned adoption strategy is essential for successful implementation.

The use cases for generative AI have been well established in the last few years. Most people, whether technically oriented or not, understand what ChatGPT does. They’re used to chatbots and the kind of interaction required.

Agentic AI, on the other hand, is far less well-defined. It's making inroads in many areas, but it remains a nascent technology. Some of that is because it's newer; the greater factor is that, by its nature, it's much harder to define.

Generative AI, as found in services like ChatGPT, is reactive by design. It does not act autonomously; it simply responds to prompts, relying wholly on existing material. Agentic AI goes much further, although it shares some commonalities with its less-intrusive cousin.

The key differentiator is that agentic AI can make independent decisions, problem solve, and pursue goals that may involve numerous steps. Those features, while making agentic AI harder to pin down definitionally, also broaden its use cases almost infinitely.

Dining in Austin

An example of the distinction between generative AI and agentic AI can be seen in the following scenario: you’re going out with friends after work. You open ChatGPT and type in a prompt: “What are the best restaurants in Hyde Park in Austin?” 

ChatGPT rounds up its resources and produces a map of the area and a list of restaurants that includes business hours, price estimates, brief descriptions of each one's style of food, and more. It even adds a quick public relations-style blurb about what Hyde Park offers food lovers.

You decide on the restaurant, then order a Waymo self-driving car to pick up your group and take you there. You’ve now left generative AI behind and engaged a classic example of agentic AI.

Self-driving cars, like Waymos, use agentic AI extensively. The vehicles need to operate independently for the most part, which means taking action in critical areas like braking, steering, acceleration, and lane changes without any human intervention.

This happens safely, for the most part, because the AI algorithms analyze information and work in concert to make real-time decisions about navigation, road hazards, traffic, and so on. This kind of autonomous behavior is well beyond the abilities of generative AI, which is fundamentally about producing content.

However, these abilities make agentic AI an enormous challenge for IT professionals to implement, monitor, and manage. AI, even in its simpler forms, can be disruptive at any level; when you consider its power to act on its own, the risks (and benefits) of agentic AI ramp up several more levels.

And although agentic AI adoption may still be small, it won’t stay that way. Gartner states that “By 2028, 33% of enterprise software applications will include agentic AI, up from less than 1% in 2024, enabling 15% of day-to-day work decisions to be made autonomously.”

Given that projection, it’s important to understand the challenges agentic AI brings to the table. Planning is always important when adding new technology to the infrastructure stack; but its importance is magnified manyfold when dealing with a technology that sometimes seems almost human in its ability to act.

More Access = More Trouble

The realm in which agentic AI has perhaps the greatest potential to disrupt operations is security. Agentic AI systems, given their autonomous nature, typically need wide access to a network, including data sources like databases and application programming interfaces (APIs).

This is serious access, and it offers the power to do a lot of damage if left unchecked and open to compromise. That makes it essential to have reliable, robust safeguards like identity verification (including multi-factor authentication), role-based access control (RBAC), and audit logging to ensure only authorized and vetted users end up inside the network.
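As a concrete illustration, here is a minimal sketch of how an RBAC check with audit logging might gate an agent's actions. The role names, permissions, and agent IDs are hypothetical, not drawn from any particular product.

```python
# A minimal sketch of RBAC plus audit logging for an AI agent's actions.
# All identifiers (AGENT_ROLES, agent IDs, permission strings) are
# illustrative assumptions, not from any specific framework.
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("agent.audit")

# Hypothetical role-to-permission mapping for agent service accounts.
AGENT_ROLES = {
    "reporting-agent": {"db:read", "api:crm:read"},
    "ops-agent": {"db:read", "db:write", "api:ticketing:write"},
}

def authorize(agent_id: str, role: str, permission: str) -> bool:
    """Allow an agent action only if its role grants the permission,
    and record the decision either way (zero trust: deny by default)."""
    allowed = permission in AGENT_ROLES.get(role, set())
    audit_log.info(
        "%s agent=%s role=%s perm=%s decision=%s",
        datetime.now(timezone.utc).isoformat(),
        agent_id, role, permission,
        "ALLOW" if allowed else "DENY",
    )
    return allowed

# Example: an ops agent requesting a database write it is entitled to.
if authorize("agent-042", "ops-agent", "db:write"):
    pass  # proceed with the action
```

The point of the sketch is that every decision, allowed or denied, leaves an audit trail, which is exactly what investigators need when an autonomous system misbehaves.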

An important part of this overall framework is establishing a "zero trust" mindset on the part of management that then filters down to the entire organization. Zero trust starts with the assumption that anything connecting to your network is potentially harmful. Having to prove you are who you say you are goes a long way toward minimizing or outright eliminating those harms.

At the same time, employees need to get their work done, and an overabundance of restrictions can interfere with that ability. Careful planning is needed to balance the need to simultaneously stay secure and be productive.

Systems Integration

Another problem with agentic AI is how to integrate it into existing systems, which can run the gamut from enterprise applications to legacy systems to cloud services and more. This can be a daunting undertaking, given how siloed many older systems are. 

Meshing with APIs can be particularly problematic for agentic AI, especially when those APIs are poorly documented or under-documented. A related issue is the explosion of APIs in the cloud, which creates ever-growing complexity that has to be navigated and seamlessly integrated.

This forces IT teams to design scalable, secure, and modular interfaces while maintaining performance and avoiding disruptions to existing workflows or infrastructure. It’s another tradeoff, and like all balancing acts in IT, it’s easier said than done.  
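One common pattern for keeping those interfaces modular is to put a thin adapter between the agent and each legacy system, so agent code never touches the raw API directly. Below is a minimal sketch; the class name, endpoint URL, and retry policy are illustrative assumptions, not a real service.

```python
# A minimal sketch of a modular adapter wrapping a hypothetical legacy
# REST inventory endpoint, with timeouts and retries built in so the
# agent cannot hang or hammer the backing system.
import time
import urllib.error
import urllib.request

class LegacyInventoryAdapter:
    """One guarded entry point into a legacy API for agent code."""

    def __init__(self, base_url: str, timeout: float = 5.0, retries: int = 3):
        self.base_url = base_url
        self.timeout = timeout
        self.retries = retries

    def get_stock(self, sku: str) -> str:
        url = f"{self.base_url}/inventory/{sku}"
        for attempt in range(1, self.retries + 1):
            try:
                with urllib.request.urlopen(url, timeout=self.timeout) as resp:
                    return resp.read().decode("utf-8")
            except urllib.error.URLError:
                if attempt == self.retries:
                    raise
                time.sleep(2 ** attempt)  # back off before retrying

# Usage (assuming an internal endpoint exists):
# adapter = LegacyInventoryAdapter("https://legacy.internal.example")
# stock = adapter.get_stock("SKU-123")
```

Because the adapter owns the timeout, retry, and error-handling logic, swapping out the legacy system later means changing one class rather than every agent that calls it.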

Watching the Watchers

Once integrated into existing systems, agentic AI presents another challenge: how to monitor it for unpredictable behavior, deviations, and failures. Its autonomous nature necessitates this, since it does far more than regurgitate existing data.

An organization’s first duty here is to know what its agentic AI is doing. That means monitoring agent actions in real time, which includes collecting metrics that quantify performance and detailed logs of events and system behaviors.
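As an illustration, here is a minimal sketch of per-step telemetry: each agent action is timed and emitted as a structured record that downstream observability tooling can ingest. The field names and the agent ID are assumptions made for the example.

```python
# A minimal sketch of structured, per-action telemetry for an agent.
# Field names ("agent", "step", "latency_ms") are illustrative.
import json
import sys
import time

def record_step(agent_id: str, step: str, fn, *args, **kwargs):
    """Run one agent step, measure its latency, and emit a structured
    log line whether the step succeeds or fails."""
    start = time.monotonic()
    ok = True
    try:
        return fn(*args, **kwargs)
    except Exception:
        ok = False
        raise
    finally:
        json.dump(
            {
                "agent": agent_id,
                "step": step,
                "latency_ms": round((time.monotonic() - start) * 1000, 2),
                "ok": ok,
                "ts": time.time(),
            },
            sys.stdout,
        )
        sys.stdout.write("\n")

# Example: timing a hypothetical planning step.
record_step("agent-042", "plan_route", lambda: sum(range(1000)))
```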

Proper monitoring of agentic AI requires comprehensive observability. Here the old IT adage comes into play: you can’t secure what you do not know you’ve got. Observability tools allow developers and others to see how the AI behaves, makes decisions, and progresses toward goals. 

If things go wrong and the agentic AI becomes erratic — for instance, not following established norms or goals — proper observability should spot the behavior and alert managers before things get out of hand.
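A simple version of such a guardrail can be sketched as follows: compare each action the agent proposes against an allowlist for its current goal, and raise an alert on any deviation. The goal names, action names, and alert channel here are hypothetical.

```python
# A minimal sketch of a deviation guardrail: actions outside the
# expected set for the current goal trigger an alert for human review.
# The goal-to-action mapping is an illustrative assumption.
import logging

logging.basicConfig(level=logging.WARNING)
alerts = logging.getLogger("agent.alerts")

# Hypothetical mapping from goal to the actions it is expected to use.
EXPECTED_ACTIONS = {
    "generate_report": {"query_db", "summarize", "send_email"},
}

def check_action(agent_id: str, goal: str, action: str) -> bool:
    """Return True if the action fits the goal; otherwise alert so a
    human can intervene before the agent drifts further."""
    if action in EXPECTED_ACTIONS.get(goal, set()):
        return True
    alerts.warning(
        "Deviation: agent=%s goal=%s attempted unexpected action=%s",
        agent_id, goal, action,
    )
    return False

# Example: flags an agent trying to delete records mid-report.
check_action("agent-042", "generate_report", "delete_records")
```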

Quelling the Resistance

Along with serious technical challenges, the less-technical aspects of integrating agentic AI are crucial to get right. 

It starts at the top. Organizational readiness is a must for an AI rollout to succeed. Every significantly disruptive technology over the decades, whether virtualization, client-server computing, or cloud computing, has faced significant resistance from existing staff and stakeholders, who know what they like and like what they know.

Agentic AI is no different in this regard. IT must work closely with business units to manage expectations, redesign processes, and provide training. In other words, it must be a company-wide effort that gets buy-in from every department, driven by change management strategies and stakeholder engagement.

Job Fears

In terms of employee satisfaction, one top priority for management should be to assuage concerns about layoffs. Those concerns are widespread: one survey found that 28% of employees believe AI could reduce their roles or cost them their jobs. Such fears inevitably increase stress levels and lead to unhappier workers.

Instead, managers should focus on demonstrating how agentic AI can help employees become more productive in ways that benefit the organization. Doing so can mitigate the morale dip that often follows the introduction of AI technology.

Governance, Compliance, and Ethics

Agentic AI raises new issues around governance, compliance and ethics as well. The questions are legion, and can include:

  • Who’s accountable for the AI’s actions?
  • How do decisions align with regulatory frameworks?
  • How do we ensure fairness, transparency, and ethical use?

IT teams have to work closely and regularly with legal, compliance, and risk management groups to define policies, document agent behavior, and ensure adherence to internal and external standards. 
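One practical step toward that documentation is making agent policies machine-readable, so accountability and compliance mappings live alongside the deployment itself rather than in a forgotten spreadsheet. The sketch below is purely illustrative; every key and value is an assumption.

```python
# A minimal sketch of a machine-readable agent policy. Keys, contacts,
# and regulatory tags are illustrative assumptions, not a standard.
AGENT_POLICY = {
    "agent_id": "agent-042",
    "owner": "it-ops@example.com",             # accountable human team
    "allowed_scopes": ["db:read", "api:crm:read"],
    "regulatory_tags": ["CCPA", "EU-AI-Act"],  # frameworks to honor
    "review_cycle_days": 90,                   # re-audit cadence
    "escalation": "security-oncall@example.com",
}
```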

That in itself can be an onerous task, given the velocity of change in the agentic AI world: once a standard has been established, a new government regulation could pop up the next week and completely demolish it, making the constantly moving targets feel nearly impossible for businesses to hit.

The ‘Finger of Blame’

That said, it still has to be done. As IBM notes, “... all AI governance is human-centered.” Humans must maintain control, and be ultimately responsible for the decisions and actions agentic AI takes. Blaming AI will be tempting, and the “finger of blame game” is a well-known phenomenon in the IT space: When something goes wrong, it’s common for the network team to blame the server team, which blames the applications team, which blames the database team, and so on.

This makes it imperative to have assigned responsibilities. There should be a clear delineation for the implementation and operation of agentic AI: not to lay blame, but to ensure that problems get solved and failures corrected quickly and properly. 

Look Out for Minotaurs

When it comes to compliance, it’s important to note that seemingly every governing body on the planet is coming up with its own regulatory framework. To name just a few examples:

  • European Health Data Space (EHDS)
  • California Consumer Privacy Act (CCPA)
  • EU AI Act

These all have their own rules, with the goal of ensuring responsible AI development and deployment, addressing ethical concerns, and balancing innovation with risk management. And this is just the smallest sampling of what’s out there; navigating the labyrinth of restrictions may reveal a Minotaur at the end, but keeping up is nonnegotiable. 

These frameworks also have their own specific ethical guidelines; in addition, there are industry-specific regulations to deal with: protected data, for instance, will be very different for the healthcare industry than for the finance industry. 

Recommendations

This is an exceedingly brief overview of the agentic AI landscape and what organizations face when considering its implementation. What is clear is that along with the incredible promise of the technology comes abundant risk. It can help and hurt in equal measure.

One thing to consider at the outset is whether or not agentic AI is even necessary for the proper functioning of your business. One recent, relevant analogy is cloud computing: companies rushed headlong into the public cloud at the beginning, thinking it was the magic bullet that would solve all their problems.

Too often, that gun was turned around and the bullet damaged the company instead, leading to huge, unexpected issues, not the least of which were enormous monthly fees. That’s why “repatriation” became a buzzword, as a healthy percentage of companies started moving data out of the public cloud and back on-premises.

If an organization decides to take the plunge into agentic AI, extreme caution should be exercised. Dipping a toe into that pool is often a better strategy than a deep dive, especially if current staff lacks the expertise.

Thus, it’s important to know which use cases you think agentic AI will help with, and the specific problems it can solve (not vague goals like “increase efficiency”). Once the business case has been made, the C-suite can be brought in; once executives have agreed, the rest of the staff can be prepared for the transition.

Slow and steady is typically the way to go with disruptive technology; that axiom certainly holds with regard to agentic AI. 
