AI governance in the agentic era

This article, co-sponsored by HCLTech, covers agentic AI governance, with a focus on the evolving risk landscape, guardrails, and enabling agentic growth.


Published: July 2025



Agentic artificial intelligence is an active area of exploration for many organizations. A recent IBM and Morning Consult survey of 1,000 enterprise AI developers found that 99% of respondents said they were exploring or developing AI agents. These agents are goal-driven, designed to perform tasks and carry out workflows autonomously.

AI agents tend to focus on specific tasks with simple workflows. Agentic AI, by contrast, tends to involve multiple AI agents that carry out full end-to-end workflows with significant autonomy, often in more complex environments. Basic AI agents are not new; examples include antivirus software and robotic vacuums.

Agentic AI is a newer concept and can enable end-to-end workflows for key organizational processes such as IT incident management or managing customer returns.

Stanford's 2025 AI Index Report highlighted that AI agents already show signs of matching human capabilities for select tasks and can deliver speed and cost efficiency. Over time, agentic AI capabilities will likely continue to improve and adoption will increase. While this offers many benefits to organizations and society, it also brings risks that must be managed.

"AI has accelerated rapidly, evolving beyond [large language models] LLMs and chatbots to assistants and now agents that can drive autonomous decision-making with continuous optimization. Agentic AI holds the potential to transform the way we work, unlocking new levels of productivity and allowing organizations to focus on more strategic priorities. As organizations implement these powerful systems, success will depend on having the right strategies and governance in place to enable them to responsibly scale interconnected agents to solve enterprise-wide challenges," Savio Rodrigues, vice president of Service Partners at IBM.

Enterprises face a dual challenge: deploying agentic AI to maintain competitiveness while preemptively addressing legal liabilities and ethical dilemmas. With the current pace of innovation, policymakers struggle to keep up with technological advancements. For example, a major airline being held liable for an AI chatbot’s misleading policy advice demonstrates the legal repercussions of insufficient guardrails.

Governance frameworks must anticipate emergent risks while enabling operational objectives.


Evolving risk landscape

Agentic AI offers significant promise, including productivity gains, new workflow models, fewer human bottlenecks, and accelerated innovation. With that promise come new challenges. Understanding these challenges and the evolving risk landscape can help organizations prepare for and mitigate risks effectively, enabling them to gain the full benefits of agentic AI.



Guardrails for agentic AI: A three-tiered framework

To guide adoption while managing risks, organizations can use a three-tiered framework of guardrails to enable governance of agentic AI that scales with use case risk and potential impact.

Tier 1: Foundational guardrails

Agentic AI systems should include the same guardrails that are necessary for all AI systems. These guardrails cover aspects such as privacy, transparency, explainability, security and safety.

They may also involve following global standards such as the International Organization for Standardization’s ISO/IEC 42001 and the U.S. Department of Commerce National Institute of Standards and Technology’s AI Risk Management Framework.

The guardrails may also include recording each system’s intended goals, boundaries and limitations, and incorporating safety features, secure access and internal explainability tools.

Organizations should evaluate and revise pre-existing AI guardrails to address the risks agentic AI may introduce. Having foundational protections in place helps ensure that every system starts on solid ground and can be safely managed over time.
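To make the documentation element above concrete, the sketch below shows one way a team might record an agent's intended goals, boundaries and limitations as a structured inventory entry. This is a minimal illustration in Python; the AgentRecord class and its field names are hypothetical and not drawn from ISO/IEC 42001, the NIST framework or any other standard.

```python
from dataclasses import dataclass, field

# Hypothetical registration record for an agentic AI system.
# Field names are illustrative; adapt them to your own inventory schema.
@dataclass
class AgentRecord:
    name: str                      # human-readable identifier
    intended_goals: list[str]      # what the agent is supposed to achieve
    boundaries: list[str]          # actions the agent must never take
    known_limitations: list[str]   # documented gaps and failure modes
    owner: str                     # accountable person or team
    tools_allowed: list[str] = field(default_factory=list)  # approved integrations

# Example entry for an IT incident-management agent (illustrative only).
record = AgentRecord(
    name="it-incident-triage-agent",
    intended_goals=["Classify incoming incidents", "Route tickets to the right team"],
    boundaries=["Never close a ticket without human sign-off"],
    known_limitations=["Unreliable on incidents reported in languages other than English"],
    owner="it-operations@example.com",
    tools_allowed=["ticketing-api", "knowledge-base-search"],
)
```

Keeping such records in a machine-readable form makes it easier to audit which agents exist, who owns them and what they are permitted to do.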

Tier 2: Risk-based guardrails

Not all agentic AI systems carry the same level of risk. Some may operate in low-impact contexts, while others operate in higher-impact contexts that may influence people’s finances, health, human rights or other issues.

In this tier, organizations may adjust guardrails based on the risk level of the use case. For example, a chatbot answering retail product questions may need only minimal guardrails, such as clear user disclaimers, basic monitoring for accuracy and bias, and routine review of common queries to help ensure relevance.

In contrast, a chatbot agent handling banking disputes should have significantly more oversight, including rigorous pre-deployment testing, detailed audit logging, stricter access controls, and real-time supervision mechanisms. It may also be subject to industry-specific regulatory compliance and require human-in-the-loop decision confirmation for high-impact decisions.

This demonstrates how risk-based guardrails enable organizations to right-size governance controls. Lighter guardrails may be appropriate for informational or low-impact agents, while mission-critical AI systems benefit from more robust governance mechanisms and oversight protocols.

Key tools that organizations can use to adjust guardrails as needed include real-time monitoring, human-in-the-loop interventions, and customized workflows and performance thresholds. These tools help organizations apply the level of governance appropriate to the use case and the risks involved.
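As an illustration of how these tools fit together, the sketch below encodes the retail-versus-banking example as a risk-tiered guardrail policy with a human-in-the-loop gate. It is a minimal sketch; the risk levels, thresholds and requires_human_review function are illustrative assumptions, not a prescribed design.

```python
from enum import Enum

class RiskLevel(Enum):
    LOW = "low"     # e.g., retail product Q&A
    HIGH = "high"   # e.g., banking dispute handling

# Hypothetical per-tier guardrail settings; the thresholds are placeholders
# an organization would calibrate to its own risk appetite.
GUARDRAILS = {
    RiskLevel.LOW: {
        "human_in_the_loop": False,
        "audit_logging": "sampled",
        "confidence_threshold": 0.70,  # below this, fall back to a canned answer
    },
    RiskLevel.HIGH: {
        "human_in_the_loop": True,
        "audit_logging": "full",
        "confidence_threshold": 0.95,  # below this, escalate to a human
    },
}

def requires_human_review(risk: RiskLevel, confidence: float, high_impact: bool) -> bool:
    """Route a proposed agent action to a human when the guardrails demand it."""
    policy = GUARDRAILS[risk]
    if policy["human_in_the_loop"] and high_impact:
        return True  # high-impact decisions always get human confirmation
    return confidence < policy["confidence_threshold"]

# A banking-dispute action with modest confidence gets escalated.
print(requires_human_review(RiskLevel.HIGH, confidence=0.90, high_impact=False))  # True
```

The point of expressing the policy as data rather than scattered logic is that reviewers can inspect, version and audit the guardrail settings for each tier in one place.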

Tier 3: Societal guardrails

Agentic AI can influence organizations, entire communities, industries, and the environment. Societal guardrails are necessary to help mitigate risks that have a broader impact and help ensure alignment with social norms and public expectations.

  • Ethical design processes. Working with communities, experts and users to enable value alignment and define what responsible AI guardrails should look like.
  • Upskilling and training. Helping people adapt by offering training, upskilling and support programs.
  • Incident response systems. Putting protocols in place to report, analyze and learn from agentic AI-related issues.
  • Emergency controls. Enabling the ability to pause or shut down agentic AI systems in unusual or risky situations (see the sketch below).
  • Public policy engagement. Participating in shaping AI laws and standards in an open and collaborative way.

These guardrails help build long-term public trust and reduce the risk of widespread harm.
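The emergency-controls item lends itself to a short illustration. Below is a minimal sketch of a kill-switch wrapper that checks a pause flag before each agent action; the AgentRunner class and its methods are hypothetical, not a reference implementation.

```python
import threading

class AgentRunner:
    """Hypothetical wrapper that lets operators pause or stop an agent mid-run."""

    def __init__(self):
        self._pause = threading.Event()  # set => agent must stop taking actions

    def emergency_pause(self) -> None:
        """Operator-facing control: halt the agent before its next action."""
        self._pause.set()

    def resume(self) -> None:
        self._pause.clear()

    def step(self, action: str) -> str:
        # Check the kill switch before every externally visible action,
        # so a pause takes effect between steps rather than mid-action.
        if self._pause.is_set():
            return f"BLOCKED: {action} (agent paused by operator)"
        return f"EXECUTED: {action}"

runner = AgentRunner()
print(runner.step("refund customer order"))   # EXECUTED
runner.emergency_pause()
print(runner.step("refund customer order"))   # BLOCKED
```

Checking the flag between steps, rather than interrupting mid-action, keeps a pause from leaving external systems in a half-completed state.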


Enabling agentic growth

Agentic AI will touch nearly every part of an organization and the people it serves. Everyone has a role to play in ensuring its responsible use.

Enterprise and legal professionals should start conversations early about accountability, documentation and compliance. These conversations should bring privacy and risk teams into AI projects from the outset where needed. They can create processes that can adapt as laws and technologies evolve.

Technology and product teams should build explainability and safety into systems from the beginning. These teams should enable provenance and monitoring mechanisms and collaborate across departments to avoid silos and blind spots.

Regulators and policymakers should explore risk-based and use-case-specific rules that support innovation and protect people. They can continue to encourage transparency, training and international alignment.

Conclusion

Agentic AI has great potential to help solve some of the hardest problems organizations and consumers face, improve services and create new opportunities. But success depends on how responsibly it is developed, deployed and managed.

With proactive implementation of foundational, risk-based and societal guardrails, organizations can unlock the full benefits of agentic AI for all.

