Organizations are increasingly keen on the idea of deploying artificial intelligence agents to serve consumer needs and streamline customer-facing services. These agents, powered by large language models combined with tailored algorithms, have been floated as game changers across industries, including real estate, health care and cybersecurity.

But the risks and challenges associated with agentic solutions are amplified by the tools' autonomous nature, developers and industry members warn.

Having AI make decisions people rely on was already a hot topic in global regulatory circles, with the EU General Data Protection Regulation requiring guardrails for automated decision-making technology and U.S. states wrestling with the same questions. And while agentic AI is not explicitly named in the EU AI Act, governance leaders said it could qualify as high risk depending on its application.

"Think about agentic AI like a team of specialized contractors rather than a single employee," said Papi Menon, the chief product officer for Cisco's Outshift. "You wouldn't give a contractor keys to your entire building and access to all your data — you'd create appropriate boundaries."

The proliferation

Agentic AI is software trained specifically to take action on behalf of the user — think of how travel agents book itineraries for clients. What distinguishes it from generative AI is that it acts on the decisions it makes, ideally freeing a human from having to do the task.

The topic lit up the World Economic Forum 2025 meeting in Davos, Switzerland, with technology companies promising employees soon would not just be helped by AI but able to hand off tasks altogether. Salesforce CEO Marc Benioff told Axios his cohort of business leaders will be the last to work solely with humans.

Under the agentic umbrella are companies that help build agents, as well as directories for finding the tool that fits a client's needs. Some companies, such as Wayfound, aim to help manage a stable of agents.

The privacy and security sectors have been embracing the solutions. Agentic cookies are viewed as a potential means to streamline data collection by learning which sites a user agrees to opt into, or by pinging the user when a newly visited site requests information, argued Luis Fernandez, a tech executive director at VML.

"When data sharing is voluntary and contextual, the information will be more precise, up-to-date and relevant," Fernandez said. "Marketers will have to spend fewer resources sorting through noisy, inflated and irrelevant data sets, and they can shift focus to authentic engagement strategies."

Meanwhile, developers such as Microsoft and CrowdStrike have been adding agentic products to their cybersecurity offerings. The tools offer a way for cybersecurity officers, who are facing a burnout crisis, to focus on real threats as opposed to false positives — within a set of parameters.

"Security teams can define when and how AI-driven and automated actions occur — from triage to final response," a press release detailing the product's launch reads.

But agentic AI poses new risks, especially in the security field, researchers at the OWASP Agentic Security Initiative found. A report noted some pre-existing problems with generative AI, such as memory poisoning, tool misuse and hallucinations, are amplified by agentic models because of their ability to execute tasks instead of just making information available.

"To mitigate this, it is essential to down scope agent privileges when operating on behalf of the user," the report stated. "This is essential to prevent hijacking control via prompt injections and identity spoofing and impersonation."
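The down-scoping the report describes follows the familiar least-privilege pattern. A minimal sketch of what that could look like in practice — all names here (ScopedAgent, TOOL_REGISTRY, the individual tools) are hypothetical, not drawn from the OWASP report or any specific framework — is an agent that receives an explicit allow-list of tools at creation, so a hijacked prompt cannot reach anything outside its grant:

```python
# Hypothetical least-privilege sketch: each agent is granted an explicit
# allow-list of tools, and anything outside that list fails closed.

TOOL_REGISTRY = {
    "read_calendar": lambda: "3 meetings today",
    "send_email": lambda: "email sent",
    "delete_files": lambda: "files deleted",
}

class ScopedAgent:
    def __init__(self, name, allowed_tools):
        self.name = name
        # Privileges are fixed at creation time, not negotiated at runtime.
        self.allowed_tools = frozenset(allowed_tools)

    def invoke(self, tool_name):
        # Deny by default: a prompt-injected request for an out-of-scope
        # tool is refused rather than executed.
        if tool_name not in self.allowed_tools:
            raise PermissionError(
                f"{self.name} is not permitted to call {tool_name!r}")
        return TOOL_REGISTRY[tool_name]()

scheduler = ScopedAgent("scheduler", allowed_tools={"read_calendar"})
print(scheduler.invoke("read_calendar"))   # within scope, succeeds
try:
    scheduler.invoke("delete_files")       # outside scope, fails closed
except PermissionError as err:
    print("blocked:", err)
```

The design choice worth noting is that the check lives in the invocation path, not in the prompt: even if an attacker fully controls the agent's instructions, the scope boundary holds.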

HR for AI agents

Onboarding an agent comes with some of the same challenges as adopting any AI, said Cisco's Menon.

There's the need to find the right tool, connect agents with the right inputs, deploy at scale and evaluate performance, he said. And whereas silos of information can hamper effective AI governance programs, AI agents designed to execute specific tasks can be isolated because they are not programmed with interoperability in mind.

Cisco, along with LangChain and Galileo, is trying to solve the latter problem through AGNTCY, an open-source collective looking to build a standard infrastructure for agents to work on.

In the interim, Menon said treating agents like a cohort can help spot problems.

"The interactions between agents often create emergent behaviors that aren't apparent when examining any single component," he said.

According to Salesforce Executive Vice President of Global Privacy Lindsay Finch, organizations with standing privacy and data governance policies may have a head start in standing up an agentic program. She said anyone looking to add AI agents should look at the structures they currently have in place to understand how agentic tools fit into the overall program. They should also look to the EU AI Act or the Colorado AI Act to understand what requirements are in place for high-risk models.

Next is understanding the data the agent will be working with. Finch said taking a critical eye to what information the agent will access and then red teaming it to see if the agent is drawing the wrong conclusions from the information or giving bad advice can minimize risk down the road. And like human employees, Finch said it is good practice to only give AI agents lower risk tasks initially and scale up as trust around the product builds.

"Because the agent’s only going to be as good as the data and the guardrails that are in place," she said.

Those who do not have an AI governance program may find guidance in existing frameworks. BBB National Programs President and CEO Eric Reicin said the group's Center for Industry Self-Regulation's incubator on AI in hiring does not touch on agentic AI explicitly but was designed to adapt to evolving uses in the future.

"Transparency is critical. Employers must disclose what personal data is processed, the sources of that data, and how AI influences hiring decisions. Applicants should have the opportunity to request accommodations, opt out when necessary, and understand how AI is used at each stage of the hiring process," he said.

"These measures ensure AI-driven hiring remains ethical, equitable, and aligned with both legal requirements and industry best practices."

Caitlin Andrews is a staff writer for the IAPP.