ZDNET’s key takeaways
- Identity management is broken when it comes to AI agents.
- AI agents expand the threat surface of organizations.
- Part of the solution will be AI agents automating security.
As enterprises begin implementing artificial intelligence agents, senior executives are on alert about the technology’s risks but also unprepared, according to Nikesh Arora, chief executive of cybersecurity giant Palo Alto Networks.
“There is beginning to be a realization that as we start to deploy AI, we’re going to need security,” said Arora during a media briefing in which I participated.
“And I think the most amount of consternation is around the agent part,” he said, “because customers are concerned that if they don’t have visibility to the agents, if they don’t understand what credentials agents have, it’s going to be the Wild West in their enterprise platforms.”
Also: The best VPN services (and how to choose the right one for you)
AI agents are commonly defined as artificial intelligence programs that have been granted access to resources external to the large language model itself, enabling the program to carry out a broader variety of actions. A simple form is a chatbot, such as ChatGPT, that has access to a corporate database via a technique like retrieval-augmented generation (RAG).
A more complex arrangement might have the bot invoking a wide array of function calls to various programs simultaneously, for example via the Model Context Protocol (MCP) standard. The AI model can then invoke non-AI programs and orchestrate their operation in concert. Commercial software packages across the industry are adding agentic functions that automate some of the work a person would traditionally perform manually.
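To make that pattern concrete, here is a minimal sketch of the tool-calling loop in Python. It is illustrative only: the tool registry, the lookup_order tool, and the stubbed model are hypothetical stand-ins, not a real MCP client or any vendor’s API.

```python
# Minimal sketch of an agent's tool-calling loop. All names here
# (TOOLS, lookup_order, fake_model) are hypothetical stand-ins.
import json
from typing import Callable

# A registry mapping tool names to functions the "model" may invoke.
TOOLS: dict[str, Callable[..., str]] = {}

def tool(fn: Callable[..., str]) -> Callable[..., str]:
    """Register a function as a callable tool."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def lookup_order(order_id: str) -> str:
    # In practice this would query a corporate database (the RAG-style
    # access mentioned above); here it returns canned data.
    return json.dumps({"order_id": order_id, "status": "shipped"})

def fake_model(prompt: str) -> dict:
    # Stand-in for an LLM: a real agent would send the prompt plus the
    # tool schemas to a model and parse its proposed tool call.
    return {"tool": "lookup_order", "arguments": {"order_id": "A-1001"}}

def run_agent(user_request: str) -> str:
    call = fake_model(user_request)       # model proposes an action
    fn = TOOLS[call["tool"]]              # dispatcher resolves the tool
    result = fn(**call["arguments"])      # a non-AI program does the work
    return f"Tool {call['tool']} returned: {result}"

print(run_agent("Where is my order A-1001?"))
```

A production agent would replace fake_model with a call to an actual LLM and validate each proposed tool call before dispatching it; that dispatch point is precisely where the access and identity concerns below arise.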
Arora: “Ideally, I want to know all of my non-human identities, and be able to find them in one place and trace them.”
Tiernan Ray/ZDNET
The thrust of the problem is that AI agents will have access to corporate systems and sensitive information in many of the same ways as human workers, but the technology to manage that access, including verifying the identity of an AI agent and auditing what it has privileged access to, is ill-prepared for the rapid expansion of the workforce via agents.
Although there is consternation, organizations don’t yet fully grasp the scale of the task of securing agents, said Arora.
Also: Even the best AI agents are thwarted by this protocol – what can be done
“It requires tons of infrastructure investment, it requires tons of planning. And that’s what worries me, is that our enterprises are still under the illusion that they are extremely secure.”
The problem is made more acute, said Arora, by the fact that bad actors are ramping up efforts to use agents to infiltrate systems and exfiltrate data, increasing the number of entities that must be verified or rejected for access.
Identity management is broken
The lack of preparedness stems from the underdevelopment of techniques for identifying, authenticating, and granting access, said Arora. Most users in an organization are not regularly tracked, he said.
“Today, the industry is well covered in the privileged access side,” said Arora, referring to techniques known as privileged access management (PAM), which keeps track of a subset of users who are granted the greatest number of permissions. That process, however, leaves a big gap across the rest of the workforce.
Also: RAG can make AI models riskier and less reliable, new research shows
“We know what those people are doing, but we have no idea what the rest of those 90% of our employees are doing,” said Arora, “because it’s too expensive to track every employee today.”
Expanding the threat surface
Arora suggested that the approach is insufficient because agents, as they are used to handle more tasks, expand the threat surface. Because “an [AI] agent is also a privileged access user, and also a regular user at some point in time,” any agent, once created, may gain access to “the crown jewels” of an organization at some point in the course of its operation.
As machines gain privileged access, said Arora, “Ideally, I want to know all of my non-human identities, and be able to find them in one place and trace them.”
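To picture what that one-place inventory might look like, here is a toy registry of non-human identities. Every field, name, and entry below is invented for illustration; this is a sketch of the idea, not any vendor’s product.

```python
# Toy sketch of a single inventory of non-human identities, the kind
# of one-place view Arora describes. All fields and entries invented.
from dataclasses import dataclass, field

@dataclass
class NonHumanIdentity:
    agent_id: str
    owner_team: str                 # who is accountable for the agent
    credentials: list[str]          # credential IDs, never raw secrets
    systems: list[str]              # systems the agent can reach
    access_log: list[str] = field(default_factory=list)

REGISTRY: dict[str, NonHumanIdentity] = {}

def register(identity: NonHumanIdentity) -> None:
    REGISTRY[identity.agent_id] = identity

def trace(agent_id: str) -> NonHumanIdentity | None:
    """Find an agent's record, and with it every system it can touch."""
    return REGISTRY.get(agent_id)

register(NonHumanIdentity("support-bot", "customer-ops",
                          ["cred-042"], ["orders-db", "email-gateway"]))
print(trace("support-bot"))
```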
Also: AI usage is stalling out at work from lack of education and support
Current “dashboards” of identity systems are not engineered to track the breadth of agents gaining access to this or that system, said Arora.
“An agent needs the ability to act. The ability to act requires you to have some access to actions in some sort of control plane,” he explained. “Those actions today are not easily configured in the industry on a cross-vendor basis. So, orchestration platforms are the place where these actions are actually configured.”
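As a rough sketch of what such a control-plane check could look like, consider a per-agent allowlist consulted before any action is dispatched. The policy format and the agent and action names below are invented for illustration.

```python
# Hypothetical per-agent action allowlist, a minimal version of the
# "control plane" check described above. Policy format is invented.
AGENT_POLICIES = {
    "support-bot": {"allowed_actions": {"lookup_order", "send_email"}},
    "finance-bot": {"allowed_actions": {"read_ledger"}},
}

def authorize(agent_id: str, action: str) -> bool:
    """Return True only if the agent's policy explicitly allows the action."""
    policy = AGENT_POLICIES.get(agent_id)
    return policy is not None and action in policy["allowed_actions"]

# Every proposed action is checked before dispatch; deny by default.
for agent, action in [("support-bot", "lookup_order"),
                      ("support-bot", "read_ledger")]:
    verdict = "allowed" if authorize(agent, action) else "denied"
    print(f"{agent} -> {action}: {verdict}")
```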
The threat is heightened by nation-states scaling up cyberattacks and by other parties seeking to compromise privileged users’ credentials.
“We are seeing smishing attacks, and high-stakes credential attacks across the entire population of an enterprise,” said Arora. Smishing, or “phishing via text message,” uses automatically generated texts that aim to lure smartphone users into disclosing sensitive information, such as Social Security numbers, which attackers can then use to impersonate privileged users and escalate an attack on an organization.
Palo Alto’s research has identified 194,000 internet domains being used to propagate smishing attacks.
Agents to find agents
Arora’s pitch to clients for dealing with this issue is twofold. First, his company is integrating the tools gained through this year’s acquisition of identity management firm CyberArk. Palo Alto had not previously sold identity management products, but Arora believes his firm can unify what is today a fragmented collection of tools.
“I think with the core and corpus of CyberArk, we are going to be able to expand their capabilities past just the privileged users across the entire enterprise and be able to provide a cohesive platform for identity,” said Arora.
“With the arrival of agentic AI […] the opportunity is now ripe for our customers to take a look at it and say, ‘How many identity systems do I have? How are all my credentials managed across the cloud, across production workloads, across the privilege space, across the IAM [identity and access management] space?'”
The second prong of a solution, he said, is to use more agentic technology in the security products, to automate some of the tasks associated with a chief information security officer and their teams.
“As we start talking about agentic AI, we start talking about agents or automated workflows doing more and more of the work,” he said.
Also: Is your company spending big on new tech? Here are 5 ways to prove it’s paying off
To that end, Arora is pitching a new offering, Cortex AgentiX, which employs automation trained on “1.2 billion real-world playbook executions” responding to cyber threats. The various agent components can automatically hunt for “emerging adversary techniques,” the company said. The tools can analyze computing endpoints, such as PCs or email systems, to gather forensic data after attacks, so that security operations center (SOC) analysts can make a human decision about how to proceed with remediation.
“We’re taking what is a task that is manually impossible,” Arora said of the AgentiX techniques.
“You can’t process terabytes of data manually and go figure out what the problem is and solve the problem,” he said. “So, SOC analysts are now going to spend their time looking at the complex problems, saying, ‘How do I solve the problem?’ And they’ll have all the data that they need to solve the problem.”
Arora was quick to add that Palo Alto’s products will still largely involve approvals by SOC analysts.
“Most of our agents will have humans in the middle where our customers will be able to see the work that is done by the agent, confirm it, and then go forth and take the action,” he said.
Over time, Arora said that greater autonomy may be granted to AI agents to handle security: “As we get better at it, we’re going to allow our customers to say, ‘Okay, I’ve done this five times with me watching it [the AI agent], it’s doing it right, I’m going to approve it, allow it to act on my behalf.'”


