
Designing digital resilience in the agentic AI era


While global investment in AI is projected to reach $1.5 trillion in 2025, fewer than half of business leaders are confident in their organization’s ability to maintain service continuity, security, and cost control during unexpected events. This lack of confidence, coupled with the profound complexity introduced by agentic AI’s autonomous decision-making and interaction with critical infrastructure, requires a reimagining of digital resilience.

Organizations are turning to the concept of a data fabric—an integrated architecture that connects and governs information across all business layers. By breaking down silos and enabling real-time access to enterprise-wide data, a data fabric can empower both human teams and agentic AI systems to sense risks, prevent problems before they occur, recover quickly when they do, and sustain operations.

Machine data: A cornerstone of agentic AI and digital resilience

Earlier AI models relied heavily on human-generated data such as text, audio, and video, but agentic AI demands deep insight into an organization’s machine data: the logs, metrics, and other telemetry generated by devices, servers, systems, and applications.

To put agentic AI to use in driving digital resilience, organizations must give these systems seamless, real-time access to this flow of machine data. Without comprehensive integration of machine data, organizations risk limiting AI capabilities, missing critical anomalies, or introducing errors. As Kamal Hathi, senior vice president and general manager of Splunk, a Cisco company, emphasizes, agentic AI systems rely on machine data to understand context, simulate outcomes, and adapt continuously. This makes machine data oversight a cornerstone of digital resilience.
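
To make the pattern concrete, here is a minimal sketch (not Splunk's implementation) of the kind of loop Hathi describes: an agent watching a live feed of machine data and flagging values that break sharply from the recent baseline. The event list, source names, and threshold are hypothetical stand-ins for a real telemetry pipeline.

```python
import statistics
from collections import deque

# Hypothetical metric events: (timestamp, source, value).
# In practice these would stream in from a telemetry pipeline
# (message queue, log forwarder), not a static list.
events = [
    ("2025-01-01T00:00:00Z", "web-01", 120.0),
    ("2025-01-01T00:00:10Z", "web-01", 118.0),
    ("2025-01-01T00:00:20Z", "web-01", 123.0),
    ("2025-01-01T00:00:30Z", "web-01", 119.0),
    ("2025-01-01T00:00:40Z", "web-01", 450.0),  # anomalous spike
]

window = deque(maxlen=4)  # rolling window of recent values

def flag_anomaly(value, history, threshold=3.0):
    """Flag a value that deviates sharply from the recent baseline."""
    if len(history) < 3:
        return False  # not enough context yet
    mean = statistics.mean(history)
    stdev = statistics.stdev(history) or 1e-9  # avoid divide-by-zero
    return abs(value - mean) / stdev > threshold

for ts, source, value in events:
    if flag_anomaly(value, window):
        print(f"{ts} {source}: anomaly detected (value={value})")
    window.append(value)
```

Even in this toy form, the design point holds: the agent reasons directly over the raw data stream as it arrives, rather than over a stale, human-curated summary of it.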

“We often describe machine data as the heartbeat of the modern enterprise,” says Hathi. “Agentic AI systems are powered by this vital pulse, requiring real-time access to information. It’s essential that these intelligent agents operate directly on the intricate flow of machine data and that AI itself is trained using the very same data stream.” 

Few organizations are currently achieving the level of machine data integration required to fully enable agentic systems. This shortfall not only narrows the scope of possible use cases for agentic AI; worse, it can also introduce data anomalies and errors into outputs or actions. Natural language processing (NLP) models designed prior to the development of generative pre-trained transformers (GPTs) were plagued by linguistic ambiguities, biases, and inconsistencies. Similar misfires could occur with agentic AI if organizations rush ahead without providing models with a foundational fluency in machine data.

For many companies, keeping up with the dizzying pace at which AI is progressing has been a major challenge. “In some ways, the speed of this innovation is starting to hurt us, because it creates risks we’re not ready for,” says Hathi. “The trouble is that with agentic AI’s evolution, relying on traditional LLMs trained on human text, audio, video, or print data doesn’t work when you need your system to be secure, resilient, and always available.”

Designing a data fabric for resilience

To address these shortcomings and build digital resilience, technology leaders should pivot to what Hathi describes as a data fabric design, better suited to the demands of agentic AI. This involves weaving together fragmented assets from across security, IT, business operations, and the network to create an integrated architecture that connects disparate data sources, breaks down silos, and enables real-time analysis and risk management. 
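
As a concrete illustration of that weaving, the hypothetical Python sketch below registers several domain "silos" behind one query interface, so a single question (say, about one host) spans security, IT, and business data in one pass. The connector functions and record fields are invented for illustration and stand in for real integrations with databases, SIEMs, and observability platforms.

```python
from typing import Callable, Dict, Iterable

# Hypothetical connectors: each yields records from one domain silo.
def security_logs() -> Iterable[dict]:
    yield {"domain": "security", "event": "failed_login", "host": "vpn-01"}

def it_metrics() -> Iterable[dict]:
    yield {"domain": "it", "event": "cpu_high", "host": "vpn-01"}

def business_ops() -> Iterable[dict]:
    yield {"domain": "business", "event": "order_drop", "region": "emea"}

class DataFabric:
    """A thin unified query layer over otherwise-siloed sources."""

    def __init__(self):
        self._sources: Dict[str, Callable[[], Iterable[dict]]] = {}

    def register(self, name: str, source: Callable[[], Iterable[dict]]):
        self._sources[name] = source

    def query(self, predicate: Callable[[dict], bool]) -> Iterable[dict]:
        # One predicate runs across every registered domain, so a
        # human analyst or an AI agent sees correlated events
        # (e.g., the same host in security and IT data) together.
        for source in self._sources.values():
            for record in source():
                if predicate(record):
                    yield record

fabric = DataFabric()
fabric.register("security", security_logs)
fabric.register("it", it_metrics)
fabric.register("business", business_ops)

# Correlate activity on one host across the security and IT silos.
for rec in fabric.query(lambda r: r.get("host") == "vpn-01"):
    print(rec)
```

The design choice worth noting is that the fabric does not copy data into yet another silo; it leaves sources where they are and exposes a common access and governance layer, which is what lets both teams and agents reason across the whole enterprise in real time.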
