SecurityWeek’s ICS Cyber Security Conference is the conference where ICS users, ICS vendors, system security providers and government representatives meet to discuss the latest cyber-incidents, analyze their causes and cooperate on solutions.


Event Session

Untrusted by Design: Agentic AI in Industrial Control Systems

Wednesday, October 29, 2025
11:40 AM - 12:15 PM
Windsor C (Strategy Breakout)

About This Session

Industrial Control Systems (ICS) are embracing AI agents, and with them a new set of security questions. Integrating agentic AI (AI systems with autonomy and decision-making capability) into ICS can be as risky as installing a physical component from an unknown supplier with no security certification. Just as one would be wary of hardware from an unvetted source (no security score, no pen-testing results, unknown vulnerabilities), we must scrutinize AI agents before trusting them in critical infrastructure.

Agentic AI is also a double-edged sword in the hands of threat actors. Adversaries are already exploiting these AI systems to turbocharge attacks on industrial targets: recent threat intelligence reports show attackers using AI to dramatically accelerate the attack lifecycle, cutting the time from initial breach to system compromise or data exfiltration from days to hours. AI-driven malware might intelligently adapt to mimic normal ICS network traffic, evade detection, or rapidly identify weak points in a SCADA system. AI is not only helping defenders; it is equally empowering attackers, a reality that ICS operators must urgently address.

Key takeaways:

- AI as attack accelerator: Threat actors are leveraging agentic AI to automate and speed up ICS attacks, shrinking the window from intrusion to impact.
- Lock down AI agents: Applying strict "least privilege" access controls to AI agents can prevent them from taking unauthorized actions or accessing sensitive systems.

As we embrace AI in ICS, we must do so with eyes wide open. Agentic AI is not inherently malicious, but its autonomy, if left ungoverned, can become a liability. Security teams must evolve their threat models, treating AI agents with the same caution as any third-party component. In the age of autonomous systems, trust must be earned, not assumed.
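The "least privilege" idea above can be sketched as a deny-by-default gate that sits between an agent and the systems it can touch. This is a minimal illustration, not a real framework: the class and action names (`AgentActionGate`, `read_sensor`, `write_setpoint`) are hypothetical, and a production deployment would enforce such policy outside the agent's process entirely.

```python
# Minimal sketch of a deny-by-default ("least privilege") action gate
# for an AI agent in an ICS context. All names here are illustrative.

class UnauthorizedActionError(Exception):
    """Raised when an agent requests an action it was not granted."""


class AgentActionGate:
    """An agent may only invoke actions that were explicitly granted."""

    def __init__(self, allowed_actions):
        self._allowed = frozenset(allowed_actions)

    def execute(self, action, handler, *args, **kwargs):
        # Deny by default: anything not on the allowlist is refused.
        if action not in self._allowed:
            raise UnauthorizedActionError(f"agent not permitted: {action}")
        return handler(*args, **kwargs)


def read_sensor(tag):
    # Stand-in for a real historian/OPC query in this sketch.
    return {"tag": tag, "value": 42.0}


# A monitoring agent may read sensor values but not write setpoints.
gate = AgentActionGate(allowed_actions={"read_sensor"})
reading = gate.execute("read_sensor", read_sensor, "PT-101")

write_denied = False
try:
    gate.execute("write_setpoint", lambda tag, v: None, "PT-101", 55.0)
except UnauthorizedActionError:
    write_denied = True  # the unauthorized action never reached a handler
```

The key design choice is that the gate, not the agent, owns the allowlist: even a compromised or misbehaving agent cannot grant itself new capabilities.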

Speaker

Liliane Scarpari

Security Technical Specialist - Microsoft

Liliane Scarpari is a Sr. Technical Specialist with deep expertise in cybersecurity for critical infrastructure. She focuses on the intersection of AI, industrial systems, and threat resilience. With a background in cybersecurity and a passion for secure innovation, Liliane helps organizations navigate the evolving risks of digital transformation. She recently presented at the conference listed at https://smartgridobserver.com/ICS-Cybersecurity/agenda.htm, where she shared insights on the role of agentic AI as a new “untrusted component” in ICS environments.