Multiple cyberattacks on critical infrastructure facilities in 2016 resulted in mere inconvenience or embarrassment. How long can dumb luck keep us from harm?
By Michael Shalyt, VP Product, APERIO Systems
When the U.S. Energy Department released a nearly 500-page report this month warning of an “imminent” threat to the electrical grid, it was the latest reminder of just how dependent our day-to-day existence is on critical infrastructure networks — from power grids and water supplies to transportation networks and more. In 2016, attackers clearly demonstrated that industrial control systems, just like financial services, healthcare and other industries before them, are highly vulnerable to cyberattack.
Yet the truly scary lesson of 2016 is not so much what happened as a result of cyberattacks on critical infrastructure as what didn’t happen. Despite hackers accessing major infrastructure facilities, no municipal water supplies were poisoned, no nuclear power plants melted down, no trains crashed.
The 2016 cyberattacks on critical infrastructure ended without casualties only because the hackers either chose not to push their limits or had limited capabilities. The question we need to ask ourselves as we look ahead to 2017 is: how long can dumb luck keep us safe?
2016: ‘The Year of If’
In March 2016, hackers breached the network of the “Kemuri Water Company” — the name given by Verizon, who investigated the breach, to an anonymous regional U.S. water utility. Through the utility’s outdated AS/400 server, hackers took control of hundreds of programmable logic controllers (PLCs) that governed the flow of toxic chemicals used to treat water.
The ransomware attack on the San Francisco transit system (MUNI) in November 2016 ended in free rides, a pidgin-English manifesto, and a 100 Bitcoin ransom demand. Once inside MUNI’s network, the hackers could have potentially disrupted service or even rerouted trains to cause collisions.
In both of these cases, and in numerous others, hackers either had their fingers on or close to the trigger of a mass-casualty event.
What stopped them from pulling the trigger?
According to Verizon, “If the threat actors had a little more time, and with a little more knowledge of the ICS/SCADA system, KWC and the local community could have suffered serious consequences.”
The keyword that we keep hearing about critical infrastructure attacks in 2016 is “if.” If the hackers had been NSA-grade, if they’d had more time in the network, or if they’d actually wanted to cause serious damage or loss of life, 2016 would not have been The Year of If. Rather, it could have easily been the year cyberattacks on critical infrastructure shifted from merely disruptive to massively destructive.
Robust Operational Procedures – 2016’s Saving Grace
The air gap between critical infrastructure digital control systems and connected business systems has largely disappeared. The benefits of real-time data for billing and other business uses, along with the cost-effectiveness of remote management, have driven this trend — which shows no signs of reversing.
With the crumbling of the once-impenetrable wall between hackers and control over physical systems, what saved the day in 2016 during breaches like the one at KWC were good old-fashioned analog procedures and a lot of luck.
According to the Verizon report, “KWC’s alert functionality played a key role in detecting the changed amounts of chemicals and the flow rates.” Verizon is referring to manual procedures here: procedures smartly put in place to handle malfunctions and physical emergencies. Operational teams at KWC, MUNI, and other hacked critical infrastructure providers have years of experience managing faults, downtime, and force majeure outages. They’re trained to react quickly to stop damage, minimize downtime, isolate the source of the problem, and protect critical infrastructure.
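To make the idea concrete, the kind of alert Verizon describes can be reduced to a simple out-of-band check on chemical dosing and flow readings. The sketch below is purely illustrative; the parameter names and safe limits are assumptions, not KWC’s actual configuration.

```python
# Illustrative sketch of a threshold alert like the one that caught the KWC
# tampering: flag any reading outside an operator-defined safe band.
# All parameter names and limits are invented for this example.

SAFE_LIMITS = {
    # parameter: (low, high) - hypothetical values
    "chlorine_ppm": (0.2, 4.0),
    "flow_rate_lpm": (500.0, 1500.0),
}

def check_reading(parameter: str, value: float) -> bool:
    """Return True if the reading falls inside its safe band."""
    low, high = SAFE_LIMITS[parameter]
    return low <= value <= high

def alerts(readings: dict) -> list:
    """Collect an alert message for every out-of-band reading."""
    return [
        f"ALERT: {name}={value} outside safe band {SAFE_LIMITS[name]}"
        for name, value in readings.items()
        if not check_reading(name, value)
    ]

# Example: an attacker raises the chlorine dosing rate well past safe levels.
print(alerts({"chlorine_ppm": 8.5, "flow_rate_lpm": 900.0}))
```

The safeguard is only as good as the data feeding it, which is exactly the weakness the rest of this article turns to.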
The problem, though, is that the procedures that saved us are themselves quickly being replaced by automated, smart systems — computers, that is. And we’ve learned that computerized procedures, no matter how redundant or robust, can be fooled.
So, in 2017, when the people in the control rooms of critical infrastructure facilities are relying on physical data provided by digital systems that are themselves monitored by other digital systems — where exactly is the failsafe?
Data Forgery – the Next Big Threat
If we agree that manual procedures, perhaps as much as cybersecurity measures, were responsible for saving lives in the 2016 attacks on critical infrastructure, then we need to think long and hard about the nexus at which procedures and data meet.
This nexus is operational state awareness. And in light of our experience in 2016, it should be considered the Achilles’ heel of critical infrastructure in 2017: the weak spot at which hackers are already beginning to strike.
Procedures are only as effective as the operational awareness driving them. To circumvent operational resilience and inflict actual physical damage on infrastructure, attackers need only to effectively blind operators to the true operational state of their equipment.
Decision-making at large-scale infrastructure facilities is based nearly entirely on data from thousands of sensors. These sensors range from legacy devices to brand-new IoT monitors. Yet all remain notoriously vulnerable to direct cyberattacks as well as hijacking of data as it moves from sensors to the control room.
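One widely known defense against in-transit tampering is to authenticate each sensor message so that an altered reading fails verification at the control room. The sketch below illustrates the general technique under simplified assumptions (a pre-shared per-sensor key and a fixed wire format); it is not a description of any deployed system.

```python
# Hedged sketch: authenticating sensor telemetry with HMAC-SHA256 so that a
# message modified in transit is rejected. Key handling and message framing
# are simplified assumptions for illustration.
import hmac
import hashlib

SHARED_KEY = b"per-sensor-secret-key"  # in practice, provisioned per device

def sign_reading(payload: bytes) -> bytes:
    """Sensor side: append a 32-byte HMAC-SHA256 tag to the raw reading."""
    tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()
    return payload + tag

def verify_reading(message: bytes) -> bool:
    """Control-room side: recompute the tag and compare in constant time."""
    payload, tag = message[:-32], message[-32:]
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)

genuine = sign_reading(b"flow_rate_lpm=900")
tampered = b"flow_rate_lpm=100" + genuine[-32:]  # payload altered in transit
print(verify_reading(genuine))   # True
print(verify_reading(tampered))  # False
```

Authentication raises the bar for man-in-the-middle forgery, though it cannot help when the sensor itself is compromised — which is why the consistency of the data also matters.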
This danger — thankfully unrealized in 2016, The Year of If — is known as data forgery. The understanding that existing safety and sensor fault detection mechanisms cannot detect forged sensor data is moving into the hacker mainstream. From state-sponsored grid hackers to basement amateurs, there’s a growing realization that falsified sensor data can easily mislead control systems, mask the actual state of physical systems, and leave the control room operationally blind.
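Why do existing safety mechanisms miss this? Because a forged value can sit comfortably inside every per-sensor “safe” range while contradicting the physics of the process. The hypothetical sketch below cross-checks a reported tank level against the level implied by the flow meters via a simple mass balance; every name and figure is invented for illustration.

```python
# Illustrative physics-based consistency check (not any vendor's product):
# a frozen or forged tank-level reading is exposed because it disagrees with
# the level implied by the inflow and outflow meters.

def expected_level(prev_level_m3: float, inflow_m3_per_min: float,
                   outflow_m3_per_min: float, minutes: float) -> float:
    """Level the tank *should* reach, from mass balance: previous + in - out."""
    return prev_level_m3 + (inflow_m3_per_min - outflow_m3_per_min) * minutes

def is_consistent(reported_m3: float, predicted_m3: float,
                  tolerance_m3: float = 2.0) -> bool:
    """Accept readings that agree with the physics within sensor-noise bounds."""
    return abs(reported_m3 - predicted_m3) <= tolerance_m3

# Attacker freezes the level sensor at a normal-looking 100 cubic meters
# while the tank slowly drains (outflow exceeds inflow for an hour).
predicted = expected_level(100.0, inflow_m3_per_min=5.0,
                           outflow_m3_per_min=5.5, minutes=60)  # 100 - 30 = 70
print(is_consistent(100.0, predicted))  # False: the frozen value is exposed
```

A real deployment would use a full state estimator over many coupled measurements rather than a single equation, but the principle, validating reported data against a physical model of the plant, is the same.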
2017: ‘The Year of Truth’
If 2016 was the ‘Year of If,’ then 2017 should be the ‘Year of Truth,’ at least in terms of the data that drives critical infrastructure’s operational awareness. To keep us safer in 2017, we need to have full confidence in the integrity of sensor and other operational data.
To put it bluntly, in 2017 we need to be sure that our machines are not lying to us — because we can no longer rely on luck to keep our critical systems secure.