The control room inside the Paks nuclear power plant in Hungary. Photo: Bloomberg/Contributor
Major categories of cyber risk
Information disclosure
Perhaps the best known of information management problems is the information disclosure event, or ‘breach’. Such events occur so frequently that people have become numb to the issue. In the civil nuclear sector, the causes might include staff or contractors leaving storage media on trains and in taxis; disgruntled employees taking proprietary data with them when they resign (or are terminated); an organization being hacked electronically; and users being tricked through social engineering into giving away sensitive nuclear documents.
The first concern is to understand the different types of data that an organization holds, and how these can determine regulatory fines in the event of a breach. The broad categories of sensitive data can be defined as follows: personally identifiable information (PII), sensitive personal data (SPD), payment card and credit card information (PCI), protected health information (PHI), commercially confidential information (CCI), financially sensitive information (FSI) and value-sensitive information (VSI).1 PII and SPD usually consist of personal details and other information about an individual, such as financial status or religious affiliation. Disclosure of such information can harm an individual, for example by enabling financial fraud or violating privacy. PCI is usually a concern in breaches involving fraud against a person or institution. PHI breaches disclose medical data: information about a pregnancy or terminal illness, for example, can affect job prospects or loan approvals. Protection of CCI is typically a more organizational concern; this category could include information on proposed mergers and acquisitions, or – specifically in the civil nuclear sector – plans for a new reactor vessel that is soon to be commercialized. FSI and VSI often refer to trades or asset valuations. Breaches of each of the seven types of data outlined above carry different average costs to the victim, owing to variations in the sensitivity of the data concerned and in the size of regulatory fines across jurisdictions. While much is known about such fines in the US, and even in Europe, the figures vary widely by geography and jurisdiction.
Civil nuclear facilities and organizations also hold sensitive information on other categories, namely security clearances, national security, health and safety, nuclear regulatory issues and international inspection obligations. The variety and sensitivity of these data mean that products tailored for insuring the civil nuclear industry have evolved independently and are likely to continue to do so.
Data breach risk is a well-studied phenomenon, and a great deal of data about it is available. This means that breach risks to organizations can be estimated with increasing accuracy. A crucial but simple first step in any organization is to examine the list of information types (see Table 1) and estimate or categorize the number of records held in each category. These estimates need not be exact: indeed, the Cambridge Centre for Risk Studies has developed a simple logarithmic scale, from P1 to P9, for studying past breaches.2
Table 1: How to estimate data breach risk to an organization
| Severity | Number of records lost | Number of recorded US events (2012–18) | % of events |
|---|---|---|---|
| P1 | 0 to 100 | Below reporting threshold | 0 |
| P2 | 100 to 1,000 | Below reporting threshold | 0 |
| P3 | 1,000 to 10,000 | 2,022 | 58 |
| P4 | 10,000 to 100,000 | 918 | 26 |
| P5 | 100,000 to 1 million | 324 | 9 |
| P6 | 1 million to 10 million | 162 | 5 |
| P7 | 10 million to 100 million | 50 | 1.4 |
| P8 | 100 million to 1 billion | 19 | 0.5 |
| P9 | More than 1 billion | 2 | 0.1 |
Source: Risk Management Solutions, Inc. and Cambridge Centre for Risk Studies (2016), Managing Cyber Insurance Accumulation Risk, https://www.jbs.cam.ac.uk/fileadmin/user_upload/research/centres/risk/downloads/crs-rms-managing-cyber-insurance-accumulation-risk.pdf (accessed 28 Jan. 2019); and Coburn, Leverett and Woo (2019), Solving Cyber Risk.
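The logarithmic severity scale in Table 1 can be applied mechanically. As an illustrative sketch (not drawn from the source), the following Python function maps a record-count estimate to a P1–P9 band, with each band above P1 spanning one order of magnitude:

```python
import math

def severity_band(records: int) -> str:
    """Map a record-count estimate to the P1-P9 scale in Table 1.

    P1 covers 0-100 records; each subsequent band spans one order of
    magnitude, and anything above 1 billion records falls in P9.
    """
    if records <= 100:
        return "P1"
    # floor(log10) of the count gives the band index: 500 -> P2,
    # 5,000 -> P3, and so on, capped at P9 for very large losses.
    return f"P{min(int(math.log10(records)), 9)}"
```

Because the scale is logarithmic, a rough guess is enough: an estimate that is off by a factor of two or three will usually still land in the correct band.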
To estimate costs in a given scenario, an organization should start by anticipating the likely worst-case outcome, as this provides the upper limit in terms of projected data loss. The next step is to make a list of the seven above-mentioned categories of information type in one column (PII, SPD, PCI, PHI, CCI, FSI, VSI), then leave space for a numeric estimate in another. For each category, a rough guess should be made as to how many records are held (corresponding to the same P1–P9 data-loss scale shown in Table 1). A relatively straightforward example might involve payroll data, for which each organization would obviously hold records for each employee. A reasonable starting assumption would be that the number of records is at minimum equal to the number of employees – with the caveat that many organizations potentially hold data on past as well as current employees. This exercise need not be time-consuming: the number of records can be estimated on the logarithmic scale, and easily refined later as and when more accurate numbers are discovered through interviews with different organizational units. Similarly, estimates can be made for other types of data, although some estimates may fall into the P1 category (0–100 records), for example if the organization does not hold that type of data (though organizations are often surprised to discover that they hold more data than expected).
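The inventory exercise described above can be sketched as a simple table of estimates. Every figure below is an illustrative assumption for a hypothetical organization of about 3,000 employees, to be refined later through interviews with organizational units:

```python
# Hypothetical worst-case data inventory. All counts are illustrative
# assumptions, not real data; refine them as better figures emerge.
EMPLOYEES = 3_000

record_estimates = {
    "PII": EMPLOYEES * 2,  # payroll/HR data on current and past employees
    "SPD": EMPLOYEES,      # e.g. vetting and clearance files
    "PCI": 500,            # corporate payment-card holders
    "PHI": EMPLOYEES,      # occupational-health records
    "CCI": 10_000,         # commercially confidential documents
    "FSI": 1_000,          # financially sensitive records
    "VSI": 1_000,          # value-sensitive records, e.g. asset valuations
}

for category, count in record_estimates.items():
    print(f"{category}: ~{count:,} records")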
The next step is to estimate the maximum cost based on this information – which is why the number of records in each category needs to be listed. Costs differ markedly by category: a PCI event, for example, can cost 1.5 times as much as a PII event, and 5.5 times as much as a PHI event.3 Up-to-date information on the average cost per record may be available for a specific category. Failing that, a good first estimate is IBM’s 2017 cross-record average of $141 per record (in other words, an organization can arrive at a rough estimate of its exposure by multiplying the number of its employees by $141 if better, more localized data are unavailable). It may be necessary to add a shareholder loss estimate (reflecting reputational damage), ranging from 0 to 25 per cent of the share price in the case of a publicly listed company. This contingency should be kept separate from the costs of incident response over the days or weeks in which the affected organization seeks to identify the source of the breach. A good base assumption for incident response costs is to allow $400 per hour for a period of one day to three weeks. The resulting estimate is likely to be a worst-case scenario, but it is a realistic way of understanding the size of loss the organization hopes to prevent. Over time, a more accurate picture may emerge of the probability distribution across a range of scenarios, allowing mitigation contingency measures and self-insurance to be funded with a higher level of confidence.
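A minimal sketch of this cost arithmetic, using the $141 per-record average and the $400-per-hour response rate cited above. The record count and the round-the-clock three-week response window are illustrative assumptions:

```python
# Rough worst-case breach cost model following the steps in the text.
# $141 is IBM's 2017 cross-category average cost per record; $400/hour
# is the base assumption for incident response.
COST_PER_RECORD = 141   # USD per lost record
RESPONSE_RATE = 400     # USD per hour of incident response

def worst_case_breach_cost(total_records, response_hours=21 * 24):
    """Return a USD breakdown of the worst-case cost of a breach.

    A separate shareholder-loss contingency (0-25 per cent of share
    price for a listed company) would sit on top of this figure.
    """
    record_cost = total_records * COST_PER_RECORD
    response_cost = response_hours * RESPONSE_RATE
    return {
        "records": record_cost,
        "incident_response": response_cost,
        "total": record_cost + response_cost,
    }

estimate = worst_case_breach_cost(10_000)  # 10,000 records lost
```

For 10,000 records this yields roughly $1.4 million in record costs plus about $0.2 million in response costs, which is the kind of worst-case figure the text suggests planning against.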
Another simple method of breach cost estimation is to rely on the parameters in the EU’s General Data Protection Regulation (GDPR), which provides for fines of up to €20 million or 4 per cent of an organization’s global turnover, whichever is higher. If an organization holds any data on Europeans, this figure provides an easy answer to the question: ‘How much might it cost us?’ Of course, someone in a particular organization may well have much better data and thus be able to offer more accurate analysis than is possible via these two simple methods; in addition to enabling a more realistic projection of exposure, this would offer the opportunity to build in-house expertise in cyber risk management.
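The GDPR ceiling described here reduces to a one-line calculation (the turnover figures used below are illustrative):

```python
# GDPR fine ceiling: the greater of EUR 20 million or 4 per cent of
# global annual turnover, per the parameters cited in the text.
def gdpr_max_fine(global_turnover_eur):
    """Maximum GDPR fine in euros for a given global annual turnover."""
    return max(20_000_000, 0.04 * global_turnover_eur)
```

For a firm with €1 billion in turnover this gives €40 million; below €500 million in turnover, the flat €20 million ceiling dominates.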
Compromise of industrial systems
Technological advances and the human factor mean it is no longer sufficient (or perhaps even possible or desirable) to isolate computer systems from the internet, a process known as air-gapping. The Stuxnet attack on Iranian ‘air-gapped’ nuclear centrifuges, for instance, illustrated the ability to infiltrate sensitive systems through a simple thumb drive and therefore the unreliability of air-gaps.4
‘Air-gaps’ – IT measures designed to isolate computer systems from the internet – need to be continually maintained for industrial systems. Yet years of evidence indicate that proper maintenance of such protections is often lacking, mainly because very real economic incentives push users towards keeping infrastructure connected. And even when air-gaps are maintained, security breaches can still occur, as evidenced by an incident at the Davis-Besse nuclear generation facility in the US state of Ohio in 2003.
The plant’s network was breached by the Slammer worm,5 which gained access via a consultant’s data connection, exploiting the facility’s infrequent application of security patches. Although the plant was offline for maintenance at the time of the incident, the worm disabled a safety parameter display system – a safety-critical feature even when the plant is not operating – for five hours. The breach underlined the fact that systems such as that at Davis-Besse are vulnerable to randomly scanning worms, and that an air-gap offers limited protection: its presence may delay infection, but it does not address the unpatched vulnerability that is the root cause of a worm attack. Engineers are known to use air-gaps to avoid patching, but the myth that this alone offers sufficient protection can lead to a dangerous tendency to deal with symptoms rather than root causes. Such examples remind us that even air-gapped industrial systems carry a residual risk of infection by virus, worm, Trojan or insider hacking attack, and that organizations are well advised to calculate such risk no matter how effective they believe their air-gaps to be.
The Davis-Besse incident was also a reminder of the potential safety-critical implications of malware for supervisory control and data acquisition (SCADA) systems, even when malware is not specifically designed to target such systems.6
Over the years, a great number of cybersecurity incidents at industrial facilities have resulted in physical effects. One of the earliest on record occurred in Maroochy Shire in Queensland, Australia, in 2000, when a hacker caused the release of a large volume of sewage.7 In the civil nuclear sector, reports of computer glitches date back at least as far as 1991,8 though the earliest recorded malicious attack on a nuclear plant occurred in Lithuania in 1992.9 Even worms and viruses intended for other targets can affect civil nuclear facilities, as has been reported in the US.10 11 Japan has also experienced computer security problems at its civil nuclear facilities.12 Glitches continue to cause problems with control systems to this day.13 14 15
Note that most of these events occurred before the emergence in 2010 of Stuxnet, one of the most famous examples of a malicious computer program that has caused physical damage.16 There have also recently been two cyberattacks in Ukraine that have led to power outages,17 as well as an incident at an unspecified location in the Middle East involving malware that specifically altered the safety systems of industrial facilities.18
In many jurisdictions, the regulatory regime does not provide for compensation to victims of radiation released as a result of a cybersecurity incident.19 Plenty of data exist about cybersecurity incidents at civil nuclear facilities, but information of specific relevance for an insurance context (and codified in actuarial calculations) is not very easy to acquire. One reason for this information gap is that incidents resulting in radiological environmental impacts are much rarer than events involving data breach, distributed denial of service (DDoS) and ransomware.
Supplier digital business interruption
Even if an organization’s staff are highly trained, ready and capable of handling any technological accident, hacking incident or case of insider sabotage, it still faces the challenge of having to communicate and/or do business with other organizations, some of which may have less stringent safeguards in place. To illustrate, let’s consider a relatively simple ransomware event in which a computer system is deliberately infected with software that encrypts all the files with a malicious hacker’s secret cryptographic key. In theory, this means that the files can only be unlocked by the hacker, who charges a ransom to do so.
Now, imagine such an infection occurring at a global shipping company. The inability of that company to unlock files containing shipping manifests, legal certificates, insurance documentation or payment processing systems could hold up its global shipping for weeks or months. Such an event might in turn delay, disrupt or cancel the shipment of materials necessary for the safe operation of a nuclear facility, even though that facility may, in isolation, be appropriately protected against cyberattacks. Although such crises may sound like remote possibilities, cyber-related disruptions to supply chains have already happened. In 2017, container shipping giant Maersk was hit by the NotPetya ransomware attack, which cost the firm an estimated $300 million.20 Even more striking was the cost of the resultant disruption to Maersk’s downstream customers, estimated at nearly $3 billion.21
Thus, while an organization’s own security and privacy teams may be top-notch and without fault, it may still need to consider using insurance to cover the risk of disruption to its suppliers, whether of digital or physical assets. If an upstream supplier is hacked, the downstream organization can lose time, money and effort, regardless of the state of its own security measures. Insurance products designed to cover contingent business interruption from cyberattacks are now evolving. For some organizations, such arrangements may be worth discussing either within their internal risk teams or with cyber insurance specialists.