The CIA Triad and Thinking in First Principles (Part 1 of 3)
Early on, when we launched the Cybersecurity Center, I recorded a video in which I argued that cybersecurity needs first principles and that such principles would help us advance the science of cybersecurity. In that video, I even offered an analogy to the first principles of electrical engineering, such as the phenomena described by Maxwell’s equations.
That analogy drew some criticism from readers, and reasonably so. Let me be clear on this point: I don’t think we are going to find an equation, or set of equations, that we can apply to solve all our cybersecurity problems. Maybe we will (the third and final post in this series will consider that possibility), but right now I’m guessing probably not. First principles do, however, need to be fundamental statements that serve as the basis for all higher-level ideas. Finding them is not an easy task, but the following represents some of my thought process in working toward that goal.
After reading a number of papers and having many conversations about the fundamentals and first principles of cybersecurity, I find that one set of characteristics is consistently mentioned as the basis of all cybersecurity: the CIA triad of confidentiality, integrity, and availability. These concepts are widely regarded as the pillars of cybersecurity [1], [2]. In short:
Confidentiality—Ensuring that only those who need the information and are approved to access it can do so.
Integrity—Ensuring the information sent is the information received.
Availability—Ensuring that, when needed, the information is available.
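To make these three properties concrete, here is a minimal sketch in Python that maps each pillar to a simple, checkable predicate. Everything in it—the access list, the hash comparison, the service flag—is an illustrative assumption of mine, not a construction taken from Saltzer and Schroeder or from ISO 27000.

```python
import hashlib

# A minimal sketch (not a real security control) that maps each pillar of
# the CIA triad to a simple, checkable property. The access list, the hash
# comparison, and the service flag are all illustrative assumptions.

AUTHORIZED_READERS = {"alice", "bob"}  # hypothetical approval list

def confidentiality_ok(requester: str) -> bool:
    """Only those who need and are approved for access get the information."""
    return requester in AUTHORIZED_READERS

def integrity_ok(message: bytes, expected_sha256: str) -> bool:
    """The information received is the information that was sent."""
    return hashlib.sha256(message).hexdigest() == expected_sha256

def availability_ok(service_up: bool) -> bool:
    """When needed, the information is actually retrievable."""
    return service_up

message = b"relay 52A1 tripped"
digest = hashlib.sha256(message).hexdigest()

print(confidentiality_ok("alice"))       # True: alice is on the approval list
print(integrity_ok(message, digest))     # True: the message was not altered
print(availability_ok(service_up=True))  # True: the store is reachable
```

Even in a toy like this, notice that each predicate quietly embeds a particular definition of its pillar; change the definition and the check changes with it, which is exactly the problem discussed below.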
The above definitions will likely come under attack for a variety of reasons, both reasonable and not. This is precisely why I question the idea of treating the triad as first principles: no one seems to agree on precise definitions. The ideas of confidentiality, integrity, and availability, originally articulated by Saltzer and Schroeder as far as I can tell, are slightly but consequentially modified from how standards bodies like ISO define them. Without a precise definition or statement, it is difficult, if not impossible, to satisfy the conditions of a first principle.
For example, ISO 27000, a family of international standards for managing information security risks and their associated controls, defines confidentiality as a “…property that information is not made available or disclosed to unauthorized individuals, entities, or processes” [3].
Compare this to Saltzer and Schroeder’s original definition of failing to preserve confidentiality:
“An unauthorized person is able to read and take advantage of information stored in the computer. This category of concern sometimes extends to ‘traffic analysis,’ in which the intruder observes only the patterns of information use and from those patterns can infer some information content. It also includes unauthorized use of a proprietary program” [2].
The difference in language is nuanced but striking. The obvious difference is that Saltzer and Schroeder frame the definition around a perpetrator and protection from malicious acts, whereas the ISO 27000 definition frames the idea in terms of appropriately making information available. To me, this is the difference between protecting information from malicious actors and allowing access to those who need it. The difference in mindset might be explained by the intent of the authors: Saltzer and Schroeder, like nearly every cybersecurity researcher of that era, were working on projects associated with the U.S. Department of Defense, while ISO, from Switzerland no less, takes a more neutral tone.
None of this discussion is to suggest that the language associated with the triad is responsible for our failure to make cyber systems inherently secure. That said, I would be surprised to hear anyone argue that the failure to standardize our language has helped the situation.
Insofar as language is concerned, different definitions will dictate one’s ability to decide whether a security control appropriately addresses the elements of the triad. Take, for example, the arguments made in [4]. Lundgren and Möller use the example of a time-controlled safe to argue that unavailability can be a method of making something secure. According to the ISO definition of availability, a “property of being accessible and usable on demand by an authorized entity” [3], a time-controlled safe does not meet the criteria. On the other hand, the safe is fully compatible with Saltzer and Schroeder’s definition of compromised availability: “…an intruder can prevent an authorized user from referring to or modifying information, even though the intruder may not be able to refer to or modify the information” [2].
A time-controlled safe is a perfectly acceptable security control, depending on the needs of the user, as it allows access at certain times of day and remains locked at all others.
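The divergence is easy to see if we encode both readings of availability as predicates. The following is a minimal sketch under assumptions of my own: the user names, the 09:00–17:00 window, and both predicate formulations are illustrative, not drawn from either source.

```python
from datetime import time

# Hypothetical time-controlled safe, openable by an authorized user only
# between 09:00 and 17:00. The names, the window, and both predicates are
# illustrative assumptions used to contrast the two definitions.

OPEN_FROM, OPEN_UNTIL = time(9, 0), time(17, 0)
AUTHORIZED = {"alice"}

def safe_opens(user: str, now: time) -> bool:
    return user in AUTHORIZED and OPEN_FROM <= now <= OPEN_UNTIL

# ISO 27000 reading: "accessible and usable on demand by an authorized
# entity" -- any on-demand request by an authorized user must succeed.
def available_iso(user: str, now: time) -> bool:
    return safe_opens(user, now)

# Saltzer and Schroeder reading: availability is compromised only when an
# intruder prevents an authorized user's access. The time lock itself is
# policy, not a compromise.
def available_saltzer_schroeder(intruder_blocking: bool) -> bool:
    return not intruder_blocking

# At 03:00 the safe refuses even its owner:
print(available_iso("alice", time(3, 0)))                    # False under ISO
print(available_saltzer_schroeder(intruder_blocking=False))  # True under S&S
```

The same control passes one definition and fails the other, which is the crux of the argument: the verdict depends entirely on which definition you adopt.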
It seems that most people agree that perfecting all pillars of the triad is an unreasonable, and likely impossible, expectation. Considering this, if the triad itself is malleable and perhaps context-dependent, it is hard to argue that its pillars are first principles, or even fundamental characteristics, of security.
That is not to say that the triad is not important; it most certainly is. However, in a search for first principles, the triad does not suffice. My next post will show how it may nevertheless lead us there.
Contributor
Nicholas Seeley
Senior Vice President of Engineering Services

References

[1] S. Samonas and D. Coss, “The CIA Strikes Back: Redefining Confidentiality, Integrity and Availability in Security,” Journal of Information System Security, vol. 10, no. 3, pp. 21–45, 2014.
[2] J. H. Saltzer and M. D. Schroeder, “The protection of information in computer systems,” Proceedings of the IEEE, vol. 63, no. 9, pp. 1278–1308, Sep. 1975, doi: 10.1109/PROC.1975.9939.
[3] “ISO/IEC 27000:2018,” ISO. https://www.iso.org/cms/render/live/en/sites/isoorg/contents/data/standard/07/39/73906.html (accessed Apr. 05, 2022).
[4] B. Lundgren and N. Möller, “Defining Information Security,” Sci Eng Ethics, vol. 25, no. 2, pp. 419–441, Apr. 2019, doi: 10.1007/s11948-017-9992-1.