The CIA Triad and Thinking in First Principles (Part 2 of 3)
Given the previous post, I feel I should clarify: to imply that the CIA triad is therefore unnecessary or unusable would be a misstatement. It is useful as a practical exercise for evaluating possible controls, and it is also useful in helping us get to the true first principles. I would consider the triad a set of functional characteristics, and in doing so, it is helpful to think about why. What is it about confidentiality, integrity, and availability that makes them important aspects of cybersecurity? What are they rooted in?
These thoughts led me to carefully consider Descartes’ requirements of a first principle [1]. Interpreting his words, we have found a first principle when the idea can no longer be broken into smaller, more fundamental ideas. So, what are the fundamental components of confidentiality, integrity, and availability?
These questions led me to consider a topic that I am much more familiar with: applications of protective relaying in electric power systems. The first book that I ever read about protective relaying was GE’s The Art and Science of Protective Relaying [2]. The book lays out the functional characteristics of protective relaying as sensitivity, selectivity, and speed. I found there to be a striking resemblance between these three characteristics and the triad. In both, it seems that perfection of all elements is not possible or even necessary. Furthermore, improving one characteristic seems to negatively impact the others. With that, what are the roots of sensitivity, selectivity, and speed? Each characteristic for microprocessor-based protective relays has, at its root, at least one or some combination of Maxwell’s equations, the Nyquist criterion, and Shannon’s theorem.
For example, speed is dictated by the propagation of the electromagnetic wave produced by a fault on a line; speed is dictated by physics and defined by Maxwell’s equations. As defined by the physics we know today, a protective relay will never be able to detect a fault and operate faster than it takes for the fault-induced wave to travel down the line to where the relay sits. The phenomena described by Maxwell’s equations are the most basic elements that define the limits of speed in protective relaying; they embody the first principle of speed. Similar arguments can be made for why C. E. Shannon and Harry Nyquist play a role. Can we use a similar process starting with the CIA triad to get to first principles?
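To make that physical bound concrete, here is a minimal sketch of the travel-time limit. The 0.98 propagation factor and the 150 km fault distance are illustrative assumptions, not values from the text; the point is only that no relay can act before the fault-induced traveling wave reaches it.

```python
# Sketch: the physical lower bound on protective-relay speed is the travel
# time of the fault-induced traveling wave from the fault to the relay.
C = 299_792_458.0          # speed of light in vacuum, m/s
PROPAGATION_FACTOR = 0.98  # assumed: traveling waves on overhead lines move at ~98% of c

def min_detection_time_s(distance_to_fault_m: float) -> float:
    """Lower bound (seconds) before a relay this far from the fault can see it."""
    return distance_to_fault_m / (C * PROPAGATION_FACTOR)

# A fault 150 km down the line cannot be observed sooner than ~0.5 ms,
# no matter how fast the relay's electronics are.
t = min_detection_time_s(150_000)
print(f"{t * 1e3:.3f} ms")
```

Everything downstream of this bound (sampling, filtering, decision logic) only adds delay, which is why Maxwell’s equations sit at the root of the speed characteristic.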
If confidentiality, integrity, and availability are functional characteristics of cybersecurity, what is their foundation? Starting with confidentiality, what is the genesis of confidence? If I tell someone to keep something in confidence, I am relying on my trust in that person to obey my request. If that trust is misplaced and the person betrays me, then my confidence is broken. I would argue that, similarly, if information is to be held confidential, the underlying principle that allows that confidence is trust.
In the case of Anderson’s reference monitor [3], confidentiality is preserved by the reference monitor overseeing interactions and requests for data. If the person or program requesting specific data is authorized to access the data, as determined by the set security policy and arbitrated by the reference monitor as the ultimate authority, access to the data is granted. Otherwise, access to the data is denied. But as asked earlier, what is monitoring the reference monitor? If we perpetually need a monitor to monitor the monitor, we enter a never-ending cycle. At some point, we must trust that, in this example, the reference monitor is doing what it is designed to do.
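The mediation step can be sketched in a few lines. The policy table, subject names, and object names below are hypothetical; Anderson’s report defines the concept, not this interface. The essential property is that every access request passes through one arbiter that consults the policy.

```python
# Toy sketch of reference-monitor mediation: a single arbiter consults the
# security policy before any subject touches any object.

# Assumed policy: which objects each subject is authorized to access.
POLICY = {
    "alice": {"payroll.db", "hr_notes.txt"},
    "bob": {"hr_notes.txt"},
}

def reference_monitor(subject: str, obj: str) -> bool:
    """Grant access only if the policy authorizes this subject for this object."""
    return obj in POLICY.get(subject, set())

print(reference_monitor("alice", "payroll.db"))  # True: authorized by policy
print(reference_monitor("bob", "payroll.db"))    # False: access denied
```

Note what the sketch cannot show: nothing inside the program verifies that `reference_monitor` itself is correct or has not been tampered with. That gap is exactly where trust enters.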
On the other hand, we can consider Bell and LaPadula’s formal methods of determining security. They used general system theory to derive and prove a well-defined mathematical system for guaranteeing the security (in this case, also confidentiality) of a system. The caveat, in their words: “Two problems are immediately evident. First, unless the system guarantees the inviolability of rule W our security theorem does not apply…” [4].
The actual rule W is inconsequential for our discussion; what matters is that the inviolability of W would be determined by the hardware and software that implement it. How do we guarantee the inviolability of hardware and software? My argument is that we cannot. At some point, we must invoke trust.
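To see the kind of logic whose inviolability we are forced to trust, here is a toy version of two well-known Bell-LaPadula rules: the simple security property (no read up) and the *-property (no write down). The level names and functions are illustrative, not the formal model from their report.

```python
# Toy illustration of two Bell-LaPadula properties. The theorem guarantees
# security only if code like this is never violated -- and nothing in the
# model itself can guarantee that.
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top_secret": 3}

def can_read(subject_level: str, object_level: str) -> bool:
    """Simple security property: a subject may not read above its clearance."""
    return LEVELS[subject_level] >= LEVELS[object_level]

def can_write(subject_level: str, object_level: str) -> bool:
    """*-property: a subject may not write below its clearance."""
    return LEVELS[subject_level] <= LEVELS[object_level]

print(can_read("secret", "confidential"))  # True: reading down is allowed
print(can_read("confidential", "secret"))  # False: no read up
```

The proof holds over the abstract rules; whether the compiled binary and the silicon beneath it faithfully enforce them is precisely what must be taken on trust.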
However, before we get into the why of trust, it is important to ask: is trust the first principle? Going back to Descartes, we again ask whether trust is the most fundamental element or whether it can be broken down further. With that question, it is helpful to ask another: why does trust exist in the first place? This question leads into volumes of texts and theories on the origin and nature of trust, subjects to which people have dedicated lifetimes of study. This is to say that there is no single, universally agreed-upon explanation, but what I found most convincing was from James Coleman and his book Foundations of Social Theory [5]. Coleman argues that issues of trust are a subset of issues that result from risk, and without risk, trust has no need to exist. Following this line of thought, if trust is subordinate to risk, it stands to reason that risk is more fundamental than trust.
In a similar exercise, we can ask: is risk the most elemental concept? To answer, we need to ask why risk is present at all. We can make the argument that risk would be nonexistent if all data and results were certain. From the perspective of cybersecurity, if one knew with absolute certainty that the only ones who had access to, and would ever get access to, a critical system were those who had been appropriately authorized, then all malicious outsider security problems would go away. With absolute certainty, risk seems to disappear, because what is risk other than a calculation quantifying the likelihood that an event does not go as expected? This would explain why risk is so often measured in probability.
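One common way this shows up in practice is annualized loss expectancy, a probability-weighted loss estimate. The figures below are purely illustrative; the point is that the calculation is only meaningful when the probability is strictly between 0 and 1, i.e., when there is uncertainty.

```python
# Sketch of why risk is expressed in probability: annualized loss
# expectancy (ALE) weights the loss of an event by its yearly likelihood.
def annualized_loss_expectancy(p_event_per_year: float, loss_per_event: float) -> float:
    """Expected yearly loss from an event with the given probability and impact."""
    return p_event_per_year * loss_per_event

# Illustrative numbers: a 5% yearly chance of a $200,000 incident.
print(annualized_loss_expectancy(0.05, 200_000.0))  # 10000.0

# At certainty (p = 0 or p = 1) there is nothing left to estimate --
# the "risk" collapses into a known cost of 0 or the full loss.
print(annualized_loss_expectancy(0.0, 200_000.0))   # 0.0
```

When the probability is pinned to 0 or 1, the expected value degenerates into a certainty, which is the intuition behind the claim that risk, and therefore trust, dissolves under absolute certainty.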
So, what is the first principle of cybersecurity? It seems to me that it has something to do with uncertainty. The root of the problem of cybersecurity, as far as I can tell, lies at the feet of uncertainty. How useful is this revelation? Not very, in a practical sense. It only allows us to ask more questions and does not seem to provide any real solution. That said, it can serve as a starting point. And with this starting point, we can dive into what uncertainty means and what tools we have available to describe it in ways that might provide practical solutions.
Contributor
Nicholas Seeley
Senior Vice President of Engineering Services
nicholas_seeley@selinc.com

[1] R. Descartes, Principia Philosophiæ, 1644.
[2] C. R. Mason, The Art and Science of Protective Relaying, 1st edition. New York: Wiley, 1956.
[3] J. P. Anderson, “Computer Security Technology Planning Study (Volume I),” ESD-TR-73-51, Oct. 1972, p. 43.
[4] D. E. Bell and L. J. LaPadula, “Secure Computer Systems: Mathematical Foundations,” MITRE CORP BEDFORD MA, Nov. 1973. Accessed: Apr. 05, 2022. [Online]. Available: https://apps.dtic.mil/sti/citations/AD0770768
[5] J. S. Coleman, Foundations of Social Theory. Cambridge, MA: Harvard University Press, 1990. Accessed: Apr. 05, 2022. [Online]. Available: https://www.hup.harvard.edu/catalog.php?isbn=9780674312265
Contribute to the conversation
We want to hear from you. Send us your questions, thoughts on ICS and OT cybersecurity, and ideas for what we should discuss next.
Video: Nicholas Seeley examines the risks of various cybersecurity solutions and whether they increase complexity, decrease complexity, or increase observability.