Integrity, Part 1: The Middle Child
The more I think about the CIA triad, the more I feel that integrity is neglected. There tends to be a battle for the first initial in the triad, and the industry a person works in likely dictates which letter they put first. Those in the IT realm generally keep confidentiality first (CIA). A growing number of folks on the operational technology (OT) side are starting to write about availability taking the first spot (AIC). Integrity always remains in the middle, in what feels like an unceremonious admission that the characteristic is important but that we don’t quite know what to do with it. But is anything more important than integrity?
That question probably doubles as a life statement as well as a cybersecurity musing, but I’ll stick to the cyber realm here, though I do have a fascination with the concept of trust, which goes hand in hand with integrity. Certainly, confidentiality is important, but what is the value of keeping information confidential if the integrity of that information has been compromised? Similarly, availability is important, but how useful is highly available information of low or suspect integrity?
I wonder if the reason integrity is such an overlooked characteristic is that we don’t understand much about integrity when it comes to data. Sure, we have hashing, but that seems to be only a complement to its cryptographic cousin, encryption: good at making sure that what was sent is what was received. But just as encrypting information that has already been corrupted makes little sense, so does hashing it. Perhaps not much has been written about integrity because, unlike availability and confidentiality, we don’t have a firm grasp on how to quantify it.
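To make that limitation concrete, here is a minimal Python sketch of hash-based integrity checking; the message contents and the recorded digest are made up for illustration. A hash check can confirm that bytes have not changed since the digest was recorded, but it says nothing about whether those bytes were trustworthy in the first place.

```python
import hashlib

def sha256_digest(data: bytes) -> str:
    """Return the hex SHA-256 digest of a byte string."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical message and a digest recorded when the data was (presumably) good.
message = b"breaker 52A: OPEN"
recorded_digest = sha256_digest(message)

# Later, verify that what was received matches what was recorded.
received = b"breaker 52A: OPEN"
print(sha256_digest(received) == recorded_digest)  # True: the bytes are unchanged

# But if the data was corrupted *before* it was hashed, the check still passes.
corrupted_at_source = b"breaker 52A: CLOSED"
bad_digest = sha256_digest(corrupted_at_source)
print(sha256_digest(corrupted_at_source) == bad_digest)  # True, yet the content is wrong
```

The check tells us the data is the same as it was when hashed, not that it was correct then, which is exactly the gap in our understanding of integrity noted above.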
Clarkson and Schneider wrote a paper about a decade ago [1] that addresses this very concern. The authors use standard information theory techniques à la Claude Shannon to define various aspects of information integrity. It’s an interesting read for anyone who wants to travel down this wormhole. The wormhole being: once you start reading something that references Shannon and his work in what is now known as information theory, it seems obligatory to start reading Shannon directly [2]; from there you go to the history that informed Shannon, namely Hartley [3]; then you will likely stumble onto Szilard [4]; then you get introduced to Maxwell’s Demon [5]; and at that point, you start writing about things like how nobody pays enough attention to concepts like information integrity and highlighting some of the rare academic works that do. Don’t say I didn’t warn you.
More to the point, the paper goes on to focus on, explain, and quantify the idea of data contamination as a measure of integrity. As mentioned above, Clarkson and Schneider use the traditional Shannonian information theory model to derive these quantifications. For those unfamiliar with Shannon’s work, it is based entirely on probability theory. While Shannon’s work focused mainly on the information capacity of communication channels, he spends a good deal of [2] defining exactly what he considers information to be. Interestingly, Clarkson and Schneider make use of previous work on belief-based quantification of information flows [6]. I say interestingly because Shannon specifically, and somewhat famously [7], dismissed the idea of “meaning” as irrelevant to information. So, 60 years later, to incorporate belief into a framework developed by someone who sought to distance that framework from meaning is, well, interesting…at least to me.
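For readers who have never run into Shannon’s framework, the central quantity is entropy: a probability-weighted measure of uncertainty, and the raw material for the kind of quantification Clarkson and Schneider pursue. The toy calculation below (plain Python with made-up distributions) only illustrates that basic arithmetic; it is not their contamination metric.

```python
import math

def entropy(probabilities):
    """Shannon entropy in bits: H = -sum of p * log2(p) over outcomes with p > 0."""
    return sum(-p * math.log2(p) for p in probabilities if p > 0)

# A fair coin carries one bit of uncertainty per flip.
print(entropy([0.5, 0.5]))   # 1.0

# A heavily biased source is more predictable and so carries less information.
print(entropy([0.9, 0.1]))   # ~0.47

# A certain outcome carries no information at all.
print(entropy([1.0]))        # 0.0
```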
The paper ends asking more questions than it answers, and it could fairly be called an academic exercise. I don’t say this in a pejorative sense; quite the opposite, really. It’s an important topic that deserves bright minds exploring it, and it starts to get at what I feel are some of the more foundational questions that need answering in cybersecurity: What is information integrity? So, thanks to Drs. Clarkson and Schneider for putting in the effort. And, since I’ve been pretty hand-wavy about the specifics of the Clarkson and Schneider paper, in my next article I’ll dive into the details of how their proposal works and why I’m intrigued by it.
Contributor
Nicholas Seeley
Senior Vice President of Engineering Services
nicholas_seeley@selinc.com
[1] M. R. Clarkson and F. B. Schneider, “Quantification of Integrity,” Math. Struct. in Comp. Science, 2012.
[2] C. E. Shannon and W. Weaver, The Mathematical Theory of Communication. University of Illinois Press, 1949.
[3] R. V. L. Hartley, “Transmission of Information,” Bell Syst. Tech. J., vol. 7, no. 3, pp. 535–563, 1928.
[4] L. Szilard, “On the decrease of entropy in a thermodynamic system by the intervention of intelligent beings,” Behav. Sci., vol. 9, no. 4, pp. 301–310, 1964.
[5] Maxwell’s Demon: Entropy, Information, Computing. Princeton University Press, 2014. doi: 10.1515/9781400861521.
[6] M. R. Clarkson, A. C. Myers, and F. B. Schneider, “Quantifying information flow with beliefs,” J. Comput. Secur., vol. 17, no. 5, pp. 655–701, Oct. 2009, doi: 10.3233/JCS-2009-0353.
[7] J. Gleick, The Information: A History, a Theory, a Flood. Knopf Doubleday Publishing Group, 2011.
Contribute to the conversation
We want to hear from you. Send us your questions, thoughts on ICS and OT cybersecurity, and ideas for what we should discuss next.