# Integrity, Part 2: Relationship With Simplicity and Security

In my previous post, I didn’t do the hard job of offering my own thoughts on Clarkson and Schneider’s paper [1], which I was told “…is akin to lobbing a grenade, walking away, and expecting other people to pick up the pieces.” That is a fair critique, so this post will dig a little deeper into what Clarkson and Schneider demonstrate in their paper, “Quantification of Integrity,” and why I’m intrigued by the idea. Note: the ideas below will make the most sense to those who have a background in probability and information theory, though I will explain the concepts as best I can in plain English.

So, let’s jump in and show how Clarkson and Schneider define contamination of information, which is part of how they propose to quantify integrity. For a single bitwise operation, contamination can be defined as the mutual information shared between an untrusted input and a trusted output, given that the system is provided a trusted input. Written in the language of information theory and probability:

*C* ≜ *I*(*u*_{in}, *t*_{out} | *t*_{in})

This is also known as conditional mutual information, and in some texts it is written as *C* ≜ *I*(*u*_{in}; *t*_{out} | *t*_{in}). Clarkson and Schneider use a contrived example to demonstrate how this idea works on a simple bitwise operation. Suppose an untrusted input (*u*_{in}) could perform a logical *xor* operation on a trusted input (*t*_{in}) to alter the trusted output (*t*_{out}):

*t*_{out} = *u*_{in} *xor* *t*_{in}

Does it have to be *xor*? No. Does it even have to be a logical operation? That’s an interesting question and one that deserves more thought. Regardless, using the definition above to solve for *C*, Clarkson and Schneider show that the untrusted input, *u*_{in}, represents one bit of contamination, which we would expect because *u*_{in} in this example is one bit. The example itself is trivial, but the calculation is not: it isn’t difficult, exactly, but it does take some work. For anyone interested in solving it, the paper does a good job of providing the basic tools for calculating the joint and conditional probabilities needed to arrive at the answer. For brevity’s sake, I’m not going to step through the math here. That said, in true academic form, the paper omits several steps needed to reach the solution, so feel free to reach out to me if you’d like to know how I arrived at the same answer of one bit.
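If you’d rather check the answer numerically than push through the algebra, the calculation is easy to verify by brute force. The sketch below (my own illustration, not code from the paper) enumerates the joint distribution of the xor example, assuming *u*_{in} and *t*_{in} are independent, uniformly distributed bits, and computes the conditional mutual information from the standard entropy identity *I*(*U*; *T*_{out} | *T*_{in}) = *H*(*U*, *T*_{in}) + *H*(*T*_{out}, *T*_{in}) − *H*(*T*_{in}) − *H*(*U*, *T*_{in}, *T*_{out}):

```python
from itertools import product
from math import log2

def entropy(dist):
    """Shannon entropy (in bits) of a distribution given as {outcome: probability}."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

# Joint distribution over (u_in, t_in, t_out) for t_out = u_in xor t_in,
# with u_in and t_in independent, uniform bits: each input pair has probability 1/4.
joint = {}
for u, t in product([0, 1], repeat=2):
    joint[(u, t, u ^ t)] = 0.25

def marginal(indices):
    """Marginalize the joint distribution onto the given variable positions."""
    dist = {}
    for outcome, p in joint.items():
        key = tuple(outcome[i] for i in indices)
        dist[key] = dist.get(key, 0.0) + p
    return dist

# C = I(u_in; t_out | t_in)
#   = H(u_in, t_in) + H(t_out, t_in) - H(t_in) - H(u_in, t_in, t_out)
C = (entropy(marginal([0, 1])) + entropy(marginal([2, 1]))
     - entropy(marginal([1])) - entropy(joint))
print(C)  # 1.0 -- one bit of contamination, matching the paper's answer
```

Swapping the `u ^ t` expression for another operation (say, `u & t`) is a quick way to explore the earlier question of whether the result depends on the choice of *xor*.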

The example is a nice hands-on demonstration of how to perform the calculation and show that the definition of contamination can be used to evaluate a system and arrive at what could reasonably be considered the correct answer. Outside of that, I don’t think the example has any practical utility, but I don’t think that was their intention, either.

Having touched on the technical details, let’s talk about why I find this paper interesting. First, I’ve come to the conclusion that integrity is the most important pillar of the CIA triad, yet it is the least understood. To me, that means there is plenty of opportunity for exploration with potential for fundamental advancements, which was my whole point in digging into these ideas.

More specifically to what Clarkson and Schneider are conveying, contamination is defined as the mutual information between an untrusted input and a trusted output. Why is that interesting? I believe it implies that if less information is required to drive the output, then less contamination is possible, which points toward a benefit of reducing complexity.

From a power engineer’s perspective, opening and closing a power circuit breaker really only requires one bit: 0 = open, 1 = close, or vice versa. But from a SCADA console, how many bits are required to open that same breaker? Pick your favorite protocol; regardless, the answer is substantially more than one bit, which, per my reading of Clarkson and Schneider, opens the system to far more contamination.

However, if I try to argue the other side, it’s also possible that the relative amounts of contamination in different systems are irrelevant to their security. If we use the hypothetical case where the breaker only needed one bit of information to operate, then I only need one bit of contamination to cause an unintended operation.

So, where does that leave us? This paper helps provide a rich basis for further investigation into the quantifiable correlation, and potentially causation, between integrity and security. Clarkson and Schneider are providing a framework to think about and quantify integrity. That framework uses information theory, and information theory is based on ideas related to uncertainty and complexity. And if you have made it this far in my post, you are probably already aware of my affection for and interest in how uncertainty and complexity affect cybersecurity. So, this has been a fun thread to pull, and I intend to keep pulling.

Contributor

Nicholas Seeley

Senior Vice President of Engineering Services
nicholas_seeley@selinc.com

[1] M. R. Clarkson and F. B. Schneider, “Quantification of Integrity,” Mathematical Structures in Computer Science, 2012.

