Security is often pushed aside to make way for the convenience that new technologies offer. As interconnection grows, how do we ensure our systems remain secure and defensible?
Security policies only work if people follow them, use them, and trust them. This doesn’t always happen. Often, those who write the security policies cannot or do not foresee the day-to-day realities and emergencies that workers actually face.
There is an emerging field of research called the Science of Human Circumvention, which studies how and why well-meaning users work around security policies in order to do their jobs. If people don’t trust the policies, then we will never have truly secure systems. How can we bridge the gap and make security policies usable and efficient?
Interconnection is spreading. New technologies are pushed out faster and faster. The number of devices is growing. We rely on computers more and more, and the complexity of our systems, protocols, and code keeps swelling.
At some point we have to pause and take a breath. Yes, we live in a more connected world than ever, and that is exactly why we need to rethink our approach to security design and policy. How can we make security convenient?
Every day, you can see and hear people interpreting the same message multiple ways. Imagine if one of those people had malicious intent.
This is the problem with complex data formats. All code is written under assumptions that hold only if every program interprets the data the same way. When there are too many assumptions, interpretations diverge, and those disagreements often break security. The solution lies in making our data packaging simpler and cutting out features that give too much power to the attacker. A small sketch of how such a disagreement can be exploited follows; the format and component names are hypothetical, chosen only to illustrate the idea of two parsers reading the same bytes differently.
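```python
# A minimal sketch (hypothetical format and component names) of a parser
# disagreement: two components parse the same key=value message, but one
# keeps the FIRST occurrence of a duplicated key and the other keeps the LAST.

def parse_keep_first(message: str) -> dict:
    """Front-end gate: the first occurrence of a key wins."""
    fields = {}
    for pair in message.split("&"):
        key, _, value = pair.partition("=")
        if key not in fields:
            fields[key] = value
    return fields

def parse_keep_last(message: str) -> dict:
    """Back-end service: the last occurrence of a key wins."""
    fields = {}
    for pair in message.split("&"):
        key, _, value = pair.partition("=")
        fields[key] = value  # silently overwrites earlier values
    return fields

# An attacker sends the role twice; each component "correctly" parses it.
message = "user=mallory&role=guest&role=admin"

gate_view = parse_keep_first(message)    # {'user': 'mallory', 'role': 'guest'}
backend_view = parse_keep_last(message)  # {'user': 'mallory', 'role': 'admin'}

# The gate checks the request against its own interpretation and sees
# nothing suspicious, so it lets the message through...
assert gate_view["role"] == "guest"

# ...but the back end acts on a different interpretation of the same bytes.
print("backend grants role:", backend_view["role"])  # prints: admin
```

A simpler format, one that forbids duplicate keys outright or that both components parse with the same code, leaves the attacker no room to play the two interpretations against each other.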
“There is hope; we should just stop solving those unsolvable problems.”
“I want to believe that we can create a better future.”
It is possible for security to be convenient, usable. We all want security, we want to know we’re safe, and we want to trust our devices. But we also don’t want to feel constricted by security.
That means we need to rethink our cyber networks and ask ourselves how a feature or device will be used in six months or five years. How will it evolve? How can it be exploited? If we design with those answers in mind and build on sound fundamentals, our systems will be both usable and secure.