In episode 4 (0x03) of the cult TV series Mr. Robot, Elliot hacks into Steel Mountain's data center HVAC system (Heating, Ventilation, and Air Conditioning) by connecting a rogue Raspberry Pi to an exposed network access point. By controlling the temperature of the server rooms, Elliot could theoretically pump up the heat until the backup tapes melted, destroying Evil Corp's ability to recover from a software implant attack designed to encrypt all of their data. That would render their entire business useless overnight and free people from the tyranny of credit cards, unfair loans, and other modern-world evils.
While Mr. Robot is just a TV show, many experts consider the attacks and techniques scattered around the main plot realistic. Those who are familiar with these attacks may also notice that most of the hacking is physical to a large degree, i.e. it requires some level of proximity to the target. Whether Elliot is hacking into cars using signal replay attacks, breaking into police computers using software-emulated Bluetooth keyboards, or cloning the security guards' RFID-based access cards, all of these attacks have an element of proximity. The show highlights the fact that in an increasingly complex physical world, where digitization is moving at an extremely fast pace, perfect security is a pipe dream.
I am sure this is not the answer you were looking for, but let's not despair. We can rightly assume that perfect security is not possible, but surely we should not make it any easier for our adversaries. How we do that is a matter of philosophy.
I believe that changing the way we think about devices, networks, and systems can influence current and future security strategies and provide long-term returns. My personal mantra is Zero Trust Security: nothing is trusted; everything is assumed to be compromised. This may sound like a ridiculous proposition, but it is, in fact, the basis on which many systems and networks (including the Internet and the Dark Web) are built.
This is not a widespread mental security model in most enterprises, however. In fact, in many businesses, decisions are based on the assumption of some underlying trust. The simple fact that a user needs to be connected to a specific network in order to access a system implies a trust relationship at the core. All subsequent decisions, such as how to build additional network infrastructure or how to provide access to data and applications, are built on that flawed assumption. Over time, the effect is layers of unrealistic expectations that can be defeated by the most basic forms of attack.
Let's imagine for a second that we don't work from a dedicated office network but from Starbucks' free WiFi network. What would change in our design to ensure that our systems are secure? Would we build applications and networks the same way we build them today? How would we govern access? This simple thought exercise can reveal a lot about what sane security design should look like in hostile environments. One thing is for sure: we would not put critical production applications on a hostile network without some guarantees that they are safe and secure. Nor would user access be governed by a static set of rules such as RBAC (Role-Based Access Control).
The Starbucks security model is not a far-fetched example. The fact is that while there is still a need for dedicated corporate networks, there is a continuous demand for more flexibility. Working from home and co-working offices are two recent examples where security teams have no practical say over how things are done. Home and co-working networks should be considered hostile by definition. How do we manage to stay secure in such circumstances? I believe what the security industry calls Zero Trust Security is the key to the answer.
“Zero Trust Security” is a high-level concept. It is as high-level as “organizational synergy” and other corporate lingo if you ask me. What does it even mean? From my point of view it is applying the Starbucks security model I provided as an example but in practice. It is simply a way of thinking about the problem of security.
Right now, one way to look at Zero Trust Security (a.k.a. BeyondCorp, as coined by Google) is as a mechanism for challenging the user at the right time, without any prior assumptions. For example, imagine that you log in to a critical business application by providing a set of credentials. Upon successful authentication, you are given a basic level of access, i.e. you are not yet fully trusted. When you decide to perform a critical operation, you are challenged again (you are elevating privileges), but this time you need to pass an identity verification check combined with a push notification to someone else for approval. Other types of operations may require entirely different levels of escalation, depending on your current circumstances: the device in use, knowledge of information, approvals, physical location, a combination of endpoint security features, etc. Applications should be able to adapt to the user's ever-changing environmental circumstances and provide adequate security controls in response.
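To make the idea concrete, here is a minimal sketch of such an adaptive challenge policy. All names (`RequestContext`, `required_challenges`, the signal and challenge labels) are illustrative inventions, not a real BeyondCorp API; the point is that the set of challenges is computed per request from the user's current circumstances rather than assumed from network location.

```python
from dataclasses import dataclass

@dataclass
class RequestContext:
    # Per-request signals; the field names are hypothetical examples.
    device_managed: bool   # is the endpoint enrolled and healthy?
    location_known: bool   # does the request come from a recognized location?
    operation: str         # e.g. "read", "transfer_funds"

def required_challenges(ctx: RequestContext) -> list[str]:
    """Return the challenges a request must pass; nothing is pre-trusted."""
    challenges = ["password"]               # baseline: always authenticate
    if ctx.operation == "transfer_funds":   # critical operation: step up
        challenges.append("identity_verification")
        challenges.append("second_party_approval")
    if not ctx.device_managed:              # unknown device: extra factor
        challenges.append("one_time_code")
    if not ctx.location_known:              # unusual location: confirm intent
        challenges.append("push_confirmation")
    return challenges
```

A routine read from a managed device in a known location would require only the baseline credential, while a funds transfer from an unmanaged laptop in an unfamiliar place would stack several challenges on top.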
In a world where the user is constantly challenged depending on their circumstances, hardware and software implants are less concerning. While there is still a risk, it is at least somewhat mitigated by the fact that no user is privileged over another and, by default, everyone has minimal access. Only when it is absolutely required is the user provisioned temporary access so that they can fulfill a specific task. The decision to grant or deny access is made holistically. While it is still possible for malicious software or devices to impersonate a user, it is significantly harder and more expensive. At the end of the day, it comes down to economics: the goal is to make attacks prohibitively expensive for the adversary.
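The "temporary access for a specific task" idea above can be sketched as a deny-by-default grant that is scoped and expires on its own. Again, this is an illustrative toy (`TemporaryGrant` and its fields are invented for this example), not a production authorization system.

```python
import time

class TemporaryGrant:
    """Just-in-time access: issued for one user, one scope, limited time."""

    def __init__(self, user: str, scope: str, ttl_seconds: float):
        self.user = user
        self.scope = scope
        # The grant carries its own expiry; no one revokes it manually.
        self.expires_at = time.monotonic() + ttl_seconds

    def allows(self, user: str, scope: str) -> bool:
        # Deny by default: wrong user, wrong scope, or an expired
        # grant all fail the check.
        return (
            user == self.user
            and scope == self.scope
            and time.monotonic() < self.expires_at
        )
```

Because every grant is narrow and short-lived, an implant that steals one buys the attacker a single scope for a limited window, rather than standing access to the whole network.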
Let's go back to Elliot and FSociety. While Elliot's attack on Steel Mountain's HVAC is realistic, it is certainly a product of a much more fundamental problem related to trust. Why is it even possible to influence a critical system by plugging a rogue Raspberry Pi into an insecure network? It certainly helps develop the plot, but it does sound like something that could easily be avoided in real life.