Getting to grips with the zero-trust approach
This security approach, in its simplest sense, means that we cannot trust anyone anymore
One of the most common misconceptions organisations have about IT security today is that they mainly need to protect their systems against threats from the internet. Because of this, they stock up on security solutions aimed squarely at internet-related threats.
There is no shortage of businesses that install firewalls, intrusion prevention systems (IPS), and anti-spam solutions, but still allow their users to communicate directly with their internal data centre. What they fail to realise is that it only takes one piece of malware, or a single malicious user to extract a wealth of data from such an exposed server, with damaging consequences to the organisation.
Given that, on average, it takes up to six months to detect a data breach, and that close to 70% of cases are only brought to light by external parties, the need for a new approach to security is dire. In fact, most businesses would see far more value from their IT security spending if they extended security across far more perimeters of their infrastructure than just the internet. This is where zero-trust comes in.
What is zero-trust?
Historically, organisations have considered the internet the biggest source of malicious attacks, but with the explosion of malware and with wider access being granted to employees, threats can now originate from within the known user base as well. This has prompted the need for zero-trust, a term originally coined by the research firm Forrester. In its simplest sense, it means that we cannot trust anyone anymore. The three most basic principles of the model are: secured access to all resources regardless of location; stringently applied ‘least access’ control; and fine-grained monitoring and logging of all network traffic.
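The ‘least access’ principle above can be illustrated with a minimal sketch: access is denied by default, and granted only when an explicit rule matches the requester’s role, the resource and the action. The roles, resources and rule set here are hypothetical, chosen purely for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    role: str
    resource: str
    action: str

# Explicit allow-list; any combination not listed is denied by default.
ALLOW_RULES = {
    Rule("finance", "invoices-db", "read"),
    Rule("finance", "invoices-db", "write"),
    Rule("support", "ticketing", "read"),
}

def is_allowed(role: str, resource: str, action: str) -> bool:
    """Default-deny: return True only for an explicitly allowed combination."""
    return Rule(role, resource, action) in ALLOW_RULES

print(is_allowed("finance", "invoices-db", "read"))   # True
print(is_allowed("support", "invoices-db", "read"))   # False: no rule, so denied
```

The important design choice is the direction of the default: in a zero-trust model the absence of a rule means ‘deny’, never ‘allow’.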
Tenets of the zero-trust approach
The prime requirement is to be able to control all internal and external communication taking place within the organisation. This is difficult, of course, but complete visibility of all communication within an infrastructure is a worthwhile ambition: it delivers the ability to detect and control both good and bad behaviour. Equally important is the ability to react promptly to security issues, which is why the management of security events raised by the infrastructure must be made a priority.
Visibility into data access across every perimeter of the organisation inevitably generates a wealth of behavioural information. Long-term success with the zero-trust approach will depend on your organisation’s ability to understand data and traffic patterns and to leverage analytics for early detection of malicious behaviour.
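As a toy sketch of what ‘leveraging analytics’ might mean in practice, consider flagging any user whose daily access count deviates sharply from their own historical baseline. The users, counts and threshold below are entirely illustrative, not a production detector.

```python
from statistics import mean, pstdev

def flag_anomalies(history: dict[str, list[int]], today: dict[str, int],
                   threshold: float = 3.0) -> list[str]:
    """Flag users whose access count today exceeds mean + threshold * stdev."""
    flagged = []
    for user, counts in history.items():
        mu, sigma = mean(counts), pstdev(counts)
        # Floor sigma at 1.0 so a perfectly flat history does not flag noise.
        if today.get(user, 0) > mu + threshold * max(sigma, 1.0):
            flagged.append(user)
    return flagged

history = {"alice": [40, 42, 38, 41], "bob": [10, 12, 11, 9]}
today = {"alice": 43, "bob": 250}   # bob suddenly pulls far more data
print(flag_anomalies(history, today))  # ['bob']
```

Real deployments would draw on far richer signals (time of day, resource sensitivity, peer-group comparisons), but the principle is the same: the baseline only exists because the traffic was visible and logged in the first place.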
Another aspect of zero-trust, born of the hyper-connected nature of the modern enterprise, is that IT can no longer trust any device. Users now connect to enterprise networks via a wide range of endpoints, from remote locations and often over insecure networks. As a result, IT simply cannot control the device anymore.
But are we ready?
In the past, one of the main inhibitors to implementing a zero-trust architecture, which spans the whole organisation and safeguards all perimeters, has been performance. But the exponential growth in processing power means that today, we have solutions which can actually handle the traffic load of even the biggest organisations.
Organisations will also be happy to know that the technologies required for this approach are very much the same: familiar security components such as firewalls with strong application-level visibility, web-application firewalls, IPS and network and endpoint forensics tools will continue to be relevant. But as the zero-trust model calls for segmentation of the infrastructure and the creation of logical security zones, more emphasis must be placed on central security capabilities for controlling traffic between those zones.
From a technical perspective, the zero-trust approach makes heavy use of virtualisation technologies such as virtualised compute, virtualised routing and segmentation of the infrastructure into security zones, in order to create very clear demarcation points between the different security layers. Here, too, most organisations will already have these capabilities in their current infrastructure, although they may not be fully utilising them.
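The demarcation points described above can be sketched as a central zone-to-zone policy: every flow between two logical security zones is checked against it, and anything without an explicit allow entry is dropped. The zone names and services here are made up for illustration.

```python
# Central zone-to-zone policy: (source zone, destination zone) -> allowed services.
ZONE_POLICY = {
    ("dmz", "app"): {"tcp/443"},    # web tier may reach the app tier over TLS
    ("app", "db"):  {"tcp/5432"},   # app tier may reach the database
    # No entry for ("dmz", "db"): the web tier never talks to the database directly.
}

def permit(src_zone: str, dst_zone: str, service: str) -> bool:
    """Default-deny between zones; allow only explicitly listed services."""
    return service in ZONE_POLICY.get((src_zone, dst_zone), set())

print(permit("dmz", "app", "tcp/443"))    # True
print(permit("dmz", "db", "tcp/5432"))    # False: no direct path to the data
```

This is exactly the gap in the firewalled-but-flat networks described earlier: without the middle tier in the policy, one compromised web server has no route to the data centre.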
One of the areas where enterprises do fall short, however, is event monitoring and handling. I am often surprised to see that despite businesses and applications now running 24-7, organisations still think monitoring and addressing security events only needs to happen during working hours. In most cases this is due to resource constraints, which is why engaging a competent third-party provider with technically qualified staff for a genuine 24-7 operation can be a feasible option.
The final, but perhaps most difficult, hurdle is the need to challenge the status quo. This means challenging very entrenched ways of implementing security. However hard that is, it must be done: it is only a matter of time before anyone who believes that everyone should have the same level of access is proven wrong.
Setting IT on the path towards zero-trust is a fantastic opportunity to re-evaluate the IT components that are no longer working from a security perspective. In the era of advanced persistent threats (APTs), access abuse, social engineering, and modern malware, it also presents the opportunity to substantially improve your defensive posture and prevent exfiltration of sensitive data.