The Surprising Reason Why No One Can Protect Their Network… Yet
Network security professionals are, by and large, smart and dedicated people. That isn’t to say they never make mistakes – they do. Sometimes it’s because of incorrect or incomplete information, sometimes it’s because of limited technology or scarce resources, and sometimes it’s because, frankly, they make a bad call. That happens, and what’s more, fallibility certainly isn’t limited to network security professionals.
Everyone from doctors to teachers, from cab drivers to CTOs occasionally wishes that life had a convenient “undo” button (even a CTRL+Z hotkey would be fine).
And so, naming this article “The Surprising Reason Why No One Can Protect Their Network… Yet” may seem like the setup for a diatribe against network security professionals – one that criticizes intelligence, effort or both. But those looking for such a rant here will be disappointed, because the problem isn’t about people.
It’s about tools!
We know that the tools we have – and what we are doing with them – are not working, because industry data says so. The Verizon 2014 Data Breach Investigations Report shows that 88% of web app attacks, and fully 99% of point-of-sale (POS) attacks, are discovered by external parties. Not only are we failing to prevent attacks; we often don’t even know when we have been attacked. Little wonder, then, that Mandiant’s 2014 threat report revealed that the average attacker was on a target’s network for 229 days before discovery.
Why don’t these tools work?
Because they were built for investigation and data aggregation, not intelligent detection. That is, the tools that many network security professionals have been given to detect threats within the network — and ultimately keep the enterprise safe — aren’t merely ineffective for this critical task; they’re categorically unsuitable. These tools either miss critical information, or flood security operations with so many spurious events that real attacker activity cannot be identified. Specifically:
- SIEMs, which retain and archive logs from multiple systems, are useful for ensuring compliance with prevailing best practices and regulations. However, despite the “S” in the acronym, SIEMs are fundamentally ill-suited to detecting malicious activity within the network. In fact, the harder you push them to detect, the more alarms and alerts they generate. These false positives inundate security operations with unactionable information, such that even if a true attack is detected, it is lost in the noise.
- Network forensic tools are useful for logging pertinent traffic from servers and applications, which allows enterprises to analyze and reverse-engineer notable incidents or attacks. However, malicious activity typically doesn’t restrict itself to pertinent traffic lanes; rather, it moves across the entire network, often as far from the spotlight as possible. And since enterprises cannot spend millions of dollars to store unthinkably large traffic logs — and spend even more on top of that recruiting and retaining the small army of data scientists they’d need to make sense of them — it’s clear that network forensic tools aren’t viable for detecting internal malicious traffic at the scale today’s enterprises require.
- Specialized tools are useful for governing administrative use and file access in specific systems, as well as for running analytics to spot abnormalities. However, cyber criminals don’t have to compromise administrator accounts to breach the network – so these specialized tools don’t necessarily set off any alarms. Indeed, we’ve seen numerous cases where cyber criminals “get in and get out” without ever tripping such configured detection logic. And speaking of logic: enterprises that try to write enough rules to snag suspicious traffic with specialized tools quickly find themselves once again inundated with false positives. They either chase down every possible alert — and still miss real attacker activity in the flood — or they turn down the sensitivity, which exposes them to potentially catastrophic false negatives.
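The sensitivity dilemma described above can be sketched with a toy model (all numbers here are invented for illustration, not drawn from any real product or dataset): benign events vastly outnumber malicious ones, and their “suspicion scores” overlap, so any single rule threshold trades false positives against false negatives.

```python
import random

random.seed(7)

# Toy traffic: 10,000 benign events and 10 malicious ones, each with a
# suspicion score. Benign scores cluster low but have a long tail that
# overlaps the malicious range -- the root of the dilemma.
benign = [random.gauss(0.3, 0.15) for _ in range(10_000)]
malicious = [random.gauss(0.75, 0.1) for _ in range(10)]

def alerts(threshold):
    """Count false alarms and true detections for a given rule threshold."""
    false_positives = sum(1 for s in benign if s > threshold)
    detected = sum(1 for s in malicious if s > threshold)
    return false_positives, detected

# A sensitive rule catches the attacks but buries analysts in noise...
fp_low, hits_low = alerts(0.5)
# ...while a "quiet" rule stays actionable but misses real activity.
fp_high, hits_high = alerts(0.9)

print(f"threshold 0.5: {fp_low} false alarms, {hits_low}/10 attacks caught")
print(f"threshold 0.9: {fp_high} false alarms, {hits_high}/10 attacks caught")
```

With the sensitive threshold, hundreds of false alarms accompany the detections; with the quiet one, the alarm count collapses — and so does the detection rate. No static threshold escapes the trade-off.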
With all this being said, it’s important to note that each of these tools is valuable in its own right. The message, rather, is that enterprises cannot and should not expect these tools to reliably and effectively detect malicious activity within their network. That’s not what they were designed to do, and any belief to the contrary does nothing more than position network security professionals to fail — despite their evident competence and best efforts.
Enterprises that want to steer clear of this quagmire should begin by acknowledging that their existing tools — while useful in many ways — aren’t suitable for detecting malicious activity in the network. As the old saying goes: “knowing is half the battle”. This knowledge goes a very long way towards taking intelligent action.
And what is that intelligent action? Deploying Active Breach Detection technology that lets them automatically:
- perform behavioral-based network level profiling to detect any suspicious activity on the network
- integrate visibility into host activity and suspicious executables, as well as into cloud-based security expert systems
- flag suspicious traffic with very low false positives
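As a rough illustration of the first capability — behavioral, per-host profiling — here is a minimal sketch (host names and traffic counts are hypothetical, invented for this example): each host is compared against its own historical baseline, rather than against one global rule for every machine, which is part of how false positives stay low.

```python
from statistics import mean, stdev

# Hypothetical per-host history: outbound connections per hour.
# The final entry in each list is the most recent hour.
history = {
    "web-01":  [40, 42, 38, 45, 41, 39, 44],
    "db-02":   [5, 6, 4, 5, 7, 6, 5],
    "hr-lt-7": [12, 10, 11, 13, 12, 11, 540],  # sudden burst in the last hour
}

def suspicious_hosts(history, k=3.0):
    """Flag hosts whose latest hour deviates more than k standard
    deviations from their own prior baseline (self-relative profiling)."""
    flagged = []
    for host, counts in history.items():
        baseline, latest = counts[:-1], counts[-1]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(latest - mu) > k * sigma:
            flagged.append(host)
    return flagged

print(suspicious_hosts(history))  # → ['hr-lt-7']
```

Note that web-01’s latest hour (44 connections) stays within its normal range even though it is numerically higher than everything hr-lt-7 did before its burst — a fixed global threshold would have treated both identically, while per-host baselining separates them cleanly.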
When the above happens, network security professionals are empowered with details and other contextual data they need to rapidly mitigate threats, clean up infected devices and systems, plug security gaps, and keep the enterprise’s data from falling into the wrong hands.
That’s what network security professionals are trained to do. But they can’t do it alone. They need the right technology to get the job done.