The Blind Spot in Your Cybersecurity Monitoring Strategy
Cybersecurity monitoring tools have certainly reduced incidents, but blind spots still exist. Matt Holland, CTO & co-founder at seedata.io, looks at how organisations can identify those blind spots and monitor them effectively.
Organisations are spending $120bn collectively on cybersecurity this year alone, a figure that is increasing 10% YoY. There are new solutions coming to market all the time, and a constant stream of new vendors joining the fray to ‘solve’ the cybersecurity problems faced by enterprises all over the world. We’ve never had it so good!
And yet, we still see the frequency and scale of attacks rising, and the financial and operational impact of these incidents increasing. We know it’s a battle of attrition and things would be far worse if we weren’t making the efforts we do, but we’d be negligent if we didn’t ask ourselves occasionally, “are we doing this right?”.
We create inventories of our assets. We assess risks, identify threats and address vulnerabilities. We develop applications that are secure by design, and we embrace bug bounty reports. Every year we consider whether the tools, processes and people we are using to implement our strategy are optimal, and we potentially swap an apple for an orange here and there. Einstein is often credited with saying, “The definition of insanity is doing the same thing over and over again and expecting different results,” and so I ask, “are we doing this right?”
I can’t offer you a universal epiphany that the whole cybersecurity industry will get behind (sorry), but there is one area I will point my finger at and suggest that we are philosophically missing a trick: monitoring.
Are we monitoring right?
I’m not faulting the tools or processes that are in place today; most are doing a good job of what we have asked them to do. I’m not here to pitch vendor against vendor, but rather to challenge a basic principle upon which our industry has spent billions of dollars. You see, cybersecurity monitoring, as we do it now, is flawed, and we have created a false dependency that it can never live up to.
We collect events from all over our environment and our various supplier platforms. We correlate them against signatures and patterns, and run them through AI algorithms, looking for indicators of compromise or other flags of suspicious activity. Any day when our SOC screens are a healthy hue of green is considered a victory, and we proclaim ourselves to be secure.
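To make the mechanics concrete, here is a minimal sketch of what signature-based correlation boils down to. This is Python with invented event fields and rules purely for illustration, not the pipeline of any particular SIEM:

```python
# Illustrative events and detection rules; the field names and rules are
# invented for this sketch, not taken from any real product.

EVENTS = [
    {"source": "firewall", "src_ip": "203.0.113.7", "action": "denied", "port": 22},
    {"source": "endpoint", "process": "powershell.exe", "cmdline": "-enc SQBFAFgA..."},
    {"source": "web", "src_ip": "198.51.100.9", "path": "/login", "status": 200},
]

# Each "signature" pairs a label with a predicate over a single event.
SIGNATURES = [
    ("encoded-powershell", lambda e: e.get("process") == "powershell.exe"
                                     and "-enc" in e.get("cmdline", "")),
    ("ssh-probe",          lambda e: e.get("action") == "denied"
                                     and e.get("port") == 22),
]

def correlate(events, signatures):
    """Return (label, event) pairs for every event matching a known signature."""
    return [(label, event)
            for event in events
            for label, matches in signatures
            if matches(event)]

for label, event in correlate(EVENTS, SIGNATURES):
    print(f"ALERT [{label}]: {event['source']} event matched")
```

The structural point to notice: `correlate` can only ever surface events that match a rule somebody already wrote. Activity that matches nothing produces no output at all.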
But we’re not! This is the lie of monitoring. The only fact those beautiful dashboards are really able to impart is that they didn’t find anything.
Stay with me here… Security monitoring is effectively a form of model-based testing: it aims to validate the operation of a system from a behavioural point of view, i.e. to ensure it conforms to expected functional outcomes. The system under observation is checked against predetermined, expected behaviour under specific circumstances.
So, security monitoring is a form of testing. As any software engineer worth their salt will tell you (and as Dijkstra famously put it), testing can show the presence of errors, but never their absence. We are straying into philosophical questions of proof versus evidence, but what I’m trying to say is that just because your SOC didn’t find any incidents doesn’t mean you haven’t had any.
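A toy example makes the point. This is a hypothetical validation function with a deliberately incomplete set of checks, not code from any real product:

```python
# Hypothetical example: a naive path check with a full set of "passing" tests.

def is_safe_path(path: str) -> bool:
    """Intended to block directory traversal. Spoiler: it doesn't, quite."""
    return "../" not in path

# Every case we thought to test passes, so our dashboard is green...
assert is_safe_path("reports/q3.pdf")
assert not is_safe_path("../../etc/passwd")

# ...but an input nobody modelled sails through undetected:
print(is_safe_path("..\\..\\secrets.db"))  # True: Windows-style traversal missed
```

Every check we thought to write passes, and the report is green; the flaw lives precisely in the case nobody modelled.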
It’s not if, but when
Next, I’m going to overlay another truism: in cybersecurity, it’s not if, but when. We accept and operate on the basis that an incident will happen to us, and that we must be prepared. We spend hours practising disaster recovery procedures and crisis rehearsals, and with good reason when you look at the stats in the various yearly breach reports.
And so, to the punchline. If we accept that an incident will happen in the future, and we accept that monitoring is only good at telling you what it has found, then we must equally accept that it’s every bit as likely that you’ve had an incident in the past that you don’t know about yet. Potentially, several incidents.
We can all try and comfort ourselves by saying that our various monitoring tools have reduced the margins in which an incident could hide to an almost minute risk, but again, that logic only holds up if you believe that you know the full potential width of that margin. If we knew everything that we weren’t able to monitor, we would surely start monitoring it. It’s Donald Rumsfeld staring at a Johari window of “unknown unknowns”.
Not wishing to leave you with sleepless nights, I will describe the beginnings of a solution and share how my company, seedata.io, aims to address this blind spot. We believe that the placement and monitoring of trackable data can provide a richer and more complete monitoring strategy by including post-attack telemetry. Your SOC can tell you when an attack is underway and help you prevent it; seedata.io can tell you if an attack has already happened and data has left your control, even if your SOC didn’t spot it.
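I won’t reproduce seedata.io’s internals here, but the underlying deception pattern is simple enough to sketch. In this hypothetical Python example, we plant a credential that nothing legitimate will ever use, so any later use of it is high-confidence, post-attack evidence that the store it lived in has left your control:

```python
# A generic sketch of the seeded-data / canary idea, not seedata.io's
# actual implementation. The function names here are hypothetical.

import secrets

def mint_canary_credential() -> dict:
    """Create a fake API key whose only purpose is to be stolen."""
    return {"api_key": "ak_canary_" + secrets.token_hex(16)}

CANARY = mint_canary_credential()  # planted in a database, file share, etc.

def on_auth_attempt(api_key: str, src_ip: str) -> None:
    """Hook that a (hypothetical) auth layer calls on every login attempt."""
    if api_key == CANARY["api_key"]:
        # Nothing legitimate knows this key, so its use is near-certain
        # evidence that the store it was planted in has been exfiltrated,
        # regardless of whether the SOC saw the original breach.
        print(f"BREACH SIGNAL: canary credential used from {src_ip}")

# Simulate an attacker trying the stolen key weeks after the actual theft:
on_auth_attempt(CANARY["api_key"], src_ip="203.0.113.50")
```

The design choice worth noting is that the signal fires after the breach, on the attacker’s use of the data, which is exactly the window conventional monitoring leaves dark.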
If this sounds interesting, come and talk to us at info@seedata.io.