Finding the Signal Amid the Noise: A Strategy for Managing Noisy Alert Systems

By Haystax, December 15, 2015

By Josh Hunter and Marvin Marin

Wheels that squeak may get some oil, but wheels that squeak too much get removed. Baselining and tuning security systems is part of the continuous lifecycle of any security operations center (SOC). The challenge for SOC operators is to continually tune their systems to alert on as many suspicious events as possible while generating minimal false alerts, because as the number of alerts increases, SOC personnel become desensitized to the sheer volume of events, alarms and information. Even with automated solutions such as security information and event management (SIEM), humans are still being overloaded with more information than they can process.

MySpace and Facebook serve as strong examples of how this signal-to-noise issue presents for human operators. MySpace’s demise was due in part to the fact that an individual’s page could be customized to an extreme, whereas a Facebook page is normalized and allows modification only of content, not of the framework. This cleaner, normalized interface lets people take in more information because their minds aren’t taxed with trying to make sense of both the data and the underlying structure; they only have to concentrate on the data.

Because of the data overload in SOC environments, it’s not uncommon for personnel to ignore alarms, tune them incorrectly, or simply suppress them by turning them off. The compulsion to hit the ‘mute’ button and ignore the problem is great, especially when security personnel are overtaxed and don’t have adequate technical resources to automate functions. Sometimes this is easily confirmed by walking into the SOC and seeing how many monitors are turned off! The problem is exacerbated by the high rate of false-positive and false-negative alerts that security tools generate.
While no solution can be 100% accurate in real-world environments, the issue is compounded by the different types of data, networks, IT gear, connection speeds, and other factors that change how both man and machine interpret the data presented. Add to this the wide-ranging experience levels and capabilities of human operators, and you have many factors working against SOC operators.

So what should SOC operators do? We have one immediate suggestion: SOCs should employ a weighting system based on asset and data value to enforce the principle of “protect what’s most important to the organization.” A weighting system also helps the SOC prioritize, protecting just the occupied islands rather than the entire ocean. Additionally, visualization (graphs, heat maps, etc.) helps operators see the ‘big picture,’ but visualizations must serve to clarify an issue or solve a problem, not to make a pretty mess of things, as sometimes happens.

We can’t overstate the importance of improving the signal-to-noise ratio in SOC environments. Technology and automation need to play stronger roles in this process, and fortunately, big data analytics can help. Haystax Technology has several threat mitigation tools that fit this bill, and it’s an area that is benefiting from increased attention and innovation. A robust stream of threat information is critical to any SOC. But if that information is not prioritized, tested for false alerts, and presented as actionable intelligence to human operators, its value decreases quickly.
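To make the asset- and data-value weighting idea concrete, here is a minimal sketch in Python. The asset values, severity weights, and field names are all illustrative assumptions, not a description of any Haystax product or a specific SIEM: the point is simply that multiplying alert severity by asset value surfaces the “occupied islands” first.

```python
# Hypothetical asset-value weighting for SOC alert triage.
# Asset values and severity weights below are illustrative assumptions.

ASSET_VALUE = {
    "domain-controller": 10,  # crown-jewel infrastructure
    "payment-db": 9,          # high-value data store
    "web-server": 5,
    "dev-workstation": 2,
}

SEVERITY_WEIGHT = {"critical": 1.0, "high": 0.7, "medium": 0.4, "low": 0.1}

def score(alert):
    """Weight an alert's severity by the value of the asset it targets.
    Unknown assets default to a low value of 1."""
    return ASSET_VALUE.get(alert["asset"], 1) * SEVERITY_WEIGHT[alert["severity"]]

def triage(alerts, top_n=3):
    """Return the highest-weighted alerts first, so analysts work the
    most important assets before wading through the rest."""
    return sorted(alerts, key=score, reverse=True)[:top_n]

alerts = [
    {"asset": "dev-workstation", "severity": "critical"},
    {"asset": "payment-db", "severity": "medium"},
    {"asset": "domain-controller", "severity": "high"},
    {"asset": "web-server", "severity": "low"},
]

for a in triage(alerts):
    print(a["asset"], round(score(a), 1))
```

Note how a “critical” alert on a low-value developer workstation ranks below a merely “high” alert on the domain controller: the weighting, not the raw severity, decides what an analyst sees first.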