Everyone knows you need to block the bad stuff from getting onto your network and calling home to its masters. But what happens when something good gets incorrectly flagged as malicious? A false positive (FP), if managed poorly, can be almost as bad as letting something potentially dangerous get through.
FPs can cost the wrongfully accused traffic and revenue, and waste the time and resources of those investigating and resolving the issue. In a study conducted by ESG Research, most security, IT, and DevOps leaders said their organization spends as much time, or more, on false positives as on legitimate attacks. When FPs become a recurring problem, you start to lose trust. Is this IP really good? Is that IP really bad? Too often, the result is either over-investigation or a lapse in standards.
FPs can also cause drastic breaks in service. An extreme real-world example (fortunately, one without any fatalities) is a hospital using a SaaS paging service. The paging service was hosted on an IP address that was simultaneously hosting multiple phishing sites, and the pagers stopped working until the hospital added that IP to its custom whitelist. Is the potential risk of blocking something good worth the extreme vulnerability to malware and attacks that comes with underblocking?
Stay with us for a bit and we'll answer that question together.
The most common source of FPs is human error. Analysts at threat intelligence providers tend to have a bias towards blocking, as removing a threat after the fact is often difficult. Faulty algorithms are another source of FPs: many companies use machine learning and data mining to identify malicious addresses, and those models are not always accurate.
Another frequent cause of FPs is organizations listing themselves by misconfiguring auto-protection services such as DenyHosts. If DenyHosts is misconfigured so that it does not exclude the organization's own IP addresses from reporting (especially those of its system monitoring servers), the organization's own DenyHosts installations can send those addresses to the shared database, locking the organization out of its own systems.
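To make the failure mode concrete, here is a minimal pre-sync sanity check, sketched in Python. It is not DenyHosts' actual sync mechanism; the file format (one IP per line) and the example networks are assumptions for illustration. The idea is simply to strip your own address ranges out of anything you report upstream.

```python
# Hypothetical pre-sync sanity check: drop your own addresses from the set of
# "attacker" IPs before they are reported to a shared DenyHosts-style database.
# The report format and the example networks are illustrative assumptions.
import ipaddress

OWN_NETWORKS = [ipaddress.ip_network(n) for n in (
    "203.0.113.0/24",   # example: office egress range
    "198.51.100.7/32",  # example: system monitoring server
)]

def is_own_address(ip: str) -> bool:
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in OWN_NETWORKS)

def filter_report(candidate_ips):
    """Return only the IPs that are safe to report upstream."""
    return [ip for ip in candidate_ips if not is_own_address(ip)]

if __name__ == "__main__":
    pending = ["192.0.2.45", "198.51.100.7", "203.0.113.20"]
    print(filter_report(pending))  # ['192.0.2.45'] - own addresses are dropped
```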
While some infrastructure is born malicious (such as C2 DGA domains and sketchy hosting providers), it is more common for legitimate infrastructure to be compromised and exploited for malicious purposes. Because of this, false positives often occur when an IP or domain that previously served malicious traffic has since been cleaned up but has not yet aged off the data source in question. If reputation services do not constantly update their lists based on the current threat infrastructure landscape, they are likely to contain many FPs.
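The sketch below shows what "aging off" means in practice, under assumed details (a 30-day retention window and a simple last-seen timestamp per indicator); real feeds use their own retention logic, but the principle is the same: if an indicator has not been observed behaving maliciously recently, it should drop out of the active blocklist.

```python
# A minimal age-off sketch: indicators with no recent malicious sighting are
# dropped from the active blocklist. The 30-day window and the indicator
# structure are illustrative assumptions, not any specific feed's policy.
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=30)

def active_indicators(feed, now=None):
    """Keep only indicators whose last malicious sighting is recent enough."""
    now = now or datetime.now(timezone.utc)
    return [ioc for ioc in feed if now - ioc["last_seen"] <= RETENTION]

feed = [
    # cleaned up long ago: should age off
    {"ioc": "198.51.100.23", "last_seen": datetime(2024, 1, 2, tzinfo=timezone.utc)},
    # seen serving malware three days ago: stays on the list
    {"ioc": "203.0.113.77",  "last_seen": datetime.now(timezone.utc) - timedelta(days=3)},
]
print([i["ioc"] for i in active_indicators(feed)])  # only the recent sighting remains
```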
A more complex FP situation is when malicious content is served from one URL (or a handful) living on an IP address that also hosts thousands of other domains: innocent sites stuck in a bad neighborhood. For instance, the TLD .tk has earned itself a bad reputation from the large amount of malicious domains and spam it hosts. Yet .tk also hosts the Tcl Developer Xchange (tcl[.]tk), a popular and legitimate service that is most likely blocked by many because of its bad neighbors. In these cases, it's not so much a false positive as a lack of the granularity needed to deal with shared hosting. Essentially, any critical service reachable through a shared public IP has the potential to be blocked as a false positive.
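One way to think about that granularity problem is sketched below. It assumes a hypothetical cohosted_domain_count() lookup (in practice this would be backed by something like passive DNS data); the threshold and example values are made up. The point is that when an IP hosts many unrelated domains, the block should target the offending domain or URL rather than the whole IP.

```python
# A sketch of granularity-aware blocking. cohosted_domain_count() is a
# hypothetical lookup standing in for passive DNS / reverse hosting data;
# the threshold and sample values are illustrative only.
COHOSTING_THRESHOLD = 10

def cohosted_domain_count(ip: str) -> int:
    # Placeholder data: in practice, query a hosting/passive-DNS source.
    return {"192.0.2.10": 4200, "198.51.100.5": 1}.get(ip, 0)

def choose_block_target(malicious_domain: str, hosting_ip: str) -> dict:
    if cohosted_domain_count(hosting_ip) > COHOSTING_THRESHOLD:
        # Heavily shared IP: block only the offending domain.
        return {"type": "domain", "value": malicious_domain}
    # Dedicated or lightly shared IP: blocking the IP is low-risk.
    return {"type": "ip", "value": hosting_ip}

print(choose_block_target("phish.example.tk", "192.0.2.10"))
# {'type': 'domain', 'value': 'phish.example.tk'}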
When you go to Disneyland with your family and the super cool ride your kids love is closed for maintenance, it can definitely be a big disappointment. But no one would want the ride to stay open if there is suspicion that it is faulty, especially when the price can be a serious injury, or worse. The same caution applies in cybersecurity. The cost of underblocking and enduring a cyber attack or ransomware infection is now so high that the bias has to shift towards staying safe from as many threats as possible. But that does not mean "block everything, all the time".
Many well-known threat feeds are more generic than one realizes, providing a one-size-fits-all solution. As we all know, one size definitely does not fit all in this industry, and every organization has its own needs. The result of this approach is a lack of customizability: you can neither add IOCs you want to block for your organization's specific needs, nor dynamically unblock IOCs you need to access right now. You should be able to easily and dynamically whitelist indicators that need to be unblocked ASAP. No waiting hours or even days for the support team to help you out, no tedious manual changes to BIND configurations. We're talking about an instant policy change with rapid propagation for real operational responsiveness. That's what ThreatSTOP does.
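For illustration only (this is not ThreatSTOP's actual implementation), here is the shape of a dynamic whitelist: the allow set is consulted first at lookup time, so unblocking an indicator is a single set update that takes effect immediately, rather than an edit-and-reload of resolver configuration.

```python
# Minimal illustration of a whitelist that overrides the blocklist at lookup
# time. Indicator names are made up; not any vendor's real policy engine.
blocklist = {"badsite.example", "203.0.113.66"}
whitelist = set()

def policy(indicator: str) -> str:
    if indicator in whitelist:
        return "allow"      # explicit whitelist always wins
    if indicator in blocklist:
        return "block"
    return "allow"

print(policy("badsite.example"))   # block
whitelist.add("badsite.example")   # instant, dynamic unblock
print(policy("badsite.example"))   # allow
```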
Ready to try ThreatSTOP in your network? Want an expert-led demo to see how it works?