Wednesday, September 5, 2012

Over-Classification

In theory, as a large enterprise and/or critical infrastructure provider, building a working relationship with the US government can provide valuable intelligence (in both directions).  Where the theory often breaks down in practice, however, is that the government severely over-classifies data.  I understand that certain sensitivities and secrets must remain closely guarded, but lists of malicious domain names do not fall within that realm.  The attackers already know where they are attacking us from, so we are not keeping anything from them.  They also already know that we know about them (it is very difficult to truly disguise network defense measures).  Moreover, malicious domain names themselves are not all that valuable anymore (reference earlier blog posts), and on top of that, they are often re-purposed by multiple actors/groups.  I frequently see domain names that used to mean one thing, but today mean something else or multiple different things.

So, my question remains, why guard these so tightly?  Can't we all agree that withholding this intelligence hurts the overall security posture of the United States?  Seems like the opposite of what we were aiming for.

Anomaly Detection

Most network security monitoring techniques use one of two approaches:

  • Signature-based detection (i.e., "I know this specific activity is bad")
  • Pattern-based detection (i.e., "I know this pattern of activity is bad")

Those are both well and good, but they leave a gaping hole.  What is the answer to the question: "Is this previously unknown activity normal and expected, or is it weird and unexpected?"

The way to answer that question is through anomaly-based detection techniques.  Unfortunately, at the present time, we as a community do not have many mature, production-ready approaches to anomaly-based detection, nor do we have many vendor options.
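To make the idea concrete, here is a minimal sketch of one anomaly-based approach: baseline each host against its own observed history, then flag activity that deviates sharply from that baseline.  The flow format, hostnames, and 3-sigma threshold below are illustrative assumptions on my part, not a production design.

```python
from collections import defaultdict
import statistics

def build_baseline(flows):
    """Record observed bytes-out per host across a training window.

    `flows` is an iterable of (host, bytes_out) tuples.
    """
    history = defaultdict(list)
    for host, bytes_out in flows:
        history[host].append(bytes_out)
    return history

def is_anomalous(history, host, bytes_out, threshold=3.0):
    """Flag activity far above the host's own mean, or from a host
    never seen during the training window."""
    samples = history.get(host)
    if not samples:
        return True   # previously unknown host: unexpected by definition
    if len(samples) < 2:
        return False  # not enough history to judge
    mean = statistics.mean(samples)
    stdev = statistics.pstdev(samples)
    if stdev == 0:
        return bytes_out != mean
    return (bytes_out - mean) / stdev > threshold
```

The point is not the statistics (a real deployment would use richer features than byte counts), but the shape of the question: rather than asking "does this match a known-bad signature?", we ask "does this look like what this host normally does?"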

I am cautiously optimistic that in the coming years, we will begin to mature our capabilities in this area.  It is sorely needed.

The Final Frontier

If you think about it, the final frontier for network monitoring is most likely the internal network.  We as a community have become quite good at instrumenting the edge and somewhat proficient at monitoring it.  Most of us, interestingly enough, have no idea what is going on inside our perimeter.  This is something that requires serious thought and attention in my opinion.  What lies beneath?  That is the question that we should seek to answer.

Nary a Vendor Can Keep Pace

There was a time a few years back when a list of malicious domain names was one of the prized possessions of an incident response team.  In previous blog postings, I've discussed how attackers have moved away from purely malicious domains and more towards using legitimate or even "disposable" domains coupled with specific URL patterns.  One needs to work at staying on top of these Indicators of Compromise (IoCs), as they change quite frequently.  Given this, one would expect that vendors would quickly pounce on the opportunity to service their customer base by:
  • Providing URL pattern based intelligence rather than just domain name based intelligence
  • Allowing for mining of the vendor collected data using URL patterns

Surprisingly, there are few vendors that facilitate this type of approach.  My hope is that in the near future, more vendors will rise to the challenge confronting us all.
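In the meantime, nothing stops defenders from doing this themselves.  Here is a minimal sketch of matching full URLs against regex-based indicators rather than bare domain names; the patterns shown are hypothetical placeholders I made up for illustration, where real indicators would come from an intelligence feed.

```python
import re

# Hypothetical URL-pattern indicators (illustrative only):
URL_PATTERNS = [
    # a beacon disguised as an image: 16 hex chars, .gif extension
    re.compile(r"/images/[a-f0-9]{16}\.gif$"),
    # a long base64-looking blob stuffed into a query string
    re.compile(r"/search\?q=[A-Za-z0-9+/]{40,}"),
]

def matches_ioc(url):
    """Return True if any URL-pattern indicator matches,
    regardless of which domain is serving the URL."""
    return any(p.search(url) for p in URL_PATTERNS)
```

Note that a domain-only blocklist would miss both examples when the attacker stages them on a legitimate or disposable domain; the URL pattern is what carries the signal.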