It was the end of 2010. The Packet Ninjas team (the predecessor of ShadowDragon) had been making mad dashes on application assessments and penetration tests while deployed to a nowhere town. The discussion was as lively as the hacking and exploitation had been during the engagement, including a conversation on the advent of Wikileaks. As we finished up the assessment and started our trek home, we received a phone call from a potential client. He asked if we knew much about Anonymous and Operation Payback, which was then disrupting major financial institutions. We agreed to look into the threats more closely once we got back to the home office.
Having engaged many times before to solve insider threat and corporate espionage cases, often internationally, we thought this might be interesting. Little did we know that a few hours of analysis that night would change the way we focused on and applied different security disciplines.
Arriving home, we had a few extra days to relax before the full swing of Christmas and family completely set in. Two of us decided to go to a movie that night, and afterward we would have a look at LOIC and HOIC more closely. After a few hours of analysis we had produced the following write-up for the client (you can read it here: LOIC-DDOS Initial Writeup 2010).
Our client's reaction to the analysis was generous. They asked us to monitor Anonymous, not necessarily for attribution, but for attacks against our clients as well as for future evaluation of any and all capabilities we observed. This meant we needed to analyze every tool and capability we discovered, and create signatures to detect and block the activity.
Threat Intelligence Before It Was a Thing
While we had been quietly researching malicious online activities against our clients for a decade, no one, including us, had named this type of service. We watched HBGary and others melt down and really didn't want to become a target ourselves. We didn't want to advertise that we had been monitoring folks in various forms, nor did we want to tip our hand that we had been creating capabilities for our clients to detect and deter these attacks. We didn't want to be scooped by other researchers. At that time, "Threat Intelligence" hadn't become a common term, and wouldn't for another year or two. For us, it didn't need a name. This was our job: analyze the clues and pursue solutions with excellence.
The Organic Evolution of OIMonitor
Initially, another member of our team (whom we will refer to as OGhost) wrote a simple program to monitor all of the channels on which Anonymous had been operating. We methodically combed through all the websites mentioned, Pastebin posts, Twitter handles, and any online platform that we could read on a daily basis. As we identified new and relevant targets, we would document them. And we continued to uncover new tools, analyze them, and create signatures.
Anonymous slipped up a few times in how they configured some of their IRC servers, and a clustering configuration error allowed us to monitor all of their channels for about three and a half months without being kicked out. This shaped our daily intelligence briefs, which looked like this:
Title: <Content>
- Current Attacks / Operations
- Attack Chatter
- Predictions for Next 24, 48, 72 Hours
- New Tool Changes (To Prioritize Reversing, Pcaps, IDS Signature Development)
- New Attack Tools Identified
- New Attack Detection Created
An example of this output can be found below, where our daily brief of January 25, 2011 details the first date Sabu showed up in the channels.
From a process perspective, we determined that the creation of a standard output was the key to gaining insight over the long term. Requiring analysts at different levels to place specific types of information into set categories made for a repeatable process. Plus, standardizing the data created the foundation for deeper comparative analysis.
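To make the idea concrete, here is a minimal sketch of what such a standardized brief might look like as a data structure. This is illustrative Python, not our actual internal format; the field names simply mirror the brief categories above.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class DailyBrief:
    """One day's intelligence brief, with the same categories every day."""
    brief_date: date
    current_attacks: List[str] = field(default_factory=list)  # Current Attacks / Operations
    attack_chatter: List[str] = field(default_factory=list)   # Attack Chatter
    predictions: List[str] = field(default_factory=list)      # Next 24 / 48 / 72 hours
    tool_changes: List[str] = field(default_factory=list)     # Prioritize reversing, pcaps, IDS sigs
    new_tools: List[str] = field(default_factory=list)        # New attack tools identified
    new_detections: List[str] = field(default_factory=list)   # New attack detection created

# Identical fields day to day mean briefs from different days (and different
# analysts) can be compared, and even diffed, programmatically.
```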
As time went on, reading every IRC message for a few hours every morning became laborious (and boring), and we needed alerts on specific keywords to speed up the process. We made these additions to our simple program. This was the advent of OIMonitor in its roughest form. We needed to monitor a communication medium, specifically one with a dialogue-based protocol (IRC), for keywords, prioritize critical events as they arrived, and store all data for historical purposes.
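The original program was OGhost's, but a keyword monitor in this spirit can be sketched in a few dozen lines of Python. The server, channel, and keywords below are placeholders, not the real values, which came out of the investigation itself.

```python
import socket

# Placeholder values for illustration only.
SERVER, PORT = "irc.example.net", 6667
CHANNEL = "#opexample"
KEYWORDS = ("loic", "hoic", "fire", "target")

def monitor():
    sock = socket.create_connection((SERVER, PORT))
    sock.sendall(b"NICK watcher\r\nUSER watcher 0 * :watcher\r\n")
    sock.sendall(f"JOIN {CHANNEL}\r\n".encode())
    buf = b""
    with open("irc_log.txt", "a") as log:           # store everything for history
        while True:
            data = sock.recv(4096)
            if not data:
                break                               # server closed the connection
            buf += data
            while b"\r\n" in buf:
                line, buf = buf.split(b"\r\n", 1)
                text = line.decode(errors="replace")
                if text.startswith("PING"):         # answer server keepalives
                    sock.sendall(("PONG" + text[4:] + "\r\n").encode())
                    continue
                log.write(text + "\n")              # archive every message
                if any(k in text.lower() for k in KEYWORDS):
                    print("ALERT:", text)           # escalate keyword hits

if __name__ == "__main__":
    monitor()
```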
This augmentation of monitoring for areas of concern was very close to IDS monitoring on a network, but applied to the different mediums used for chatter. In some instances we produced intelligence weeks, and sometimes months, before a problem would materialize to the outside world.
Our overall security monitoring followed this process:
- Identify where relevant information is being discussed, enabling monitoring of specific areas of interest.
- Identify what needs to be observed. For our clients, we needed to identify three different things: common sentiment, targets chosen by the collective, and new capabilities. For each new capability, we needed to evaluate it and create signatures to detect the observed attack tools (see the sketch after this list).
- Write a daily brief, as formatted above, based on that monitoring.
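Signature creation started from captured traffic for each tool. As a hedged illustration, the following Python uses scapy to scan a pcap for byte patterns tied to a tool. The tool name and pattern here are invented placeholders; real signatures came from reversing each tool and studying its traffic.

```python
from scapy.all import Raw, rdpcap   # pip install scapy

# Invented placeholder signature, for illustration only.
TOOL_SIGNATURES = {
    "example-flooder": [b"EXAMPLE-ATTACK-MARKER"],
}

def scan_pcap(path):
    """Flag any packet whose payload contains a known tool's byte pattern."""
    for pkt in rdpcap(path):
        if not pkt.haslayer(Raw):
            continue
        payload = bytes(pkt[Raw].load)
        for tool, patterns in TOOL_SIGNATURES.items():
            if any(p in payload for p in patterns):
                print(f"possible {tool} traffic: {pkt.summary()}")

scan_pcap("capture.pcap")
```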
Tweaking, Automating, and Creating the Platform
Having completed our first tour of monitoring and analysis, we reviewed lessons learned. We concluded that key elements of our workflow needed tweaking. Specifically, we needed more automation in the collection of data across all the areas we had worked, as well as in the process in general. In addition, we needed keyword alerting and storage of all information. And, OIMonitor was born (OI = Open Source Intelligence).
Since 2011, the OIMonitor platform has grown significantly to include monitoring of social media, IRC, forums, the dark web, pastebin sites, and many other sources, accessible via API or the web interface.
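One way to picture that growth: once collection sits behind a common interface, adding a new medium means adding a new source rather than a new monitor. The sketch below is an assumption about the shape of such a design, not OIMonitor's actual architecture.

```python
from abc import ABC, abstractmethod
from typing import Iterable, List

class Source(ABC):
    """A monitored medium: an IRC network, a forum, a pastebin site, a social feed."""

    @abstractmethod
    def fetch(self) -> Iterable[str]:
        """Return messages or posts collected since the last poll."""

def poll_once(sources: List[Source], keywords: List[str]) -> List[str]:
    """Sweep every source and return keyword hits for escalation."""
    alerts = []
    for source in sources:
        for item in source.fetch():
            if any(k in item.lower() for k in keywords):
                alerts.append(item)
    return alerts
```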
Strategic Takeaways
While doing our jobs in 2010, we learned why having a monitoring solution was important for those of us poking around in the early days of the dark web, and we decided to do something about it by creating OIMonitor. We did this at first to support day-to-day activity. As it evolved, we realized the larger benefits to the investigative process.
First, having a platform that aids in the automated collection and escalation of key artifacts separates the signal from the noise in chaotic moments and provides an agenda for deeper analysis. Furthermore, standardizing research results is key to turning raw data into intelligence that is both scalable and actionable in support of business needs. And, presenting these outputs in daily and weekly updates often reveals insights that become central clues in an investigation.
Together, these benefits reduce the time-consuming task of evaluating raw data, enabling analysts to focus on taking the actions needed to detect, deter, or thwart future or ongoing attacks.