Blog Archives

Imperva’s DIY syslog format

I have had the fortune to support a few WAF installations, and my preference is Imperva’s WAF solution. For any security product, knowing what the product is doing and what is going on within it is as important as the actual security being provided.

One of the features of Imperva’s solution that I find tremendously useful in an enterprise setting, and possibly at an MSSP as well, is the ability to construct custom syslog formats for triggered alerts and system events in almost any format. I like to think of this as a Do-It-Yourself syslog formatter, because the feed can be built and sent anywhere, using any number of options. More importantly, the feed can be bundled with specific policies or event types to provide limitless notification possibilities of the kind that often require professional services engagements to develop and implement.

In Imperva terminology, any policy or event can be configured to trigger an “Action Set” containing specific format options for, among other things, syslog messaging. If your logging platform (PLA) or SIEM requires a specific format, there’s a very strong chance that, with no more effort than building a policy, you can build the ${AlertXXX} or ${EventXXX} constructs necessary for your needs.
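As a rough illustration of the placeholder idea, here is a minimal Python sketch of how such a template renders into a message body. The field names below are illustrative stand-ins, not Imperva’s actual ${Alert...} placeholder names:

    from string import Template

    # Hypothetical Action Set message template; the ${...} names are
    # illustrative, not SecureSphere's real placeholder vocabulary.
    ALERT_TEMPLATE = Template(
        "DEVICE=${gateway}|POLICY=${policy}|SEV=${severity}|"
        "SRC=${src_ip}|DESC=${description}"
    )

    def render_alert(alert: dict) -> str:
        """Render an alert dict into the custom syslog message body."""
        return ALERT_TEMPLATE.substitute(alert)

    print(render_alert({
        "gateway": "IMPERVA-01",
        "policy": "SQL Injection",
        "severity": "HIGH",
        "src_ip": "203.0.113.7",
        "description": "SQL injection attempt blocked",
    }))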

You can model the alerts to look like the Cisco PIX format, use ARCSight’s CEF format, or make your own, as I’ve done in this screenshot:

[Screenshot: Basic Syslog Alert Format]

In addition to allowing a customized message format, Imperva’s SecureSphere platform allows unique message formats and destinations to be specified at the policy and event level. For example, a “Gateway Disconnect” or “throughput of gateway IMPERVA-01 is 995 Mbps” message can be sent to the NOC’s syslog server for response, while XSS or SQL Injection policies can be directed to a SOC or MSSP for evaluation. Additionally, the “Action Set” policies can be set up so that the SOC is notified of both of the messages above as well as security events.
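To make the per-destination idea concrete, here is a minimal sketch of the same routing pattern using Python’s standard SysLogHandler. The hostnames are hypothetical, and this is the generic routing concept rather than SecureSphere’s own mechanism:

    import logging
    from logging.handlers import SysLogHandler

    # Hypothetical destinations: NOC receives operational events,
    # SOC receives security-policy events.
    noc = logging.getLogger("noc")
    noc.addHandler(SysLogHandler(address=("noc-syslog.example.com", 514)))

    soc = logging.getLogger("soc")
    soc.addHandler(SysLogHandler(address=("soc-syslog.example.com", 514)))

    noc.warning("throughput of gateway IMPERVA-01 is 995 Mbps")
    soc.error("SQL Injection policy violation from 203.0.113.7")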

The configuration of the custom logging format is very straightforward, using placeholders to build the desired message format. The document “Imperva Integration with ARCSight using Common Event Framework” provides a number of examples, including a walk-through for building a syslog alert for system events, standard firewall violations, and custom violations. The guide is aimed specifically at ARCSight integration.
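For reference, a CEF message uses a well-known pipe-delimited header followed by space-separated key=value extensions. A sketch of producing one (the vendor, product, and version values here are illustrative):

    def to_cef(signature_id: str, name: str, severity: int,
               extensions: dict) -> str:
        """Build a CEF:0 message: pipe-delimited header, then
        space-separated key=value extension pairs."""
        ext = " ".join(f"{k}={v}" for k, v in extensions.items())
        return f"CEF:0|Imperva|SecureSphere|6.0|{signature_id}|{name}|{severity}|{ext}"

    print(to_cef("sql-injection", "SQL Injection", 9,
                 {"src": "203.0.113.7", "dst": "10.0.0.5", "app": "HTTP"}))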

Depending on the version of Imperva SecureSphere you are running or evaluating, the alert aggregation behavior will differ. Newer versions (6.0.6+) better support SIEM platforms by pushing updated alert details, whereas older versions push syslog events only on the initial event.

You can request a copy of “Imperva Integration with ARCSight using Common Event Framework” to get additional ideas on customizing your syslog feeds for your SIEM product.


Log Management 101

If you are in any way involved with technology, you have at one time or another dealt with logging. You either didn’t have what you needed, spent hours looking for a needle in the haystack, or had to go find someone to help you get what you needed.

In networking and security circles, logging is usually a hot-button topic. I’ve spent many man-months working with logs, investigating logging solutions, praying to God for a solution to logging, tweaking open-source logging programs, and discussing logging technology with various vendors. I’ve collected some of the more relevant articles on the topic, along with some additional commentary and insight.

First and foremost, the primary problem of log management is often not the log(ging) portion – most systems spew limitless log data – the problem is the management of the log(ged) data. You’ve heard it a thousand times: management styles vary by manager, and the management of networks, servers, and security differs wildly. Networks and servers are managed by exception; they get attention when they need it, and otherwise they just purr and no one bothers them (until Patch Tuesday, or until Cisco has a bad day). Managing security is altogether different, because security for a given organization is the net sum of all the parts of that organization, and in order to properly manage security, information from all the parts must be reviewed on a routine basis.

This routine review of information is where the problem comes in, and it is why nearly every regulation, guideline, or contract stipulation (read: PCI DSS) has mandatory language about reviewing logs. Again, there is seldom anything requiring systems to output logs, because that is rarely the problem. Technology is oftentimes misapplied in attempts to solve human-created problems, but log management is an ideal instance for technology to come to our rescue. How else would a sane person cull through gigabytes or terabytes of log data from tens or hundreds of systems or network elements? <Don’t answer that question, I’ve done it myself.>

Logging resources

Dealing with the actual logs and deciding whether you want to collect log data, consolidate it, correlate it, and/or perform event management ushers in the next set of issues and challenges. Originally, logs might have been aggregated onto one server, or they might have been aggregated by log severity using syslog-ng or syslogd. Network or systems management systems (CiscoWorks or HP OpenView) might have received logs and performed parsing or filtering on key events that drove alerts and alarms, but these systems likely discarded the log data itself, so investigating the root cause of an alert or incident became next to impossible.
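As a concrete aside, severity-based aggregation works because every classic syslog message carries a priority value in its leading angle brackets (per RFC 3164, PRI = facility × 8 + severity). A minimal decoder:

    SEVERITIES = ["emerg", "alert", "crit", "err",
                  "warning", "notice", "info", "debug"]

    def decode_pri(pri: int) -> tuple:
        """Split a syslog PRI value into (facility, severity name)."""
        facility, severity = divmod(pri, 8)
        return facility, SEVERITIES[severity]

    # "<34>" on the wire means facility 4 (auth), severity 2 (crit).
    print(decode_pri(34))  # (4, 'crit')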

Compliance came on the scene and brought innovation in log collection and log management; software became available to manage logs from multiple sources, but you had to pick your poison: Windows with an MS SQL back-end, Solaris or HP-UX with Oracle, or Linux with MySQL. Aggregation and consolidation were solved by tiered deployments, and agents were usually installed to siphon off Windows event data. Security hit the mainstream, and event correlation became the big buzzword.

The problem was that the intensive resources necessary to store the log data competed with the intensive resources necessary to correlate dissimilar log formats to produce alerts or suppress logged events. Gartner and others coined the term SIEM, which combined log collection and event correlation, and a new market arrived. Most SIEMs (SIM, then SEM, finally SIEM) are really good at the collection (historical) piece or really good at the correlation (real-time) piece, while few are good at both. Go, go Magic Quadrant. Rant: if you hadn’t already noticed, I’m not a fan of combining these technologies (I don’t mix my food while I’m eating, either). I like my logging solution to have plenty of evidence-preservation goodness, and I don’t want it muddied because a correlator had to normalize the data before it could parse, alarm on, or display it.

Some of the options for solving log management challenges

Just scratching the surface, more please?

Analytic Evolution

Event management has evolved over the years from ICMP probes, SNMP traps, and syslog data into a very sophisticated marketplace where a myriad of products await prospective technology shoppers. The underlying technology behind these products can be described, in essence, as analytics, which is a vital part of any security environment. Read on to understand the evolution and where we are today.

The most secure environment in the world is not viable if there is no mechanism for the security environment to alert the operators to malicious or suspicious activity!

Technology is available today that reliably provides what I would describe as 2nd Generation Analytics for the health and success of deployed security systems. I say 2nd generation because network management and systems management software has been available for years that collected alerts and provided indications of hard and soft failures for specific vendor devices, but technology to determine relevancy and severity has run hot and cold.

My favorite all-time network monitoring program was simple and reliable; it worked so well that Cisco included it in one of its management products: CastleRock SNMP. You could tell instantly whether a device, a link, or a service was UP or DOWN. Now everything from Cisco is Java-based.

The original method of event processing was relegated to specific failure modes and had no means of identifying a security breach beyond a Denial-of-Service attack. This interrogation method of event management gave way to distributed event monitoring and event consolidation because the masses were sold on the idea that, in order to determine the disposition of an anomalous event, one must incorporate more security devices and more event data.

The approach of 2nd Generation Analytics is: gather as much data as you can to determine the extent and impact of various events, thereby achieving rudimentary correlation. 2nd Generation Analytics provided a means to blend events from an IDS with data from a deployed vulnerability system to determine whether a particular event is relevant, and then optionally compare access logs on the webserver to determine the success or failure of the attempt.
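A minimal sketch of that relevance check in Python. The data structures and field names here are hypothetical, not any vendor’s API; the point is only the blending of two event sources:

    # Hypothetical feeds: an IDS alert and vulnerability-scan results.
    ids_event = {"signature": "CVE-2006-1234",
                 "src": "203.0.113.7", "dst": "10.0.0.5"}
    vuln_scan = {"10.0.0.5": {"CVE-2006-1234", "CVE-2007-5678"}}

    def is_relevant(event: dict, scan: dict) -> bool:
        """An IDS event is relevant only if the target host is
        actually vulnerable to the signature being exploited."""
        return event["signature"] in scan.get(event["dst"], set())

    if is_relevant(ids_event, vuln_scan):
        # Next step would be confirming against webserver access logs.
        print("relevant: target is vulnerable, escalate")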

Products are available that can consolidate alerts and alarms for supported platforms and events, as well as push events in semi-standardized formats, but as the above example suggests, an organization has to implement multiple technologies in order to know whether the data on the webserver is safe and secure.

Alongside this 2nd Generation Analytics is the notion of management systems for management systems: a Manager of Managers, MOM if you will, where the muscle of technology is wielded to integrate disparate management systems in the hopes of creating a single, cockpit-style view of events. The key difference between the two is that MOM requires the underlying management systems to push processed event data into yet another management system, while 2nd Generation Analytics solutions interact directly with the security environment or other infrastructure.

1st Generation Analytics brought us device interrogation, and 2nd Generation Analytics brought us consolidation and limited correlation, so what’s next? Minority Report? Shutting down users or devices before they break the network? {Sounds like a great commercial idea for Cisco’s Self Defending Networks}

3rd Generation Analytic technology is entering the marketplace, but the technology needs maturation. Statistical Anomaly Detection and Behavioral Analysis are the current incarnations, and there will no doubt be others. These technologies seek to apply complex mathematical techniques to events occurring within the security environment and the rest of an organization’s technology infrastructure in order to make alerting and correlation decisions. The intent is to answer the question ‘What is normal?’ by looking at questions such as: How is this resource accessed? When is this resource accessed? Where is the user accessing this resource from? All in hopes of understanding the answer to ‘Is my data safe and secure?’
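To make the ‘What is normal?’ idea concrete, here is a toy statistical-anomaly sketch. The data, metric, and 3-sigma threshold are all illustrative, not how any particular product works:

    import statistics

    # Hypothetical hourly login counts for one user; the question is
    # whether the newest observation is "normal" for this baseline.
    baseline = [12, 9, 14, 11, 10, 13, 12, 8, 11, 10]
    observed = 45

    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    z = (observed - mean) / stdev

    # Flag anything more than 3 standard deviations from the mean.
    if abs(z) > 3:
        print(f"anomalous: z-score {z:.1f} vs baseline mean {mean:.1f}")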

The validation for this technology will come from interpreting multiple data sources accurately and from its ability to “learn” what is normal for company A that is abnormal for company B. There is a saturation threshold with this approach: when too many event sources are being analyzed, the result is something worse than False Positives, namely False Negatives: an organization is back to thinking it is secure because the analytics are unable to correlate some or all of the event sources and don’t process critical event information from an actual breach.

The evolution of analytics has brought improved monitoring and alerting for the security environment but analytics still suffer from the issue of fidelity, which becomes increasingly important with each successive leap in analytic technology. If the analytics are not trusted and not acted upon, then the security environment cannot fulfill its purpose, and the analytics become irrelevant.

Stay tuned, more to come…