Blog Archives

Imperva Placeholders

I had an email asking what placeholders I use for logging platform integration. Rather than reply in a comment or email, I thought I’d just make a post out of the response.

Looking at placeholders, here are some of the ones I use the most:

  • ${Alert.dn}: the alert ID
  • ${Alert.createTime}: the time the ALERT was created (note this can be misleading)
  • ${Alert.description}: bound to the alert, so you may see “Distributed” or “Multiple” appended due to aggregation of events
  • ${Event.dn}: the event (violation) ID
  • ${Event.createTime}: the time the EVENT was created (this is when the event happened)
  • ${Event.struct.user.user}: the username from a web or database action
  • ${Event.sourceInfo.sourceIP}
  • ${Event.sourceInfo.sourcePort}
  • ${Event.sourceInfo.ipProtocol}
  • ${Event.destInfo.serverIP}
  • ${Event.destInfo.serverPort}
  • ${Event.struct.networkDirection}: which way is the traffic flowing that triggered the event?
  • ${Rule.parent.displayName}: the name of the Policy that was triggered

There are other placeholders you can leverage, but these are the core set I start with. I like these because they’re used on the web gateway AND the database gateway, which lets me have a consistent intelligence feed into my log monitoring platform and my SIEM product.
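
To make that concrete, here is a hypothetical syslog template and a minimal parsing sketch in Python for the collector side. The pipe-delimited key=value layout, the field names, and the sample values are all my own illustration; only the ${...} placeholders come from the list above.

    # Hypothetical syslog action template built from the placeholders above
    # (the pipe-delimited key=value layout is my own convention):
    #
    # alert_id=${Alert.dn}|alert_time=${Alert.createTime}|event_id=${Event.dn}|
    # event_time=${Event.createTime}|user=${Event.struct.user.user}|
    # src_ip=${Event.sourceInfo.sourceIP}|src_port=${Event.sourceInfo.sourcePort}|
    # proto=${Event.sourceInfo.ipProtocol}|dst_ip=${Event.destInfo.serverIP}|
    # dst_port=${Event.destInfo.serverPort}|direction=${Event.struct.networkDirection}|
    # policy=${Rule.parent.displayName}

    # A rendered message might look like this (values are made up):
    sample = (
        "alert_id=12345|alert_time=2011-06-01 14:02:11|event_id=67890|"
        "event_time=2011-06-01 14:07:43|user=appuser|src_ip=10.1.2.3|"
        "src_port=49152|proto=TCP|dst_ip=10.9.8.7|dst_port=443|"
        "direction=External to Internal|policy=SQL Injection"
    )

    def parse_message(msg):
        """Split a pipe-delimited key=value message into a dict."""
        return dict(pair.split("=", 1) for pair in msg.split("|"))

    fields = parse_message(sample)
    print(fields["policy"], fields["src_ip"], "->", fields["dst_ip"])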

The trick here is that I can see how many events roll up underneath a single Alert. In the syslog feed, I can track the duration of an attack as well as tell you when I last saw the activity, because I track Alert.createTime and Event.createTime.
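
Here is a rough sketch of that roll-up, assuming each message has been parsed into a dict with the alert_id, alert_time, and event_time fields from the sketch above (again, names of my own choosing):

    from collections import defaultdict
    from datetime import datetime

    TS_FORMAT = "%Y-%m-%d %H:%M:%S"  # assumes the timestamp layout used in the sample above

    def summarize_alerts(parsed_messages):
        """Group parsed events by alert id and report count, last-seen time,
        and the duration between the alert's creation and the latest event."""
        by_alert = defaultdict(list)
        for msg in parsed_messages:
            by_alert[msg["alert_id"]].append(msg)

        summary = {}
        for alert_id, events in by_alert.items():
            created = datetime.strptime(events[0]["alert_time"], TS_FORMAT)
            last_seen = max(datetime.strptime(e["event_time"], TS_FORMAT) for e in events)
            summary[alert_id] = {
                "event_count": len(events),
                "last_seen": last_seen,
                "duration": last_seen - created,
            }
        return summary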

There are lots of options for how you build your syslog feed (a toy scoring sketch follows this list):

  • You may be interested in the response time of the query or web page
  • Perhaps the response size is of concern to you
  • You may treat threats differently depending on where they occur in a database table or URL
  • You may be interested in the SOAP action or request
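
As a sketch of how a collector might act on those extra fields, here is a toy scoring rule. Every field name in it is a hypothetical addition to the feed, not one of the placeholders listed above.

    def triage_score(event):
        """Toy triage rule: weight an event by context. All field names here
        (response_time, response_size, location, soap_action) are hypothetical."""
        score = 1
        if float(event.get("response_time", 0)) > 5.0:      # slow responses may warrant a look
            score += 1
        if int(event.get("response_size", 0)) > 1_000_000:   # large responses could indicate exfiltration
            score += 2
        if event.get("location", "").startswith("table:"):   # database objects vs. URLs
            score += 2
        if event.get("soap_action"):                          # SOAP traffic gets extra scrutiny
            score += 1
        return score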

Last but not least, in addition to security events, you can also push system-level events in the same manner using different placeholders.

  • Configuration events can be sent via syslog, complete with the user making the change
  • Gateway disconnect messages can be sent via syslog (SNMP might be better, but you need to load the custom OIDs)
  • Excessive CPU or traffic levels can be sent via syslog

How are you using placeholders?

Analytic Evolution

Event management has evolved over the years from ICMP probes, SNMP traps, and syslog data to a very sophisticated marketplace where a myriad of products await prospective technology shoppers. The underlying technology behind these products can be described, in essence, as analytics, a vital part of any security environment. Read on to understand the evolution and where we are today.

The most secure environment in the world is not viable if there is no mechanism for the security environment to alert the operators to malicious or suspicious activity!

Technology is available today that reliably provides what I would describe as 2nd Generation Analytics for the health and success of deployed security systems. I say 2nd Generation because Network Management and Systems Management software that collects alerts and indicates hard and soft failures for specific vendor devices has been available for years, but technology to determine relevancy and severity has been hot and cold.

My favorite all-time network monitoring program, CastleRock SNMP, was simple and reliable; it worked so well that Cisco included it in one of its management products. You could tell instantly whether a device, a link, or a service was UP or DOWN. Now everything from Cisco is Java-based.

The original method of event processing was limited to specific failure modes and had no means of identifying a security breach outside of a Denial-of-Service attack. This interrogation method of event management gave way to distributed event monitoring and event consolidation because the masses were sold on the idea that, in order to determine the disposition of an anomalous event, one must incorporate more security devices and more event data.

The approach of 2nd Generation Analytics is: gather as much data as you can to determine the extent and impact of various events, thereby achieving rudimentary correlation. 2nd Generation Analytics provided a means to blend events from an IDS with data from a deployed vulnerability scanning system, determine whether a particular event is relevant, and then optionally compare access logs on the webserver to determine the success or failure of the attempt.

Products are available that can consolidate alerts and alarms for supported platforms and events, as well as push events in semi-standardized formats, but as the above example suggests, an organization has to implement multiple technologies in order to know whether the data on the webserver is safe and secure.

Alongside 2nd Generation Analytics is the notion of management systems for management systems: a Manager of Managers, MOM if you will, where the muscle of technology is wielded to integrate disparate management systems in the hopes of creating a single cockpit-style view of events. The key difference between the two is that MOM requires underlying management systems to push processed event data into yet another management system, while 2nd Generation Analytics solutions interact directly with the security environment or other infrastructure.

1st Generation Analytics brought us device interrogation, 2nd Generation Analytics brought us consolidation and limited correlation, so what’s next? Minority Report? Shut down users or devices before they break the network? {Sounds like a great commercial idea for Cisco’s Self Defending Networks}

3rd Generation Analytic technology is entering the marketplace, but it still needs to mature. Statistical Anomaly Detection and Behavioral Analysis are the current incarnations, and there will no doubt be others. These technologies seek to apply complex mathematical techniques to events occurring within the security environment and the rest of an organization’s technology infrastructure in order to make alerting and correlation decisions. The intent is to answer the question ‘What is normal?’ by looking at how a resource is accessed, when it is accessed, where the user is accessing it from, and so on, in hopes of understanding the answer to ‘Is my data safe and secure?’
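
To make the ‘What is normal?’ idea concrete, here is a toy baseline-and-deviation sketch in Python. Real products apply far richer models; this is only an illustration of the concept.

    import statistics

    def is_anomalous(hourly_history, current_count, threshold=3.0):
        """Flag the current hourly access count if it falls more than
        `threshold` standard deviations from the learned baseline."""
        mean = statistics.mean(hourly_history)
        stdev = statistics.stdev(hourly_history)
        if stdev == 0:
            return current_count != mean
        return abs(current_count - mean) / stdev > threshold

    baseline = [40, 38, 45, 41, 39, 44, 42, 40]   # access counts per hour during learning
    print(is_anomalous(baseline, 43))    # False: within normal variation
    print(is_anomalous(baseline, 400))   # True: far outside the baseline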

Validation for this technology will come from its ability to interpret multiple data sources accurately and to “learn” that what is normal for company A may be abnormal for company B. There is also a saturation threshold with this approach: when too many event sources are being analyzed, the result is something worse than False Positives, namely False Negatives. An organization is back to thinking it is secure because the analytics are unable to correlate some or all of the event sources and miss critical event information from an actual breach.

The evolution of analytics has brought improved monitoring and alerting for the security environment, but analytics still suffer from the issue of fidelity, which becomes increasingly important with each successive leap in analytic technology. If the analytics are not trusted and not acted upon, then the security environment cannot fulfill its purpose and the analytics become irrelevant.

Stay tuned, more to come…