Category Archives: How to’s

Thought this might help someone else, so I put it over here

Imperva Placeholders

I had an email asking what placeholders I use for logging platform integration. Rather than reply in a comment or email, I thought I'd just make a post out of the response.

Looking at placeholders, here are some of the ones I use the most:

  • ${Alert.dn} : the alert ID
  • ${Alert.createTime} : the time the ALERT was created (note: with aggregation, this is when the alert was first opened, not when the latest activity occurred)
  • ${Alert.description} : this is bound to the alert, so you may see "Distributed" or "Multiple" appended due to aggregation of events
  • ${Event.dn} : the event (violation) ID
  • ${Event.createTime} : the time the EVENT was created (this is when the event actually happened)
  • ${Event.struct.user.user} : the username from a web or database action
  • ${Event.sourceInfo.sourceIP}
  • ${Event.sourceInfo.sourcePort}
  • ${Event.sourceInfo.ipProtocol}
  • ${Event.destInfo.serverIP}
  • ${Event.destInfo.serverPort}
  • ${Event.struct.networkDirection} : which way the traffic that triggered the event is flowing
  • ${Rule.parent.displayName} : the name of the Policy that was triggered

There are other placeholders you can leverage, but these are the core set I start with. I like these because they're available on the web gateway AND the database gateway. This lets me have a consistent intelligence feed to my log monitoring platform and my SIEM product.
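For illustration, here is the sort of pipe-delimited message I might define in an Action Set using nothing but the placeholders above (the delimiter and field order are my own habit, not a requirement):

${Alert.dn}|${Alert.createTime}|${Event.dn}|${Event.createTime}|${Rule.parent.displayName}|${Event.struct.user.user}|${Event.sourceInfo.sourceIP}:${Event.sourceInfo.sourcePort}->${Event.destInfo.serverIP}:${Event.destInfo.serverPort}|${Event.struct.networkDirection}|${Alert.description}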

The trick here is that I can see how many events roll up underneath a single Alert. In the syslog feed, I can track the duration of an attack, as well as tell you when I last saw the activity, because I track both Alert.createTime and Event.createTime.

There are lots of options for how you build your syslog feed:

  • You may be interested in the response time of the query or web page
  • Perhaps the response size is of concern to you
  • You may treat threats differently depending on where they occur in a database table or URL
  • You may be interested in the SOAP action or request

Last but not least, in addition to security events, you can also push system-level events in the same manner using different placeholders.

  • Configuration events can be syslog'd, complete with the user making the change
  • Gateway disconnect messages can be sent via syslog (SNMP might be better, but you need to load the custom OIDs)
  • Excessive CPU or traffic levels can be sent via syslog

How are you using placeholders?


Imperva: Alerts and Events

I received some emails overnight on the Imperva DIY Syslog posting asking when to use the alert placeholders versus the event placeholders.

For anyone not familiar with the Imperva SecureSphere platform, the system has a handy feature that aggregates, on the SecureSphere management server, the events detected by the gateways. This works whether you're using the web or database gateways, but today I want to focus on the relationship between the data coming from the gateways and the aggregated data on the manager. I'll let ImperViews get into the other details – you can read more in the Imperva documentation.

The first thing you have to take note of is the Imperva hierarchy for violations/events and alerts. When the Imperva detects a condition that meets the criteria of a policy, whether that's correlation, signature, profile, custom, etc., a violation is triggered on the gateway and fed to the management server. Everything in the management server for reporting and monitoring builds off this violation/event detail from the gateway. The gateway is where the enforcement and detection take place, so that should make sense. This is how we know the gateway is taking action on our behalf!

Assuming you haven't disabled aggregation in the SecureSphere settings, each violation is aggregated into an alert. There are several criteria that the management server uses when aggregating a violation, so you'll want to check the documentation for your version. The basic idea is that the SecureSphere manager will aggregate similar violations against a server group, an IP address, a URL, a policy, or some combination thereof within a 12-hour window. An alert in SecureSphere will have at least one violation/event tied to it, but depending on your aggregation settings it may have more.

So???

So! When you push security events to an external log monitor, you have to decide if you just want the initial Alert information or if you want each violation that occurs! If you build the Action Interface using ALERT placeholders, you'll only get the Alert data, with none of the detail in the underlying violation/event stream. This can be problematic if you're trying to figure out whether something is still going on, because remember: SecureSphere aggregates violations under a single Alert for up to 12 hours!

In addition to using the correct placeholders, you also have to enable the “Run on every event” checkbox in the Action Interface/Action Set.

I tend to mix the Alert and Event placeholders so that I get relevant Event details wrapped in the Alert context. I see no reason to make my logging solution work extra hard to establish the same correlation of the Events into Alerts that SecureSphere does automatically.
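As a sketch, a mixed template along these lines (the placeholders are the documented ones from the previous post; the labels are just for readability) gives the collector one line per violation with the Alert context repeated on each:

alert=${Alert.dn} alert_time=${Alert.createTime} event=${Event.dn} event_time=${Event.createTime} policy=${Rule.parent.displayName} user=${Event.struct.user.user} src=${Event.sourceInfo.sourceIP} dst=${Event.destInfo.serverIP}

With "Run on every event" checked, alert_time stays constant across the stream while event_time advances, which gives you the duration tracking described above.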

How do you manage your SecureSphere alerts and events?

Imperva’s DIY syslog format

I have had the fortune to support a few WAF installations; my preference is Imperva's WAF solution. For any security product, knowing what it's doing and what is going on within the product is as important as the actual security being provided.

One of the features of Imperva’s solution that I find tremendously useful in an enterprise setting, and possibly an MSSP as well,  is the ability to construct custom syslog formats for triggered alerts and system events in almost any format. I like to think of this as a Do-It-Yourself syslog formatter because the feed can be built and sent anywhere, using any number of options. More importantly, the feed can be bundled with specific policies or event types to provide limitless notification possibilities that often require professional services engagements to develop and implement.

In Imperva terminology, any policy or event can be configured to trigger an “Action Set” containing specific format options for among other things syslog messaging. If your logging platform (PLA) or SIEM requires a specific format, there’s a very strong chance that, with no more effort than building a policy, you can build the ${AlertXXX} or ${EventXXX} constructs necessary for your needs.

You can model the alerts to look like the Cisco PIX format, use ARCSight's CEF format, or make your own as I've done in this screenshot:

Basic Syslog Alert Format
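For a CEF-style feed, the template might look something like this (the vendor/product/version header fields and the severity value are illustrative, not official; the real field mapping is in the ARCSight integration guide mentioned below):

CEF:0|Imperva|SecureSphere|6.x|${Rule.parent.displayName}|${Alert.description}|5|rt=${Event.createTime} src=${Event.sourceInfo.sourceIP} spt=${Event.sourceInfo.sourcePort} dst=${Event.destInfo.serverIP} dpt=${Event.destInfo.serverPort} suser=${Event.struct.user.user}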

In addition to allowing a customized message format, Imperva's SecureSphere platform allows unique message formats and destinations to be specified at the policy and event level. For example, a "Gateway Disconnect" or "throughput of gateway IMPERVA-01 is 995 Mbps" message can be sent to the NOC's syslog server for response, while XSS or SQL Injection policies can be directed to a SOC or MSSP for evaluation. Additionally, the "Action Set" policies can be set up so that the SOC is notified on both of the messages above as well as security events.

The configuration of the custom logging format is very straightforward, using placeholders to build the desired message format. The document "Imperva Integration with ARCSight using Common Event Framework" provides a number of examples, including a walk-through for building a syslog alert for system events, standard firewall violations, and custom violations. The guide is directed at integration with ARCSight.

Depending on the version of Imperva SecureSphere you are running / evaluating, the alert aggregation behavior will differ. Newer versions (6.0.6+) better support SIEM platforms with updated alert details, while older versions push syslog events on the initial event only.

You can request a copy of Imperva Integration with ARCSight using Common Event Framework to get additional ideas on customizing your syslog feeds for your SIEM product.

Getting more from your WAF (Sensitive URL Tracking)

I have had the fortune to support a few Imperva installations, alongside other WAF solutions. I would like to illustrate one use for logs available on the Imperva platform that can be leveraged to augment website trend reports and monitor "exposure" on key URLs.

If you're not familiar with the Imperva platform, it is possible (as with other WAF vendors' products) to build custom policies that must match specific criteria and, upon triggering, feed data into various syslog feeds. The entire purpose of a WAF is to protect your web application from threats (although some argue this point), so it stands to reason there may be facets of a given web application that are more sensitive than others.

Take for example the check-out page for an online retailer, where the customer enters credit card data and confirms their billing information. This location of a web application might benefit from heightened logging by a Web Application Firewall under certain conditions, such as forced browsing, parameter tampering, XSS, server errors, etc. The application may be vulnerable to fraud activity, the business may want to keep tabs on who's accessing these URLs, or there may be some other risk criterion that can be measured using this approach.

Traditional webserver logs will provide client information such as user agent info, username, source IP, method, accessed URL, response time, response size, and response code. The logged data sits in the access log file on the specific web server by default, but this information is for the entire website.

The Imperva SecureSphere can provide some of the same information: username, IP, port, user-agent info, accessed URL, response size, response time, etc. In addition, the Imperva can track whether the session was authenticated, the correlated database query (if you have Imperva database protection deployed), SOAP information, and security details relevant to the specific policy. The kicker is that this can be sent to a syslog listener, in a format the admin configures to suit web trend tools or SIEM products, without engaging professional services.

I'm not advocating the replacement of web server logs for trend analysis, but I am suggesting the deployment of targeted logging for sensitive areas inside an application. That information can prove useful in a fraud capacity, a security monitoring capacity, or even an end-to-end troubleshooting capacity, where a WAF has visibility beyond traditional network tools at the front end of an N-tier web application. Deviations in response times, excessive response sizes, and unauthenticated access attempts to sensitive URLs are ideas that come to mind for leveraging the visibility a WAF can bring to the table.
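To make that concrete, a policy scoped to the checkout URL could carry an Action Set message like the sketch below. The user, source, and policy placeholders are the documented ones from earlier posts; the URL and response placeholders are stand-ins I've made up for the example, since the exact names vary by SecureSphere version:

checkout-watch|${Event.createTime}|${Event.struct.user.user}|${Event.sourceInfo.sourceIP}|${Rule.parent.displayName}|url=${Event.struct.url}|rtime=${Event.struct.responseTime}|rsize=${Event.struct.responseSize}

Check your version's placeholder reference for the real URL and response-detail names before copying this.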

Network Zoning – Be the Zone

A while back I started a series on Network Zoning and, like most procrastinating over-achievers, I got side-tracked (is that a self-induced form of ADD?)! I have had the pleasure of interacting with a number of folks on the zoning topic, and so I wanted to take a moment to tack on an additional concept that doesn't always get much attention but is very relevant to your network zoning design.

PERSPECTIVE and the impact of perspective.

Perspective in Network Zoning is a little like determining the perspective of an email without knowing the sender. If you've ever sent a witty email to someone who didn't share your sense of humor, you've been impacted by perspective. Please be careful not to confuse perspective with context. Perspective deals with a vantage point, while context is the surrounding details.

When zoning, the perspective of the actual components, users, and threats dictates a given device's zoning requirements. Theoretically, perspective defines the security posture.

Did that hurt? Just a little?

Sample Four-Zone Network

The configuration for each of the devices in this illustration is relative to its location in the network. Their perspective determines their configuration. Obvious, right? Please keep in mind that the External Firewall or Internal Firewall could easily be a router with ACLs.

Consider that the External Firewall in this illustration sees untrusted incoming traffic and passes only traffic based on rules for the more-trusted networks.

This "trusted" traffic of the External Firewall is actually UNTRUSTED TRAFFIC for the Internal Firewall! After all, this is the UNTRUSTED interface on the Internal Firewall.

The Internal firewall can be configured with the same blocking rules of the External Firewall in addition to new rules that are applicable to protecting the Internal Networks.

How much the security configuration differs between the internal and external firewalls is controlled in part by perspective: you could obviously implement the same overall security policy on both firewalls, but the expectation for what threats exist where will be based on perspective.
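To make this concrete, assume Cisco-style extended ACLs with made-up documentation addresses (192.0.2.10 is a DMZ web server, 10.0.0.0/8 is the internal network). The same style of rule appears on both firewalls, but each enforces it from its own vantage point:

! External Firewall - applied inbound on the Internet-facing (untrusted) interface
access-list 101 permit tcp any host 192.0.2.10 eq 443
access-list 101 deny   ip any any log

! Internal Firewall - its untrusted interface faces the DMZ, so it re-screens
! the External Firewall's "trusted" output and adds internal-only rules
access-list 102 permit tcp host 192.0.2.10 10.0.0.0 0.255.255.255 eq 1433
access-list 102 deny   ip any any log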

In the same light, your zones will have traffic or usage patterns and requirements relative to their placement in the network. External DNS servers will be configured and protected differently than Internal DNS servers. Network resources talking across zones will work differently than talking inside a zone. Your security practices and configuration will change accordingly. The configuration for a given zone will be driven by perspective – requirements will map out differently based on the perspective of users, threats, and policies.

Perspective will show up within the logs as well. When you review the logs on your devices, you will react differently to external threats to your internal servers logged on the actual internal server versus the External Firewall.

When you build out your network zones, be sure to keep perspective in mind. You may choose to overlap policies as a defense-in-depth practice, but please take care to define your zoning appropriately.

What’s your perspective?
Drop me a line and let me know!

PIX Logging Architecture is Back Online

Great news, after a brief recess, PIX Logging Architecture is back on the NET!

Be sure to check out the screenshots for features / selling points, and import the latest syslog-message database if you've not done that since installation.

PIX Logging Architecture v2.00 supports log messages from the following devices:

  • Cisco ASA (TESTED AND CONFIRMED)
  • Cisco PIX v6.x (TESTED AND CONFIRMED)
  • Cisco PIX v7.x (TESTED AND CONFIRMED)
  • Cisco FWSM 2.x (TESTED AND CONFIRMED)
  • Cisco FWSM 3.x (TESTED AND CONFIRMED)

PLA Documentation

PLA Screenshots and more PLA Screenshots

PLA: Latest PLA syslog message support

Support for alternate syslog daemons, such as rsyslog, can also be found here.

Welcome back, Kris!

PIX Logging Architecture

Tweaking PLA: Using rsyslog

PLA (PIX Logging Architecture) uses regular expressions (regex) to parse syslog messages received from Cisco firewalls and comes pre-configured to process the standard "syslogd" message format. Most current Linux distributions ship with rsyslog (able to log directly to MySQL), while some administrators prefer syslog-ng.

The installation documentation distributed with PLA assumes a familiarity with regex, so here you'll see how to tweak PLA to parse your rsyslogd log file.

Perl is used to parse through the syslog message looking for matches to the message formats described in the syslog_messages table in the pix database. The processing script pla_parsed contains a regex pattern that must be matched in order for the processing to occur. The applicable section is:

### PIX Firewall Logs
### DEFINE YOUR SYSLOG LAYOUT HERE!
###
$regex_log_begin = "(.*):(.*) (.*) (.*) (.*) (.*) (.*)";
$var_pixhost=3;
$var_pixmonth=4;
$var_pixdate=5;
$var_pixyear=6;
$var_pixtime=7;

Here, the variable regex_log_begin needs to match all the log information up to the PIX, ASA, or FWSM message code in order to extract date, time, and host from these messages. Take a look at the provided sample log entry: everything before the %PIX code (the date, time, and host portion) needs to be picked up by regex_log_begin, while the remainder is standard for Cisco firewalls:

Oct 21 23:59:23 fwext-dmz-01 Oct 21 2006 23:58:23: %PIX-6-305011: Built dynamic TCP translation from inside:1.1.1.1/2244 to outside:2.2.2.2/3387

Explaining the operation of regex and wildcards is beyond the scope of this article; however, numerous guides have been written to fill the void. In our case, adjusting the default regex to match rsyslog is straightforward after noting which characters match which pattern. Again, we're working with the basics of regex here – nothing fancy.

Take this sample rsyslog entry and notice the difference from the standard syslogd format:

Feb 21 10:59:32 Feb 21 2008 10:59:32 pix4us : %PIX-6-110001: No route to 1.1.1.1 from 3.4.5.6

rsyslog: Feb 21 10:59:32 Feb 21 2008 10:59:32 pix4us
syslogd: Oct 21 23:59:23 fwext-dmz-01 Oct 21 2006 23:58:23

Here, the rsyslog entry includes the date twice and then the hostname of the log source, versus the default format expected by pla_parsed of date hostname date. The original regex is set to pick up the first time entry's minutes and seconds and then the next 5 words/entries separated by spaces:

$regex_log_begin = "(.*):(.*) (.*) (.*) (.*) (.*) (.*)";

 Oct 21 23:59:23 fwext-dmz-01 Oct 21 2006 23:58:23

In order to process rsyslog, this will have to be changed. The initial (.*):(.*) is used to set a starting point in the syslog message string. Since this new rsyslog format includes two date entries before the host name, the following can be used to allow pla_parsed to "see" the new syslog message string:

$regex_log_begin = "(.*):(.*) (.*) (.*) (.*) ((.*):(.*):(.*)) (.*)";

Feb 21 10:59:32 Feb 21 2008 10:59:32 pix4us

The regex starts out the same, but if you compare the capture groups you will notice the location of the information needed by pla_parsed to determine date, time, and host has moved. This time we used "(.*):(.*)" and "((.*):(.*):(.*))" to force a match on the time elements.

As a result of this change, the variables listed below the regex pattern must be modified to tell pla_parsed which (.*) contains which element:

$regex_log_begin = "(.*):(.*) (.*) (.*) (.*) ((.*):(.*):(.*)) (.*)";

$var_pixhost=7;
$var_pixmonth=3;
$var_pixdate=4;
$var_pixyear=5;
$var_pixtime=6;

The numbering happens left to right, one number per capture group. The ()'s around the entire time entry group it together so it counts as one match/entity, the sixth variable. This same approach of keying off the timestamping can be applied to pla_parsed in order to allow processing of syslog-ng, ksyslogd, or any other syslog message format.
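If you're unsure which (.*) ends up holding which field, a throwaway Perl script like this (a debugging sketch, not part of PLA) prints every capture group against a sample line so you can set the $var_pix* variables with confidence:

#!/usr/bin/perl
use strict;
use warnings;

# Debugging sketch: dump what each capture group of the PLA pattern
# picks up, so the $var_pix* indexes can be set correctly.
my $line  = 'Feb 21 10:59:32 Feb 21 2008 10:59:32 pix4us : %PIX-6-110001: No route to 1.1.1.1 from 3.4.5.6';
my $regex = '(.*):(.*) (.*) (.*) (.*) ((.*):(.*):(.*)) (.*)';

if (my @groups = $line =~ /$regex/) {
    for my $i (0 .. $#groups) {
        printf "group %d: %s\n", $i + 1, defined $groups[$i] ? $groups[$i] : '';
    }
}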

Need help with a different format? Have problems getting your PIX logs loaded? Paste in a sample message from your syslog server (IP addresses sanitized, please) in a comment below.

PLA: Setting up the Database

Updated installation documentation for PLA (PIX Logging Architecture).

If the PLA documentation leaves you looking for the DUMMIES guide, check out my mini-article on BetterTechInfo regarding PLA Database Setup.

I am getting involved in a new business venture and decided to publish some early projects / work to help out folks using PLA.

Be advised the construction dust is everywhere!

Network Zoning – In the Zone

Traditional network security featured two or three zones, though whether they were thought of in terms of zones may depend on the organization. Certainly everyone is familiar with the DMZ; however, modern approaches to network security don't stop at the perimeter. This is where these new zones come in.

In the early days of network security, the consensus was: place a firewall at the external touch-points of your network with two or more network interfaces. If you ran any public services, locate them inside something called a DMZ or screened-network and restrict access to/from those devices for internal systems. 

This should sound familiar, welcome to Network Zoning. The post-modern era of network security takes this DMZ approach and marries it with the principle of trust/privacy, so that the internal network can be carved up into segments providing increased security, compartmentalization, and privacy for users, services, servers, customers, etc.

In the earlier posts I referenced the idea of grouping like-functions of an internal network by vlan and/or IP subnet. This segmenting, grouping, partitioning, or zoning works just like the DMZ approach of traditional network security with the exception that the rules for access will be different and the “firewall” is internal. Here comes the tricky part.

Implementing the segmentation part can get complicated. I recommend vlans and different IP Address ranges as a general architectural practice, but it is possible with modern technology to insert transparent firewalls (let firewalls be firewalls) to facilitate rapid firewalling of network segments without having to implement vlans and IP Addresses.

These zones form boundaries inside the network: if you implement traditional firewalls, they represent layer 3 boundaries, and if you implement transparent firewalls, they represent layer 2 boundaries. The outside boundaries should be thought of as untrusted, DMZ-like boundaries should be untrusted or semi-trusted, and depending on the user community the internal boundaries may be trusted.

These boundaries isolate trusted, untrusted, and semi-trusted devices and services from one another and form what is referred to as a trust boundary. Trust boundaries can then be used to form privacy boundaries, where decisions are made about the trust implied within these various segments.

Getting back to the zoning concept, one design might feature an external firewall or set of firewalls (if redundancy is preferred) that provide the first barrier between the Internet and Internal users. Assuming the organization provides DNS, Email, Web, or other publicly available services, this firewall will provide an external DMZ as well. This will create one untrusted zone and one semi-trusted zone that will interface with an internal firewall/router acting as a semi-trusted barrier (zone) where the actual trusted zone(s) live.

The firewalls in this diagram can be routers, switch routers, firewalls, or a combination. The key is selecting hardware/software that will support the particular environment and needs of the organization. Firewalls come in all flavors, colors, and sizes these days and many do more than just filtering of packets, including deep packet inspection, IDS/IPS, load balancing, malware scanning, and web content filtering. Most routers on the market today can provide packet filtering / inspection in addition to traditional routing functions with minimal performance implications.

This specific architecture addresses minimal internal and external security from a zoning perspective needed to produce a trust barrier where internal systems should be protected from external (untrusted) systems. If the Internal Servers segment is firewalled from the Corporate LAN segment in this diagram, then four (4) security zones will form a privacy boundary in addition to the trust boundary where servers are isolated from users and outsiders are isolated from insiders.

Cisco IPS Event Viewer Database Hacking

Ever wished you could get different snapshots from the Cisco IEV tool? Has management ever asked what went on over the last 30 days when your management platform can't help you?

I found myself needing to provide a 30-day report from a Cisco IDSM2 blade. After finding no built-in option in the HTTPS servlet on the IDSM itself and nothing immediately available in the Event Viewer, I began to look around.

Cisco provides the freely available IPS Event Viewer (IEV) for their IPS/IDS products that makes use of Java to load alert data into MySQL tables for display in the stand-alone software. If you want a full-blown reporting engine and monitoring tool for Cisco IDS/IPS, you'll need to look at MARS and then look elsewhere. (Anyone I work with will tell you I'm not fond of CS-MARS.)

I found the MySQL Admin widget (mysqladmin.exe) and looked at the databases and tables installed by IEV. I've spent a fair amount of time with SQL and MySQL databases, so I looked around the table structures to see if there was another option.

The database names and table names, along with their configuration, are viewable in the MySQLAdmin GUI. You could also do this with show commands at the mysql prompt: show databases, show tables, describe <table_name>.

Using the field names, I constructed the following query:

select FROM_UNIXTIME(receive_time/1000,'%c-%d-%Y') as date, count(sig_id) as counted, sig_name from event_realtime_table group by sig_name, sig_id, date order by date, counted;

Note the receive_time and event_time fields are Unix timestamps in milliseconds, not seconds. In the example above, I compensated by dividing by 1000, because I only needed calendar days. This results in the following response:

+------------+---------+-------------------------------------------+
| date       | counted | sig_name                                  |
+------------+---------+-------------------------------------------+
| 11-06-2007 |       1 | FTP Authorization Failure                 |
| 11-06-2007 |       1 | Storm Worm                                |
| 11-06-2007 |       1 | DNS Tunneling                             |
| 11-06-2007 |       2 | TCP Segment Overwrite                     |
| 11-06-2007 |      44 | TCP SYN Host Sweep                        |
| 11-07-2007 |       1 | SMB Remote Srvsvc Service Access Attempt  |
| 11-07-2007 |       1 | SSH CRC32 Overflow                        |
| 11-07-2007 |       2 | MS-DOS Device Name DoS                    |
| 11-07-2007 |       2 | FTP PASS Suspicious Length                |
| 11-07-2007 |       3 | HTTP CONNECT Tunnel                       |
+------------+---------+-------------------------------------------+

The event_realtime_table only contains the most recent data; depending on your setup this may be 1 day, 5 days, or 30 days. In my case, I only have 24 hours worth of data in the realtime table and have to look elsewhere for the prior 29 days.
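One option is to query the archive tables directly; they appear to share the realtime schema, so the same query should work with the table name swapped. The table name below is a hypothetical stand-in; check your own IEV database for the actual archive table names:

select FROM_UNIXTIME(receive_time/1000,'%c-%d-%Y') as date, count(sig_id) as counted, sig_name from event_archive_table group by sig_name, sig_id, date order by date, counted;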

If you've configured any archiving, you will need to tap into those extra tables in order to get the full 30 days. Rather than query each archive table separately, I elected to export all the tables to a single CSV file and do the parsing in Linux. Using the commands below, I created a file that contained the receive_time (MM/DD/YYYY), severity, sig_id, and sig_name:

tr -d '\47' < /tmp/exported.csv | awk 'BEGIN{FS=","}{print strftime("%x", $6*.001)","$3","$9","$10}' > file.txt

This gave me a table of dates, severity, signature id’s, and signature names that I can use as needed. From here I used awk to mangle the columns and pre-format the results for loading into Excel as a chart:

awk 'BEGIN{FS=","}{print $1,$2}' file.txt | sort -rn | uniq -c | awk '{print $3","$2","$1}' | sort > excel_ready.csv

This results in a comma delimited file that can be loaded into Excel and used to create charts or graphs as needed. The commands can be scripted to run every 30 days on archived files, if necessary.

     75 09/30/07,0,3030,TCP SYN Host Sweep
      8 09/30/07,0,6250,FTP Authorization Failure
      3 09/30/07,1,2100,ICMP Network Sweep w/Echo
      4 09/30/07,1,3002,TCP SYN Port Sweep
      7 09/30/07,2,6066,DNS Tunneling
      2 09/30/07,3,1300,TCP Segment Overwrite
      3 09/30/07,3,3251,TCP Hijack Simplex Mode
      4 10/01/07,0,1204,IP Fragment Missing Initial Fragment

You could easily import other fields; the ones of most interest to me were:

  • field 3         Alert Severity (0-4) [Informational – High]
  • field 4         Sensor name (if you have more than one sensor)
  • field 6         Timestamp epoch using milliseconds instead of seconds
  • field 9         Signature ID
  • field 10       Signature Name
  • field 14       Attacker IP Address
  • field 15       Attacker Port
  • field 17       Victim IP Address
  • field 18       Victim Port

About the Linux commands:

If these commands are new, or you'd like to understand more about using *nix tools to parse text, look here and here to get started, Google around, or go to your favorite bookstore and buy an O'Reilly book.

The tr command used above removes the single quotes wrapped around each data element during the database export. This is done using the ASCII code for the single-quote character (octal 47, hence the '\47'). This step is necessary before the data formatting in the awk command.

The awk command uses several arguments to perform formatting of the data. First, the {FS} element is used to tell awk to use a comma as the field separator instead of whitespace, which is the default. Once awk understands how to break up the fields, I format the receive_time field. awk sees the receive_time field as the sixth field and assigns it $6; the other field elements are addressed in the same sequential way. The print action tells awk to display the selected fields as output. I used strftime to convert the Unix timestamp back to human-readable time. There is a caveat here: you must account for the millisecond timestamping versus the standard "seconds from epoch" of traditional timestamps. Each operation in awk is separated by {}'s.

I used standard sort and uniq to perform sorting and counting functions on the data I parsed using awk.
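Putting it all together, here is the same pipeline broken out with comments (the paths and field positions match the examples above):

# strip the single quotes (octal 47) wrapped around fields during export
tr -d '\47' < /tmp/exported.csv |
  # split on commas; print date (epoch ms -> %x), severity, sig_id, sig_name
  awk 'BEGIN{FS=","}{print strftime("%x", $6*.001)","$3","$9","$10}' > file.txt

# count duplicate date/severity pairs, then reorder the columns for Excel
awk 'BEGIN{FS=","}{print $1,$2}' file.txt | sort -rn | uniq -c |
  awk '{print $3","$2","$1}' | sort > excel_ready.csv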