Blog Archives

Three reasons why IPS and WAF won’t converge anytime soon

Continuing on the IPS vs WAF theme, one might consider the notion that WAF will just converge with IPS. I wanted to point out three reasons why I think this will NOT happen anytime soon. The reasons center around demand, performance, and implications, and feed into each other.

It’s akin to Chris Rock’s joke about driving a car with your feet: You can drive your car with your feet if you want to, that don’t make it a good __ idea!
Read more…

WAF vs IPS (or Four Things Your IPS Can’t Do)

I see this often and I am always amused at the topic. I have worked with IDS/IPS for 8 years, so I knew IPS back when it was just a flavor of IDS that no one wanted to enable for fear of blocking access to users and customers. I chuckle at the thought of WAF being a glorified IPS. My how times have changed.
Here are four things that your WAF can do that your IPS can’t. I tried to keep this vendor agnostic.

Please feel free to pile on or comment, just no flames please!

Web Application Firewalls, as the name implies, work with web applications almost exclusively. Most WAFs are not best-of-breed traditional firewalls and should not be implemented in place of a traditional network firewall. Typical WAF deployments feature SSL decryption of web application traffic and blocking of web-based threats after the WAF reassembles each web session. This is possible because the WAF operates at the application layer, where HTML, XML, cookies, JavaScript, ActiveX, client requests, and server responses live.
Read more…

Five Classic Web Attacks

While reading through my blog inbox and writing up my 2010 Wishlist for work, I thought I’d drop a quick post to highlight five web security ‘problem areas’ that still exist after at least a decade of patches, pleas, and regulatory requirements.

  • SQL Injection
  • Hack the Web Server
  • Cross Site Scripting
  • Cookie Tampering
  • Session Hijacking

I often find myself explaining what these are and providing examples, in order to garner support for remediation.
Read more…


I wanted to cover some WAF topics I haven’t seen covered much. Most WAF vendors talk about the security their product provides in terms of blocking attacks. I would like to delve into this blocking behavior as well as mention some ideas for alternative uses for your WAF through its interactions with web clients.

Web Application Firewalls are interesting bits of technology. Depending on the product and deployment method you choose, they can transparently protect your web infrastructure using various protections by generating blocks when threats are identified. Depending on the product, they can Vulcan mind meld with your Apache instance, live as another F5 device in your network, take over a slot in your XBeam, or live life as a network appliance inside your datacenters.

This intelligent device COULD interact with the client in additional ways outside generating BLOCKs. Read more…

Imperva Placeholders

I had an email asking what placeholders I use for logging platform integration. Rather than reply in a comment or email, I thought I’d just make a post out of the response.

Looking at placeholders, here are some of the ones I use the most:

  • ${Alert.dn} – this is the alert id
  • ${Alert.createTime} – this is the time the ALERT was created (note this can be misleading)
  • ${Alert.description} – this is bound to the alert, so you may see “Distributed” or “Multiple” appended due to aggregation of events
  • ${Event.dn} – this is the event (violation) id
  • ${Event.createTime} – this is the time the EVENT was created (this is when the event happened)
  • ${Event.struct.user.user} – this is the username from a web or database action
  • ${Event.sourceInfo.sourceIP}
  • ${Event.sourceInfo.sourcePort}
  • ${Event.sourceInfo.ipProtocol}
  • ${Event.destInfo.serverIP}
  • ${Event.destInfo.serverPort}
  • ${Event.struct.networkDirection} – which way is the traffic flowing that triggered the event?
  • ${Rule.parent.displayName} – this is the name of the Policy that was triggered

There are other placeholders you can leverage, but these are the core I start with. I like these because they’re used on the web gateway AND the database gateway. This lets me have a consistent intelligence feed to my log monitoring platform and my SIEM product.

The trick here is that I can see how many events roll up underneath a single Alert. In the syslog feed, I can track the duration of an attack as well as tell you when I last saw the activity, because I track Alert.createTime and Event.createTime.
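To make that concrete, here is a rough Python sketch of how a log collector could roll events up by alert id and compute attack duration. The pipe-delimited layout and sample lines are my own invention, not an Imperva format; they simply stand in for a feed built from the placeholders above.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical pipe-delimited feed built from the placeholders above:
# Alert.dn | Alert.createTime | Event.createTime | Rule.parent.displayName
FEED = [
    "4711|2010-01-05 09:00:12|2010-01-05 09:00:12|SQL Injection",
    "4711|2010-01-05 09:00:12|2010-01-05 09:14:55|SQL Injection",
    "4711|2010-01-05 09:00:12|2010-01-05 10:02:03|SQL Injection",
]

def attack_windows(lines):
    """Group events under their alert id; report first seen, last seen, count."""
    events = defaultdict(list)
    for line in lines:
        alert_id, _alert_ts, event_ts, _policy = line.split("|")
        events[alert_id].append(datetime.strptime(event_ts, "%Y-%m-%d %H:%M:%S"))
    return {aid: (min(ts), max(ts), len(ts)) for aid, ts in events.items()}

for aid, (first, last, count) in attack_windows(FEED).items():
    print(f"alert {aid}: {count} events spanning {last - first}")
```

Because each event line carries both timestamps, the collector can answer “is this attack still going on?” without re-querying the manager.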

There are lots of options for how you build your syslog feed:

  • You may be interested in the response time of the query or web page
  • Perhaps the response size is of concern to you
  • You may treat threats differently depending on where they occur in a database table or URL
  • You may be interested in the SOAP action or request

Last but not least, in addition to security events, you can also push system-level events in the same manner using different placeholders.

  • Configuration events can be syslog’d, complete with the user making the change
  • Gateway disconnect messages can be sent via syslog (SNMP might be better, but you need to load the custom OIDs)
  • Excessive CPU or traffic levels can be sent via syslog

How are you using placeholders?

Imperva: Alerts and Events

I received some emails overnight on the Imperva DIY Syslog posting asking when to use the alert placeholders versus the event placeholders.

For anyone not familiar with the Imperva SecureSphere platform, the system has a handy feature that aggregates, on the SecureSphere management server, the events detected by the gateways. This works whether you’re using the web or database gateways, but today I want to focus on the relationship between the data coming from the gateways and the aggregated data on the manager. I’ll let ImperViews get into the other details – you can read more in the Imperva documentation.

The first thing you have to take note of is the Imperva hierarchy of violations/events and alerts. When Imperva detects a condition that meets the criteria of a policy (whether that’s correlation, signature, profile, custom, etc.), a violation is triggered on the gateway and fed to the management server. Everything in the management server for reporting and monitoring builds off this violation/event detail from the gateway. The gateway is where the enforcement and detection take place, so that should make sense – this is how we know the gateway is taking action on our behalf!

Assuming you haven’t disabled aggregation in the SecureSphere settings, each violation is aggregated into an alert. There are several criteria the management server uses when aggregating a violation, so you’ll want to check the documentation for your version. The basic idea is that the SecureSphere manager will aggregate similar violations against a server group, an IP address, a URL, a policy, or some combination thereof within a 12-hour window. An alert in SecureSphere will have at least one violation/event tied to it, but depending on your aggregation settings it may have more.
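The aggregation idea can be pictured roughly like this. To be clear, this is a simplified illustration of the concept, not SecureSphere’s actual algorithm – check your version’s documentation for the real aggregation criteria.

```python
from datetime import datetime, timedelta

WINDOW = timedelta(hours=12)  # the 12-hour aggregation window described above

def aggregate(violations):
    """Fold similar violations into alerts, keyed on server group, IP,
    URL, and policy, within a rolling 12-hour window."""
    alerts = {}  # aggregation key -> open alert
    out = []
    for v in violations:
        key = (v["server_group"], v["ip"], v["url"], v["policy"])
        a = alerts.get(key)
        if a and v["time"] - a["first_seen"] <= WINDOW:
            a["count"] += 1          # same alert, one more violation
        else:
            a = {"first_seen": v["time"], "count": 1}  # new alert
            alerts[key] = a
            out.append(a)
    return out
```

Feeding three similar violations in, where the third lands 13 hours after the first, would produce two alerts: the first carrying two violations, the second starting a fresh window.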


So! When you push security events to an external log monitor, you have to decide whether you want just the initial Alert information or every violation as it occurs. If you build the Action Interface using ALERT placeholders, you’ll only get the Alert data, with none of the details from the underlying violation/event stream. This can be problematic if you’re trying to figure out whether something is still going on, because, remember, SecureSphere aggregates violations under a single Alert for up to 12 hours!

In addition to using the correct placeholders, you also have to enable the “Run on every event” checkbox in the Action Interface/Action Set.

I tend to mix the Alert and Event placeholders so that I get relevant Event details wrapped in the Alert context. I see no reason to make my logging solution work extra hard to establish the same correlation of the Events into Alerts that SecureSphere does automatically.

How do you manage your SecureSphere alerts and events?

Imperva’s DIY syslog format

I have had the fortune to support a few WAF installations; my preference is Imperva’s WAF solution. For any security product, knowing what it’s doing and what is going on within the product is as important as the actual security being provided.

One of the features of Imperva’s solution that I find tremendously useful in an enterprise setting, and possibly for an MSSP as well, is the ability to construct custom syslog formats for triggered alerts and system events in almost any format. I like to think of this as a do-it-yourself syslog formatter, because the feed can be built and sent anywhere, using any number of options. More importantly, the feed can be bundled with specific policies or event types to provide limitless notification possibilities that often require professional services engagements to develop and implement.

In Imperva terminology, any policy or event can be configured to trigger an “Action Set” containing specific format options for, among other things, syslog messaging. If your logging platform (PLA) or SIEM requires a specific format, there’s a very strong chance that, with no more effort than building a policy, you can build the ${AlertXXX} or ${EventXXX} constructs necessary for your needs.

You can model the alerts to look like the Cisco PIX format, use ARCSight’s CEF format, or make your own, as I’ve done in this screenshot:

Basic Syslog Alert Format


In addition to allowing a customized message format, Imperva’s SecureSphere platform allows unique message formats and destinations to be specified at the policy and event level. For example, a “Gateway Disconnect” or “throughput of gateway IMPERVA-01 is 995 Mbps” message can be sent to the NOC’s syslog server for response, while XSS or SQL Injection policies can be directed to a SOC or MSSP for evaluation. Additionally, the “Action Set” policies can be set up so that the SOC is notified on both of the messages above as well as security events.

The configuration of the custom logging format is very straightforward, using placeholders to build the desired message format. The document “Imperva Integration with ARCSight using Common Event Framework” provides a number of examples, including a walk-through for building a syslog alert for system events, standard firewall violations, and custom violations. The guide is directed at integration with ARCSight.
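As a concrete illustration, an Action Set message built from the placeholders discussed earlier might look something like this. The pipe-delimited layout is a hypothetical example of my own, not a format any particular SIEM requires:

```
WAF|${Alert.dn}|${Alert.createTime}|${Event.createTime}|${Rule.parent.displayName}|${Event.sourceInfo.sourceIP}:${Event.sourceInfo.sourcePort}|${Event.destInfo.serverIP}:${Event.destInfo.serverPort}|${Event.struct.user.user}
```

Because the field order and delimiters are entirely up to you, you can match whatever parser your logging platform already ships with.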

Depending on the version of Imperva SecureSphere you are running / evaluating, the alert aggregation behavior will differ. Newer versions (6.0.6+) better support SIEM platforms with updated alert details, where older versions push syslog events on the initial event only.

You can request a copy of Imperva Integration with ARCSight using Common Event Framework to get additional ideas on customizing your syslog feeds for your SIEM product.

Getting more from your WAF (Sensitive URL Tracking)

I have had the fortune to support a few Imperva installations, alongside other WAF solutions. I would like to illustrate one use for logs available on the Imperva platform that can be leveraged to augment website trend reports and monitor “exposure” on key URLs.

If you’re not familiar with the Imperva platform, it is possible (as with other WAF vendors’ products) to build custom policies that must match specific criteria and, when triggered, feed data into various syslog feeds. The entire purpose of a WAF is to protect your web application from threats (although some argue this point), so it stands to reason there may be facets of a given web application that are more sensitive than others.

Take for example the check-out page for an online retailer, where the customer enters credit card data and confirms their billing information. This part of a web application might benefit from heightened logging by a Web Application Firewall under certain conditions: forced browsing, parameter tampering, XSS, server errors, etc. The application may be vulnerable to fraud activities, the business may want to keep tabs on who’s accessing these URLs, or there may be some other risk criteria that can be measured using this approach.

Traditional web server logs will provide client information such as user-agent info, username, source IP, method, accessed URL, response time, response size, and response code. By default the logged data sits in the access log file on the specific web server, and it covers the entire website.

The Imperva SecureSphere can provide some of the same information: username, IP, port, user-agent info, accessed URL, response size, response time, etc. In addition, it can track whether the session was authenticated, the correlated database query (if you have Imperva database protection deployed), SOAP information, and security details relevant to the specific policy. The kicker is that this can be sent, in a format configured by the admin, to a syslog listener in a layout supported by web trend tools or SIEM products without engaging professional services.

I’m not advocating the replacement of web server logs for trend analysis, but I am suggesting the deployment of targeted logging for sensitive areas inside an application where this information would prove useful in a fraud capacity, a security monitoring capacity, or even an end-to-end troubleshooting capacity, where a WAF would have visibility beyond traditional network tools at the frontend of an N-tier web application. Deviations in response times, excessive response sizes, and unauthenticated access attempts to sensitive URLs are ideas that come to mind for leveraging the visibility a WAF can bring to the table.
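Those three deviations can be checked with very little code on the collector side. Here is a sketch, where the feed layout, field names, and thresholds are all my own invention rather than any product’s format:

```python
# Hypothetical feed fields: url | response_time_ms | response_size | authenticated
FEED = [
    "/checkout|120|4096|yes",
    "/checkout|135|4210|yes",
    "/checkout|2400|4100|yes",   # slow response
    "/checkout|140|900000|no",   # oversized response, unauthenticated
]

def flag(lines, max_ms=1000, max_size=100_000):
    """Flag slow responses, oversized responses, and unauthenticated hits."""
    findings = []
    for line in lines:
        url, ms, size, auth = line.split("|")
        if int(ms) > max_ms:
            findings.append((url, "slow response"))
        if int(size) > max_size:
            findings.append((url, "excessive response size"))
        if auth != "yes":
            findings.append((url, "unauthenticated access"))
    return findings

for url, reason in flag(FEED):
    print(f"{url}: {reason}")
```

Sensible thresholds would of course come from baselining the sensitive URL first, not from constants like these.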

WAFing it up

I should disclose up front that I derive my living today supporting WAF technologies for a large corporation, and so it will come as no surprise that I have a few opinions on the use of WAF technology and in general how to go about protecting web applications.

If you’re a purist and feel adamantly for or against Web Application Firewalls, I would urge you to consider the roots of defense-in-depth – just like the spoon in The Matrix, there is no silver bullet. OWASP‘s concepts are as close as we’ll ever get to that silver bullet.

Secure coding won’t get you out of every vulnerability, and neither will a WAF, if for no other reason than that the sheer complexity of the equipment needed to stand up web-enabled services introduces too many interdependencies to believe every coder, developer, and vendor got everything right and there will never be a problem. If you disagree with that, put down the vendor Kool-Aid now before it’s too late.

Positive / Negative Security Models
Good grief.  Techie speak if ever there was any. Reminds me of the James Garner movie Tank, where little Billy is exposed to negative feedback in order to arrest his “bad” behavior. In my house, that’s called a spanking and you get one when it’s appropriate. My kids know what a spanking is and so does anyone reading this thread. Without googling, name two WAF products based on each of these Security Models: Positive & Negative — It’s okay, I’ll wait for you.

And we’re back…
On the topic of Security Models, I tend to think it takes a combination of protective technologies to provide any actual risk/threat mitigation. I would personally like to see developers take advantage of a WAF’s ability to see how an application behaves. Most developers don’t think in terms of which web page does what; instead they’re working with APIs and objects. This is unfortunate because the rest of the world sees these applications as URLs. The WAF can be that bridge to the developers. A WAF could, in theory, help the developer ensure that a specific sequence of events happens before a transaction is processed, or prompt the client before transactions occur in specific instances to avoid CSRF.
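That sequence-enforcement idea could be sketched like this. This is my own simplified illustration of the concept, with invented page names, not a feature of any particular WAF product:

```python
# Hypothetical page flow for an online checkout; names are my own invention.
REQUIRED_SEQUENCE = ["/cart", "/billing", "/confirm"]
TRANSACTION_URL = "/submit-order"

def allow_request(pages_visited, requested_url):
    """Gate the transaction URL on the session having walked the expected path."""
    if requested_url != TRANSACTION_URL:
        return True  # only the transaction page is sequence-checked
    it = iter(pages_visited)
    # subsequence check: each required step must appear, in order
    return all(step in it for step in REQUIRED_SEQUENCE)

print(allow_request(["/cart", "/billing", "/confirm"], "/submit-order"))  # True
```

A request that jumps straight to the transaction URL, as a CSRF-planted link would, fails the check because the session never walked the earlier steps.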

To bring things back around to my original point: I do agree that the more complex a web application is, and the more servers required to make a service available online, the more vulnerable and difficult to secure that application or service will be. I’m not sure whose law that is, but I’m sure one exists – complexity breeds more complexity.

No surprise there: if you are protecting a complex asset, then it will be high maintenance – I said to put down the Kool-Aid, it’s for your own good – nothing is free!

Network Zoning – Be the Zone

A while back I started a series on Network Zoning and, like most procrastinating over-achievers, I got side-tracked (is that a self-induced form of ADD?)! I have had the pleasure of interacting with a number of folks on the zoning topic, so I wanted to take a moment to tack on an additional concept that doesn’t always get much attention but is very relevant to your network zoning design.

PERSPECTIVE and the impact of perspective.

Perspective in Network Zoning is a little like determining the perspective of an email without knowing the sender. If you’ve ever sent a witty email to someone who didn’t share your sense of humor, you’ve been impacted by perspective. Please be careful not to confuse perspective with context: perspective deals with a vantage point, while context is the surrounding details.

When zoning, the perspective of the actual components, users, and threats dictates a given device’s zoning requirements. Theoretically perspective actually defines the security posture.

Did that hurt? Just a little?

Sample Four-Zone Network

The configuration of each device in this illustration is relative to its location in the network; its perspective determines its configuration. Obvious, right? Please keep in mind that the External Firewall or Internal Firewall could easily be a router with ACLs.

Consider that the External Firewall in this illustration sees untrusted incoming traffic and passes only traffic based on rules for the more-trusted networks.

This “trusted” traffic of the External Firewall is actually UNTRUSTED TRAFFIC for the Internal Firewall! After all this is the UNTRUSTED interface on the Internal Firewall.

The Internal firewall can be configured with the same blocking rules of the External Firewall in addition to new rules that are applicable to protecting the Internal Networks.
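To make that concrete, here is roughly what mirrored rules might look like if both firewalls were routers with ACLs. The syntax is Cisco-style and the addresses are illustrative placeholders, not a recommended configuration:

```
! External Firewall: Internet-facing, permits only the published web service
access-list 101 permit tcp any host 203.0.113.10 eq 443
access-list 101 deny   ip  any any log

! Internal Firewall: the DMZ web server is now the UNTRUSTED side,
! permitted only to reach its database tier
access-list 102 permit tcp host 203.0.113.10 host 10.0.0.20 eq 1433
access-list 102 deny   ip  any any log
```

Both devices end with the same deny-and-log rule, but what each one considers "untrusted" differs entirely because of where it sits.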

How much the security configuration differs between the internal and external firewalls is controlled in part by perspective: you could implement the same overall security policy on both firewalls, but the expectation of what threats exist where will be based on perspective.

In the same light, your zones will have traffic or usage patterns and requirements relative to their placement in the network. External DNS servers will be configured and protected differently than Internal DNS servers. Network resources talking across zones will work differently than talking inside a zone. Your security practices and configuration will change accordingly. The configuration for a given zone will be driven by perspective – requirements will map out differently based on the perspective of users, threats, and policies.

Perspective will show up within the logs as well. When you review the logs on your devices, you will react differently to external threats against your internal servers when they are logged on the actual internal server versus on the External Firewall.

When you build out your network zone, be sure to keep perspective in mind. You may choose to overlap policies as a defense in depth practice, but please take care to define your zoning appropriately.

What’s your perspective?
Drop me a line and let me know!