WAFing it up

I should disclose up front that I make my living today supporting WAF technologies for a large corporation, so it will come as no surprise that I have a few opinions on the use of WAF technology and, in general, on how to go about protecting web applications.

Purists.
If you're a purist and are adamantly for or against Web Application Firewalls, I would urge you to consider the roots of defense-in-depth: just like the spoon in The Matrix, there is no silver bullet. OWASP's concepts are as close as we'll ever get to that silver bullet.

Secure Coding won't get you out of every vulnerability, and neither will a WAF, if for no other reason than this: the sheer complexity of the equipment needed to stand up web-enabled services introduces too many interdependencies to believe that every coder, developer, and vendor got everything right and that there will never be a problem. If you disagree with that, put down the Vendor Kool-Aid now, before it's too late.

Positive / Negative Security Models
Good grief.  Techie speak if ever there was any. Reminds me of the James Garner movie Tank, where little Billy is exposed to negative feedback in order to arrest his “bad” behavior. In my house, that’s called a spanking and you get one when it’s appropriate. My kids know what a spanking is and so does anyone reading this thread. Without googling, name two WAF products based on each of these Security Models: Positive & Negative — It’s okay, I’ll wait for you.

And we’re back…
On the topic of Security Models, I tend to think it takes a combination of protective technologies to provide any actual risk/threat mitigation. I would personally like to see developers take advantage of a WAF's ability to see how an application behaves. Most developers don't think in terms of which web page does what; instead, they're working with APIs and objects. This is unfortunate because the rest of the world sees these applications as URLs. The WAF can be that bridge to the developers. A WAF could, in theory, help the developer ensure that a specific sequence of events happens before a transaction is processed, or prompt the client before transactions occur in specific instances to avoid CSRF.
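
As a rough illustration of what that kind of sequence enforcement can look like, here is a sketch in ModSecurity 2.x rule syntax. The URLs, the session cookie name, and the rule IDs are all hypothetical, and the state-tracking details will differ from product to product:

# Sketch only: tie WAF state to the application's session cookie (name assumed),
# then require that the "review" step was seen before a "transfer" is accepted.
SecRule REQUEST_COOKIES:app_session ".+" \
    "id:900300,phase:2,pass,nolog,setsid:%{REQUEST_COOKIES.app_session}"

# Mark the session once the review page has been requested.
SecRule REQUEST_URI "@streq /account/review" \
    "id:900301,phase:2,pass,nolog,setvar:session.reviewed=1"

# Deny a transfer request when the review step never happened in this session.
SecRule REQUEST_URI "@streq /account/transfer" \
    "chain,id:900302,phase:2,deny,status:403,log,msg:'Transfer attempted before review step'"
    SecRule &SESSION:reviewed "@eq 0"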

To bring things back around to my original point: I do agree that the more complex a web application is, and the more servers required to make a service available online, the more vulnerable and difficult to secure that application or service will be. I'm not sure whose law that is, but I'm sure one exists: complexity breeds more complexity.

No surprise there: if you are protecting a complex asset, then it will be high maintenance. I said to put down the Kool-Aid, it's for your own good. Nothing is free!

Network Zoning – Be the Zone

A while back I started a series on Network Zoning and, like most procrastinating over-achievers, I got side-tracked (is that a self-induced form of ADD?). I have had the pleasure of interacting with a number of folks on the zoning topic, and so I wanted to take a moment to tack on an additional concept that doesn't always get much attention but is very relevant in your network zoning design.

PERSPECTIVE and the impact of perspective.

Perspective in Network Zoning is a little like determining the perspective of an email without knowing the sender. If you've ever sent a witty email to someone who didn't share your sense of humor, you've been impacted by perspective. Please be careful not to confuse perspective with context. Perspective deals with a vantage point, while context is the surrounding details.

When zoning, the perspective of the actual components, users, and threats dictates a given device's zoning requirements. In theory, perspective actually defines the security posture.

Did that hurt? Just a little?

Sample Four-Zone Network

The configuration for each of the devices in this illustration is relative to their location in the network. Their perspective determines their configuration. Obvious, right? Please keep in mind that the External Firewall or Internal Firewall could easily be a router with ACLs.

Consider that the External Firewall in this illustration sees untrusted incoming traffic and passes traffic only according to the rules defined for the more-trusted networks.

This "trusted" traffic from the External Firewall is actually UNTRUSTED TRAFFIC for the Internal Firewall! After all, it arrives on the UNTRUSTED interface of the Internal Firewall.

The Internal Firewall can be configured with the same blocking rules as the External Firewall, in addition to new rules that are applicable to protecting the internal networks.
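
To make that concrete, here is a rough sketch using Cisco IOS-style extended ACLs. The addresses and names are made up for illustration; the point is that the Internal Firewall treats the External Firewall's "trusted" output as its own untrusted input and then layers on rules of its own:

! External Firewall (or screening router), Internet-facing interface:
! only the published web service is allowed toward the DMZ, everything else is dropped.
ip access-list extended OUTSIDE-IN
 permit tcp any host 203.0.113.10 eq 443
 deny   ip any any log

! Internal Firewall: its untrusted side is the DMZ that the External Firewall "trusts".
! Repeat the external blocking posture, then add rules specific to the internal networks,
! e.g. only the DMZ web server may reach the internal database server.
ip access-list extended DMZ-IN
 permit tcp host 203.0.113.10 host 10.10.20.15 eq 1433
 deny   ip any any log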

The additions or differences in security configuration between internal and external firewalls will be controlled in part by perspective: you could obviously implement the same overall security policy on both firewalls, but the expectation of which threats exist where will be based on perspective.

In the same light, your zones will have traffic or usage patterns and requirements relative to their placement in the network. External DNS servers will be configured and protected differently than Internal DNS servers. Network resources talking across zones will work differently than talking inside a zone. Your security practices and configuration will change accordingly. The configuration for a given zone will be driven by perspective – requirements will map out differently based on the perspective of users, threats, and policies.
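
For example, the same DNS software ends up configured very differently depending on which zone it lives in. This is a sketch only, using BIND-style options and made-up internal addressing, with one excerpt per server:

// External DNS server (DMZ): authoritative answers only, no recursion for the Internet.
options {
    recursion no;
    allow-transfer { none; };
};

// Internal DNS server: recursive resolver, but only for internal clients.
options {
    recursion yes;
    allow-recursion { 10.0.0.0/8; };
    allow-query     { 10.0.0.0/8; };
};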

Perspective will show up within the logs as well. When you review the logs on your devices, you will react differently to an external threat against an internal server depending on whether it was logged on the internal server itself or on the External Firewall.

When you build out your network zones, be sure to keep perspective in mind. You may choose to overlap policies as a defense-in-depth practice, but please take care to define your zoning appropriately.

What’s your perspective?
Drop me a line and let me know!

Off to the WAF races

PCI DSS calls for the implementation of code reviews or web-application firewalls (WAFs) in order to continue compliance and fight off the Breach Boogieman; organizations can satisfy the requirement with code reviews, as outlined in Requirement 6.

Some 'experts' believe web firewalls are just another piece of technology being thrown on the bonfire, while others believe you will never find all the potential bugs and flaws in an organization's custom code, let alone commercial software.

Interestingly, there continue to be heated discussions debating the usefulness of WAFs, where they have to be deployed, what they are supposed to inspect, and whether businesses should be distracted by WAFs in the first place. The most important aspect of all this is the functionality that is to be provided by the technology. The WAF requirement is outlined in Requirement 6.6:

  • Verify that an application-layer firewall is in place in front of web-facing applications to detect and prevent web-based attacks.

Make sure any WAF implementation meets the full extent of the requirement, because "detect and prevent web-based attacks" can get a little sticky. As technology goes, there are a few variations in how WAFs have been developed. Some products use reverse proxying to interrupt the web session for the 'detect' and accomplish the 'prevent' by only allowing valid sessions through. This validation is done in various ways, just like typical IDS/IPS operation: you get your choice of signatures, anomaly detection, protocol inspection, and combinations thereof. Some of the available products skip the proxy function and monitor the web traffic like a traditional IDS/IPS for known or suspicious threats, either in-line or via a SPAN or TAP. Companies can not only choose their type of technology but can also decide on open-source software, commercially supported products, or a cross between the two.

The open-source route offers mod_security for Apache, and if companies need commercial support, there are appliances running mod_security. I found it interesting that, in a recent Oracle Application deployment, Oracle recommends the use of mod_security to serve as an application-layer firewall and URL-filtering firewall for DMZ deployments. If mod_security doesn't fit your needs, Guardian is also open-source software with detection and prevention capabilities. Both have commercial support and product options.
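
For readers who have not seen mod_security rules before, here is roughly what a simple signature-style rule looks like. This is a sketch in ModSecurity 2.x syntax, not a recommended production rule; the rule ID and message are arbitrary:

# Sketch: enable the engine and block requests whose parameters look like a basic XSS probe.
SecRuleEngine On
SecRule ARGS "@rx (?i)<script" \
    "id:900100,phase:2,deny,status:403,log,msg:'Possible XSS attempt in request arguments'"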

mod_security has some other interesting options. It is possible to take the Snort web signatures and convert them to mod_security rules via a script provided with mod_security. There are also several groups that provide signatures / rules for mod_security to identify new threats.
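
To give a feel for what that conversion produces, here is a hypothetical before-and-after. Both the Snort-style signature and the resulting mod_security rule are simplified sketches, and the converter's actual output will vary by version:

# A simplified Snort-style web signature (illustrative only):
#   alert tcp $EXTERNAL_NET any -> $HTTP_SERVERS $HTTP_PORTS \
#     (msg:"WEB-MISC /etc/passwd access"; uricontent:"/etc/passwd"; nocase; sid:1122;)

# Roughly the equivalent ModSecurity 2.x rule after conversion (sketch, rule ID arbitrary):
SecRule REQUEST_URI "@contains /etc/passwd" \
    "id:901122,phase:2,t:lowercase,deny,status:403,log,msg:'WEB-MISC /etc/passwd access'"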

Outside the open-source space, there are products like Imperva's SecureSphere gateways that use anomaly detection and profiling to determine whether something should or should not be allowed to access a web server. This company's product line features an interesting twist: the dynamic profiling technology relied upon to 'detect and prevent' comes from none other than the man who developed 'stateful packet inspection' in CheckPoint firewalls.

Along with Imperva are F5, Cisco, CheckPoint, and the usual list of security vendors ready to snatch up your "bail-out" funding 🙂 . As with any security technology, only a review of your organization's needs and a thorough pilot of the prospective technology will identify the best fit for your organization.

At the end of the day, the use of WAF technology to mitigate web application risk is but one of the many defenses an organization should have in place to provide data security and data privacy.

What do you use to guard the security of your web applications?

Speeding up PIX Parsing

Recent questions, comments, and suggestions have prompted this post. I would like to collect ideas for improving PIX Logging Architecture or provide a place to point out issues with running PLA at your organization.

If you use PLA or another open-source tool, tell us how you solve logging latency, sluggish reporting, and other related bottleneck issues associated with centralized log collecting and monitoring.

Is YOUR HelpDesk hurting you?

In recent weeks I have had the misfortune of dealing with a number of malware incidents, not all of which were at work. What I found interesting was the reason for the call to me and how easily the call could have been avoided.

It isn’t the Helpdesk’s fault.

See… I am a Network Security guy. I don't do Desktops; in fact, you don't want me doing desktop support! Calling me in for a virus or malware issue is along the lines of bringing a Vulcan cannon to a diplomatic dispute: not the most appropriate solution, at least until things escalate.

In this age of technology and Al Gore's Internet (don't laugh, I'm from Tennessee), everyone runs an updated anti-virus software package (enough with the laughs already). Most of the anti-virus software will detect a high percentage of the garbage you're likely to encounter (no laughing!), unless you are a retailer and someone wants your credit card data! So if the software detects the threats and takes action, what is the problem?

I have been called on a number of occasions lately to find out why a number of computers are running slow (is Vista actually a trojan?), or the firewall logs show strange Internet traffic, or the developer's laptop won't shut down properly, etc. After I suggest a pre-boot scan of the system or an external scan of the suspicious system's hard drive, it turns out the on-host AV scanner wasn't working and now we're picking up tens, maybe hundreds, of malware instances. What happened?

Funny thing about malware and automated processing by most AV engines. The AV solutions on the market today will either delete or quarantine any infected files they encounter as a default when the infection cannot be cleaned. This is a great start, unless you wanted to have a look at the deleted software or the AV deleted a perfectly legitimate file. This is where the problem comes in for the Helpdesk; remember, the Helpdesk lives and dies by procedure and process.

AV software had its heyday back in the days of Melissa and the plethora of other Word macro viruses. Everyone was into this email thing (is email dying?) and everyone had Microsoft Word, so the bad guys loved nothing better than to send out a piece of garbage and see who all it took down. Granted, the solution was rarely as simple as deleting the offending / infected Word file, but this deletion process became the pat answer: if you have an infected Word file, don't mess with it, just DELETE IT.

This solution survives today, except "Word" has been dropped from it: the answer to any detection of an infected file is to delete it. Your AV solution happily does that for you. So when the AV solution deletes the file but the malware is already memory-resident, you have yourself a problem. The Helpdesk is not going to respond to a single AV detection of Trojan.Backdoor; that is too resource intensive and often fruitless.

The Helpdesk's response to single incidents is the root of larger problems: they can't possibly react to a slow, steady stream of one or two infections per location over a week or two, yet those infections are laying the groundwork for a larger problem. No one asks the questions, HOW did this infected file get on here, or WHY is this infected file on here? Those questions aren't the Helpdesk's mission; their mission is to keep tickets resolved, answer support calls, and meet SLAs. Someone needs to be able to answer THOSE questions and take appropriate action.

Your Helpdesk, my Helpdesk, anyone else's Helpdesk has a set of procedures they follow. Any AV solution worth its annual subscription fee (now start laughing) will feature centralized logging and reporting, so the Helpdesk and IT can be notified upon infections (mass infections, that is). One or two infections, and those files being deleted, isn't going to raise suspicion in most organizations. Which leads me back to my opening paragraph…

All it takes is one piece of malware to get into memory. It isn’t a joke and it isn’t hypothetical. If you want to be next, just keep ignoring those “deleted” infected files your AV solution keeps finding! My hourly rates aren’t as bad as a front-page headline.

Still laughing?

S w a m p e d . . .

I apologize for my blog “disappearance”. I have been working with people via email on pulling Cisco IDS events into MySQL databases and getting ready for a new adventure with a new employer dealing with web firewalls.

I have spent the last several weeks trying to get my IDS/IPS 6.1.1 E2 software to not implode. We finally got access to the Cisco IDS Team, so things are moving along better now. I have taken the time to enable a few cool tricks with the new IME, similar to how IEV works with CSM. I also have a script to run with PLA to push IDS events into the IDS table for viewing inside the PLA tool; I already had a tweak to pull real-time events from the Cisco IEV database directly into a new web tab on PLA, but it didn't have any history.

More information on these will be forthcoming, along with my promised info on network zoning. I also owe Steve a website or two, so those will go up as well. There will also be plenty of WAF info coming too, since I'll be spending 40+ hours a week with the technology.

As always, if there is anything else you’d like to see or if you would like some help, just drop me a line or a comment.

IRS Exempt from Security?

It's TAX TIME! Or is it HACK TIME?

It comes as no surprise that an organization as large as the IRS is lacking some security controls, but from the material provided in several news articles it appears the IRS is lacking some fundamental elements, or its application of Security Policies and standard IT Management processes is spotty at best. This is a major issue given recent news that sensitive information for the Democratic and Republican Presidential candidates was leaked by contractors.

The report findings about the IRS are items that most other organizations are apparently already required to meet, according to various sources: the Sarbanes-Oxley legislation, the Payment Card Industry's Data Security Standard, and even the Health Insurance Portability and Accountability Act.

Overall security for an organization is made up of the sum total of all the pieces, parts, policies, and processes surrounding the organization. For the IRS, security seems less than what it should be. Specifically of concern are the following passages in the article, which were likely quoted directly from a report provided to the AP:

  • [MSNBC] … system administrators circumvented authentication controls by setting up 34 unauthorized accounts that appeared to be shared-use accounts, the report found
  • [CNN.com] more than 84 percent of the 5.2 million occasions that employees accessed a system to administer and configure routers, they used “accounts” that were not properly authorized
  • [MSNBC] A review found that the IRS had authorized 374 accounts for employees and contractors that could be used to perform system administration duties. But of those, 141 either had expired authorizations or had never been properly authorized.
  • [CNN.com] … there was no record that 55 employee and contractor accounts had ever been authorized.
  • [CNN.com] In addition, nine accounts were still active, even though the employees and contractors had not accessed the system for more than 90 days, the report says.
  • [CNN.com] The report does not say whether taxpayer information was misused, but says it is continuing to review security to see whether changes made to the computer system were appropriate or warranted.

Unauthorized accounts made unknown, untracked, and potentially unauthorized changes to systems and networks at the IRS? Multiple users share the same administrative account for making changes to multiple systems? Accounts were unused and still active after 90 days of inactivity? Log reviews are not conducted?
We are talking about the Internal Revenue Service, right??

For any organization reading this and thinking, "We have those same issues; what's the big deal?"

  1. Unauthorized accounts making potentially unauthorized changes = a potential security breach
  2. Multiple users sharing administrative account access = inability to determine who made what change
  3. Unused accounts still active after 90 days = even Microsoft gets this one right!
  4. No Log Reviews = no proof that a breach happened, no notice that a breach is in progress, and no idea how widespread an attack is/was

A popular statistic among security professionals is that most security incidents are caused by insiders violating security protocols, policies, or processes; commonly cited figures put insiders behind over 70% of all security incidents. Given the IRS report's findings, this is again a serious issue.

The underlying problem at the IRS is likely the same problem that other businesses face: how to be secure without security getting in the way? The simple answer: security is a mindset. Either management gets it and supports it, or they don't and authorize exceptions to policy under the guise of "Just get it done." This "get it done" approach completes projects on time, often avoids cost overruns due to last-minute security bolt-ons, and usually leaves system or process gaps that can be taken advantage of by disgruntled or otherwise motivated employees.

What’s the solution?

A realistic, hard look at how an organization views security, how management feels about the impacts of security, and ultimately what costs an organization is willing to pay for security. In the case of the IRS: "The IRS issued a statement Monday saying it had 'taken a number of steps to improve the control and monitoring of routers and switches.'" [MSNBC]

PIX Logging Architecture is Back Online

Great news, after a brief recess, PIX Logging Architecture is back on the NET!

Be sure to check out the screenshots for features / selling points and import the latest syslog-message database if you've not done that since installation. Remember, PLA handles the following Cisco Security Devices:

PIX Logging Architecture v2.00 supports log messages from the following devices:

  • Cisco ASA (TESTED AND CONFIRMED)
  • Cisco PIX v6.x (TESTED AND CONFIRMED)
  • Cisco PIX v7.x (TESTED AND CONFIRMED)
  • Cisco FWSM 2.x (TESTED AND CONFIRMED)
  • Cisco FWSM 3.x (TESTED AND CONFIRMED)

PLA Documentation

PLA Screenshots and more PLA Screenshots

PLA: Latest PLA syslog message support

Support for alternate forms of syslog daemons, such as rsyslog, can also be found here.

Welcome back Kris!

PIX Logging Architecture

Tweaking PLA: Using rsyslog

PLA (PIX Logging Architecture) uses regular expressions (regex) to parse syslog messages received from Cisco firewalls and comes pre-configured to process the standard "syslogd" message format. Most current Linux distributions ship with rsyslog (able to log directly to MySQL), while some administrators prefer syslog-ng.

The installation documentation distributed with PLA assumes a familiarity with regex, so here you'll see how to tweak PLA to parse your rsyslogd log file.
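
As a quick aside, before pla_parsed can read anything, the firewall messages have to land in a file that rsyslog writes. A minimal, assumed example using the legacy selector syntax (PIX/ASA devices default to syslog facility local4; the path is arbitrary and your facility may differ):

# /etc/rsyslog.conf
# Write everything the firewalls send at facility local4 to a dedicated file for PLA.
local4.*    /var/log/pla/pixfirewall.log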

Perl is used to parse through each syslog message looking for matches to the message formats described in the syslog_messages table in the pix database. The processing script pla_parsed contains a regex pattern that must be matched in order for the processing to occur. The applicable section is:

### PIX Firewall Logs
### DEFINE YOUR SYSLOG LAYOUT HERE!
###
$regex_log_begin = "(.*):(.*) (.*) (.*) (.*) (.*) (.*)";
$var_pixhost=3;
$var_pixmonth=4;
$var_pixdate=5;
$var_pixyear=6;
$var_pixtime=7;

Here, the variable $regex_log_begin needs to match all the log information up to the PIX, ASA, or FWSM message code in order to pull out the date, time, and host for these messages. Take a look at the provided sample log entry: everything up to the message code (the first timestamp, the hostname, and the firewall's own date and time) needs to be picked up by $regex_log_begin, while the remainder is standard for Cisco firewalls:

Oct 21 23:59:23 fwext-dmz-01 Oct 21 2006 23:58:23: %PIX-6-305011: Built dynamic TCP translation from inside:1.1.1.1/2244 to outside:2.2.2.2/3387

Explaining the operation of regex and wildcards is beyond the scope of this article; however, numerous guides have been written to fill the void. In our case, adjusting the default regex to match rsyslog is straightforward after noting which characters match which pattern. Again, we're working with the basics of regex here, nothing fancy.
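
If you would like to sanity-check a pattern before touching pla_parsed, a few lines of standalone Perl will show you which capture group ends up where. This is only a test harness, not part of PLA, and the trailing %PIX/%ASA/%FWSM anchor is an assumption added here so the greedy wildcards stop at the message code, the way the full pla_parsed pattern does with a complete line:

#!/usr/bin/perl
# Standalone sketch: print the capture groups produced by a candidate
# $regex_log_begin against a sample firewall syslog line.
use strict;
use warnings;

my $sample = 'Oct 21 23:59:23 fwext-dmz-01 Oct 21 2006 23:58:23: ' .
             '%PIX-6-305011: Built dynamic TCP translation from ' .
             'inside:1.1.1.1/2244 to outside:2.2.2.2/3387';

# Default PLA pattern for standard syslogd-formatted messages.
my $regex_log_begin = '(.*):(.*) (.*) (.*) (.*) (.*) (.*)';

# Assumed anchor on the Cisco message code so the match stops in the right place.
if ( my @caps = $sample =~ /$regex_log_begin:\s*%(?:PIX|ASA|FWSM)-\d-\d+:/ ) {
    my $i = 1;
    printf "capture %d => '%s'\n", $i++, $_ for @caps;
}

Run against the sample above, the hostname shows up as the third capture and the firewall's own time as the seventh, which is exactly how the default $var_pix* variables are numbered.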

Take this sample rsyslog entry and notice the difference from the standard syslogd format:

Feb 21 10:59:32 Feb 21 2008 10:59:32 pix4us : %PIX-6-110001: No route to 1.1.1.1 from 3.4.5.6

Feb 21 10:59:32 Feb 21 2008 10:59:32 pix4us
Oct 21 23:59:23 fwext-dmz-01 Oct 21 2006 23:58:23

Here, the rsyslog entry includes the date twice and then the hostname of the log source, versus the default format expected by pla_parsed of date, hostname, date. The original regex is set to pick up the first time entry's "minutes and seconds" and then picks up the next 5 words/entries separated by spaces:

$regex_log_begin = "(.*):(.*) (.*) (.*) (.*) (.*) (.*)";

 Oct 21 23:59:23 fwext-dmz-01 Oct 21 2006 23:58:23

In order to process rsyslog, this will have to be changed. The initial (.*):(.*) is used to set a starting point in the syslog message string. Since this new rsyslog format includes two date entries before the host name, the following can be used to allow pla_parsed to "see" the new syslog message string:

$regex_log_begin = "(.*):(.*) (.*) (.*) (.*) ((.*):(.*):(.*)) (.*)";

Feb 21 10:59:32 Feb 21 2008 10:59:32 pix4us

The regex starts out the same, but notice that the location of the information needed by pla_parsed to determine date, time, and host has moved. This time, "(.*):(.*)" and "((.*):(.*):(.*))" are used to force a match on the two time elements.

As a result of this change, the variables listed below the regex pattern must be modified to tell pla_parsed which (.*) contains which element:

$regex_log_begin = "(.*):(.*) (.*) (.*) (.*) ((.*):(.*):(.*)) (.*)";

$var_pixhost=7;
$var_pixmonth=3;
$var_pixdate=4;
$var_pixyear=5;
$var_pixtime=6;

The numbering happens left to right. The ()'s around the second time entry group it together so it counts as one match/entity, the sixth variable, which pushes the hostname out to the seventh. This same approach of keying off the timestamps can be applied to pla_parsed in order to allow processing of syslog-ng, ksyslogd, or any other syslog message format.

Need help with a different format? Have problems getting your PIX logs loaded? Paste a sample message from your syslog server (IP addresses sanitized, please) into a comment below.

PLA: Setting up the Database

Updated installation documentation for PLA (PIX Logging Architecture).

If the PLA documentation leaves you looking for the DUMMIES guide, check out my mini-article on BetterTechInfo regarding PLA Database Setup.

I am getting involved in a new business venture and decided to publish some early projects / work to help out folks using PLA.

Be advised the construction dust is everywhere!