Category Archives: Security / Risk

Three reasons why IPS and WAF won’t converge anytime soon

Continuing on the IPS vs WAF theme, one might consider the notion that WAF will just converge with IPS. I wanted to point out three reasons why I think this will NOT happen anytime soon. The reasons center around demand, performance, and implications, and feed into each other.

It’s akin to Chris Rock’s joke about driving a car with your feet: You can drive your car with your feet if you want to, that don’t make it a good __ idea!
Read more…


WAF vs IPS (or Four Things Your IPS Can’t Do)

I see this debate often and I am always amused by the topic. I have worked with IDS/IPS for eight years, so I knew IPS back when it was just a flavor of IDS that no one wanted to enable for fear of blocking access to users and customers. I chuckle at the thought of WAF being a glorified IPS. My, how times have changed.
Here are four things that your WAF can do that your IPS can't. I tried to keep this vendor-agnostic.

Please feel free to pile on or comment, just no flames please!

WAF vs IPS?
Web Application Firewalls, as the name implies, work almost exclusively with web applications. Most WAFs are not best-of-breed traditional firewalls and should not be implemented in place of a traditional network firewall. Typical WAF deployments feature SSL decryption of web application traffic and blocking of web-based threats after the WAF reassembles each web session. This is possible because the WAF operates at the application layer, where HTML, XML, cookies, JavaScript, ActiveX, client requests, and server responses live.
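To make that concrete, here is a minimal Python sketch of the kind of application-layer inspection a WAF can do once traffic is decrypted and reassembled; the patterns are deliberately naive, hypothetical stand-ins for a real signature set:

```python
import re

# Toy, hypothetical patterns standing in for a real WAF signature set.
SUSPICIOUS_PATTERNS = [
    re.compile(r"<script\b", re.IGNORECASE),               # naive XSS probe
    re.compile(r"('|%27)\s*(or|union)\b", re.IGNORECASE),  # naive SQL injection probe
    re.compile(r"\.\./"),                                  # directory traversal
]

def inspect_request(method: str, url: str, body: str) -> bool:
    """Return True if the reassembled, decrypted request looks suspicious."""
    haystack = f"{method} {url}\n{body}"
    return any(p.search(haystack) for p in SUSPICIOUS_PATTERNS)

print(inspect_request("GET", "/search?q=<script>alert(1)</script>", ""))  # True
print(inspect_request("GET", "/products?id=42", ""))                      # False
```

A network firewall watching encrypted port 443 traffic never sees any of this; the WAF does, because it sits after decryption, at the application layer.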
Read more…

Five Classic Web Attacks

While reading through my blog inbox and writing up my 2010 Wishlist for work, I thought I’d drop a quick post to highlight five web security ‘problem areas’ that still exist after at least a decade of patches, pleas, and regulatory requirements.

  • SQL Injection
  • Hack the Web Server
  • Cross Site Scripting
  • Cookie Tampering
  • Session Hijacking

I often find myself explaining what these are and providing examples (like the sketch below) in order to garner support for remediation.
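For the first item, a minimal Python/SQLite sketch shows how a classic injection payload subverts a naively built query, and how a parameterized query defeats it:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "' OR '1'='1"  # the classic injection payload

# Vulnerable: attacker input is concatenated into the SQL string, so the
# WHERE clause becomes: name = '' OR '1'='1'  -- always true.
rows = conn.execute(
    "SELECT * FROM users WHERE name = '%s'" % user_input
).fetchall()
print("vulnerable query returned:", rows)      # alice's row leaks

# Safe: a parameterized query treats the payload as a literal value.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
print("parameterized query returned:", rows)   # nothing
```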
Read more…

Top 4 WAF Protections

The traditional network security approach to securing your web servers and database servers is more than likely going to get you in trouble someday. Think about it: network security preaches deny everything and permit only what you need. Great, open up port 443 and send encrypted traffic to your web server. KaBOOM, gotcha!

Think about your Web Application Firewall and the reasons for your investment in web application security.
Read more…

Getting more from your WAF (Sensitive URL Tracking)

I have had the good fortune to support a few Imperva installations, alongside other WAF solutions. I would like to illustrate one use for the logs available on the Imperva platform, which can be leveraged to augment website trend reports and monitor “exposure” on key URLs.

If you’re not familiar with the Imperva platform, it is possible (as with other WAF vendors’ products) to build custom policies that match specific criteria and, when triggered, feed event data into various syslog feeds. The entire purpose of a WAF is to protect your web application from threats (although some argue this point), so it stands to reason that there may be facets of a given web application that are more sensitive than others.

Take, for example, the check-out page for an online retailer, where the customer enters credit card data and confirms their billing information. This part of a web application might benefit from heightened logging by a Web Application Firewall under certain conditions, such as forced browsing, parameter tampering, XSS, or server errors. The application may be vulnerable to fraud, the business may want to keep tabs on who is accessing these URLs, or there may be some other risk criterion that can be measured using this approach.

Traditional web server logs will provide client information such as user-agent info, username, source IP, method, accessed URL, response time, response size, and response code. By default, that logged data sits in the access log file on the specific web server, and it covers the entire website.

The Imperva SecureSphere can provide much of the same information: username, IP, port, user-agent info, accessed URL, response size, response time, and so on. In addition, it can track whether the session was authenticated, the correlated database query (if you have Imperva database protection deployed), SOAP information, and security details relevant to the specific policy. The kicker is that all of this can be sent to a syslog listener in an admin-configured format supported by web trend tools or SIEM products, without engaging professional services.
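As an illustration, here is a minimal Python sketch of a syslog consumer for such a policy feed. The key/value message layout, field names, and URLs are all hypothetical; in practice you would match whatever format you configured on the WAF side:

```python
import socket

SENSITIVE_URLS = {"/checkout", "/billing/confirm"}  # hypothetical sensitive pages

def handle_event(message: str) -> None:
    """Parse a hypothetical 'key=value,key=value' WAF syslog message and
    flag unauthenticated hits on sensitive URLs."""
    fields = dict(part.split("=", 1) for part in message.split(",") if "=" in part)
    if fields.get("url") in SENSITIVE_URLS and fields.get("authenticated") == "false":
        print("ALERT: unauthenticated access to %s from %s"
              % (fields["url"], fields.get("src_ip", "unknown")))

# Bare-bones UDP listener (syslog defaults to UDP/514; an unprivileged
# port is used here so the sketch runs without root).
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 5514))
while True:
    data, _ = sock.recvfrom(4096)
    handle_event(data.decode(errors="replace"))
```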

I’m not advocating replacing web server logs for trend analysis, but I am suggesting deploying targeted logging for sensitive areas inside an application, where this information would prove useful in a fraud capacity, a security monitoring capacity, or even an end-to-end troubleshooting capacity, since a WAF has visibility beyond traditional network tools at the front end of an N-tier web application. Deviations in response times, excessive response sizes, and unauthenticated access attempts to sensitive URLs are a few ideas that come to mind for leveraging the visibility a WAF can bring to the table.

WAFing it up

I should disclose up front that I derive my living today supporting WAF technologies for a large corporation, and so it will come as no surprise that I have a few opinions on the use of WAF technology and in general how to go about protecting web applications.

Purists.
If you’re a purist and feel adamantly for or against Web Application Firewalls, I would urge you to consider the roots of defense-in-depth: just like the spoon in The Matrix, there is no silver bullet. OWASP’s concepts are as close as we’ll ever get to that silver bullet.

Secure coding won’t get you out of every vulnerability, and neither will a WAF, if for no other reason than that the sheer complexity of the equipment needed to stand up web-enabled services introduces too many interdependencies to believe every coder, developer, and vendor got everything right and there will never be a problem. If you disagree with that, put down the vendor Kool-Aid now, before it’s too late.

Positive / Negative Security Models
Good grief. Techie speak if ever there was any. It reminds me of the James Garner movie Tank, where little Billy is exposed to negative feedback in order to arrest his “bad” behavior. In my house, that’s called a spanking, and you get one when it’s appropriate. My kids know what a spanking is, and so does anyone reading this thread. Without googling, name two WAF products based on each of these security models, positive and negative. It’s okay, I’ll wait for you.

And we’re back…
On the topic of security models, I tend to think it takes a combination of protective technologies to provide any actual risk/threat mitigation. I would personally like to see developers take advantage of a WAF’s ability to see how an application behaves. Most developers don’t think in terms of which web page does what; instead, they’re working with APIs and objects. This is unfortunate, because the rest of the world sees these applications as URLs. The WAF can be that bridge to the developers. A WAF could, in theory, help the developer ensure that a specific sequence of events happens before a transaction is processed, or prompt the client before transactions occur in specific instances, to avoid CSRF.
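A minimal Python sketch of that sequence-of-events idea, with hypothetical page names: the WAF tracks the pages each session has visited and refuses the transaction URL unless the expected flow preceded it. A forged cross-site request that jumps straight to the transaction fails the check:

```python
# Hypothetical checkout flow the WAF expects every session to follow, in order.
REQUIRED_FLOW = ["/cart", "/billing", "/confirm"]
TRANSACTION_URL = "/submit-order"

sessions: dict[str, list[str]] = {}  # session id -> pages seen so far

def allow_request(session_id: str, url: str) -> bool:
    seen = sessions.setdefault(session_id, [])
    if url == TRANSACTION_URL:
        # Allow the transaction only if the full flow immediately preceded it;
        # a CSRF request skips straight here and gets refused.
        return seen[-len(REQUIRED_FLOW):] == REQUIRED_FLOW
    seen.append(url)
    return True

for page in REQUIRED_FLOW:
    allow_request("abc", page)
print(allow_request("abc", "/submit-order"))  # True: flow was followed
print(allow_request("xyz", "/submit-order"))  # False: direct/forged request
```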

To bring things back around to my original point: I do agree that the more complex a web application is, and the more servers required to make a service available online, the more vulnerable and difficult to secure that application or service will be. I’m not sure whose law that is, but I’m sure one exists: complexity breeds more complexity.

No surprise there: if you are protecting a complex asset, it will be high maintenance. I said to put down the Kool-Aid; it’s for your own good. Nothing is free!

Off to the WAF races

PCI DSS calls for the implementation of application code reviews or web application firewalls (WAFs) in order to maintain compliance and fight off the Breach Boogieman; both options are laid out in requirement 6.

Some ‘experts’ believe web application firewalls are just another piece of technology being thrown on the bonfire, while others believe you will never find all the potential bugs and flaws in an organization’s custom code, let alone in commercial software.

Interestingly, there continue to be heated discussions debating the usefulness of WAFs: where they have to be deployed, what they are supposed to inspect, and whether businesses should be distracted by WAFs in the first place. The most important aspect of all this is the functionality the technology is supposed to provide. The WAF requirement is outlined in requirement 6.6:

  • Verify that an application-layer firewall is in place in front of web-facing applications to detect and prevent web-based attacks.

Make sure any WAF implementation meets the full extent of the requirement, because “detect and prevent web-based attacks” can get a little sticky. As technology goes, there are a few variations in how WAFs have been developed. Some products use reverse proxying to interrupt the web session for the ‘detect’ and accomplish the ‘prevent’ by only allowing valid sessions through. The validation comes in variations, just like typical IDS/IPS operation: you get your choice of signatures, anomaly detection, protocol inspection, and combinations thereof. Other products skip the proxy function and monitor web traffic like a traditional IDS/IPS for known or suspicious threats, either in-line or via a SPAN or TAP. Companies can not only choose their type of technology but can also decide between open-source software, commercially supported products, or a cross between the two.
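To make the reverse-proxy variant concrete, here is a heavily simplified Python sketch: it terminates the client’s session, applies a toy (hypothetical) signature check for the ‘detect’, and performs the ‘prevent’ by forwarding only clean requests to the backend web server:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import re
import urllib.request

BACKEND = "http://127.0.0.1:8080"   # hypothetical origin web server
SIGNATURES = [re.compile(r"<script\b", re.I), re.compile(r"\.\./")]

class WafProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        # 'Detect': the proxy sees the full request before the backend does.
        if any(sig.search(self.path) for sig in SIGNATURES):
            # 'Prevent': refuse the session instead of forwarding it.
            self.send_error(403, "Blocked by WAF")
            return
        # Valid session: replay the request to the real web server.
        with urllib.request.urlopen(BACKEND + self.path) as resp:
            body = resp.read()
        self.send_response(resp.status)
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("0.0.0.0", 8000), WafProxy).serve_forever()
```

A monitoring-mode product does the ‘detect’ half of this off a SPAN or TAP and relies on resets or upstream devices for the ‘prevent’.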

The open-source route offers mod_security for Apache, and if companies need commercial support, you can get an appliance running mod_security. I found it interesting that, in a recent Oracle Application deployment, Oracle recommended the use of mod_security to serve as an application-layer firewall and URL-filtering firewall for DMZ deployments. If mod_security doesn’t fit your needs, Guardian is another open-source option with detection and prevention capabilities. Both have commercial support and product options.

mod_security has some other interesting options. It is possible to take the Snort web signatures and convert them to mod_security rules via a script provided with mod_security. There are also several groups that provide signatures/rules for mod_security to identify new threats.
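As a toy illustration of what that conversion amounts to (this is a sketch of the idea in Python, not the bundled script), lift the content pattern and message out of a Snort web rule and emit a mod_security-style directive:

```python
import re

def snort_to_modsec(snort_rule):
    """Very rough sketch: pull 'content' and 'msg' from a Snort web rule and
    emit a mod_security-style SecRule. Real converters handle far more
    options (uricontent, pcre, nocase, flow, and so on)."""
    msg = re.search(r'msg:"([^"]+)"', snort_rule)
    content = re.search(r'content:"([^"]+)"', snort_rule)
    if not (msg and content):
        return None
    return 'SecRule REQUEST_URI "%s" "deny,log,msg:\'%s\'"' % (
        content.group(1), msg.group(1))

rule = ('alert tcp any any -> any 80 '
        '(msg:"WEB-ATTACKS /etc/passwd access"; content:"/etc/passwd";)')
print(snort_to_modsec(rule))
# SecRule REQUEST_URI "/etc/passwd" "deny,log,msg:'WEB-ATTACKS /etc/passwd access'"
```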

Outside the open-source space, there are products like Imperva’s SecureSphere gateways, which use anomaly detection and profiling to determine whether something should or should not be allowed to access a web server. This company’s product line features an interesting twist: the dynamic profiling technology relied upon to ‘detect and prevent’ comes from none other than the man who developed ‘stateful packet inspection’ in CheckPoint firewalls.

Along with Imperva are F5, Cisco, CheckPoint, and the usual list of security vendors ready to snatch up your “bail-out” funding 🙂 . As with any security technology, only a review of your organization’s needs and a thorough pilot of the prospective technology will identify the best fit for your organization.

At the end of the day, the use of WAF technology to mitigate web application risk is but one of the many defenses an organization should have in place to provide data security and data privacy.

What do you use to guard the security of your web applications?

Speeding up PIX Parsing

Recent questions, comments, and suggestions have prompted this post. I would like to collect ideas for improving the PIX Logging Architecture, and to provide a place to point out issues with running PLA at your organization.

If you use PLA or another open-source tool, tell us how you solve logging latency, sluggish reporting, and other bottleneck issues associated with centralized log collection and monitoring.
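To get the thread started, one generic idea: if your collector writes one database row per syslog line, batching is usually the cheapest win. Here is a minimal Python sketch, with SQLite standing in for the real database and a hypothetical spool file and schema, that turns one INSERT per message into one transaction per batch:

```python
import sqlite3  # stand-in for the real backend; the point is the batching

BATCH_SIZE = 500
conn = sqlite3.connect("pixlogs.db")
conn.execute("CREATE TABLE IF NOT EXISTS pix_events (raw TEXT)")  # hypothetical schema

def store_batch(lines):
    # One multi-row transaction per batch instead of one commit per line;
    # round-trips and commits are where centralized collectors usually stall.
    conn.executemany("INSERT INTO pix_events (raw) VALUES (?)",
                     [(line,) for line in lines])
    conn.commit()

batch = []
with open("pix.log") as fh:  # hypothetical spool file fed by syslog
    for line in fh:
        batch.append(line.rstrip("\n"))
        if len(batch) >= BATCH_SIZE:
            store_batch(batch)
            batch.clear()
if batch:
    store_batch(batch)  # flush the partial final batch
```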

Is YOUR HelpDesk hurting you?

In recent weeks I have had the misfortune of dealing with a number of malware incidents, not all of which were at work. What I found interesting was the reason for the call to me, and how easily the call could have been avoided.

It isn’t the Helpdesk’s fault.

See… I am a Network Security guy. I don’t do desktops; in fact, you don’t want me doing desktop support! Calling me in for a virus or malware issue is along the lines of bringing a Vulcan cannon to a diplomatic dispute, i.e., not the most appropriate solution until things escalate.

In this age of technology and Al Gore’s Internet (don’t laugh, I’m from Tennessee), everyone runs an updated anti-virus software package (enough with the laughs already). Most anti-virus software will detect a high percentage of the garbage you’re likely to encounter (no laughing!), unless you are a retailer and someone wants your credit card data! So if the software detects the threats and takes action, what is the problem?

I have been called on a number of occasions lately to find out why some computers are running slow (is Vista actually a Trojan?), why the firewall logs show strange Internet traffic, why the developer’s laptop won’t shut down properly, and so on. After I suggest a pre-boot scan of the system or an external scan of the suspicious system’s hard drive, it turns out the on-host AV scanner wasn’t working, and now we’re picking up tens, maybe hundreds, of malware instances. What happened?

Funny thing about malware and automated processing by most AV engines: the AV solutions on the market today will, by default, either delete or quarantine any infected file they encounter when the infection cannot be cleaned. This is a great start, unless you wanted to have a look at the deleted software, or the AV deleted a perfectly legitimate file. This is where the problem comes in for the Helpdesk; remember, the Helpdesk lives and dies by procedure and process.

AV software had its heyday back in the days of Melissa and the plethora of other Word macro viruses. Everyone was into this email thing (is email dying?) and everyone had Microsoft Word, so the bad guys loved nothing better than to send out a piece of garbage and see who all it took down. Granted, the solution was rarely as simple as deleting the offending / infected Word file, but this deletion process became the pat answer: if you have an infected Word file, don’t mess with it, just DELETE IT.

This solution survives today, but “Word” has been dropped, so the answer to any detection of an infected file is to delete it. Your AV solution happily does that for you. So when the AV solution deletes the file but the malware is already memory-resident, you have yourself a problem. The Helpdesk is not going to respond to a single AV detection of Trojan.Backdoor; that is too resource-intensive and often fruitless.

The Helpdesk’s response to single incidents is the root of larger problems: they can’t possibly react to a slow, steady stream of one or two infections per location over a week or two, yet those infections are laying the groundwork for a bigger mess. No one asks HOW this infected file got on here, or WHY this infected file is on here. Those questions aren’t the Helpdesk’s mission; their mission is to keep tickets resolved, answer support calls, and meet SLAs. Someone needs to be able to answer THOSE questions and take appropriate action.
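Answering them doesn’t have to be manual, either. Here is a minimal Python sketch, assuming a centralized AV log export with hypothetical fields, that aggregates detections per location over a rolling window so a slow drip of infections stands out:

```python
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(days=14)
THRESHOLD = 3  # distinct infected hosts per location before we ask HOW and WHY

# Hypothetical centralized AV log export: (timestamp, location, host, detection)
events = [
    (datetime(2009, 3, 1),  "branch-12", "pc-044", "Trojan.Backdoor"),
    (datetime(2009, 3, 6),  "branch-12", "pc-051", "Trojan.Backdoor"),
    (datetime(2009, 3, 11), "branch-12", "pc-019", "Trojan.Backdoor"),
]

# Count distinct infected hosts per location inside the rolling window.
cutoff = max(ts for ts, _, _, _ in events) - WINDOW
by_location = defaultdict(set)
for ts, location, host, detection in events:
    if ts >= cutoff:
        by_location[location].add(host)

for location, hosts in by_location.items():
    if len(hosts) >= THRESHOLD:
        print("Investigate %s: %d infected hosts in %d days: %s"
              % (location, len(hosts), WINDOW.days, sorted(hosts)))
```

One or two deleted files a week never trips a per-incident process; counted together per location, they do.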

Your Helpdesk, my Helpdesk, anyone else’s Helpdesk has a set of procedures they follow. Any AV solution worth its annual subscription fee (now start laughing) will feature centralized logging and reporting, so the Helpdesk and IT can be notified upon infections – mass infections, that is. One or two infections, and those files being deleted, isn’t going to raise suspicion in most organizations. Which leads me back to my opening paragraph…

All it takes is one piece of malware to get into memory. It isn’t a joke and it isn’t hypothetical. If you want to be next, just keep ignoring those “deleted” infected files your AV solution keeps finding! My hourly rates aren’t as bad as a front-page headline.

Still laughing?

IRS Exempt from Security?

It’s TAX TIME! Or is it HACK TIME?

It comes as no surprise that an organization as large as the IRS is lacking some security controls, but from the material provided in several news articles, it appears the IRS is missing some fundamental elements, or its application of security policies and standard IT management processes is spotty at best. This is a major issue, given recent news that sensitive information on the Democratic and Republican presidential candidates was leaked by contractors.

The report’s findings about the IRS cover items that most other organizations are apparently already required to meet, according to various sources: the Sarbanes-Oxley legislation, the Payment Card Industry’s Data Security Standard, and even the Health Insurance Portability and Accountability Act.

Overall security for an organization is the sum total of all the pieces, parts, policies, and processes surrounding that organization. For the IRS, security seems less than what it should be. Of specific concern are the following passages in the article, which were likely quoted directly from a report provided to the AP:

  • [MSNBC] … system administrators circumvented authentication controls by setting up 34 unauthorized accounts that appeared to be shared-use accounts, the report found
  • [CNN.com] … on more than 84 percent of the 5.2 million occasions that employees accessed a system to administer and configure routers, they used “accounts” that were not properly authorized
  • [MSNBC] A review found that the IRS had authorized 374 accounts for employees and contractors that could be used to perform system administration duties. But of those, 141 either had expired authorizations or had never been properly authorized.
  • [CNN.com] … there was no record that 55 employee and contractor accounts had ever been authorized.
  • [CNN.com] In addition, nine accounts were still active, even though the employees and contractors had not accessed the system for more than 90 days, the report says.
  • [CNN.com] The report does not say whether taxpayer information was misused, but says it is continuing to review security to see whether changes made to the computer system were appropriate or warranted.

Unauthorized accounts made unknown, untracked, and potentially unauthorized changes to systems and networks at the IRS? Multiple users shared the same administrative account for making changes to multiple systems? Accounts sat unused and still active after 90 days of inactivity? Log reviews were not conducted?
We are talking about the Internal Revenue Service, right??

For any organization reading this and thinking, “we have those same issues – what’s the big deal?”:

  1. Unauthorized accounts making potentially unauthorized changes = a potential security breach
  2. Multiple users sharing administrative account access = inability to determine who made what change
  3. Unused accounts still active after 90 days = even Microsoft gets this one right! (see the sketch after this list)
  4. No log reviews = no proof that a breach happened, no notice that a breach is in progress, and no idea how widespread an attack is/was
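Item 3 is also the easiest to automate. A minimal Python sketch, assuming an export of account-to-last-login timestamps (format and names hypothetical), that flags anything idle past the 90-day mark:

```python
from datetime import datetime, timedelta

MAX_IDLE = timedelta(days=90)
now = datetime(2008, 4, 1)  # date the review is run

# Hypothetical export: account name -> last login timestamp
last_login = {
    "jsmith":      datetime(2008, 3, 28),  # active recently, fine
    "contractor7": datetime(2007, 12, 2),  # idle well past 90 days
    "svc-router":  datetime(2008, 1, 1),   # just past the 90-day mark
}

for account, seen in sorted(last_login.items()):
    idle = now - seen
    if idle > MAX_IDLE:
        print("DISABLE %s: last login %s (%d days ago)"
              % (account, seen.date(), idle.days))
```

Run that out of cron against a nightly export, and item 3 stops being an audit finding.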

A popular statistic among security professionals is that most security incidents are caused by insiders violating security protocols, policies, or processes; by some counts, over 70% of all security incidents are caused by insiders. Given the findings in the IRS report, this is again a serious issue.

The underlying problem at the IRS is likely the same problem other businesses face: how to be secure without security getting in the way? The simple answer: security is a mindset. Either management gets it and supports it, or they don’t and authorize exceptions to policy under the guise of “just get it done.” This “get it done” approach completes projects on time, often avoids cost overruns due to last-minute security bolt-ons, and usually leaves system or process gaps that can be taken advantage of by disgruntled or otherwise motivated employees.

What’s the solution?

A realistic, hard look at how an organization views security, how management feels about the impacts of security, and ultimately what costs an organization is willing to pay for security. In the case of the IRS: “The IRS issued a statement Monday saying it had ‘taken a number of steps to improve the control and monitoring of routers and switches.’” — [MSNBC]