Monthly Archives: October 2007

PIX Parsing (Usable Logs!)

If you have a Cisco PIX you are responsible for, don’t have a logging solution for it, or haven’t seen PLA, keep reading; you’re going to be impressed. PDM is out and ASDM is an improvement, but who wants to run an HTTPS server on a firewall?

There are plenty of commercial solutions available, some just perform log analysis and others attempt to perform event correlation. If you just want to solve the logging problem and are unable to implement a log collection system, take a look at PIX Logging Architecture.

I recently implemented PIX Logging Architecture (PLA) to help manage FWSM, PIX, and ASA logs. I hate having to cull through syslog files looking for traffic stats, TCP port usage, or accepted vs. denied traffic information. This point-and-click world we live in has me spoiled; I don’t want to CLI everything, especially when I’m headed into a meeting to discuss who has access to the Oracle Applications database server.

What if you could get a daily snapshot of your firewall(s)? How about running queries instead of text-based finds? A long list of log messages is already supported, and regex can be used to extend the capabilities further. Defining traffic types and being able to make graphs? Once you involve a database, you have to decide how often to purge the data.

If you like that and aren’t afraid of using open source, you’ll need Apache, MySQL, Perl (with some additional modules), a syslog daemon or access to the syslog file, and one or more Cisco PIX, ASA, or FWSM firewalls. The OS is up to you, but Linux is highly recommended.

What is PLA?

The PIX Logging Architecture [PLA] is a free and open-source project allowing for correlation of Cisco PIX, Cisco FWSM and Cisco ASA Firewall Traffic, IDS and Informational Logs.

PIX Log message parsing is performed through the use of the PLA parsing module or PLA Msyslogd module. Centralization of the logs is provided using a MySQL database, supported by a Web-based frontend for Log Viewing, Searching, and Event Management. PIX Logging Architecture is completely coded in the Perl programming language, and uses various Perl modules including Perl::DBI and Perl::CGI.

Where is PLA?

PLA’s website has been down since late December, but Kris reports it’s BACK ONLINE! If you are interested in downloading the software or documentation, you can find it on SourceForge, but your best bet is to go directly to the source itself: PIX Logging Architecture.

I’m working on uploading the documentation, along with some tweaks that pull Cisco IPS Event Viewer data into the IDS tab, to my other website, but I haven’t been able to reach the original author (Kris) yet.

How does PLA work?

The PLA software is divided into a parsing piece, a database server, and a web front-end. Depending on your security policy, network zoning, and server capabilities, you can run this on one server or spread it across several. I was fortunate enough to have access to a dual-processor Core 2 Duo server, so I implemented everything on the same Linux server.

The underlying architecture is very clever. Message formats are loaded into memory and the syslog file is tailed (via a Perl module). Messages are flagged by type and processed accordingly, in near real time:

The PIX Logging Architecture parsing module, which is responsible for extracting the necessary fields from the PIX system log messages, has been extended to gather new information including, but not limited to, Translations (Xlate’s), Informative Log Messages (i.e. PIX Failover, PIX VPN Establishment, PIX Interface Up/Down, PIX PPPoE VPDN establishment and the like). All the parsing information needed by the PLA Parsing Daemon (pla_parsed) in order to extract data from the logs is now stored in the database, allowing for easy updates of the supported log messages without having to replace the parsing scripts. Moreover, the PLA Parsing Daemon runs as a daemonized Perl process in the background and reads straight and in quasi real-time from the system log files, so no more need to create crontab jobs like before and having to restart syslogd all the time.
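To make that concrete, here is a minimal Python sketch of the same idea: per-message-ID regexes applied to lines tailed from a syslog file (PLA itself is written in Perl and keeps its parsing patterns in MySQL). The message IDs, field layouts, and log path below are illustrative assumptions, not PLA’s shipped patterns; check them against your own firewall’s output.

import re, time

PARSERS = {
    # %PIX-6-302013: Built inbound TCP connection ...
    "302013": re.compile(
        r"Built (?P<direction>inbound|outbound) TCP connection (?P<conn_id>\d+) "
        r"for (?P<src_if>\w+):(?P<src_ip>[\d.]+)/(?P<src_port>\d+) \([\d.]+/\d+\) "
        r"to (?P<dst_if>\w+):(?P<dst_ip>[\d.]+)/(?P<dst_port>\d+)"),
    # %PIX-4-106023: Deny tcp src ... dst ... by access-group ...
    "106023": re.compile(
        r"Deny (?P<proto>\w+) src (?P<src_if>\w+):(?P<src_ip>[\d.]+)/(?P<src_port>\d+) "
        r"dst (?P<dst_if>\w+):(?P<dst_ip>[\d.]+)/(?P<dst_port>\d+)"),
}
MSG_ID = re.compile(r"%(?:PIX|ASA|FWSM)-\d-(\d{6}):\s*(.*)")

def tail(path):
    """Follow a syslog file, yielding new lines as they are appended."""
    with open(path) as fh:
        fh.seek(0, 2)                  # start at the end, like tail -f
        while True:
            line = fh.readline()
            if not line:
                time.sleep(0.5)
                continue
            yield line

def parse(line):
    """Return (message_id, field dict) for supported messages, else None."""
    m = MSG_ID.search(line)
    if not m:
        return None
    msg_id, body = m.groups()
    pattern = PARSERS.get(msg_id)
    if not pattern:
        return None                    # unsupported message type
    fields = pattern.search(body)
    return (msg_id, fields.groupdict()) if fields else None

if __name__ == "__main__":
    for line in tail("/var/log/pix.log"):   # hypothetical log path
        parsed = parse(line)
        if parsed:
            print(parsed)              # PLA would insert this into MySQL instead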

The PLA Team has created detailed documentation for installing, tweaking, and supporting PLA. [edit: Google cache of documentation. PLA Documentation can be downloaded at SourceForge].

In order to get started, you’ll want to read through the first few sections to get a feel for what will be required and figure out what you’ll need in your environment. For example, if you run the standard syslogd, parsing of the syslog file works one way; if you run ksyslog or syslog-ng, you’ll need to make adjustments. Section 5.5 of the documentation covers tweaking the regex for parsing other syslog engine formats. The documentation explains a centralized installation versus a distributed one, as well as what ports and services will be needed on the server(s).
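The adjustment usually comes down to the header your syslog engine prepends to the firewall message. The two header patterns below are generic examples (a classic BSD-style header and an ISO-timestamp variant), not the regexes PLA ships with; treat them as a starting point and test them against a real line from your own syslog file.

import re

# Hypothetical header patterns; adjust to whatever your syslog engine writes.
HEADERS = {
    # classic syslogd/BSD header: "Oct 24 14:03:01 fw01 %PIX-6-302013: ..."
    "syslogd":   re.compile(r"^\w{3}\s+\d+ \d\d:\d\d:\d\d (?P<host>\S+) (?P<msg>%.+)"),
    # syslog-ng with ISO timestamps: "2007-10-24T14:03:01-05:00 fw01 %PIX-6-302013: ..."
    "syslog-ng": re.compile(r"^\d{4}-\d\d-\d\dT\d\d:\d\d:\d\d\S* (?P<host>\S+) (?P<msg>%.+)"),
}

line = "Oct 24 14:03:01 fw01 %PIX-6-302013: Built outbound TCP connection 9 ..."
for engine, pattern in HEADERS.items():
    match = pattern.match(line)
    if match:
        print(engine, match.group("host"), match.group("msg"))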

Security is obviously a concern when processing security log files. Securing MySQL and Apache access is recommended. I would also recommend hardening the server itself; if you’re new to Linux, this may help.

Enjoy!

Regulatory Flaws

I learned many things working for a former DOJ attorney, but the thing that stands out the most is his savvy when it came to predicting what organizations would do in given situations. Legal regulations and contracts produce the same effect in businesses: do the minimum to get by, a.k.a. comply.

Doing only what you have to will catch up to you in one form or another. This minimalist philosophy is infectious. Take, for example, the rampant security breaches of the 21st century. Hackers unite!

Companies of all sizes grow and extend their business in order to satisfy their customers. If the company is publicly traded, then it also seeks to please its stockholders. If security were a priority, companies would solve the data breach problem voluntarily. There are any number of reasons for security being what it is today.

We don’t need regulations in order to promote and achieve security within organizations; rather, courage and leadership are needed. Security is not a profit center; it is risk avoidance, just like insurance. No one buys insurance (except maybe Allstate’s) expecting to make money; insurance protects you against loss [whatever form that loss may take], but the insurance is only as good as the language in the policy. Security is no different; however, a company can reduce the cost and improve the effectiveness of security by including it throughout the organization and by building it into products and services instead of adding it on after a data breach has occurred. Unfortunately, too often security is implemented as damage control following an incident, because little financial justification can be made in advance of such an incident.

Regardless, politicians’ calls for “stronger” regulation are predictable because “stronger” regulation is “better”—in a press conference. In the real world, however, regulation is no more capable of divining threats to data security than, say, a common law liability regime, or even businesses’ natural interest in maintaining their operations, integrity, image, brand, and assets. [CATO]

Businesses will stay ahead of both the law and law breakers if strategy, business processes, and the big picture replace procedural band-aids, reactionary planning, and cost as their chief motivators. [ITCI]

If all that fails, you can still buy insurance!

Log Management 101

If you are in any way involved with technology, you have at one time or another dealt with logging. You either didn’t have what you needed, spent hours looking for a needle in the haystack, or had to go find someone to help you get what you needed.

In networking and security circles, logging is usually a hot-button topic. I’ve spent many man-months working with logs, investigating logging solutions, praying to God for a solution to logging, tweaking open-source logging programs, and discussing logging technology with various vendors. I’ve collected some of the more relevant articles on the topic and added some commentary and insight.

First and foremost, the primary problem of Log Management is often not the log(ging) portion, since most systems spew limitless log data; the problem is the management of the log(ged) data. You’ve heard it a thousand times: management styles vary by manager. Management of networks, servers, and security differs wildly. Networks and servers are managed by exception: they get attention when they need it, and otherwise they just purr and no one bothers them (until Patch Tuesday, or until Cisco has a bad day). Managing security is altogether different, because security for a given organization is the net sum of all the parts of said organization, and in order to properly manage security, information from all the parts must be reviewed on a routine basis.

This routine review of information is where the problem comes in, and it is why nearly every regulation, guideline, or contract stipulation (read: PCI DSS) has mandatory language about reviewing logs. Again, there is seldom anything about requiring systems to output logs, because that is rarely the problem. Technology is oftentimes misapplied in attempts to solve human-created problems; log management is an ideal instance for technology to come to our rescue. How else would a sane person cull through gigabytes or terabytes of log data from tens or hundreds of systems or network elements? (Don’t answer that question, I’ve done it myself.)

Logging resources

Dealing with the actual logs, and delving into whether you want to collect log data, consolidate it, correlate it, and/or perform event management, ushers in the next set of issues and challenges. Originally, logs might have been aggregated onto one server, or they might have been aggregated by log severity using syslog-ng or syslogd. Network or systems management systems (CiscoWorks or HP OpenView) might have received logs and performed parsing or filtering on key events that drove alerts and alarms, but these systems likely discarded the log data itself, so investigating the root cause of an alert or incident became next to impossible.

Compliance came onto the scene and brought innovation in log collection and log management; software became available to manage logs from multiple sources, but you had to pick your poison: Windows with an MS SQL back-end, Solaris or HP-UX with Oracle, or Linux and MySQL. Aggregation and consolidation were solved by tiered deployments, and agents were usually installed to siphon off Windows event data. Security hit the mainstream, and event correlation was the big buzzword.

The problem was that the intensive resources necessary to store the log data competed with the intensive resources necessary to correlate dissimilar log formats in order to produce alerts or suppress logged events. Gartner and others coined the term SIEM, which combined log collection and event correlation, and a new market arrived. Most SIEM products (SIM, then SEM, finally SIEM) are really good at the collection (historical) piece or really good at the correlation (real-time) piece, while few are good at both. Go, go Magic Quadrant. Rant: if you hadn’t already noticed, I’m not a fan of combining these technologies (I don’t mix my food while I’m eating, either). I like my logging solution to have plenty of evidence-preservation goodness, and I don’t want it muddied because a correlator had to normalize the data before it could parse, alarm on, or display the log data.

Some of the options for solving log management challenges

Just scratching the surface, more please?

Network Zoning (Parallel Dimensions)

Further expanding on the Zone concept, I created another diagram to demonstrate the layering that will be accomplished when the zones are implemented.

In effect, ZONE-NETCORE becomes the underlying fabric that enables the other zones in this series.

You certainly don’t need to secure your zones in this way; you can definitely just trunk your zones back to a firewall, a router, or a cluster of either, as indicated in this diagram. You can combine this diagram with the other and create clusters of zones, if you choose.

Trust becomes the deciding factor in the approach you take to zoning, followed by comfort with and understanding of the nuances of trunking versus routing. Oftentimes, the idea that multiple zones traverse the same physical link with only logical (software) separation is enough to drive implementations towards a routed model.

Network Zoning (The Zone)

I decided to start a series on Network Zoning after seeing too many “you need to zone your network” articles and best-practice guidelines that never give any aid to the reader on HOW to go about this, other than “segregate your network” or “firewall off the servers.” Before you attempt this, make sure you understand the limitations of your infrastructure and the concepts outlined below.

In this series, a ‘zone’ will be the LAN segment set aside for a specific function or IP range. This zone will route or switch to a firewall-like interface, which will provide the “networking” part of the puzzle between the various zones. Depending on the ruleset, data from one zone may or may not be transported to another zone.

Make sure you’re familiar with TCP/IP, networking, VLANs, IP addressing, and subnets; you don’t need to be an expert to proceed. The examples, configurations, and how-to will assume you’re working with Cisco equipment. If you need help with other equipment, you’re welcome to drop me a line.

In this example, the zoning will be accomplished by assigning like resources to a specific VLAN and having these VLANs terminate on a router/firewall interface where access decisions can be made in accordance with local security policies and practices (see the Basic Network Zoning diagram).

First things first.

Zoning implies some grouping of computing resources. This grouping could be by location, function, purpose, access type, subnet, etc. Because this is my how-to, I’m going to zone according to functional area and subnet. Take a look at this basic diagram where users, administrators, application servers, and sensitive data servers are zoned off and tie back to a firewall.

Functional zoning is very beneficial once the firewall side comes into play, because the access rules and restrictions should be largely similar across a specific zone. None of the servers should need Y! Instant Messenger access, but the desktop users might run The Weather Channel on their desktops. Zoning like resources also makes management of the firewalling and routing simpler over time, because the zoned areas can be extended without reinventing the wheel.

Once you know how you want to group the resources, it is important to describe and qualify what is unique and different about each grouping. This ensures that your groups don’t overlap and preserves the groupings going forward. Depending on your experience with firewalling and routing, at this point you may want to begin clarifying what each zone can and cannot access; for instance, the sensitive data servers DO NOT surf the web or have access to email.

The example here assumes the subnets of these groups allow for grouping or summarization; this may or may not be the case in your environment, but if you have the option, it does simplify things in other areas as well. For my example, I have allocated 256 IP addresses to each of the two server zones and to the administrator zone, and 1,024 IP addresses to the user zone:

  • Zone – APPS = 10.0.0.0/24 (10.0.0.0 – 10.0.0.255) [256 Server IP Addresses]
    • Description: Zone dedicated to application servers and services, no end-users and no sensitive customer data
    • Examples: Intranet server, Email server, File server
  • Zone – SENSITIVE = 10.0.1.0/24 (10.0.1.0 – 10.0.1.255) [256 Server IP Addresses]
    • Description: Zone dedicated to servers that contain sensitive customer data (could also be employee data)
    • Examples: Oracle database server
  • Zone – SYSADMIN = 10.0.2.0/24 (10.0.2.0 – 10.0.2.255) [256 System Administrator IP Addresses]
    • Description: Zone dedicated to privileged administrators of systems, applications, or infrastructure, requires extra access to servers, network elements, etc.
    • Examples: Network Management Team, Firewall Administrators, Database Administrators, etc.
  • Zone – USERS = 10.0.4.0/22 (10.0.4.0 – 10.0.7.255) [1,024 Desktop User IP Addresses]
    • Description: Zone dedicated to the general user base
    • Example: Average Joe user

These examples and zones will not apply to every organization. They are hypothetical and designed to get your imagination flowing. The key ingredient is being able to combine ‘like users’ and ‘like access’. The zone members will be placed into their own VLAN and will not be able to talk to devices outside their VLAN unless a router or firewall allows them to do so. In the example above, this VLAN configuration will allow the use of routed VLAN interfaces and switched VLAN interfaces. The difference between routed and switched interfaces is that switched interfaces only talk to similar switched interfaces in the same zone, while traffic between zones must pass through a routed interface (ZONE-USERS talks to ZONE-USERS over a switched interface, but ZONE-USERS cannot talk to ZONE-APPS without passing through a routed interface).

There is one more zone to introduce, one that I have come to rely on heavily: ZONE-NETCORE. If you think about it, you’ve zoned the users, the applications, and the servers. What about the network?

  • Zone – NETCORE = 10.255.0.0/24 (10.255.0.0 – 10.255.0.255) [256 Network Core IP Addresses]
    • Description: Zone dedicated to network interface on routers to facilitate core communications and isolate zones
    • Examples: each router has an interface on this Zone

ZONE-NETCORE is not required, but it serves to isolate the VLANs from one another across the core of the network; without this zone, each VLAN (aka zone) must come all the way back to the firewall, as indicated in the basic diagram. This approach creates communities that are aggregated and connected via conduits, as depicted here. Note that each zone is self-contained and isolated from other zones before reaching the firewall.
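With the zones on paper, it is worth sanity-checking the plan before touching a switch. The sketch below uses Python’s ipaddress module against the hypothetical addressing from this example (substitute your own) to list each zone’s size and confirm that no two zones overlap; overlapping zones make the firewall rules ambiguous and defeat the isolation you are after.

from ipaddress import ip_network

# Hypothetical zone plan from this example; substitute your own addressing.
ZONES = {
    "ZONE-APPS":      ip_network("10.0.0.0/24"),
    "ZONE-SENSITIVE": ip_network("10.0.1.0/24"),
    "ZONE-SYSADMIN":  ip_network("10.0.2.0/24"),
    "ZONE-USERS":     ip_network("10.0.4.0/22"),
    "ZONE-NETCORE":   ip_network("10.255.0.0/24"),
}

# Show each zone's size, then flag any pair of zones that overlaps.
for name, net in ZONES.items():
    print(f"{name:15} {net}  ({net.num_addresses} addresses)")

names = list(ZONES)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        if ZONES[a].overlaps(ZONES[b]):
            print(f"WARNING: {a} overlaps {b}")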

Now we have zones defined and understand basic functionality inside the zones. How are the VLANs set up? What security do we need for each zone, and how is that accomplished?

Stay tuned…

Leaking at the speed of light

I read in disbelief: “Details are emerging of security leaks at the White House which have shut down an internet spying operation that had successfully cracked al-Qaeda’s computers… Within an hour of the publicity al-Qaeda’s intranet was taken offline.”

An organization formed to study and report on terrorist activities, the Site Institute, accessed al-Qaeda’s intranet and obtained the recent Bin Laden video before it was released to the press. Somehow screenshots of this intranet found their way to Fox News and other news organizations, and shortly thereafter the intranet was shut down. Way to go!!!

“Techniques that took years to develop are now ineffective and worthless,” Rita Katz, Site’s founder, told the Washington Post. “The US government was responsible for the leak of this document.”  — VNUNet

Newsflash: the bad guys have cable and satellite tv too!

iPhone: Cracking the Dream

Moore is the man. I have lost count of the number of times I have uttered those words. I am a huge fan of Metasploit and the framework it provides is unrivaled. I recently wrote about the hacking platform that an iPhone provides, noting it would be a great tool for a bad guy. Moore is a man on a mission…

HDM has an updated ARM hack that promises to take over all iPhones, but for now takes over modified iPhones. Techie speak here, English here.

We can store our shellcode at offset 0x12C and patch the return value with 0x0006b400 + 0xA4 to return back to it. A quick test, by setting offset 0x12C to 0xffffffff (an invalid instruction), demonstrates that this works. We have successfully exploited the iPhone libtiff vulnerability using a return-to-libc back to memcpy().

Modified iPhones make this stack/heap overflow easier to accomplish, while “native” iPhones require some additional manipulation to consistently produce the exploit.

This attack exploits libtiff (the TIFF image library in OS X) by writing to the stack a memory location that is writable and then executing that code (a gross oversimplification). The manner in which this exploit is delivered opens the door for other exploits and shows how research to “modify” the iPhone for freedom from AT&T can be used to 0wn the iPhone!

Metasploit continues to be a great tool for “evaluating” security of just about anything:

While using a hex editor to write this exploit is possible, the Metasploit Framework provides a much easier method of testing different contents for the TIFF file. A working exploit for this flaw (successfully tested on 1.00 and 1.02 firmwares) can be found in the development version of the Metasploit Framework (available via Subversion).

Governator Terminates Data Protection Law

I love the Terminator movies, Arnold is great in them. (No flames please!)

He apparently has some savvy advisers who have flexed their political and technical muscle in a way similar to Arnold’s physical: see Governor Kills California Data Protection Law. I find this line of logic amazing, especially given that Arnold is supposed to be some dumb jock elected Governor of California:

However, the current version of the bill, Schwarzenegger said, “attempts to legislate in an area where the marketplace has already assigned responsibilities and liabilities that provide for the protection of consumers. In addition, the Payment Card Industry has already established minimum data security standards when storing, processing, or transmitting credit or debit cardholder information.”

The governor argued that “the industry”—presumably a reference to credit card companies and the PCI Council—is in a better position to know what is realistic and reasonable for credit card security. Also, he said, signing such a bill could actually create a conflict.

“This industry has the contractual ability to mandate the use of these standards, and is in a superior position to ensure that these standards keep up with changes in technology and the marketplace,” he said. “This measure creates the potential for California law to be in conflict with private sector data security standards.”   —eWeek / Security Focus

I know security experts who couldn’t come up with the logic behind that statement. I’m not a fan of legislating everything, and I think what the Payment Card Industry is doing with Data Security is great, if significantly late to the game.

The major problem I have with the PCI DSS requirements is the subjectivity of assessment, audit, and enforcement. If the PCI DSS actually had teeth, then the breaches we read about would be less likely to occur because of the associated financial impact. For instance, if a merchant couldn’t process credit cards due to noncompliance with the PCI DSS, they would be significantly more interested in complying with it.

Go Arnold!!!

BlackBerry: International Exposure?

George Gardiner wrote an editorial post on Oct 5th for IT Week noting the potential security and legal issues surrounding RIM’s Canadian data center, which receives nearly all BlackBerry traffic to and from BlackBerry handhelds.

Having worked in the telecommunications industry, it came as no surprise to learn that nearly all customers and carriers accessing or providing BlackBerry services are routed through RIM’s Canadian data center. George’s concerns centered on differences in privacy, security, and intercept laws between the US, Canada, the UK, the EU, etc. (Original here, cached here)

This concern stands in stark contrast to the security and privacy offered by the BlackBerry handhelds themselves. Almost all of them offer hardware-based 3DES or AES encryption (AES is the current de facto standard), and each handheld can encrypt ALL the content on the BlackBerry. Your data communications channel is encrypted back to RIM over your carrier’s network, and your carrier likely has a dedicated data circuit connecting it directly to RIM’s data center.

The article suggests your data may be at additional risk of exposure because it lives in a central point outside your carrier’s control, outside your direct control, and likely outside your country. Newsflash: because your data is now in Canada, it may not enjoy the same protections from interception that are available in the US, UK, or EU. I read some interesting forum postings about the article, so I thought I’d pile on to the blog side of it.

Securing WiFi

Wireless is everywhere. McDonald’s and Starbucks come to mind as popular WiFi hot spots. Hacking wireless has become a major threat for businesses and consumers. In response to the rampant abuse of insecure wireless access points, legislation was passed requiring wireless manufacturers to provide details on securing wireless services.

In case you haven’t heard, WEP is not secure. In fact, WEP was NEVER designed to secure WiFi networks; it was originally released to provide a privacy measure. Just how insecure is it? The FBI demonstrated how to break into a WiFi network running WEP at a security conference two years ago, using tools downloaded off the Internet.

WPA must be better, right? Joshua Wright wrote a program to help break WPA security, called coWPAtty. It is based on capturing packets and brute-forcing the passphrase used. This can be very time consuming, so rainbow tables can be used in some instances to speed up the cracking process significantly.
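The time sink is the key derivation itself: for every passphrase guess, WPA/WPA2-PSK runs 4,096 rounds of PBKDF2-HMAC-SHA1 with the SSID as the salt, and the candidate key is then checked against a captured four-way handshake. Because the SSID is the salt, precomputed rainbow tables only help when they were built for that exact SSID. A minimal Python sketch of the derivation (the passphrase and SSID are made-up examples):

import hashlib

def wpa_psk_pmk(passphrase: str, ssid: str) -> bytes:
    """Derive the 256-bit Pairwise Master Key from a WPA/WPA2 passphrase."""
    # PBKDF2-HMAC-SHA1, 4096 iterations, SSID as the salt, 32-byte output
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, dklen=32)

print(wpa_psk_pmk("correct horse battery staple", "HomeNet").hex())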

The easiest way to get started evaluating the security of wireless networks is to grab a WHAX, Knoppix, or BackTrack Live CD and combine it with an Atheros-based WiFi card in your laptop. BackTrack would be my preference because it has other tools for use after WiFi access has been obtained.

In order to keep your WPA or WPA2 network secure, you should use long passphrases with random characters: upper- and lower-case letters, numbers, symbols, and spaces, not based on dictionary words or common phrases (a quick generator sketch follows the list below). Some additional measures to consider:

  1. MAC filtering can help restrict access, but it can be overcome if the attacker is savvy enough, so don’t use it alone.
  2. Most WiFi routers allow you to disable DHCP or limit the number of addresses handed out by the router; limiting the pool of available DHCP addresses can help.
  3. Some WiFi routers also allow static DHCP assignments, so your laptop always gets the same IP Address.
  4. Some WiFi routers provide options for static routing; routing non-DHCP IP addresses to a non-existent IP address can slow down the bad guys and stop would-be Internet freeloaders.
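As promised above, here is a quick sketch of a passphrase generator. It draws random printable characters (the 802.11i standard allows passphrases of 8 to 63 ASCII characters); the alphabet and the default length of 48 are arbitrary choices, and you can trim the symbol set if a device chokes on certain characters.

import secrets
import string

# Printable characters to draw from, including spaces; arbitrary choice.
ALPHABET = string.ascii_letters + string.digits + string.punctuation + " "

def random_passphrase(length: int = 48) -> str:
    """Generate a random WPA/WPA2 passphrase of the requested length."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(random_passphrase())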

Got any other helpful tips?