Monthly Archives: November 2007

PCI: How to break the piggy bank!

For any company, merchants included, that hasn’t been meeting the PCI Data Security Standards, I have some bad news: you’re about to spend at least twice the money you never spent on security and privacy in the first place.

If you purchased bargain-basement discounted Point-of-Sale systems, you’re in for a surprise courtesy of Visa’s Application Security requirements. In fact, PCI has adopted these standards and will begin enforcing them in 2008 and 2009. The requirements associated with the Application Security portion will target all levels, not just the top tier.

If you lucked out in the PCI Compliance lottery by outsourcing everything to do with credit card data, that arrangement is about to get more expensive, because the outsourcer will pass its own compliance costs on to its customers.

If you seriously need to get up to speed quickly, I would advise following the steps outlined in a recent SearchSecurity article. I’ve numbered the steps as Mike Rothman presented them in the article:

  1. First, pick off the low-hanging fruit such as Requirement 1, which is to have a firewall to protect cardholder data, and Requirement 5, which mandates the use and updating of antivirus software.
  2. Next, address Requirement 2, which is to change default passwords and other security parameters.
  3. Also take a look at Requirement 4, which requires encryption to protect cardholder data that is sent over open networks. Simply using SSL allows an organization to check the box on that requirement.
  4. After picking off the simplest stuff, address the requirements that can be difficult or nebulous, like Requirement 3 to protect stored cardholder data, or Requirement 6 to develop and maintain secure systems and applications.

The last thing you want to do is attempt PCI Compliance blindfolded. You and your team need to understand the requirements before you attempt to comply, and before you bring in any outside consultant to document data flows or perform site assessments, because it is easy to go broke (or break the company) complying with PCI.

PCI Ramblings

Practicing security is not the art it used to be. I read an article on Ambersail’s blog that reminded me of the youth soccer team I used to coach.

In particular, I was struck by the similarity between people’s attitude towards security, and a group of kids playing football. Somebody kicks the ball, and the other 21 players chase after it. No strategy, no gameplan, no big picture. Everyone likes to think they have the answer (me included, of course) and that’s what they pitch in with. But in the end, it’s just a single kick – and off we all go again, chasing the ball. [Ambersail Blog]

The post was in response to the Fasthosts breach, reported in The Register, but what struck me as I read the Ambersail post was just how true the point was, and how the 7- and 8-year-old soccer kids I coached a few years back all blindly followed the ball around and would rarely get in front of it to stop it. Comments and suggestions can be helpful after a breach, but they’re far more powerful BEFORE the breach.

It’s been said hindsight is 20/20; security is no different. What should and shouldn’t be done from a security perspective becomes painfully clear after a breach happens. The same is true for almost any operational environment where something has gone amiss.

Rarely are there huge “Ah hah!” moments in our day and age, where the lessons learned following an incident or breach are new discoveries. I’ve said it before: the security landscape ends up being the sum of the compromises and concessions a company makes.

Most often, the very things that lead to breaches, compromises, or even operational failures are the result of business decisions made to reduce cost, lower support impacts, be user friendly, or reduce the operational burdens associated with observing appropriate security and privacy controls. Obviously this doesn’t account for 100% of security breaches, but certainly more than half of reported breaches could have been prevented by proper security controls.

This is one reason security requirements are showing up everywhere, and it is also why, just like my soccer kids, everyone will keep chasing the ball no matter where it goes; the breaches will continue until someone gets in front of the ball!

Cisco IPS Event Viewer Database Hacking

Ever wish you could get different snapshots out of the Cisco IEV tool? Has management ever asked what went on over the last 30 days, only for your management platform to come up empty?

I found myself needing to provide a 30-day report from a Cisco IDSM2 blade. After finding no built-in option in the HTTPS servlet on the IDSM itself and nothing immediately available in the Event Viewer, I began to look around.

Cisco provides the freely available IPS Event Viewer (IEV) for their IPS/IDS products; it uses Java to load alert data into MySQL tables for display in the stand-alone software. If you want a full-blown reporting engine and monitoring tool for Cisco IDS/IPS, you’ll need to look at MARS and then look elsewhere. (Anyone I work with will tell you I’m not fond of CS-MARS.)

I found the MySQL Admin widget (mysqladmin.exe) and looked at the databases and tables installed by IEV. I’ve spent a fair amount of time with SQL and MySQL databases, so I looked around the table structures to see if there was another option.

The database names and table names, along with their configuration, are viewable in the MySQLAdmin GUI. You could also do this with show commands at the mysql prompt: show databases, show tables, and describe <table_name>.
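If you’d rather explore from the mysql prompt than the GUI, the same information is a few commands away. This is a minimal sketch; the database name is whatever show databases reports for your IEV install (I’m not quoting it from memory), so substitute accordingly:

mysql> show databases;
mysql> use <iev_database>;
mysql> show tables;
mysql> describe event_realtime_table;

The describe output is where the field names used in the query below come from.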

Using the field names, I constructed the following query:

select FROM_UNIXTIME(receive_time/1000,'%c-%d-%Y') as date, count(sig_id) as counted, sig_name from event_realtime_table group by sig_name, sig_id, date order by date, counted;

Note that the receive_time and event_time fields are Unix timestamps in milliseconds, not seconds. In the example above, I compensated by dividing by 1000, because I only needed calendar days. This results in the following response:

+------------+---------+-------------------------------------------+
| date       | counted | sig_name                                  |
+------------+---------+-------------------------------------------+
| 11-06-2007 |       1 | FTP Authorization Failure                 |
| 11-06-2007 |       1 | Storm Worm                                |
| 11-06-2007 |       1 | DNS Tunneling                             |
| 11-06-2007 |       2 | TCP Segment Overwrite                     |
| 11-06-2007 |      44 | TCP SYN Host Sweep                        |
| 11-07-2007 |       1 | SMB Remote Srvsvc Service Access Attempt  |
| 11-07-2007 |       1 | SSH CRC32 Overflow                        |
| 11-07-2007 |       2 | MS-DOS Device Name DoS                    |
| 11-07-2007 |       2 | FTP PASS Suspicious Length                |
| 11-07-2007 |       3 | HTTP CONNECT Tunnel                       |
+------------+---------+-------------------------------------------+
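If the realtime table ever holds more history than you want, the same query can be limited to a window. This is just a sketch built on the same schema; since receive_time is in milliseconds, the cutoff from UNIX_TIMESTAMP() has to be multiplied by 1000:

select FROM_UNIXTIME(receive_time/1000,'%c-%d-%Y') as date, count(sig_id) as counted, sig_name from event_realtime_table where receive_time >= UNIX_TIMESTAMP(NOW() - INTERVAL 30 DAY) * 1000 group by sig_name, sig_id, date order by date, counted;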

The event_realtime_table only contains the most recent data; depending on your setup this may be 1 day, 5 days, or 30 days. In my case, I only have 24 hours’ worth of data in the realtime table and have to look elsewhere for the prior 29 days.

If you’ve configured any archiving, you will need to tap into those extra tables in order to get the full 30 days. I elected to export all the tables to a single CSV file and do the parsing in Linux.
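The export itself can be done from the IEV tool or straight out of MySQL. I won’t swear to the exact mechanism, but an INTO OUTFILE statement along these lines (archive table name assumed; run it once per archive table) produces a comparable file with every field wrapped in single quotes:

select * from <archive_table> into outfile '/tmp/exported.csv' fields terminated by ',' enclosed by '\'' lines terminated by '\n';

Using the command below against that export, I created a file that contained the receive_time (MM/DD/YYYY), severity, sig_id, and sig_name: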

tr -d '\047' < /tmp/exported.csv | awk '{FS=","}{print strftime("%x", $6*.001)","$3","$9","$10}' > file.txt

This gave me a table of dates, severities, signature IDs, and signature names that I can use as needed. From here I used awk to mangle the columns and pre-format the results for loading into Excel as a chart:

awk '{FS=","} {print $1,$2}' file.txt | sort -rn | uniq -c | awk '{print $3","$2","$1}' | sort > excel_ready.csv

This results in a comma-delimited file that can be loaded into Excel and used to create charts or graphs as needed. The commands can be scripted to run every 30 days against archived files if necessary; a rough sketch of that follows the sample output below.

     75 09/30/07,0,3030,TCP SYN Host Sweep
      8 09/30/07,0,6250,FTP Authorization Failure
      3 09/30/07,1,2100,ICMP Network Sweep w/Echo
      4 09/30/07,1,3002,TCP SYN Port Sweep
      7 09/30/07,2,6066,DNS Tunneling
      2 09/30/07,3,1300,TCP Segment Overwrite
      3 09/30/07,3,3251,TCP Hijack Simplex Mode
      4 10/01/07,0,1204,IP Fragment Missing Initial Fragment
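
For the “script it every 30 days” case mentioned above, something as simple as the following shell loop works. This is a rough sketch only; the /tmp/exports path and the .csv naming are assumptions, so point it at wherever your exported archive files actually land:

#!/bin/sh
# run the same tr/awk pipeline over each archived export
for f in /tmp/exports/*.csv; do
  out=`basename $f .csv`_excel_ready.csv
  tr -d '\047' < $f \
    | awk '{FS=","}{print strftime("%x", $6*.001)","$3","$9","$10}' \
    | awk '{FS=","} {print $1,$2}' | sort -rn | uniq -c \
    | awk '{print $3","$2","$1}' | sort > $out
done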

You could easily import other fields. The ones of most interest to me were the following (a quick sketch using a couple of them follows the list):

  • field 3     Alert Severity (0-4) [Informational – High]
  • field 4     Sensor name (if you have more than one sensor)
  • field 6     Timestamp (epoch in milliseconds instead of seconds)
  • field 9     Signature ID
  • field 10    Signature Name
  • field 14    Attacker IP Address
  • field 15    Attacker Port
  • field 17    Victim IP Address
  • field 18    Victim Port

About the Linux commands:

If these commands are new to you, or you’d like to understand more about using *nix tools to parse text, look here and here to get started, Google for Linux text-processing tutorials, or go to your favorite bookstore and buy an O’Reilly book.

The tr command used above removes the single quotes wrapped around each data element during the database export. This is done using the ASCII code for the single-quote character (octal 047). Removing them is necessary for the data formatting in the awk command.

The awk command uses several arguments to format the data. First, the FS element tells awk to use a comma as the field separator instead of the default whitespace. Once awk knows how to break up the fields, I format the receive_time field. awk sees receive_time as the sixth field and assigns it $6; the other fields are addressed in the same sequential manner. The print action tells awk to display the selected fields as output. I used strftime() to convert the Unix timestamp back to human-readable time. There is a caveat here: you must account for millisecond timestamping versus the traditional “seconds from epoch” timestamping. Each action in awk is enclosed in {}’s.
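
To see the millisecond caveat in isolation, feed awk a single timestamp; multiplying by .001 (the same as dividing by 1000) turns the value back into seconds before strftime sees it. The timestamp below is just an example value and prints a date in early November 2007 (exact format depends on your locale and timezone):

echo 1194393600000 | awk '{print strftime("%x", $1*.001)}'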

I used standard sort and uniq to perform sorting and counting functions on the data I parsed using awk.