Blog Archives

Cisco IPS Event Viewer Database Hacking

Ever wished you could get different snapshots out of the Cisco IEV tool? Has management ever asked what went on over the last 30 days, and your monitoring platform couldn't help?

I found myself needing to provide a 30-day report from a Cisco IDSM2 blade. After finding no built-in option in the HTTPS servlet on the IDSM itself, and nothing immediately available in the Event Viewer, I began to look around.

Cisco provides the freely available IPS Event Viewer (IEV) for their IPS/IDS products. It uses Java to load alert data into MySQL tables for display in the stand-alone software. If you want a full-blown reporting engine and monitoring tool for Cisco IDS/IPS, you'll need to look at MARS and then look elsewhere. (Anyone I work with will tell you I'm not fond of CS-MARS.)

I found the MySQL Admin widget (mysqladmin.exe) and looked at the databases and tables installed by IEV. I've spent a fair amount of time with SQL and MySQL databases, so I looked around the table structures to see if there was another option.

The database and table names, along with their configuration, are viewable in the MySQLAdmin GUI. You could also do this with show commands at the mysql prompt: show databases, show tables, and describe <table>. A quick sketch follows.
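From the mysql prompt, the exploration looks something like this (the exact database name may differ between IEV versions, so treat the placeholder as an assumption):

show databases;
use <iev_database>;
show tables;
describe event_realtime_table;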

Using the field names, I constructed the following query:

select FROM_UNIXTIME(receive_time/1000,'%c-%d-%Y') as date, count(sig_id) as counted, sig_name from event_realtime_table group by sig_name, sig_id, date order by date, counted;

Note the receive_time and event_time fields are Unix timestamps in milliseconds, not seconds. In the example above, I compensated by dividing by 1000, since I only needed calendar days. This results in the following response:

+------------+---------+-------------------------------------------+
| date       | counted | sig_name                                  |
+------------+---------+-------------------------------------------+
| 11-06-2007 |       1 | FTP Authorization Failure                 |
| 11-06-2007 |       1 | Storm Worm                                |
| 11-06-2007 |       1 | DNS Tunneling                             |
| 11-06-2007 |       2 | TCP Segment Overwrite                     |
| 11-06-2007 |      44 | TCP SYN Host Sweep                        |
| 11-07-2007 |       1 | SMB Remote Srvsvc Service Access Attempt  |
| 11-07-2007 |       1 | SSH CRC32 Overflow                        |
| 11-07-2007 |       2 | MS-DOS Device Name DoS                    |
| 11-07-2007 |       2 | FTP PASS Suspicious Length                |
| 11-07-2007 |       3 | HTTP CONNECT Tunnel                       |
+------------+---------+-------------------------------------------+

The event_realtime_table only contains the most recent data; depending on your setup this may be 1 day, 5 days, or 30 days. In my case, I only have 24 hours' worth of data in the realtime table and have to look elsewhere for the prior 29 days.
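A quick way to check exactly how far back your realtime table reaches is to pull the oldest and newest timestamps (the same divide-by-1000 caveat applies):

select FROM_UNIXTIME(min(receive_time)/1000) as oldest, FROM_UNIXTIME(max(receive_time)/1000) as newest from event_realtime_table;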

If you've configured any archiving, you will need to tap into those extra tables to get the full 30 days. I elected to export all the tables to a single CSV file and do the parsing in Linux. Using the command below, I created a file containing the receive_time (MM/DD/YYYY), severity, sig_id, and sig_name:

tr -d '\47' < /tmp/exported.csv | awk 'BEGIN{FS=","} {print strftime("%x", $6*.001)","$3","$9","$10}' > file.txt

This gave me a table of dates, severities, signature IDs, and signature names that I can use as needed. From here, I used awk to mangle the columns and pre-format the results for loading into Excel as a chart:

awk 'BEGIN{FS=","} {print $1,$2}' file.txt | sort -rn | uniq -c | awk '{print $3","$2","$1}' | sort > excel_ready.csv

This results in a comma-delimited file that can be loaded into Excel and used to create charts or graphs as needed. The commands can be scripted to run every 30 days on archived files, if necessary.

     75 09/30/07,0,3030,TCP SYN Host Sweep
      8 09/30/07,0,6250,FTP Authorization Failure
      3 09/30/07,1,2100,ICMP Network Sweep w/Echo
      4 09/30/07,1,3002,TCP SYN Port Sweep
      7 09/30/07,2,6066,DNS Tunneling
      2 09/30/07,3,1300,TCP Segment Overwrite
      3 09/30/07,3,3251,TCP Hijack Simplex Mode
      4 10/01/07,0,1204,IP Fragment Missing Initial Fragment
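If you do script this, a minimal wrapper along these lines would work; the /tmp/*.csv glob and output file names are placeholders for wherever your archive exports actually land:

#!/bin/sh
# Parse every exported IEV archive CSV into one combined file
for f in /tmp/*.csv; do
    tr -d '\47' < "$f" | awk 'BEGIN{FS=","} {print strftime("%x", $6*.001)","$3","$9","$10}'
done > all_events.txt
# Count and reorder for Excel, same pipeline as above
awk 'BEGIN{FS=","} {print $1,$2}' all_events.txt | sort -rn | uniq -c | awk '{print $3","$2","$1}' | sort > excel_ready.csv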

You could easily import other fields; the ones of most interest to me were the following (see the sketch after this list for an example using the attacker and victim fields):

  • field 3: Alert Severity (0-4) [Informational – High]
  • field 4: Sensor name (if you have more than one sensor)
  • field 6: Timestamp epoch using milliseconds instead of seconds
  • field 9: Signature ID
  • field 10: Signature Name
  • field 14: Attacker IP Address
  • field 15: Attacker Port
  • field 17: Victim IP Address
  • field 18: Victim Port
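As an example, here is the parsing command extended to pull the attacker and victim addresses (fields 14 and 17) alongside the signature name; the field positions assume the same export layout as above:

tr -d '\47' < /tmp/exported.csv | awk 'BEGIN{FS=","} {print strftime("%x", $6*.001)","$14","$17","$10}' | sort | uniq -c | sort -rn > talkers.txt

Piping through sort and uniq -c like this gives a quick top-talkers list per day.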

About the Linux commands:

If these commands are new to you, or you'd like to understand more about using *nix tools to parse text, look here and here to get started, Google "Linux", or go to your favorite bookstore and buy an O'Reilly book.

The tr command used above removes the single quotes wrapped around each data element during the database export; it does this by referring to the single-quote character by its ASCII code (octal 47, hence the \47). Stripping the quotes is necessary before the data formatting in the awk command.

The awk command uses several pieces to format the data. First, setting FS in a BEGIN block tells awk to use a comma as the field separator instead of the default whitespace, and ensures it takes effect before the first record is read. Once awk knows how to break up the fields, I format the receive_time field. awk sees receive_time as the sixth field and refers to it as $6; the other fields are addressed in the same sequential fashion. The print action tells awk which fields to emit as output. I used strftime() to convert the Unix timestamp back to human-readable time; note that strftime() is a gawk extension. There is a caveat here: you must account for the millisecond timestamps versus traditional seconds-from-epoch timestamping, which is why the command multiplies $6 by .001. Each action block in awk is enclosed in braces.
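To see those pieces in isolation, here is a contrived record you can paste at a shell prompt; the timestamp is made up for illustration, and the exact date printed will shift with your timezone (remember strftime() needs gawk):

echo "a,b,c,d,e,1194393600000" | awk 'BEGIN{FS=","} {print strftime("%m/%d/%Y", $6*.001)}'

This should print a date around 11/07/2007.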

I used the standard sort and uniq utilities to perform the sorting and counting on the data parsed by awk.