Blog Archives

Cisco IPS Event Viewer Database Hacking

Ever wished you could get different snapshots from the Cisco IEV tool? Has management ever asked what went on over the last 30 days when your management platform couldn’t help?

I found myself needing to provide a 30-day report from a Cisco IDSM2 blade. After finding no built-in option in the HTTPS servlet on the IDSM itself and nothing immediately available in the Event Viewer, I began to look around.

Cisco provides the freely available IPS Event Viewer (IEV) for their IPS/IDS products that makes use of Java to load alert data into MySQL tables for display in the stand-alone software. If you want a full-blown reporting engine and monitoring tool for Cisco IDS/IPS, you’ll need to look at MARS and then look elsewhere. (Anyone I work with will tell you I’m not fond of CS-MARS.)

I found the MySQL Admin widget (mysqladmin.exe) and looked at the databases and tables installed by IEV. I’ve spent a fair amount of time with SQL and MySQL databases, so I looked around the table structures to see if there was another option.

The database names and table names, along with their configuration, are viewable in the MySQLAdmin GUI. You could also do this with show commands at the mysql prompt: SHOW DATABASES, SHOW TABLES, DESCRIBE <table>.

Using the field names, I constructed the following query:

select FROM_UNIXTIME(receive_time/1000,'%c-%d-%Y') as date, count(sig_id) as counted, sig_name from event_realtime_table group by sig_name, sig_id, date order by date, counted;

Note the receive_time and event_time fields are Unix timestamps in milliseconds, not seconds. In the example above, I compensated by dividing by 1000, because I only needed calendar days. This results in the following response:
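The same adjustment applies anywhere you touch these timestamps outside of SQL. A quick sketch using shell arithmetic and GNU date (the timestamp value here is just an example, not real sensor data):

```shell
# receive_time values are epoch milliseconds; divide by 1000 before
# handing them to anything that expects seconds.
ms=1194307200000                 # example: 2007-11-06 00:00:00 UTC, in ms
secs=$(( ms / 1000 ))            # integer division drops the milliseconds
date -u -d "@$secs" +%m-%d-%Y    # GNU date; prints 11-06-2007
```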

+------------+---------+------------------------------------------+
| date       | counted | sig_name                                 |
+------------+---------+------------------------------------------+
| 11-06-2007 |       1 | FTP Authorization Failure                |
| 11-06-2007 |       1 | Storm Worm                               |
| 11-06-2007 |       1 | DNS Tunneling                            |
| 11-06-2007 |       2 | TCP Segment Overwrite                    |
| 11-06-2007 |      44 | TCP SYN Host Sweep                       |
| 11-07-2007 |       1 | SMB Remote Srvsvc Service Access Attempt |
| 11-07-2007 |       1 | SSH CRC32 Overflow                       |
| 11-07-2007 |       2 | MS-DOS Device Name DoS                   |
| 11-07-2007 |       2 | FTP PASS Suspicious Length               |
| 11-07-2007 |       3 | HTTP CONNECT Tunnel                      |
+------------+---------+------------------------------------------+

The event_realtime_table only contains the most recent data; depending on your setup this may be 1 day, 5 days, or 30 days. In my case, I only have 24 hours’ worth of data in the realtime table and have to look elsewhere for the prior 29 days.

If you’ve configured any archiving, you will need to tap into those extra tables in order to get the full 30 days. I elected to export all the tables to a single CSV file and do the parsing in Linux. Using the commands below, I created a file that contained the receive_time (MM/DD/YYYY), severity, sig_id, and sig_name:

tr -d '\47' < /tmp/exported.csv | awk 'BEGIN{FS=","} {print strftime("%x", $6*.001) "," $3 "," $9 "," $10}' > file.txt

This gave me a table of dates, severities, signature IDs, and signature names that I can use as needed. From here I used sort and uniq to count the duplicate rows and pre-format the results for loading into Excel as a chart:

sort file.txt | uniq -c | sort -k2 > excel_ready.csv

This results in a comma delimited file that can be loaded into Excel and used to create charts or graphs as needed. The commands can be scripted to run every 30 days on archived files, if necessary.

     75 09/30/07,0,3030,TCP SYN Host Sweep
      8 09/30/07,0,6250,FTP Authorization Failure
      3 09/30/07,1,2100,ICMP Network Sweep w/Echo
      4 09/30/07,1,3002,TCP SYN Port Sweep
      7 09/30/07,2,6066,DNS Tunneling
      2 09/30/07,3,1300,TCP Segment Overwrite
      3 09/30/07,3,3251,TCP Hijack Simplex Mode
      4 10/01/07,0,1204,IP Fragment Missing Initial Fragment
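The counting step can be sketched end-to-end on a couple of fabricated rows (the dates, IDs, and counts below are invented; real data comes from the export above):

```shell
# Fabricated sample of parsed file.txt rows (date,severity,sig_id,sig_name)
cat > /tmp/file.txt <<'EOF'
09/30/07,0,3030,TCP SYN Host Sweep
09/30/07,0,3030,TCP SYN Host Sweep
09/30/07,0,3030,TCP SYN Host Sweep
09/30/07,2,6066,DNS Tunneling
EOF

# Count identical rows, then re-sort by the data (field 2) rather than the count
sort /tmp/file.txt | uniq -c | sort -k2
# prints (count first; exact padding varies):
#   3 09/30/07,0,3030,TCP SYN Host Sweep
#   1 09/30/07,2,6066,DNS Tunneling
```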

You could easily import other fields; the ones of most interest to me were:

  • field 3         Alert Severity (0-4) [Informational – High]
  • field 4         Sensor name (if you have more than one sensor)
  • field 6         Timestamp epoch using milliseconds instead of seconds
  • field 9         Signature ID
  • field 10       Signature Name
  • field 14       Attacker IP Address
  • field 15       Attacker Port
  • field 17       Victim IP Address
  • field 18       Victim Port
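As a sketch of pulling some of those extra fields, here is the same awk approach run over one fabricated export row (the addresses and values are invented for illustration; the field positions follow the list above):

```shell
# One made-up export row; fields 3, 6, 9, 10, 14, and 17 follow the layout above
line='x,x,0,sensor1,x,1194307200000,x,x,3030,TCP SYN Host Sweep,x,x,x,192.0.2.10,2312,x,198.51.100.5,80'

# Print attacker IP, victim IP, and signature name
echo "$line" | awk 'BEGIN{FS=","} {print $14 " -> " $17 " (" $10 ")"}'
# 192.0.2.10 -> 198.51.100.5 (TCP SYN Host Sweep)
```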

About the Linux commands:

If these commands are new to you, or you’d like to understand more about using *nix tools to parse text, look here and here to get started, google Linux, or go to your favorite bookstore and buy an O’Reilly book.

The tr command used above removes the single quotes wrapped around each data element during the database export. This is done using the ASCII code for the single-quote character, and it is necessary before the data formatting in the awk command.
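A minimal illustration of the quote-stripping step on a fabricated row:

```shell
# \47 is octal for the ASCII single-quote character (0x27);
# tr -d deletes every occurrence from the stream.
printf "%s\n" "'09/30/07','0','3030'" | tr -d '\47'
# 09/30/07,0,3030
```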

The awk command uses several arguments to format the data. First, the FS element tells awk to use a comma as the field separator instead of the default whitespace. Once awk understands how to break up the fields, I format the receive_time field: awk sees receive_time as the sixth field and assigns it $6, and the other fields are addressed in the same sequential way. The print action tells awk which fields to display as output. I used strftime to convert the Unix timestamp back to human-readable time. There is a caveat here: you must account for millisecond timestamping versus the traditional “seconds from epoch,” which is why the field is multiplied by .001. Each operation in awk is separated by {}’s.
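A tiny illustration of the field-separator behavior on a fabricated row:

```shell
# By default awk splits on whitespace; setting FS in a BEGIN block
# makes it split on commas before the first record is read.
echo "09/30/07,0,3030" | awk 'BEGIN{FS=","} {print $3}'
# 3030
```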

I used standard sort and uniq to perform sorting and counting functions on the data I parsed using awk.


Network Zoning (The Zone)

I decided to start a series on Network Zoning after seeing too many “you need to zone your network” articles and best-practice guidelines that never give the reader any aid on HOW to go about this, other than “segregate your network” or “firewall off the servers.” Before you attempt this, make sure you understand the limitations of your infrastructure and the concepts outlined below.

In this series, a ‘Zone’ will be the LAN segment set aside for a specific function or IP Range. This zone will route or switch to a firewall-like interface which will provide the “networking” part of the puzzle between the various zones. Depending on the ruleset, data from one zone may or may not be transported to another zone.

Make sure you’re familiar with TCP/IP, Networking, VLANs, IP Addressing, and subnets; you don’t need to be an expert to proceed. The examples, configurations, and how-to will assume you’re working with Cisco equipment. If you need help with other equipment, you’re welcome to drop me a line.

In this example, the zoning will be accomplished by assigning like resources to a specific VLAN and having these VLANs end on a router/firewall interface where basic network zoning access decisions can be made in accordance with local Security Policies and Practices.

First things first.

Zoning implies some grouping of computing resources. This grouping could be by location, function, purpose, access type, subnet, etc. Because this is my how-to, I’m going to zone according to functional area and subnet. Take a look at this basic diagram where users, administrators, application servers, and sensitive data servers are zoned off and tie back to a firewall.

Functional zoning is very beneficial once the firewall side comes into play, because the access rules and restrictions should be largely similar across a given zone. The servers shouldn’t need Y! Instant Messenger access, but the Desktop Users might run The Weather Channel on their desktops. Zoning like resources also makes management of the firewalling and routing simpler over time, because the zoned areas can be extended without re-inventing the wheel.

Once you know how you want to group the resources, it is important to describe and qualify what is unique and different about each grouping. This ensures that your groups don’t overlap as well as preserves the groupings going forward. Depending on your experience with firewalling and routing, at this point you may want to begin clarifying what each zone can and cannot access, for instance – the Sensitive Data Servers DO NOT surf the web or have access to email.

The example here assumes the subnets of these groups allow for grouping or summarization. This may or may not be the case in your environment, but if you have the option, it simplifies things in other areas as well. For my example, I have allocated 256 IP addresses to each of the two server zones and to the administrator zone, and 1,024 IP addresses for the user zone:

  • Zone – APPS = 10.0.0.0/24 (10.0.0.0 – 10.0.0.255) [256 Server IP Addresses]
    • Description: Zone dedicated to application servers and services, no end-users and no sensitive customer data
    • Examples: Intranet server, Email server, File server
  • Zone – SENSITIVE = 10.0.1.0/24 (10.0.1.0 – 10.0.1.255) [256 Server IP Addresses]
    • Description: Zone dedicated to servers that contain sensitive customer data (could also be employee data)
    • Examples: Oracle database server
  • Zone – SYSADMIN = 10.0.2.0/24 (10.0.2.0 – 10.0.2.255) [256 System Administrator IP Addresses]
    • Description: Zone dedicated to privileged administrators of systems, applications, or infrastructure, requires extra access to servers, network elements, etc.
    • Examples: Network Management Team, Firewall Administrators, Database Administrators, etc.
  • Zone – USERS = 10.0.4.0/22 (10.0.4.0 – 10.0.7.255) [1,024 Desktop User IP Addresses]
    • Description: Zone dedicated to the general user base
    • Example: Average Joe user
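As a quick sanity check on those block sizes, the address count for a prefix length p is 2^(32 − p); a one-liner in shell arithmetic:

```shell
# Number of addresses in a block is 2^(32 - prefix_length);
# a left shift computes the power of two.
echo $(( 1 << (32 - 24) ))   # /24 -> 256 addresses
echo $(( 1 << (32 - 22) ))   # /22 -> 1024 addresses
```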

These examples and zones will not apply to every organization. They are hypothetical and designed to get your imagination flowing. The key ingredient is being able to combine ‘like users’ and ‘like access’. The zone members will be placed into their own VLAN and will not be able to talk to devices outside their VLAN unless a router or firewall allows them to do so. In the example above, this VLAN configuration will allow the use of routed VLAN interfaces and switched VLAN interfaces. The difference between the two: switched interfaces only talk to similar switched interfaces (ZONE-USERS talks to ZONE-USERS over a switched interface, while ZONE-USERS cannot talk to ZONE-APPS without passing through a routed interface).
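If it helps to see what that looks like on the wire, here is a minimal, hypothetical Cisco IOS sketch: one VLAN per zone, with the user zone ending on a routed VLAN interface (SVI) where an access list can enforce zone policy. The VLAN numbers, names, addresses, and ACL entries are all invented for illustration, not a recommended configuration:

```
! One VLAN per zone; numbering is arbitrary
vlan 10
 name ZONE-APPS
vlan 20
 name ZONE-SENSITIVE
vlan 30
 name ZONE-SYSADMIN
vlan 40
 name ZONE-USERS
!
! Routed VLAN interface (SVI) for the user zone
interface Vlan40
 description ZONE-USERS routed interface
 ip address 10.0.4.1 255.255.252.0
 ip access-group ZONE-USERS-IN in
!
! Users never reach the sensitive zone directly; everything else routes
ip access-list extended ZONE-USERS-IN
 deny   ip any 10.0.1.0 0.0.0.255
 permit ip any any
```

In a tighter design the final permit would be narrowed to exactly what users need; the point is that the SVI is the single place where inter-zone decisions get made.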

There is one more zone to introduce, one that I have come to rely on heavily: ZONE-NETCORE. If you think about it, you’ve zoned the users, the applications, and the servers. What about the network?

  • Zone – NETCORE = 10.255.0.0/24 (10.255.0.0 – 10.255.0.255) [256 Network Core IP Addresses]
    • Description: Zone dedicated to network interface on routers to facilitate core communications and isolate zones
    • Examples: each router has an interface on this Zone

The ZONE-NETCORE is not required, but it serves to isolate the VLANs from one another across a core network. Without this zone, each VLAN (aka zone) must come all the way back to the firewall, as indicated in the basic diagram. This approach creates communities that are aggregated and connected via conduits, as depicted here. Note each zone is self-contained and isolated from other zones before reaching the firewall.

Now we have zones defined and understand basic functionality inside the zones. How are the VLANs setup? What security do we need for each zone and how is that accomplished?

Stay tuned…