Thursday, June 25, 2009

BASE / ACID outdated reference links - a fix

Recently, with the changes to the snort.org site, the Snort mailing lists have been inundated with questions about the SID reference link and the fact that it no longer works. To partially compensate for this and to help the community, we have added an up-to-date tool at rootedyour.com that lets you once again have a valid Snort reference link.


In BASE, simply locate the following section of your base_conf.php:
/* Signature references */
$external_sig_link = array('bugtraq' => array('http://www.securityfocus.com/bid/', ''),
'snort' => array('http://www.snort.org/pub-bin/sigs.cgi?sid=', ''),
'cve' => array('http://cve.mitre.org/cgi-bin/cvename.cgi?name=', ''),
'arachnids' => array('http://www.whitehats.com/info/ids', ''),
'mcafee' => array('http://vil.nai.com/vil/content/v_', '.htm'),
'icat' => array('http://icat.nist.gov/icat.cfm?cvename=CAN-', ''),
'nessus' => array('http://www.nessus.org/plugins/index.php?view=single&id=', ''),
'url' => array('http://', ''),
'local' => array('signatures/', '.txt'));


and modify the 'snort' line to match:
'snort' => array('http://www.rootedyour.com/snortsid?sid=', ''),
Once this is done, you are all set: the Snort documentation link in BASE will now take you to rootedyour.com and display the info for that SID.

If you want to do this in other applications, simply point them to http://www.rootedyour.com/snortsid?sid=xxxxx, where xxxxx is the SID that you want to know about, e.g. http://rootedyour.com/snortsid?sid=234
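If you want a quick sanity check that the new reference link resolves before (or after) pointing other tools at it, a few lines of perl along these lines will do it. This is just a sketch: the default SID and the output wording are arbitrary.

#!/usr/bin/perl
# Quick check that the rootedyour.com reference link resolves for a given SID.
use strict;
use warnings;
use LWP::Simple qw(get);

my $sid  = shift(@ARGV) || 234;   # SID to look up (default is just an example)
my $url  = "http://www.rootedyour.com/snortsid?sid=$sid";
my $page = get($url);

if (defined $page) {
    print "got " . length($page) . " bytes back for SID $sid -- looks good\n";
} else {
    print "could not fetch $url -- check connectivity or the URL\n";
}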

Cheers,
JJC

Tuesday, June 23, 2009

Fly Clear, Sensitive Data Disposal Concerns

Earlier today, the company that produces the Clear pass announced via press release and on their website that they were shutting down operations effective 23:00 on June 22.

Their website notes the shutdown itself; spokespeople at various Clear-equipped airports, meanwhile, said that qualified Clear members would still be allowed to pass through the "premium" lanes at those airports.

Of course, to me this leaves one big question out there: WHAT IS GOING TO HAPPEN WITH THE BIOMETRIC DATA? I mean, these guys collected BIOMETRIC and other info (retinal scans, complete fingerprint sets, background information, credit information, etc.), so what is going to happen to all of it? Will it be sold off to the highest bidder, handed over to one of the many alphabet-soup government agencies, placed into a dumpster by an angry employee, or what? If you were one of the many that signed up, you had the option to opt in or out of the program that shared the biometric information with the feds, but what now? My largest concerns are, of course, the first and third items that I listed. What do you think?

Cheers,
JJC

Tuesday, June 16, 2009

pulledpork included in Security Onion LiveCD

Today, Doug Burks (the creator of the Security Onion LiveCD) announced the release of the latest rev of this tool. Included in this release are, you guessed it, pulledpork and a number of other tools useful to the sekuritah professional :-)

Read more here => http://securityonion.blogspot.com/2009/06/security-onion-livecd-20090613.html

I would like to extend my thanks to Doug for his work on this LiveCD and for the inclusion of pulledpork and the other tools. While I have not yet had the opportunity to download and try it out, I will be doing so soon.

Cheers,
JJC

Friday, June 5, 2009

How to block robots.. before they hit robots.txt - ala: mod_security

As many of you know, robots (in their many forms) can be quite pesky when it comes to crawling your site, indexing things that you don't want indexed. Yes, there is the standard practice of putting a robots.txt in your webroot, but that is often not very effective. This is due to a number of factors, not the least of which is that robots tend to be poorly written to begin with and thus simply ignore robots.txt anyway.

This comes up because a friend of mine who runs a big e-commerce site recently asked me, "J, how can I block everything from these robots? I simply don't want them crawling our site." My typical response to this was "you know that you will then block the search engines and keep them from indexing your site"... to which the answer was "yes, none of our sales are organic, they all come from referring partners and affiliate programs".... That's all that I needed to know... as long as it doesn't break anything that they need, heh.

After putting some thought into it, and deciding that there was no really easy way to do this on a firewall, I decided that the best approach was to create some mod_security rules that look for known robots and return a 404 whenever any such monster hits the site. This made the most sense because they are running an Apache reverse proxy, with mod_security (and some other fun), in front of their web application servers.

A quick search on the internet turned up the robotstxt.org site, which contains a listing (http://www.robotstxt.org/db/all.txt) of quite a few common robots. Looking through this file, all that I really cared about was the robot-useragent value. As such, I quickly whipped up the following perl, which automatically creates a file named modsecurity_crs_36_all_robots.conf. Simply place this file in the appropriate path (for me /usr/local/etc/apache/Includes/mod_security2/) and restart your apache... voila.. now (for the most part) only real users can browse your webserver. I'll not get into other complex setups, but you could also do this on a per-directory level from your httpd.conf (there is a quick sketch of that at the end of this post) and effectively mimic robots.txt (except the robots can't ignore the 404 muahahaha).

#####################Begin Perl#######################
#!/usr/bin/perl

##
## Quick little routine to pull the user-agent string out of the
## all.txt file from the robots project, with the intention of creating
## regular expression block rules so that they can no longer crawl
## against the rules!
## Copyright JJ Cummings 2009
## cummingsj@gmail.com
##

use strict;
use warnings;
use File::Path;

my ($line,$orig);
my $c = 1000000;      # rule ids will start at 1000001
my $file = "all.txt"; # listing from http://www.robotstxt.org/db/all.txt
my $write = "modsecurity_crs_36_all_robots.conf";
open (DATA,"<$file") or die "unable to open $file: $!\n";
my @lines = <DATA>;
close (DATA);

open (WRITE,">$write") or die "unable to create $write: $!\n";
print WRITE "#\n#\tQuick list of known robots that are parsable via http://www.robotstxt.org/db/all.txt\n";
print WRITE "#\tgenerated by robots.pl written by JJ Cummings \n\n";
foreach $line(@lines){
    if ($line=~/robot-useragent:/i){
        # strip the field name and surrounding whitespace
        $line=~s/robot-useragent://i;
        $line=~s/^\s+//;
        $line=~s/\s+$//;
        $orig=$line;
        # escape regex metacharacters so the user-agent string is safe
        # to drop into a SecRule pattern
        $line=~s/\//\\\//g;
        #$line=~s/\s/\\ /g;
        $line=~s/\./\\\./g;
        $line=~s/\!/\\\!/g;
        $line=~s/\?/\\\?/g;
        $line=~s/\$/\\\$/g;
        $line=~s/\+/\\\+/g;
        $line=~s/\|/\\\|/g;
        $line=~s/\{/\\\{/g;
        $line=~s/\}/\\\}/g;
        $line=~s/\(/\\\(/g;
        $line=~s/\)/\\\)/g;
        $line=~s/\*/\\\*/g;
        $line=~s/X/\./g;    # the robots db uses X as a version placeholder
        $line=lc($line);
        chomp($line);
        # skip empty entries and the db's "no" / "none" placeholders
        if (($line ne "") && ($line !~ /^no$/i) && ($line !~ /^none$/i)) {
            $c++;
            $orig=~s/'//g;
            $orig=~s/`//g;
            chomp($orig);
            print WRITE "SecRule REQUEST_HEADERS:User-Agent \"$line\" \\\n";
            print WRITE "\t\"phase:2,t:none,t:lowercase,deny,log,auditlog,status:404,msg:'Automated Web Crawler Block Activity',id:'$c',tag:'AUTOMATION/BOTS',severity:'2'\"\n";
        }
    }
}
close (WRITE);
$c=$c-1000000;
print "$c total robots\n";


#####################End Perl#######################

To use the above, you have to save the all.txt file to the same directory as the perl script, and of course have write permission in that directory so that the perl can create the new .conf file. This is a pretty basic routine... I wrote it in about 5 minutes (with a few extra minutes for tweaking the ruleset output format, displayed below). So please, feel free to modify / enhance / whatever to fit your own needs as you see fit. **yes, I did shrink it so that it would format correctly here**

#####################Begin Example Output#######################
SecRule REQUEST_HEADERS:User-Agent "abcdatos botlink\/1\.0\.2 \(test links\)" \
"phase:2,t:none,t:lowercase,deny,log,auditlog,status:404,msg:'Automated Web Crawler Block Activity',id:'1000001',tag:'AUTOMATION/BOTS',severity:'2'"
SecRule REQUEST_HEADERS:User-Agent "'ahoy\! the homepage finder'" \
"phase:2,t:none,t:lowercase,deny,log,auditlog,status:404,msg:'Automated Web Crawler Block Activity',id:'1000002',tag:'AUTOMATION/BOTS',severity:'2'"
SecRule REQUEST_HEADERS:User-Agent "alkalinebot" \
"phase:2,t:none,t:lowercase,deny,log,auditlog,status:404,msg:'Automated Web Crawler Block Activity',id:'1000003',tag:'AUTOMATION/BOTS',severity:'2'"
SecRule REQUEST_HEADERS:User-Agent "anthillv1\.1" \
"phase:2,t:none,t:lowercase,deny,log,auditlog,status:404,msg:'Automated Web Crawler Block Activity',id:'1000004',tag:'AUTOMATION/BOTS',severity:'2'"
SecRule REQUEST_HEADERS:User-Agent "appie\/1\.1" \
"phase:2,t:none,t:lowercase,deny,log,auditlog,status:404,msg:'Automated Web Crawler Block Activity',id:'1000005',tag:'AUTOMATION/BOTS',severity:'2'"

#####################End Example Output#######################

And that, folks, is how you destroy robots that you don't like.. you can modify the status code that gets returned to whatever suits you best.. 403, 404, and so on.
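If you only want to fence off part of a site (the per-directory idea mentioned earlier), a minimal httpd.conf sketch would look something like this -- the /catalog location, the single example robot, the rule id and the 403 status are all just placeholders, and in practice you would use the full generated rule set rather than one hand-written rule:

# example only: /catalog, the robot pattern and the id are placeholders
<Location /catalog>
    SecRuleEngine On
    SecRule REQUEST_HEADERS:User-Agent "alkalinebot" \
        "phase:2,t:none,t:lowercase,deny,log,auditlog,status:403,msg:'Automated Web Crawler Block Activity',id:'1999999',tag:'AUTOMATION/BOTS',severity:'2'"
</Location>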

Cheers,
JJC

Wednesday, June 3, 2009

pulledpork tarball

It's up... get it while it's hot -> http://code.google.com/p/pulledpork/downloads/list

Cheers,
JJC

Tuesday, June 2, 2009

v0.2 Beta 1 is out! -> pulledpork that is <-



As the title indicates, the first beta for v0.2 of pulledpork has just been checked in to the pulledpork svn..

A short list of the current feature sets is below:



Release 0.1:

Release 0.2:

So, as you can see above, I have added quite a bit of code and functionality to pulledpork. The disablesid function should be pretty robust (perhaps I'll add some additional error handling), but for the most part it should rock and roll!

I'll likely be adding a modifysid section to mirror what oinkmaster does with its modifysid function.. but that's probably still a few weeks out.

Having said all of this, please download, test and post any bugs/issues that you find on the google code page for pulledpork or catch me in #snort on freenode.

And now, the gratuitous screenshot ;-)


Cheers,
JJC

Monday, June 1, 2009

PulledPork Checkin

Quick update today with big enhancements coming this week in the bbq pulledpork arena! (hopefully).

This past Friday I checked in some code for PulledPork that allows it to handle any format of md5 file contents from the snort.org servers.. we won't be foiled again ;-)
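For the curious, the general idea is roughly the following -- a quick sketch of the technique only, not PulledPork's actual code, and the file names are placeholders. Whatever wrapper the md5 file uses ("MD5 (file) = hash", "hash  filename", or a bare hash), just pull out the first 32-character hex token and compare it against a locally computed digest:

#!/usr/bin/perl
# Illustration only: tolerate any md5 file format by grabbing the first
# 32-hex-character token, then compare it to the tarball's real digest.
use strict;
use warnings;
use Digest::MD5;

my ($md5_file, $tarball) = @ARGV;
die "usage: $0 <md5 file> <tarball>\n" unless ($md5_file && $tarball);

open (my $fh, '<', $md5_file) or die "unable to open $md5_file: $!\n";
my $contents = do { local $/; <$fh> };   # slurp the whole md5 file
close ($fh);

my ($want) = $contents =~ /([a-fA-F0-9]{32})/;
die "no md5 found in $md5_file\n" unless $want;

open (my $tb, '<', $tarball) or die "unable to open $tarball: $!\n";
binmode ($tb);
my $have = Digest::MD5->new->addfile($tb)->hexdigest;
close ($tb);

print lc($want) eq lc($have) ? "md5 matches\n" : "md5 MISMATCH\n";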

Get your great tasting pulledpork here => http://code.google.com/p/pulledpork

Cheers,
JJC