Wednesday, October 14, 2009

Pulledpork v0.2.5 - Released

A new and updated version of pulledpork is out. This version adds functionality and also addresses a number of previously reported bugs. A few simple examples:

  • Improved and cleaned up code for efficiency and speed
  • Do not overwrite local.rules on run
  • Do not attempt to copy . and .. as rules files
  • Much more...
The primary feature that has been added is the capability to download rules from sites other than the VRT. Any URL can be specified to download a rules tarball from; however, md5 hash verification will only work when VRT or ET locations are specified. If a different location (i.e. a local redistribution point) is specified, please be sure to specify the -d (do not verify md5) option. Please see the README and pulledpork.conf files for more information on the usage of new and existing options and features.

New runtime option flag:
  • -u Where do you want me to pull the rules tarball from
(ET, see pulledpork config base_url option for value ideas)
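The md5 verification step boils down to comparing the hash of your local tarball against the one the server publishes; a rough sketch with made-up filenames (this is exactly the check you must skip with -d when your mirror publishes no md5):

```shell
# rough sketch of the md5 verification step, with made-up filenames;
# skip this check (-d) when your download location publishes no md5
tarball="/tmp/snortrules-demo.tar.gz"
printf 'fake rules data' > "$tarball"
md5sum "$tarball" | awk '{print $1}' > "$tarball.md5"   # what the server side would publish
expected=$(cat "$tarball.md5")
current=$(md5sum "$tarball" | awk '{print $1}')
if [ "$current" = "$expected" ]; then
    echo "md5 matches, tarball is good"
else
    echo "md5 mismatch, re-fetch the tarball"
fi
```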

A new tarball containing all of the new features will be published today at

Wednesday, September 16, 2009

Snort 2.8.5 at get it while it's hot!

Snort 2.8.5 is teh outed, get it or DIAF!

Snort 2.8.5 introduces:

- Ability to specify multiple configurations (snort.conf and everything
it includes), bound either by Vlan ID or IP Address. This allows you
to run one instance of Snort with multiple snort.conf files, rather
than having separate processes. See README.multipleconfigs for
details.

- Continued inspection of traffic while reloading a configuration.
Add --enable-reload option to your configure script prior to building.
See README.reload for details.

- Rate Based Attack Prevention for Connection Attempts, Concurrent
Connections, and improved rule/event filtering. See README.filters
for details.

- SSH preprocessor

- Performance improvements in various places

Please see the Release Notes and ChangeLog for more details.



Thursday, July 16, 2009

pulledpork google group

Not that anyone actually needs help, but if you want a different place where you can share comments, thoughts, desired features or complaints, I have created a google group for pulledpork:



pulledpork 0.2.2 and new features

Get it while it's hot @here!

I have received a few requests to build support into pulledpork for restarting processes (i.e. snort) after downloading new rules or modifying the ruleset using disablesid. In response to this, it is done ^-^. You will note in the pulledpork.conf file that there is a new option at the bottom called pid_path. Simply list the path(s) to your pid files (/var/run/, /path/to/another/, etc...) and specify -H at runtime, and you will be magically pleased (assuming you run pulledpork under a context that has permissions to restart said PID).

I also added a second option "-n" that will allow you to make modifications to the disablesid.conf file and re-execute pulledpork without attempting to download the current ruleset or md5 again (ala tuning exercises...).

Please see the included README for additional info and general guidelines on usage... below is some sample output.

./ -c ../pulledpork.conf -i disablesid.conf -THn
Prepping files for work....
Copying rules files....
Disabling your chosen SID's....
Disabled 1 rules in /usr/local/etc/snort/rules/web-iis.rules
Disabled 2 rules in /usr/local/etc/snort/rules/backdoor.rules
Disabled 1 rules in /usr/local/etc/snort/rules/rpc.rules
Disabled 1 rules in /usr/local/etc/snort/rules/exploit.rules
HangUP Time....
Fly Piggy Fly!
That's all for now, enjoy!


Wednesday, July 15, 2009

Snorby for Snort, a Recipe with Barnyard2 and Unified2

Snorby, an all new frontend (yes, it's still Beta) for snort, has recently emerged. As such, I decided that I would take a look and give my thoughts, as well as a quick recipe to get it running fairly quickly using barnyard2. During my testing of Snorby, I talked with the creator (mephux) about his plans for Snorby and also worked through a couple of bugs that he jumped on right away.

Note: This posting details how to get Snorby working with apache and passenger, NOT Webrick.. if you want that please read the details of how to do so at the Snorby site.

Recipe Components:
  • FreeBSD 8.0R
  • apache22
  • ruby-gems
  • ruby-iconv
  • prawn (gem)
  • rake (gem)
  • mysql (gem)
  • rails (gem)
  • passenger (formerly modrails)
  • mysql
  • snort
  • barnyard2
  • git
Ok, let's get the dependencies and such out of the way. I am making several assumptions in writing this... not the least of which is that you know how to use google if you can't figure something out... also that you already have the base of some of these items installed (ala FreeBSD, apache, snort). If not, I have previous posts that discuss the setup of said items, and I am again going to drop the google bomb!

We need ruby-gems to get passenger running and ultimately Snorby:
$ cd /usr/ports/devel/git/ && sudo make install clean
...I deselect all of the options, I just want regular old git for this exercise
...output suppressed
$ cd /usr/ports/devel/ruby-gems/ && sudo make install clean
...output suppressed
$ sudo gem install prawn --no-rdoc --no-ri
...output suppressed
$ sudo gem install rake --no-rdoc --no-ri
...output suppressed
$ sudo gem install rails --no-rdoc --no-ri
...output suppressed
$ sudo gem install mysql --no-rdoc --no-ri
...output suppressed
$ sudo gem install passenger --no-rdoc --no-ri
...output suppressed
$ sudo passenger-install-apache2-module
...go through the setup and perform the steps that are noted to activate the passenger capabilities with apache.. ala vi httpd.conf and add the 3 lines that you are told to.
$ cd /usr/local/www/ && sudo git clone git://
...output suppressed
$ cd /usr/ports/converters/ruby-iconv && sudo make install clean

At this point you are ready to modify your database and email configuration for Snorby. If you have not done so, you should create a snort database (I have called mine snort and created a user "snorby" with password "snorby".. ok, that's not really the password, but for this writeup it is!). This user has full access (not grant) to the snort database. I have also created the apt tables in this database using the create_mysql sql that is included in both Snorby and Snort!
$ sudo cp /usr/local/www/Snorby/config/database.yml.example /usr/local/www/Snorby/config/database.yml
$ sudo cp /usr/local/www/Snorby/config/email.yml.example /usr/local/www/Snorby/config/email.yml

Now choose your preferred editor and modify the /usr/local/www/Snorby/config/database.yml file.. we are only concerned with the production info... you can also modify the email.yml but don't have to for our current purposes.
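For reference, the production stanza you end up with looks something like this (a minimal sketch using the example snorby/snorby credentials from above; the exact keys depend on your Snorby version, so treat this as illustrative):

```yaml
# hypothetical database.yml production section; adjust to your install
production:
  adapter: mysql
  database: snort
  username: snorby
  password: snorby
  host: localhost
```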

Install additional gem requirements and setup Snorby to run!
$ cd /usr/local/www/Snorby && sudo rake gems:install
...output suppressed
$ cd /usr/local/www/Snorby && sudo rake snorby:setup RAILS_ENV=production
...output suppressed

At this point you are ready to tell apache all about Snorby, so let's modify our vhost or apache config again. Simply add the following under the vhost of your choice; you need to be sure that RewriteEngine On and RewriteOptions inherit are specified in this vhost (or in scope of your config):
DocumentRoot /usr/local/www/Snorby/public

RailsBaseURI /

<Directory "/usr/local/www/Snorby/public">
AllowOverride All
Order deny,allow
Allow from all
</Directory>

Once this is complete, restart apache and you will get the login for Snorby when you browse to that vhost. The default username is snorby and password is admin.

We are now ready to modify our snort config to output unified2, modify your snort.conf and comment out your old output plugins or simply replace them with the following:
output unified2: filename snortunified2.log, limit 128

Note that unified2 contains all log and alert data, so you no longer need two files! And now it's time for barnyard2. Go ahead and fetch the latest version from, and configure with the "--with-mysql" option. Once that is done, copy barnyard2.conf to /usr/local/etc/snort/ and let's go ahead and edit that file, putting in the mysql information that you used with Snorby earlier and making sure that we have our input specified as unified2. You should go through and make sure that all of the paths to the map and ref files are specified correctly. Once that's done, you are ready to fire it up!
sudo barnyard2 -c /usr/local/etc/snort/barnyard2.conf -d /var/log/snort -f snortunified2.log -w /var/log/snort/barnyard2.waldo -D

You should now be receiving events in the snort mysql database and seeing them in Snorby.

Please note that there are a number of security considerations that I did not take into account (ala running all this stuff under root) so please take that into consideration.

Overall, I give Snorby a good rating; it certainly has lots of eye candy at this point. Mephux promises that much of the functionality that everyone wants is coming shortly... I would say that Snorby has a good start and promises to be a decent usable frontend for viewing snort events. Is it a sguil? Certainly not... but it does look like it will be a decent alternative to BASE.


PayPal shuts Hackers for Charity down?

Yesterday, paypal froze the assets of Hackers for Charity; please read more here and spread the word of the evils ;-)
"I had a subscription system running under WP-MEMBER for about a year before that software flaked out on me. Multiple domains caused problems that were irreconcilable. I had donations for our work in Africa coming in (not through wp-member) and a few hundred subscribers to Informer through wp-member. All said, when I switched to Suma, I had 10,000$US in my personal paypal account. That was my family’s support money as well as money for our food program in Kenya."

I thought about writing a long rant today, but simply don't have the energy... please read the above link for rant material.


Thursday, June 25, 2009

BASE / ACID outdated reference links - a fix

Recently, with changes to the site, the Snort mailing lists have been quite inundated with questions about the link to the SID reference and how it is no more. As a partial means of compensating for this and to help the community, we have recently added an up-to-date tool at that will allow you to once again have a valid snort reference link.

In BASE, simply locate the following section of your base_conf.php:
/* Signature references */
$external_sig_link = array('bugtraq' => array('', ''),
'snort' => array('', ''),
'cve' => array('', ''),
'arachnids' => array('', ''),
'mcafee' => array('', '.htm'),
'icat' => array('', ''),
'nessus' => array('', ''),
'url' => array('http://', ''),
'local' => array('signatures/', '.txt'));

and modify the 'snort' line to match:
'snort' => array('', ''),
Once this is done, you are all set; the snort documentation link will now take you to and display the info for that SID.
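If you maintain several BASE installs, the one-line swap can be scripted; a toy sketch against a throwaway copy of the config, where old.example/new.example are placeholders standing in for the real reference URLs:

```shell
# toy sketch: swap the 'snort' reference line in a copy of base_conf.php;
# old.example/new.example are placeholder URLs, not the real ones
conf="/tmp/base_conf_demo.php"
printf "%s\n" "'snort' => array('http://old.example/sid?', '')," > "$conf"
sed "s|array('http://old.example/sid?'|array('http://new.example/sid?'|" "$conf" > "$conf.new"
mv "$conf.new" "$conf"
cat "$conf"
```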

Obviously if you want to do this in other applications, simply point them to where xxxxx is the SID that you want to know about. ex:


Tuesday, June 23, 2009

Fly Clear, Sensitive Data Disposal Concerns

Early today, the company that produces the Clear Pass announced via press release and on their website that they were shutting down operations effective at 23:00 on June 22.

Noted on their website:
Spokespeople at various Clear equipped airports said that qualified clear users would be allowed to pass through the "premium" lanes at said airports.

Of course, to me, this leaves a big question out there: WHAT IS GOING TO HAPPEN WITH THE BIOMETRIC DATA? I mean, these guys collected BIOMETRIC and more info (retinal scans, complete fingerprint sets, background information, credit information etc...) and what is going to happen to this data? Will it be sold off to the highest bidder, handed over to one of the many alphabet soup government agencies, placed into a dumpster by an angry employee or what? That is, of course, the big question that I have. If you were one of the many that signed up, you had the option to opt in or out of their program that shared the biometric information with the feds, but what now? My largest concern is of course the first and third item that I listed. What do you think?


Tuesday, June 16, 2009

pulledpork included in Security Onion LiveCD

Today, Doug Burks (the creator of the Security Onion LiveCD) announced the release of the latest rev of this tool. Included in this tool are "you guessed it" pulledpork and a number of other useful tools to the sekuritah professional :-)

Read more here =>

I would like to extend a thanks to Doug for his work on this tool and the inclusion of pulledpork and the other tools. While I have not yet had the opportunity to download and try out this LiveCD, I will be doing so soon.


Friday, June 5, 2009

How to block robots.. before they hit robots.txt - ala: mod_security

As many of you know, robots (in their many forms) can be quite pesky when it comes to crawling your site, indexing things that you don't want indexed. Yes, there is the standard of putting a robots.txt in your webroot, but that is often not highly effective. This is due to a number of factors, not the least of which is that robots tend to be poorly written to begin with and thus simply ignore the robots.txt anyway.

This comes up because a friend of mine that runs a big e-com site recently asked me.. "J, how can I block everything from these robots, I simply don't want them crawling our site." My typical response to this was "you know that you will then block these search engines and keep them from indexing your site"... to wit: "yes, none of our sales are organic, they all come from referring partners and affiliate programs".... That's all that I needed to know... as long as it doesn't break anything that they need, heh.

After putting some thought into it, and deciding that there was no really easy way to do this on a firewall, I decided that the best way was to create some mod_security rules that looked for known robots and returned a 404 whenever any such monster hit the site. This made the most sense because they are running an Apache reverse proxy in front of their web application servers with mod_security (and some other fun).

A quick search on the internet found the site that contained a listing ( of quite a few common robots. Looking through this file, all that I really cared about was the robots-useragent value. As such, I quickly whipped up the following perl that automatically creates a file named modsecurity_crs_36_all_robots.conf. Simply place this file in the apt path (for me /usr/local/etc/apache/Includes/mod_security2/) and restart your apache... voila.. now (for the most part) only real users can browse your webserver. I'll not get into other complex setups, but you could do this on a per directory level also, from your httpd.conf, and mimic robots.txt (except the robots can't ignore the 404 muahahaha).

#####################Begin Perl#######################

## Quick little routine to pull the user-agent string out of the
## all.txt file from the robots project, with the intention of creating
## regular expression block rules so that they can no longer crawl
## against the rules!
## Copyright JJ Cummings 2009

use strict;
use warnings;

my $c     = 1000000;
my $file  = "all.txt";
my $write = "modsecurity_crs_36_all_robots.conf";

# slurp the robots listing
open( DATA, "<", $file ) or die "Cannot open $file: $!";
my @lines = <DATA>;
close(DATA);

open( WRITE, ">", $write ) or die "Cannot open $write: $!";
print WRITE "#\n#\tQuick list of known robots that are parsable via\n";
print WRITE "#\tgenerated by written by JJ Cummings \n\n";

foreach my $line (@lines) {
    # only the robot-useragent value matters for the rules
    next unless $line =~ /robot-useragent:\s*(.+?)\s*$/i;
    my $agent = lc $1;
    next if $agent eq "" || $agent eq "no" || $agent eq "none";
    $agent =~ s{([./()!])}{\\$1}g;    # escape regex metacharacters for the rule
    $c++;
    print WRITE "SecRule REQUEST_HEADERS:User-Agent \"$agent\" \\\n";
    print WRITE "\t\"phase:2,t:none,t:lowercase,deny,log,auditlog,status:404,msg:'Automated Web Crawler Block Activity',id:'$c',tag:'AUTOMATION/BOTS',severity:'2'\"\n";
}
close(WRITE);
print $c - 1000000, " total robots\n";

#####################End Perl#######################

To use the above, you have to save the all.txt file to the same directory as the perl.. and of course have +w permissions so that the perl can create the apt new file. This is a pretty basic routine... I wrote it in about 5 minutes (with a few extra minutes for tweaking of the ruleset format output, displayed below). So please, feel free to modify / enhance / whatever to fit your own needs as best you deem. **yes, I did shrink it so that it would format correctly here**
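If you just want to eyeball what the script will pull out before generating rules, the useragent extraction and the no/none skip can be exercised on a toy all.txt (entries below are invented for illustration):

```shell
# toy all.txt exercising the useragent extraction and the no/none skip
cat > /tmp/all.txt <<'EOF'
robot-id: examplebot
robot-useragent: ExampleBot/1.0
robot-id: quietbot
robot-useragent: none
EOF
awk -F': *' 'tolower($1)=="robot-useragent" && tolower($2)!="none" {print $2}' /tmp/all.txt
```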

#####################Begin Example Output#######################
SecRule REQUEST_HEADERS:User-Agent "abcdatos botlink\/1\.0\.2 \(test links\)" \
"phase:2,t:none,t:lowercase,deny,log,auditlog,status:404,msg:'Automated Web Crawler Block Activity',id:'1000001',tag:'AUTOMATION/BOTS',severity:'2'"
SecRule REQUEST_HEADERS:User-Agent "'ahoy\! the homepage finder'" \
"phase:2,t:none,t:lowercase,deny,log,auditlog,status:404,msg:'Automated Web Crawler Block Activity',id:'1000002',tag:'AUTOMATION/BOTS',severity:'2'"
SecRule REQUEST_HEADERS:User-Agent "alkalinebot" \
"phase:2,t:none,t:lowercase,deny,log,auditlog,status:404,msg:'Automated Web Crawler Block Activity',id:'1000003',tag:'AUTOMATION/BOTS',severity:'2'"
SecRule REQUEST_HEADERS:User-Agent "anthillv1\.1" \
"phase:2,t:none,t:lowercase,deny,log,auditlog,status:404,msg:'Automated Web Crawler Block Activity',id:'1000004',tag:'AUTOMATION/BOTS',severity:'2'"
SecRule REQUEST_HEADERS:User-Agent "appie\/1\.1" \
"phase:2,t:none,t:lowercase,deny,log,auditlog,status:404,msg:'Automated Web Crawler Block Activity',id:'1000005',tag:'AUTOMATION/BOTS',severity:'2'"

#####################End Example Output#######################

And that folks, is how you destroy robots that you don't like.. you can modify the error that returns to fit whatever suits you best.. 403, 404.....


Wednesday, June 3, 2009

pulledpork tarball

It's up... get it while it's hot ->


Tuesday, June 2, 2009

v0.2 Beta 1 is the outed! -> pulledpork that is <-

As the title indicates, the first beta for v0.2 of pulledpork has just been checked in to the pulledpork svn..

A shortlist of the current featuresets is below.

Release 0.1:

Release 0.2:

So, as you can see above I have added quite a bit of code and functionality to pulled pork. The disablesid function should be pretty robust (perhaps I'll add some additional error handling), but for the most part it should rock and roll!

I'll likely be adding a modifysid section to mirror what oinkmaster does with its modifysid function.. but that's probably still a few weeks out.

Having said all of this, please download, test and post any bugs/issues that you find on the google code page for pulledpork or catch me in #snort on freenode.

And now, the gratuitous screenshot ;-)


Monday, June 1, 2009

PulledPork Checkin

Quick update today with big enhancements coming this week in the bbq pulledpork arena! (hopefully).

This past Friday I checked in some code for PulledPork that allows for the handling of any format of md5 file contents from the servers.. we won't be foiled again ;-)

Get your great tasting pulledpork here =>


Thursday, May 28, 2009

Pimping Tha All New

The home of Snort received a facelift last night! The site had been largely static and unchanged for some time now.

A shortlist of the new features on the new that should make life easier for all:

• New navigation
• Improved account management
• New user forums
• Persistent link panel
• Improved VRT subscription management

What this does NOT mean is that your tools that automatically fetch snort rules tarballs will be broken... everything is still 100% functional and up in that area.

Having said all of this, please check out the new for yourself!

I extend a hearty good job to the entire team for their efforts in this, it looks and functions excellently!


Tuesday, May 26, 2009

Baconator Renamed => Pulled_Pork

So, for some "mostly obvious reasons" I have renamed the Baconator project to Pulled_Pork. This was for a variety of reasons, and if you really want to know, I'll explain it.. Just drop by #snort on freenode... suffice it to say that this new name is more fitting. Please also note that the google code location has changed from /p/baconator to /p/pulledpork. I did note on the baconator page that this change has occurred.

The new location =>

As always, thanks for the support and please fetch the latest version to do some testing for me!


Monday, May 18, 2009

Baconator 0.1 Beta 2 (try me)

I have completed the 0.1 Beta 2 of Baconator and believe it to be fairly stable and user friendly! Please give it a roll (it's not in a tarball yet, so you will have to check it out as noted below) and let me know if you experience any issues or have any updates / features that you would like to see.

The timeline:
Release 0.1:(This is complete)

Release 0.2:(I have started to work on this piece, probably finished in a few more weeks)

Next Release...

Visit the google code site for info on how to check out the code etc..


N.J. accidentally reveals personal data of 28K unemployed residents

Article here =>

Somehow these statements make it ok? => "This is a fluke," department spokesman Kevin Smith said. "This was just a clerical error."

Right, it's just a clerical error that affects 28,000 individuals lol. I'll grant them that it's not as major as many other items that have occurred.. but my short and sweet point is that they don't seem to take it seriously!

Yes, they (as I have stated in the past), like all other agencies, have a standard =>, but evidently as long as "it's just a clerical error", it's ok.

Anyway, just wanted to start the week off on a small soap box ;-)


Thursday, May 14, 2009

Snort 2.8.5 at get it while it's hot!

A beta version of Snort 2.8.5 is now available on, at

Snort 2.8.5 introduces:

- Ability to specify multiple configurations (snort.conf and everything
it includes), bound either by Vlan ID or IP Address. This allows you
to run one instance of Snort with multiple snort.conf, rather than
having separate processes.

- Continued inspection of traffic while reloading a configuration.
Add --enable-reload option to your configure script prior to building.

- Rate Based Attack prevention for Connection Attempts, Concurrent
Connections, and improved rule/event filtering. See README.filters
for details.

- SSH preprocessor (no longer experimental)

- Performance improvements in various places

Please see the Release Notes and ChangeLog for more details.

Please submit bugs, questions, and feedback to

Wednesday, May 13, 2009

DC Agency Accidentally Emails PII about College Financial Aid Applicants <= WHAT?

Yes, the headline is indeed true. Yet another in a seemingly endless series of silly (stupid) mistakes made by individuals that lead to significant data leakage.

The Article:
D.C. Agency Accidentally E-Mails Personal Data About College Financial Aid Applicants

How many times is this going to happen before people begin to take things as simple as user education / training, as related to security, seriously? Having worked for a variety of branches within the federal government, I can tell you that they do have some fairly basic protocols in place that allow for basic online instruction (depending on the agency/organization, either annual, semi-annual, etc...) and, in the same session, testing. This then creates a nifty little certificate that you can hang in your little cubicle and is tracked by the CSO (or equivalent thereof) to provide proof that said agency/organization is meeting its requirements.

Evidently though, the "don't email sensitive rubbish out" section was missing in the OSSE's online curriculum?

You tell me...

Tuesday, April 21, 2009

Baconator - Shared Object Snort Rule Management!

Recently while taking a plane ride from one lovely airport to another and doing some snort shared object rule development, I realized that I did not have a clean and easy way of fetching the latest snort rule tarball.

Don't get me wrong and misinterpret this post, I love Oinkmaster and have been a user of it for many a year!

Now, having said that... Oinkmaster does have its shortcomings (for me anyway); not the least of which is the fact that it currently does NOT handle shared object rules. With the release of Snort 2.8.4 and its awesome new dcerpc2 preprocessor... the use of so_rules will most likely be much more prevalent.. and as such, with threats like Conficker and its variants out there, I needed a way to handle this.

I did consider modifying Oinkmaster to fit my needs, but when I started writing the code at 30,000 feet... I didn't have the Oinkmaster codebase with me.

As a direct result of this thought and the lack of codebase on the plane... I started Baconator. Baconator is a Snort rule management tool that also handles so_rules, the creation of stub files from said so_rules, complete file validation (via MD5) against current VRT releases. It also does much more... or, will anyway.

I'll be posting more about Baconator as I complete the code. For now, if you want to try it out (it's not yet complete) you can checkout the code from the svn repo at

The current code will fetch the latest ruleset from (ultimately I'll probably build the functionality in to fetch from ET). If you have an existing copy of the rules tarball from it will fetch the latest rule tarball md5 from and compare so that it doesn't re-fetch the same tarball again. It then performs the various extraction routines as defined in the conf file or at runtime and puts the files where you tell it to.. the rules files that is!

More info can be found on the google code page for Baconator. I'll also be updating that site regularly with updates to the timeline, current svn etc...


Tuesday, April 7, 2009

Can Haz Snort 2.8.4!

With the new release of snort 2.8.4 you will need to upgrade immediately from whatever version you are on. If you do not upgrade, your sensorfail will be epic when you try to run any updated rules. This is due to the new DCERPC preprocessor and all new rules being built to use this new functionality.

Snort 2.8.4 is now available on, at

Snort 2.8.4 introduces:

- A revised DCE/RPC preprocessor with more rule options

With the new DCE/RPC preprocessor, there will be a number of updates
to the rules. Please be sure to update your rules to the latest
when that package is available (next few days).

- Support for IPv6 in Frag3 and all application preprocessors

- Improved target-based support in preprocessors

- Option to automatically pre-filter traffic that is not inspected in
order to improve performance

- Several other improvements and fixes

Please see the release notes and changelog for more details.


Sunday, April 5, 2009

Snort 3.0 Beta 3!

This last Thursday, Martin Roesch published a new blog entry discussing the Snort 3.0 architecture and some testing that has been conducted and has yet to be conducted. Definitely a good read and example of how software should be optimized and developed to work with current architectures and feature-sets!

Find this here:


Friday, March 27, 2009

Unofficial Snort 2.8.3.x patch for GCC 4.3.x build errors

I've noted some issues in the community lately when building snort 2.8.3.x using GCC 4.3.x. Specifically, you may receive output as follows:

In function ‘open’,
inlined from ‘server_stats_save’ at server_stats.c:349:
/usr/include/bits/fcntl2.h:51: error: call to ‘__open_missing_mode’ declared with attribute error: open with O_CREAT in second argument needs 3 arguments
make[5]: *** [server_stats.o] Error 1
make[5]: Leaving directory `~/snort-'
make[4]: *** [all-recursive] Error 1
make[4]: Leaving directory `~/snort-'
make[3]: *** [all-recursive] Error 1
make[3]: Leaving directory `~/snort-'
make[2]: *** [all-recursive] Error 1
make[2]: Leaving directory `~/snort-'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `~/snort-'
make: *** [all] Error 2
If you are receiving the aforementioned error on build, it's likely a simple fix that you can apply to src/preprocessors/flow/portscan/server_stats.c... yes, the patch is below:

if you don't know how to patch the file, I suggest using google to figure it out ;-)

Click me for the patch
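For reference, the general shape of the fix is simply giving open() its third (mode) argument whenever O_CREAT is passed, which is what the fortified glibc headers are complaining about. An illustrative sketch only (not the actual patch; flags and line numbers here are assumed):

```diff
--- a/src/preprocessors/flow/portscan/server_stats.c
+++ b/src/preprocessors/flow/portscan/server_stats.c
@@ server_stats_save @@
-    fd = open(filename, O_CREAT|O_TRUNC|O_RDWR);
+    fd = open(filename, O_CREAT|O_TRUNC|O_RDWR, S_IRUSR|S_IWUSR);
```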


Monday, March 23, 2009

InProtect 1.00.0 Beta_2 VMWare Image

Given recent developments that the team has made on the InProtect project and the many emails that I see floating about on the lists, I decided to create a VMware image of an "almost" fully functioning InProtect installation. I say "almost" because, of course, like the LiveUSB that I released some time ago, I can't put the latest version of Nessus on the VM due to licensing restrictions imposed by Tenable. Note that I did not include greatly detailed instructions on the use of InProtect, I may do this later but haven't the time right now.

Please try to remember that this is a BETA, and as such may not be fully functional... if you find bugs or the like, please feel free to file them at the sf site or hit us up !

So, the quick and dirty of it is that all you will need to do is go to the Nessus website and download the latest Nessus tarball from them, upload it to the VM (scp), install it (pkg_add), start it, register it and run the /opt/Inprotect/sbin/ script! Whew, that was one long run-on sentence! For everything to match up, create a user "inprotect" with password "inprotect" in your Nessus daemon. Once you have completed the aforementioned steps, you are all set and should be able to scan. Note that if you want to scan outside of the VM, you will need to modify the configuration of the interface to be bridged etc... The interface is set for DHCP and everything will start up just fine with any address that you assign it or that it receives.

You will also need to throw the jpgraph stuff in /opt/Inprotect/html if you want the nifty graphs to work... but I'll probably speak more to this in an upcoming post.

I essentially used the install script to install in /opt/Inprotect on, you guessed it, FreeBSD 7.1R but of course had to make a few minor adjustments (it's not always 100% out of the gate) to get everything working together. That being said, you can probably do the same on your own distro.

some important info that you will (or may) need, i.e. username/password/medium

admin/password/inprotect web interface

phpMyAdmin is installed: http://ipofyourvm/phpmyadmin/ for your mysqling pleasure.

To access InProtect simply browse to the ip of your VM: http://ipofyourvm

If you want nmap, build it from ports: /usr/ports/security/nmap

Get the VMWare Image Here


Wednesday, March 18, 2009

PHPIDS Phase 1.1

I have been reviewing PHPIDS for some time now, and have come to the conclusion that while a novel idea... it is simply overkill and extra rubbish to include in your php code. I also have some ideas surrounding evasion techniques.... Don't get me wrong, I think that in the right place (i.e. a server on which you can not load a real IDS/IPS such as mod_security) it is better than nothing. I will place one caveat on that though: I am not 100% sure what it does to the load of an existing site. I'll be conducting some extensive load testing on it over the next week or so and posting those results.



I have been having some fun on twitter lately (instead of evaluating security foo hah!), though I have been on it for some time and not really using it. If you want to join into the fun, I am enhancedx.

Obviously the whole web2.0 movement introduces all new concerns surrounding security, especially as related to the physical security of one's person. Specifically I am talking about social networking apps like twitter, loopt and the like. These are fun to play with and share your daily travels / ramblings with people, but if the user does not pay attention, they can also lead people directly to you. Of course, I am sure that EVERYONE is well versed in the features of these apps and therefore only shares their location when they want to, right? Of course people don't reuse the same password for multiple accounts and don't have their identity stolen ever either.. so what am I worrying about, sheesh!


Wednesday, March 11, 2009

I recently took over managing and maintaining from Richard of TaoSecurity. I would like to extend my thanks to Richard for his time and efforts in getting off the ground.

The mission of is to provide quality network traffic traces to researchers, analysts, and other members of the digital security community. One of the most difficult problems facing researchers, analysts, and others is understanding traffic carried by networks. At present there is no central repository of traces from which a student of network traffic could draw samples. provides one possible solution to this problem.

Analysts looking for network traffic of a particular type can visit, query the capture repo for matching traces, and download those packets in their original format (e.g., Libpcap, etc.). The analyst will be able to process and analyze that traffic using tools of their choice, like Tcpdump, Snort, Ethereal, and so on.

Analysts who collect their own traffic will be able to submit it to the database after they register.

Anonymous users can download any trace that's published. Only registered users can upload. This system provides a level of accountability for trace uploads.

Our moderators will review the trace to ensure it does not contain any sensitive information that should not be posted publicly. Besides appearing on the site, once a trace has been published you can receive notice of it via this published trace RSS feed.

If you have any doubt regarding the publication of a trace, do not try to submit it. When moderators are unsure of the nature of a trace, we will reject it. is not a vehicle for publishing enterprise data as contained in network traffic.

In the upcoming months you will see significant changes and improvements to the site. Many of these suggestions are the result of user feedback, so please keep it coming and stay tuned as updates are released!


Thursday, January 15, 2009

New IDS/IPS technologies

Recently while perusing the intertubes I ran across a new IDS/IPS technology (PHPIDS). This is an interesting and simple concept that can add an additional layer of security to your web application(s). This being said, I am not sure that I would run it solely, but I will be testing it over the week and posting the results subsequently.