To be able to restore some old personal PHP projects from around 2004, I needed to recreate the environment in which they were built; otherwise I would probably have had to modify the source code to get them working.

Recreating the environment meant installing a server with roughly the same software: Debian 3.1 “Sarge”, Apache 2.0.54, MySQL 4.0.24 and PHP 4.3.

So I downloaded the network installer ISO image for Debian 3.1 “Sarge” and created a virtual machine using VirtualBox. Make sure you add a virtual hard disk under the IDE controller; I don’t think SATA was supported by default at the time. On my first attempt I forgot to do this, got the error “No partitionable media were found.”, and had to abort the setup.
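If you prefer the command line, the whole setup is a few VBoxManage calls. This is a minimal sketch; the VM name, disk size and ISO filename are placeholders:

VBoxManage createvm --name sarge --ostype Debian --register
VBoxManage storagectl sarge --name "IDE" --add ide
VBoxManage createhd --filename sarge.vdi --size 8192
VBoxManage storageattach sarge --storagectl "IDE" --port 0 --device 0 --type hdd --medium sarge.vdi
VBoxManage storageattach sarge --storagectl "IDE" --port 1 --device 0 --type dvddrive --medium sarge-netinst.iso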

During the setup you get the option to change the APT repository file, which you must do, or you won’t be able to install any packages; Sarge’s packages have long since moved to the Debian archive (archive.debian.org).

Add the following line to sources.list:

deb http://archive.debian.org/debian/ sarge main contrib

And in case you need to be able to install backports, add the following line to sources.list as well:

deb http://archive.debian.org/debian-backports/ sarge-backports main
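After saving sources.list, refresh the package index so APT picks up the new repositories:

apt-get update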

In 2006 one of my IDE hard disks crashed, or at least stopped working. It is a 120 GB disk from the Maxtor DiamondMax Plus 9 series. I remember buying it at an exchange in December 2004. I kept it in case I would some day try to restore the data.

Recently I bought a NAS, and while I was organizing my files I got curious whether the data on the old disk could be restored. So I searched some of the local classified advertising sites to see if anyone was selling an identical disk. Two days later one arrived in the mail.

The controller board is attached to the hard disk casing with special screws that can be removed with a Torx screwdriver. After replacing the faulty controller with the one from the disk I obtained via the ad, I placed the drive in an old computer and booted Windows. The disk would not show up in Explorer, so I checked Disk Management, which listed the disk with two unknown partitions. I figured they were probably Linux file systems, so I booted the old Windows PC from an Xubuntu live CD and voilà: I could access the data on the revived disk.
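For reference, inspecting and mounting the disk from the live environment comes down to something like this (the device name /dev/sdb1 is an assumption; check the fdisk output for the actual one):

sudo fdisk -l                            # list all disks and their partitions
sudo mkdir -p /mnt/maxtor
sudo mount -o ro /dev/sdb1 /mnt/maxtor   # mount read-only, just to be safe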

Turns out there is roughly 14 GB of data on the disk: various stuff from school and internships, but mostly hobby projects. To anyone but me it would be plain garbage, but to me it’s a pure goldmine.

Here’s how to add OpenID delegation to your Octopress blog.

Create a new file called ‘openid.html’ inside the _includes directory and paste the following content:

{% if site.openid_server %}
<link rel="openid.server" href="{{ site.openid_server }}" />
<link rel="openid.delegate" href="{{ site.openid_delegate }}" />
{% endif %}

Add a reference to openid.html in head.html (also inside the _includes directory):

{% include openid.html %}

Update the _config.yml file with the OpenID server and delegate values:
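For example (these values are placeholders; use your own OpenID provider’s server endpoint and your delegate identity URL):

openid_server: https://openid.example.com/server
openid_delegate: https://example.com/yourname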


Now run rake generate to update your blog.

In October 2013 my home Linux server crashed after three years of loyal service. To replace it I bought a Raspberry Pi, a credit-card-sized single-board computer. Due to other projects and the many social obligations I had in December, I did not have time to restore my blog.

Today I took the time to install nginx, Ruby (through RVM) and finally Octopress. This setup is completely different from the last one, which hosted a WordPress blog (PHP) with a MySQL back-end, served by the Apache web server (on a Mini ATX machine).
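For the curious, the Ruby part boils down to something like this (the Ruby version is an assumption; Octopress targeted 1.9.3 at the time):

curl -sSL https://get.rvm.io | bash -s stable
rvm install 1.9.3
rvm use 1.9.3 --default
gem install bundler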

Whenever time allows, I will import the old blog posts.

Recently I had to add a whole bunch of .NET assemblies to an ASP.NET MVC project. I am not kidding when I say there were about 60 DLLs. Instead of adding all those files separately, I wondered whether it was somehow possible to merge them into one.

After a short search I found a free tool from Microsoft Research called ILMerge.

Here’s the command for merging all (.NET Framework v4) DLLs in the current working directory into a single DLL file:

ilmerge /wildcards /t:library /targetplatform:v4 /out:Framework.dll *.dll

And here’s the direct download link:

It would be nice to see this stuff integrated into Visual Studio…

Recently a couple of serious security issues (see CVE-2013-0155 and CVE-2013-0156) were discovered in the Rails framework, leaving practically all versions of Rails vulnerable to attack. To my surprise, even the Dutch government-owned DigiD (an identity management platform which government agencies of the Netherlands use to verify the identity of Dutch citizens on the Internet) was running on Rails and had to temporarily shut down its service until the security hole was plugged.

Since I maintain one Rails 3.1 application on a production server, I needed to upgrade to 3.1.10 as well. Simply installing Rails 3.1.10 will NOT be enough! You will have Rails 3.1.10 with the security fixes installed on your system, but your Rails application will keep using whatever version is pinned in its Gemfile.

Here’s how to migrate to 3.1.10 (or later):

In your Gemfile you will see a fixed version, e.g.:

source 'https://rubygems.org'
gem 'rails', '3.1.0'
# rest of file omitted

Change ‘3.1.0’ (or whatever version you have there) so it reads:

source 'https://rubygems.org'
gem 'rails', '~> 3.1.0'
# rest of file omitted

The ‘~>’ operator (the “pessimistic” version constraint) means “use any 3.1.x release at or above 3.1.0”; currently that resolves to 3.1.10, the latest stable version within the 3.1 release branch. In theory you shouldn’t have to worry that upgrading might break things, since 3.1.10 (or later) should be backwards compatible with any previous 3.1.x version.

Now you update Rails using:

sudo bundle update rails

Be sure to redeploy and/or restart your Rails application and you’re set!
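To double-check that the application now resolves the patched version, you can ask Bundler:

bundle show rails   # prints the path of the resolved gem, ending in e.g. rails-3.1.10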

If you have previously used bundle pack to include gems as part of your deployment (in the vendor/cache directory), then those too will be automatically updated.

In case you run into a Ruby error instead, this is probably because you have an outdated version of Ruby; the RangeError below is a known RubyGems issue on older Ruby 1.8.7 patch levels.

You might see e.g.:

Fetching source index for
/usr/local/lib/site_ruby/1.8/rubygems/requirement.rb:109:in `hash': bignum too big to convert into `long' (RangeError)
        from /usr/local/lib/site_ruby/1.8/rubygems/requirement.rb:109:in `hash'
        from /usr/local/lib/site_ruby/1.8/rubygems/specification.rb:1308:in `hash'

You can either upgrade Ruby to a higher patch level or patch it yourself. More info here.

Hope this helps someone.

Almost two weeks and 50 commits since the last post. A lot of spare time has been spent on Ruby hackery. :)

Long story short: we now have a parser, (re)written in Ruby, which can parse IRC logfiles in either Eggdrop or Irssi format and save the parsed lines to a MySQL database. The Rails web application basically consists of three pages: 1) the logfile index page, 2) the logfile detail page and 3) the search page. Search is powered by Sphinx, a lightning-fast search engine.
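To give an idea of what the parsing step does, here is an illustrative sketch (not the actual project code; the Irssi pattern is simplified):

# Parse an Irssi-style message line such as "12:34 <nick> hello world".
IRSSI_MESSAGE = /\A(\d{2}):(\d{2})\s+<[@+]?([^>]+)>\s+(.*)\z/

def parse_line(line)
  return unless (m = IRSSI_MESSAGE.match(line))
  { hour: m[1].to_i, minute: m[2].to_i, nick: m[3], text: m[4] }
end

p parse_line("12:34 <ramses> hello world")
# => {:hour=>12, :minute=>34, :nick=>"ramses", :text=>"hello world"}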

Some remarkable features:

  • The parser can parse a single IRC logfile (in Eggdrop or Irssi format), a whole directory or a subset (when using a mask).
  • The logfile index page shows a list with links to the 25 most recent logfiles, in reverse chronological order.
  • The logfile index page is paged, and when JavaScript is enabled in the browser it acts as an endless page: as you scroll down, more and more links to logfiles are shown.
  • The logfile detail page has links to the previous and next logfiles and is pjax-enabled, meaning you get a faster browsing experience because the entire page is not reloaded.
  • The web application comes with page output caching enabled by default, making subsequent requests for a previously generated logfile page as fast as possible.
  • Each line on the logfile detail page has a timestamp which acts as an anchor. This makes it easy to link to that specific line.
  • Search is powered by the Sphinx search engine, which is extremely fast at indexing and searching.
  • Clicking on a search result will redirect you to the logfile detail page at the correct line (using anchors) and the search keyword will be highlighted as well.
  • You are automatically redirected to the logfile detail page when there is exactly one search result.
  • The nicknames on the logfile detail page are colorized so it’s easier to follow the conversation. Colors are preserved when someone changes his or her nick.
  • Chat messages from the same nick are grouped as long as no other events have occurred.

Known issues/limitations:

  • Supported timestamps are limited to hours and minutes.
  • Search and highlighting are currently limited to a single keyword.
  • The nick colorizer currently supports only fourteen unique colors.
  • No page output cache expiration methods have been implemented yet.
  • Eggdrop and Irssi are the only two currently supported logfile formats.
  • MySQL is the only supported database back-end.
  • Not a lot of error checking yet; there are also no custom error pages or exception notifications.

The above issues and limitations, as well as new features, will of course be addressed in the future. :)

The source code can be found on GitHub.

When I was trying to install the IRC Collective parser on my Debian home server I ran into some issues, which are the result of not maintaining the source code. It has been roughly four (4!) years since the last commit (February 3rd, 2009). The ConfigFile module had been deprecated and I couldn’t figure out how to implement its successor. My Perl was very rusty, since I don’t normally program in Perl, and it seems I used quite a few Perl shortcuts in the source code which I have forgotten over the past few years. Perl is known to be “write once, read never”. :)

Since I have been doing a lot of development with Ruby (and Rails) lately, I pondered whether I could easily rewrite the core parts of the parser (parse an IRC logfile and store the parsed lines in a SQL database). One of the preconditions was that I could reuse the regular expressions from the Perl version. After a quick test it seemed this would work!
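And the patterns really do carry over almost verbatim; a hypothetical example (not one of the project’s actual expressions):

# Perl:  $line =~ /^(\d\d):(\d\d) <(\S+)> (.+)$/
# The very same pattern works unchanged in Ruby:
line = "21:05 <someone> brb"
if line =~ /^(\d\d):(\d\d) <(\S+)> (.+)$/
  puts "#{$1}:#{$2} #{$3} said: #{$4}"
end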

The Ruby version currently parses Eggdrop and Irssi logfile formats and can store the parsed lines in a MySQL database. Support for other logfile formats and SQL databases will be added later.

The source code for the new Ruby version of IRC Collective (which is a lot more object-oriented and DRY) can be found on GitHub.

The next step is to create a nice Rails web application for viewing the logfiles and searching through them…

Happy new year!

I spent the last 4 evenings learning about Linux security and putting it to practice right away.

After the attack was discovered and the server was shut down, I took some time to analyze the attack in detail. You can read about my analysis in a separate blog post: “Debian Linux home server compromised – Discovery and Analysis”.

Shortly after shutting down the server I took some time to come up with a contingency plan:

  • Take down compromised server
  • Alert users that system is offline
  • Disconnect/disable any network interfaces
  • Connect monitor and keyboard to compromised server
  • Analyze (disconnected!) compromised server
  • Learn from attack
  • Re-install server (start from scratch!) with hardened security
  • Bring server back online
  • Notify users
  • Important: pro-actively monitor server and keep installed software updated

The contingency plan differs from situation to situation, from organization to organization (policies), and from person to person (skill), etc. You could, for example, make an image of the hard disk (or system partition) and mount it read-only somewhere, so you can thoroughly inspect it without the risk of making any changes whatsoever. But this is simply a home server with a pretty vanilla Debian Linux installation, so there’s no need to take it to such levels. And there’s plenty of reason to assume the attacker was only a script kiddie.
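For completeness, imaging a partition and mounting it read-only could look like this (device name and paths are assumptions):

# Image the system partition of the compromised disk, then mount the image read-only.
dd if=/dev/sdb1 of=/srv/images/compromised.img bs=4M conv=noerror,sync
mkdir -p /mnt/forensics
mount -o ro,loop /srv/images/compromised.img /mnt/forensics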

After the analysis was done I proceeded by wiping the hard disk (of course, if you’re really paranoid you could consider trashing the hard disk and buying a new one) and installed Debian Linux on the box.

This time I did my homework, and the new setup is quite different, as you can tell from the comparison below:

Comparison of the old setup versus the new setup:

  • Firewall. Old: not present (behind NAT, with a couple of ports forwarded). New: rules that allow certain services only from the LAN, plus rules to block hammering.
  • Intrusion detection. Old: not present. New: installed portsentry and tripwire; anomalies are reported to me via email.
  • SSH. Old: password authentication. New: password authentication disabled; user whitelist (no root); limited authentication retries; keys with passphrases; installed fail2ban.
  • Bash. Old: no precautions. New: extended default history size; added timestamping; protected history variables; append-only history file; logged-on users are shown whenever I log on.
  • Temp directories. Old: no precautions. New: prevent executing commands in /dev/shm, /tmp and /var/tmp (see the sketch below).
  • Logfiles. Old: default logrotate configuration. New: installed logcheck; extended the logrotate defaults to keep logfiles longer.
  • Anti-virus and rootkit checkers. Old: not present. New: installed clamav, chkrootkit and rkhunter; anomalies are reported to me via email.
  • Password policy. Old: not present. New: configured using PAM.
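The temp-directory measure, for example, comes down to /etc/fstab entries roughly like these (a sketch; handling /var/tmp with a bind mount is one common approach):

# Mount /tmp and /dev/shm without execute permission, then bind /var/tmp onto /tmp.
tmpfs  /tmp      tmpfs  defaults,noexec,nosuid,nodev  0 0
tmpfs  /dev/shm  tmpfs  defaults,noexec,nosuid,nodev  0 0
/tmp   /var/tmp  none   bind                          0 0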

I have tested the installation with some basic penetration testing and everything seems to work perfectly!

There’s still some stuff left for me to do such as:

  • Automatically poll for new updates and install them.
  • Transfer backup archives to a remote server (over SSH+rsync).
  • Setup disk/user quota.
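For the backup transfer, something along these lines should do (host, user and paths are placeholders):

# Push local backup archives to the remote server over SSH;
# -a preserves attributes, -z compresses data in transit.
rsync -az -e ssh /srv/backups/ backupuser@nas.example.com:/srv/backups/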

I considered reporting anomalies/alerts via SMS, but to be honest I can’t be bothered with them when I’m at work or at school. There are more important things in life than your home server. ;)

If you’re interested in securing/hardening your Debian Linux installation, I suggest you take a look at my bookmarks on Delicious tagged “security”. Look for links in the period from the 10th to the 14th of January.
