To restore some old personal PHP projects from around 2004, I needed to recreate the environment in which they were built. Otherwise I would probably have had to change the source code to get them working.

Recreating the environment meant installing a server with roughly the same software: Debian 3.1 “Sarge”, Apache 2.0.54, MySQL 4.0.24 and PHP 4.3.

So I downloaded the network installer ISO image for Debian 3.1 “Sarge” and created a virtual machine using VirtualBox. Make sure you add a virtual hard disk under the IDE controller; I don’t think SATA was supported out of the box at the time. On my first attempt I forgot to do this, got the error “No partitionable media were found.”, and had to abort the setup.
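
If you prefer to script the disk setup, something along these lines should work with VBoxManage (a sketch based on recent VirtualBox versions, assuming a VM already registered under the name ‘sarge’; exact flags may differ per version):

VBoxManage createhd --filename sarge.vdi --size 8192
VBoxManage storagectl sarge --name IDE --add ide
VBoxManage storageattach sarge --storagectl IDE --port 0 --device 0 --type hdd --medium sarge.vdi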

During the setup you get the option to edit the APT repository file. You must do this, because the Sarge repositories have long since moved to archive.debian.org; otherwise you won’t be able to install any packages.

Add the following line to sources.list:

deb http://archive.debian.org/debian/ sarge main contrib

And in case you need to be able to install backports, add the following line to sources.list as well:

deb http://archive.debian.org/debian-backports sarge-backports main
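
With the archive entries in place, update the package index and install the stack. The package names below are how I remember them from Sarge; verify them with apt-cache search if they don’t resolve:

apt-get update
apt-get install apache2 libapache2-mod-php4 php4-mysql mysql-server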

In 2006 one of my IDE hard disks crashed, or at least stopped working. It was a 120 GB Maxtor DiamondMax Plus 9. I remember buying it at an exchange in December 2004. I kept it in case I would some day try to restore the data.

Recently I bought a NAS, and while organizing my files I became curious about restoring the data on the old disk. So I searched some of the local classified advertising sites to see if anyone was selling the same model. Two days later I received an identical hard disk in the mail.

The controller board is attached to the hard disk casing with special screws that can be (un)screwed with a Torx screwdriver. After replacing the faulty controller with the one I obtained via the ad, I placed the hard disk in an old computer and booted Windows. The disk would not show up in Explorer, so I checked Disk Management, which listed the disk with two unknown partitions. I figured they were probably Linux file systems, so I booted the old Windows PC with an Xubuntu live CD and voilà: I could now access the data on the revived hard disk.

Turns out there is roughly 14 GB of data on the disk: various stuff from school and internships, but mostly hobby projects. To anyone but me it would simply be garbage, but to me it’s a pure goldmine.

Here’s how to add OpenID delegation to your Octopress blog.

Create a new file called ‘openid.html’ inside the _includes directory and paste the following content:

{% if site.openid_server %}
<link rel="openid.server" href="{{ site.openid_server }}" />
<link rel="openid.delegate" href="{{ site.openid_delegate }}" />
{% endif %}

Add a reference to openid.html in head.html (also inside the _includes directory):

{% include openid.html %}

Update the _config.yml file with the OpenID server and delegate values:

openid_server: http://www.myopenid.com/server
openid_delegate: http://ramsesmatabadal.myopenid.com

Now run rake generate to update your blog.
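
To verify that the tags ended up in the generated pages, you can preview the site and grep for them (assuming the default rake preview port of 4000):

curl -s http://localhost:4000/ | grep openid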

In October 2013 my home Linux server crashed after three years of loyal service. To replace it I bought a Raspberry Pi, a credit-card-sized single-board computer. Due to other projects and the many social obligations I had in December, I did not have time to restore my blog.

Today I took the time to install nginx, Ruby (through RVM) and finally Octopress. This setup is completely different from the last one, which hosted a WordPress blog (PHP) with a MySQL back-end, served by the Apache web server (on a Mini ATX machine).

Whenever time allows, I will import my old blog posts.

What I like about Lost Planet 3:

  • focused on the single-player campaign
  • story (and the fact that it’s a prequel!)
  • character development (both main and NPC)
  • atmosphere (even more ice-age like than LP1)
  • graphics (Unreal Engine 3)
  • plenty of cutscenes
  • main character is likeable

What I don’t like about Lost Planet 3:

  • many (minor) glitches
  • limited variety among akrid enemies
  • no vital suits like in LP1 (but understandable given the timeline)
  • too many NPCs look and sound alike
  • required QTE (quick time events) to win boss fights
  • cannot save at random times (played PC version)

My review verdict/score:

Lost Planet 3 is overall a good game and a worthy prequel to the first two installments. But as the story progressed, I couldn’t help but feel that it was somewhat rushed and became too predictable.

3 out of 5 stars from me.

Recently I had to add a whole bunch of .NET assemblies to an ASP.NET MVC project. I am not kidding when I say it was about 60 DLLs. Instead of adding all those files separately, I wondered whether it was somehow possible to merge them into one.

After a short search I found a free tool from Microsoft Research called ILMerge.

Here’s the command for merging all (.NET Framework v4) DLLs in the current working directory into a single DLL file:

ilmerge /wildcards /t:library /targetplatform:v4 /out:Framework.dll *.dll

And here’s the direct download link: http://www.microsoft.com/en-us/download/details.aspx?id=17630

It would be nice to see this integrated into Visual Studio…

Recently a couple of serious security issues (see CVE-2013-0155 and CVE-2013-0156) were discovered in the Rails framework, leaving practically all versions of Rails vulnerable to attack. To my surprise, even the Dutch government-owned DigiD (an identity management platform which government agencies of the Netherlands can use to verify the identity of Dutch citizens on the Internet) was running on Rails and had to temporarily shut down its service until the security hole was plugged.

Since I maintain a Rails 3.1 application on a production server, I needed to upgrade to 3.1.10 as well. Simply installing Rails 3.1.10 will NOT be enough! You will have Rails 3.1.10 with the security fixes installed on your system, but your Rails application will keep using whatever version is pinned in its Gemfile.

Here’s how to migrate to 3.1.10 (or later):

In your Gemfile you will see a fixed version, e.g.:

source 'http://rubygems.org'
gem 'rails', '3.1.0'
# rest of file omitted

Change ‘3.1.0’ (or whatever you have there) so it reads:

source 'http://rubygems.org'
gem 'rails', '~> 3.1.0'
# rest of file omitted

The ‘~>’ is a pessimistic version constraint, meaning “use the latest 3.1.x”; currently this resolves to 3.1.10, since that’s the latest stable version within the 3.1 release branch. In theory you shouldn’t have to worry that upgrading might break things, since 3.1.10 (or later) should be backwards compatible with any previous 3.1.x version.

Now you update Rails using:

sudo bundle update rails

Be sure to redeploy and/or restart your Rails application and you’re set!

If you have previously used bundle pack to include gems as part of your deployment (in the vendor/cache directory), then those too will be updated automatically.
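
To double-check which Rails version the application will actually load, ask Bundler or inspect the lockfile (the grep pattern below is just one way to do it):

bundle show rails
grep ' rails (' Gemfile.lock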

In case you run into a Ruby compile error, this is probably because you have an outdated version of Ruby.

You might see e.g.:

Fetching source index for http://rubygems.org/
/usr/local/lib/site_ruby/1.8/rubygems/requirement.rb:109:in `hash': bignum too big to convert into `long' (RangeError)
        from /usr/local/lib/site_ruby/1.8/rubygems/requirement.rb:109:in `hash'
        from /usr/local/lib/site_ruby/1.8/rubygems/specification.rb:1308:in `hash'
        ...

You can either upgrade Ruby to a higher patch level or patch it yourself. More info here.

Hope this helps someone.

Almost two weeks and 50 commits since the last post. A lot of spare time has been spent on Ruby hackery. :)

Long story short, we now have a parser, (re)written in Ruby, which can parse IRC logfiles in either Eggdrop or Irssi logfile format and save the parsed lines to a MySQL database. The Rails web application basically consists of three pages: 1) the logfile index page, 2) the logfile detail page, and 3) the search page. Search is powered by a lightning-fast search engine called Sphinx.

Some remarkable features:

  • The parser can parse a single IRC logfile (in Eggdrop or Irssi format), a whole directory or a subset (when using a mask).
  • The logfile index page shows a list with links to the top 25 most recent logfiles, in reverse chronological order.
  • The logfile index page is paged, and when JavaScript is enabled in the browser it acts as an endless page: as you scroll down, more and more links to logfiles are shown.
  • The logfile detail page has links to the previous and next logfiles and is pjax-enabled, meaning you get a faster browsing experience because the entire page is not reloaded.
  • The web application comes with page output caching enabled by default, making subsequent calls to a previously generated logfile page as fast as possible.
  • Each line on the logfile detail page has a timestamp which acts as an anchor. This makes it easy to link to that specific line.
  • Search is powered by the Sphinx search engine, which is extremely fast at indexing and searching.
  • Clicking on a search result will redirect you to the logfile detail page at the correct line (using anchors) and the search keyword will be highlighted as well.
  • You are automatically redirected to the logfile detail page when there is exactly one search result.
  • The nick names on the logfile detail page are colorized so it’s easier to follow the conversation. Colors are preserved when someone changes his or her nick (see the sketch after this list).
  • Chat messages from the same nick are grouped as long as no other events have occurred.
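
For the curious, here is a minimal, hypothetical Ruby sketch of the nick colorization idea (mapping nicks onto a fixed fourteen-color palette and carrying the color over on nick changes); it is not the actual implementation:

require 'zlib'

class NickColorizer
  PALETTE_SIZE = 14

  def initialize
    @colors = {}  # nick => palette index
  end

  # Return a stable palette index (0..13) for a given nick.
  def color_for(nick)
    @colors[nick] ||= Zlib.crc32(nick.downcase) % PALETTE_SIZE
  end

  # On a nick-change event, keep the old nick's color for the new nick.
  def rename(old_nick, new_nick)
    @colors[new_nick] = color_for(old_nick)
  end
end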

Known issues/limitations:

  • Supported timestamps are limited to hours and minutes.
  • Search and highlighting is currently limited to one single keyword.
  • Nick colorizer currently only supports up to fourteen unique colors.
  • No page output cache expire methods have been implemented yet.
  • Eggdrop and Irssi are the only two currently supported logfile formats.
  • MySQL is the only supported database back-end.
  • Not a lot of error checking, also no custom error pages or exception notification yet.

The above issues and limitations, as well as new features, will of course be addressed in the future. :)

The source code can be found on GitHub:

https://github.com/rmatabadal/irc-collective-ruby

When I was trying to install the IRC Collective parser on my Debian home server, I ran into some issues resulting from not having maintained the source code. It has been roughly four (4!) years since the last commit (February 3rd, 2009). The ConfigFile module was deprecated and I couldn’t figure out how to implement its successor. My Perl was also very rusty, since I don’t normally program in Perl, and it seems I used quite a few Perl shortcuts in the source code which I have since forgotten. Perl is known to be “write once, read never”. :)

Since I have been doing a lot of development with Ruby (and Rails) lately, I wondered whether I could easily rewrite the core parts of the parser (parsing an IRC logfile and storing the parsed lines in a SQL database). One of the preconditions was that I could reuse the regular expressions from the Perl version. After a quick test it seemed this would work!

The Ruby version currently parses Eggdrop and Irssi logfile formats and can store the parsed lines in a MySQL database. Support for other logfile formats and SQL databases will be added later.
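
To give an idea of what the line-level parsing looks like, here is a minimal, hypothetical Ruby sketch for a plain chat message in Irssi’s default log format (“HH:MM <nick> message”); the actual parser handles more event types and the Eggdrop format as well:

IRSSI_MESSAGE = /\A(\d{2}):(\d{2}) <[@+ ]?([^>]+)> (.*)\z/

def parse_irssi_line(line)
  m = IRSSI_MESSAGE.match(line.chomp)
  return nil unless m
  { hour: m[1].to_i, minute: m[2].to_i, nick: m[3], message: m[4] }
end

parse_irssi_line('21:05 <@ramses> hello world')
# => {:hour=>21, :minute=>5, :nick=>"ramses", :message=>"hello world"}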

The source code for the new Ruby version of IRC Collective (which is a lot more object-oriented and DRY) can be found on GitHub:

https://github.com/rmatabadal/irc-collective-ruby

The next step is to create a nice Rails web application for viewing the logfiles and searching through them…

Happy new year!
