EyeInTheSky

EyeInTheSky is one of my newer projects, an idea which I’ve stolen from two other people.

Wikipedia has so many modifications being made to it that it’s just not possible to keep an eye on everything you want to watch. While the MediaWiki software has a feature known as a watchlist, it’s neither flexible nor easy to use in my opinion.

EyeInTheSky is an IRC bot (seems to be my speciality!) which monitors the Wikimedia IRC recent changes feed, compares every entry against a set of regular expressions, and reports any matches to a different IRC network.

It’s possible to set up the bot with an entire XML tree of regular expressions matching on the username, edit summary, and page title. There are also logical constructs which allow more-or-less unlimited regexes to specify what exactly you want to watch.

For example, with this tree, I could specify I wanted to stalk all the edits which are made by someone with “the” in their name, “and” or “or” but not “xor” in the page title, and with “train” in the edit summary:
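A configuration along these lines could express that stalk – note that the element names here are illustrative, not necessarily the bot's actual schema, which lives in the EyeInTheSky source:

```xml
<!-- Hypothetical element names; see the EyeInTheSky source for the real schema. -->
<stalk flag="trainspotting">
  <and>
    <user value="the" />          <!-- "the" anywhere in the username -->
    <or>
      <page value="and" />        <!-- "and" or "or" in the page title... -->
      <page value="or" />
    </or>
    <not>
      <page value="xor" />        <!-- ...but not "xor" -->
    </not>
    <summary value="train" />     <!-- "train" in the edit summary -->
  </and>
</stalk>
```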

I can also set a flag on a stalk – something I can then set my IRC client to respond to – and the bot will include that flag in its report for every stalked edit.

Of course, it’s not just edits that can be stalked this way – log entries are sent to the IRC RC feed in the exact same format, with the entire log entry (except for the time and user) in the edit summary field. It’s just a case of specifying Special:Log/delete as the page title to get the deletion log, for example.

The bot logs all stalked edits and is capable of emailing the entire log to me. I clear the log when I disconnect from IRC, and when I get back on, I can have it email me the log, go through what I’ve missed, and catch up.

I’m planning on making it multi-channel too, probably with multiple people able to command it to email them the log. I can already tell it not to email certain stalks – some of the stalks that have been set up don’t interest me, but rather other people, so I just ignore those when it reports them and have it set not to email me for them.

There’s quite a lot this little bot can do. If you want to learn more, I’d recommend taking a look over the source code and seeing what you think!

The source code is available on GitHub.

Wikipedia Account Request System – Password Storage

The current ACC system has some really useless bits which are hard to change, such as the password storage system. At the moment, the database is filled with “securely” stored passwords such as “5f4dcc3b5aa765d61d8327deb882cf99”. A quick Google search will tell you exactly how the passwords are currently stored: a simple, unsalted MD5 hash. This is quite clearly inadequate, so as part of the rewrite I’ve been aiming to store the passwords much more securely.

In all the examples, I’m going to use the password “password”.

At the moment, setting a password is simple – just store

md5("password");

into the database. Checking the password is also simple – just check

md5($suppliedpassword) === $storedpassword

However, I wanted to store the passwords with a salt – a different salt for each user – making cracking the MD5 hashes much less feasible.

The function I’m now using to hash a password is this:

The $2$ at the front indicates the version of the password hash, for later use. For a password “password” and a username “username”, this gives the stored hash $2$8c6e7b658b4be4bb325870a1764ca4fb.
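The original function isn’t reproduced here, so as a rough reconstruction – assuming MD5 over a salted password, with the username standing in as the per-user salt – a Python sketch might look like:

```python
import hashlib

def hash_password(password: str, username: str) -> str:
    """Version-2 password hash: a '$2$' version prefix plus an MD5 of
    the salted password. The exact salting scheme isn't shown in the
    post; using the username as the per-user salt is an assumption."""
    digest = hashlib.md5((username + password).encode("utf-8")).hexdigest()
    return "$2$" + digest
```

The version prefix is the important design point: it lets the checking code tell at a glance which scheme produced a stored hash.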

When a password is checked, the code looks at the first three characters of the stored password and determines whether they match $2$. If they do, the provided password is hashed with the new function and compared to the stored hash. If they match, it’s the right password.

If the first three characters are not $2$, the code hashes the password using the old method and compares it; if it matches, it takes the provided password, hashes it with the new function, saves that to the database, and returns that it’s the right password.

This has the effect of being transparent to the user, but increasing the security of their password the first time they log in to the new system.
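The check-and-upgrade logic can be sketched as follows – a minimal Python sketch, not the actual PHP; the `save` callback stands in for the database update, and the username-as-salt scheme is my assumption:

```python
import hashlib

def md5_hex(s: str) -> str:
    return hashlib.md5(s.encode("utf-8")).hexdigest()

def check_password(supplied: str, stored: str, username: str, save) -> bool:
    """Check `supplied` against `stored`, transparently upgrading
    old-style plain-MD5 hashes to the new salted, versioned format."""
    new_hash = "$2$" + md5_hex(username + supplied)  # assumed salt scheme
    if stored[:3] == "$2$":
        # Already in the new format: straight comparison.
        return new_hash == stored
    # Old-style unsalted MD5: compare, then re-hash and save on success.
    if md5_hex(supplied) == stored:
        save(new_hash)
        return True
    return False
```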

Wikipedia Account Request System

I thought it was about time I did a bit of a technical post on the new Wikipedia Account Request System, which has been slowly worked on for what’s nearly a year(!) now.

It’s still a long way off, as I’ve not had time to actually buckle down and do work on it, but I’m hoping that I’ll be able to spend a bit more time with it in the near future.

Since the migration to GitHub, I’ve been doing quite a bit of development work on it, and have recently (semi) finalised the database, which will hopefully speed things up a bit, and stop me from saying “ooh, let’s do this with the database”, “nah, nevermind”, “ooh, let’s do this instead”, etc.

The database finalisation comes after writing the conversion script to convert the database from the current format into the new one – there are roughly 35 operations to be done to make the database sort-of OK, 28 of which are performed on one single table.

I’m taking this opportunity to make these somewhat huge changes to the core of the system because not much in the new system uses the database yet, and a huge migration would have to happen in order to swap from one system to the other anyway – so I’m not too fussed about making more changes like this.

As the developers of the current system will know, the code is quite frankly shocking. I’m pretty certain that SQL injection and XSS attacks are prevented, but only because we apply about 15000 sanitisation operations to the input data, mangling anything that’s remotely cool, such as unicode characters. To cite a recent example with the bullet character (•): the sanitisation mangled its encoding into a mess that MIGHT be displayed correctly on the tool, but other areas just don’t work – MediaWiki rejected it as a bad title, because it was passed the mangled form instead of the raw •.

The new system should hopefully solve some of these issues.

For starters, all the database quote escaping is going – I’m not even going to do database input sanitising – and I’m going to actively reject any change that adds it.

There’s a reason for this, and that is because of the database abstraction layer I’m using for this new system – PDO.

PDO handles all the database connection details for me automatically, and supports both raw SQL queries, and prepared statements. Where the former requires sanitisation to be secure, the latter doesn’t. You simply pass in place-holders (called parameters) to the query where your input goes. You can then bind values or variables to the parameters, and execute the query. Because the query and parameters are passed separately to the server, no sanitisation ever needs to happen because it’s just impossible to inject anything in the first place.
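PDO is PHP, but the placeholder mechanism works the same way in any parameterised database API. Here's the idea sketched in Python with sqlite3 – the table and the hostile value are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE acc_user (username TEXT, password TEXT)")

# The '?' placeholders are sent separately from the SQL text, so the
# bound values can never be interpreted as SQL -- no escaping required.
hostile = "x'; DROP TABLE acc_user; --"
conn.execute("INSERT INTO acc_user VALUES (?, ?)", ("bob", hostile))

row = conn.execute(
    "SELECT password FROM acc_user WHERE username = ?", ("bob",)
).fetchone()
print(row[0])  # the hostile string comes back intact, uninterpreted
```

PDO additionally supports named placeholders (`:username`) alongside positional `?` ones; either way, the query and its parameters travel to the server separately.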

The really cool thing that I’m planning to (ab)use a lot is the ability to retrieve a database row from the database as an instance of a class you’ve previously defined.

The above is an actual excerpt from the User class of WARS at the moment, and the database structure of the acc_user table.

As you can see, the class has a set of fields which exactly match the names of the columns in the table. This is a key part of making the code work – all you need to do is create a query which pulls out all the columns for one row in the database, pass it the parameter which tells it which row to return, and then tell it to fetch an object, telling it which class to instantiate. A simple four-line function dealing with the searching and retrieval from the database, and instantiating a class with the relevant data – it’s actually beautiful! :D
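In PDO this is the PDO::FETCH_CLASS fetch mode. Here's the same row-to-object pattern sketched in Python with sqlite3 – the class, column names, and helper are illustrative, not the actual WARS code:

```python
import sqlite3

class User:
    """Fields mirror the columns of the (assumed) acc_user table."""
    id: int
    username: str
    email: str

def fetch_user(conn: sqlite3.Connection, user_id: int):
    """Pull one row and hydrate a User from it -- roughly what PDO's
    FETCH_CLASS mode does in a single call."""
    cur = conn.execute(
        "SELECT id, username, email FROM acc_user WHERE id = ?", (user_id,))
    row = cur.fetchone()
    if row is None:
        return None
    user = User()
    for (name, *_), value in zip(cur.description, row):
        setattr(user, name, value)  # column name -> matching field
    return user
```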

My plan is to use this structure of data access objects for all the other database tables, and then I should be able to deal with the entire system on a purely object-based level, rather than constantly mashing in database queries here and there.

So, I’m building a contribs tool….

So, I’ve decided to build a contributions tool in a similar style to the popular Huggle anti-vandalism tool.

Initially, I was asked to review the contribs of a specific user who was considering running for adminship. So, my lazy brain decided that I couldn’t be bothered reviewing contrib after contrib using tab after tab, so instead I wrote an app to load contribs, and load a diff for me.

I’ve still not started reviewing the contribs, but it’s pretty cool for an hour or two’s worth of coding and tapping the MediaWiki API.
Screenshot of Chronological Contributions Walker

This is just an example setup going through some of Dusti’s contribs, randomly clicking skip and flag until I got a good screenshot, but I’m planning on adding an “open in browser” option and an “export flagged” option too.
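For anyone curious how the contribs get pulled, the MediaWiki API's list=usercontribs module does the heavy lifting. A Python sketch of the request and response handling – the helper names are mine, not the tool's actual code:

```python
import json
from urllib.parse import urlencode

API = "https://en.wikipedia.org/w/api.php"

def contribs_url(user: str, limit: int = 50) -> str:
    """Build a query URL for the list=usercontribs API module."""
    params = {
        "action": "query",
        "list": "usercontribs",
        "ucuser": user,
        "uclimit": limit,
        "format": "json",
    }
    return API + "?" + urlencode(params)

def parse_contribs(payload: str):
    """Extract (revid, title, comment) tuples from an API response."""
    data = json.loads(payload)
    return [(c["revid"], c["title"], c.get("comment", ""))
            for c in data["query"]["usercontribs"]]

# urllib.request.urlopen(contribs_url("Dusti")) would fetch the real feed;
# each revid can then be fed to the API's diff facilities to show the edit.
```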

Eventually it’ll probably find its way into a Subversion repo on this server somewhere (kinda surprised it hasn’t already, actually), and I’ll probably release it for general use sooner or later. It’s pretty cool though for not much time spent developing it :)

Well, that’s all folks!

My last A Level exam is now out of the way, unless I end up retaking a year, which given my epic maths failures recently wouldn’t surprise me.

Now I’m just waiting for the 22nd June, when I start work at a local company; a week’s holiday for a French family wedding; and then exam results in August. Hopefully that means University in October – or September if I’m unlucky and miss my firm choice – though if I do end up in clearing I have no idea when I’ll start.

Now exams are over, I can start to concentrate more fully on various projects that I’ve been saving up throughout the last few months.

  • Helpmebot version 6 is a major project that I’m hoping to continue and finish off over summer. It’s a complete re-write, moving towards an object-oriented approach for the bot; this will make it much easier to add database caching, which should reduce its load on the database servers and, ultimately, decrease execution time.
  • ACC: OverlordQ has nicely proposed a complete re-write of the tool, which is a good idea given the code is now a stinking pile of crap, and a nightmare to work with. I was planning on a staged re-write of the tool, converting to OO, but OQ seems to think that it would be better to do a complete re-write, and I’m inclined to agree.
  • A. S. S.: Slug Wars: This is a project that will be hopefully started over Summer, and also hopefully nearing completion when we go to Uni. More info will come on http://albinoslugstudios.co.uk/ in the near future hopefully :D

There are several other projects that I have forgotten, but those are the main ones.

Also distracting me over summer will be Project Euler, and various other small programs that I’m hoping to write.

One thing that has got my attention is a permutation calculation program, which can be done nicely using recursion. I’m hoping to implement some version of these ( http://www.bearcave.com/random_hacks/permute.html ) over summer at some point, which will be useful for calculating all the different possibilities for “wimt”, for instance: imtw, imwt, itmw, itwm, iwmt, iwtm, mitw, miwt, mtiw, mtwi, mwit, mwti, timw, tiwm, tmiw, tmwi, twim, twmi, wimt, witm, wmit, wmti, wtim, and wtmi. While it’s possible to work that one out manually, I want to get the results for something like “farosdaughter”, “stwalkerster”, and “cremepuff222”. For “stwalkerster”, treating each of the 12 letters as distinct, there will be 12! = 479001600 permutations (though with the repeated letters only 29937600 are actually distinct), so I’m expecting to produce a 5.7G file of all the permutations. Following on from that, I want to sort them into alphabetical order.
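A recursive generator along these lines, sketched here in Python (the linked article works through the same idea in other languages):

```python
def permutations(s: str) -> list[str]:
    """Recursively generate every arrangement of the characters in s.
    Positions are treated as distinct, so repeated letters produce
    repeated strings -- matching the 12! figure for 'stwalkerster'."""
    if len(s) <= 1:
        return [s]
    result = []
    for i, ch in enumerate(s):
        # Fix one character, permute the rest, and prepend.
        for rest in permutations(s[:i] + s[i + 1:]):
            result.append(ch + rest)
    return result

perms = sorted(permutations("wimt"))
print(len(perms))  # 4! = 24 arrangements of four distinct letters
```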

Also, I am looking to implement a simple linked list in C++, with the possibility of expanding it to form a doubly linked list, and possibly a circular linked list. Using a linked list, I can easily sort all those permutations into order on insertion.
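The sort-on-insertion idea, sketched in Python rather than the planned C++ (the node walk translates directly to pointer manipulation):

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

class SortedLinkedList:
    """Singly linked list that keeps its items ordered on insertion."""
    def __init__(self):
        self.head = None

    def insert(self, value):
        # Walk until the next node would be >= value, then splice in.
        if self.head is None or value <= self.head.value:
            self.head = Node(value, self.head)
            return
        cur = self.head
        while cur.next is not None and cur.next.value < value:
            cur = cur.next
        cur.next = Node(value, cur.next)

    def to_list(self):
        out, cur = [], self.head
        while cur is not None:
            out.append(cur.value)
            cur = cur.next
        return out
```

Insertion is O(n) per item, though, so sorting n permutations this way is O(n²) overall – hence the binary tree idea below being attractive.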

Alternatively, a better approach might be to use a binary tree. This will vastly improve lookup time – a perfectly balanced binary tree over that many entries needs at most 29 comparisons to find a specific item – and it makes implementing a binary search much easier.
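A minimal binary search tree sketch, again in Python rather than the planned C++:

```python
class BSTNode:
    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None

def bst_insert(root, value):
    """Insert value, returning the (possibly new) subtree root.
    Duplicates are ignored, which also deduplicates permutations
    of strings with repeated letters."""
    if root is None:
        return BSTNode(value)
    if value < root.value:
        root.left = bst_insert(root.left, value)
    elif value > root.value:
        root.right = bst_insert(root.right, value)
    return root

def bst_contains(root, value):
    # Each comparison halves the remaining search space when the tree
    # is balanced: ceil(log2(479001600)) = 29 comparisons at worst.
    while root is not None:
        if value == root.value:
            return True
        root = root.left if value < root.value else root.right
    return False
```

An in-order traversal of the finished tree then yields all the items in alphabetical order for free.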