Underbelly Cowgate in Minecraft

Another Underbelly tech and I have been working on recreating Underbelly Cowgate in Minecraft.

It’s still a work in progress, but here are some screenshots:

00-outside: View from “Cowgate”
01-lane: View down the lane
02-outside-box: Entrance to the bottom garage
03-bottom-garage: Reception area, including flyer shelves
04-boxoffice: Box office counter, with “Demon Shelf”
05-reception: Reception desk
06-behind-reception: Behind reception, showing the radio chargers and the doorway to behind the walls
07-demonshelf: Behind the walls near the web team. To the left is the Demon Shelf, so called because it’s a pain to get past when wiring everything up behind the wall
08-web-ticket-office: The web ticket office
09-outside-cafe: Entrance to Cow Cafe
10-cafe: Cow Cafe
11-press-office: Door to the Press Office
12-inside-press-office: Inside the Press Office
13-laughing-stock: The Laughing Stock, at Cow Cafe
14-laughing-stock-kitchen: Kitchen area of The Laughing Stock
15-top-garage: Entrance to Top Garage
16-tech-store: (WIP) Inside the Tech Store
17-bin-store: Bin Store
18-flyer-store: (WIP) Flyer Store and Street Team Office
19-block-stairs: The bottom of Block Stairs
20-block-stairs: Middle floor of Block Stairs
21-block-stairs: Top of Block Stairs
22-dancer-door: Belly Dancer, from the door
23-dancer-op: Belly Dancer, from the tech position
24-dancer-stage: Belly Dancer, from the stage
25-dancer-storage: Belly Dancer storage area
26-delhi-corridor: (WIP) Delhi Belly corridor
27-delhi-corridor: (WIP) Delhi Belly corridor
28-delhi-door: Delhi Belly, from the door
29-delhi-op: Delhi Belly, from the tech position
30-delhi-backstage: Backstage in Delhi Belly
31-delhi-stage: Delhi Belly, from the stage
32-delhi-dressing: Delhi Belly dressing room and storage area
33-delhi-iron-hvac: Delhi Belly and Iron Belly HVAC area
34-jellybelly-blockstairs: Outside Jelly Belly, near Block Stairs
35-staffroom-corridor: Staff room corridor
36-staffroom: Staff Room, and The Wall Of Shame
37-dm-office: Duty Manager’s office
38-jelly-dressing: Jelly Belly dressing room and storage area
39-iron-door: Iron Belly, from the door
40-iron-op: Iron Belly, from the tech position
41-iron-stage: Iron Belly, from the stage
42-iron-storage: Iron Belly storage area
43-iron2: Iron 2
44-iron2-siphon: Iron 2, near the Iron AC area, with the hole for the water siphon into Belly Button
45-white-door: White Belly, from the door
46-white-op: White Belly, from the tech position
47-white-stage: White Belly, from the stage
48-white-backstage: Backstage in White Belly
49-button-door: Belly Button, from the door
50-button-seating: Belly Button, from the seating area
51-button-stage: Belly Button, from the stage
52-button-op: Belly Button, from the op position
53-button-behindseating: Behind the seating area in Belly Button
54-button-underseating: Underneath the seating in Belly Button
55-button-mind-yer-heid: Mind Yer Heid! (Belly Button tech position)
56-button-corridor: Button Corridor
57-button-corridor: Button Corridor
58-button-storage: Button storage area
59-button-dressing: Button dressing room
60-button-stairs: Big/Button stairs
61-big-stairs: Big/Button stairs
62-big-progress: Big Belly
63-top-of-lane: Top of the Lane doorway
64-beer-belly: Beer Belly Bar

We’ve still got the other half of the building (Beer Belly / Jelly Belly / Belly Laugh / Wet Corridor / Wet Alcoves / Spiral Stairs / Victoria St Box Office / Lady Cave / Top Office) to do. Stay tuned for updates! I’ll try and get some real photos for comparison too!

Arrival at Zurich – Wikimedia Hackathon 2014

I’ve arrived!

After some hiccups with the API, and my shoes setting off the metal detector and earning me a full body scan from the security guards, the flight here was thankfully relatively uneventful. SwissAir provided free drinks and lemon cake in-flight, which was unexpected, but welcome.

The sight of the Alps as the plane was coming in to land in Zurich reminded me how much I love being in the mountains. Although Zurich is not really in the Alps, it’s not too far away, and just the overall feel of Switzerland (France too, but I’m not in France) is really relaxing and calming.

I’m quite glad to be back on the continent; it’s just a completely different atmosphere. Unfortunately, I’m not going to be able to get out and explore Zurich much while I’m here, as I have a lot of coding to do for the Wikimedia Hackathon!

It’s good to see faces I’ve not seen for a year – and to put faces to the names of people I didn’t meet last year in Amsterdam. While the Hackathon hasn’t officially started yet, a fair few people are already getting down to work, myself included. I’ve been making sure that the code I’m going to be working on is in a fit state to have new features added, so that I’m not building new features on top of existing bugs. I’m also attempting to replicate the working copy of the code that sits on my desktop machine onto my laptop – I thought I had all the pieces I needed, then I remembered how long it has been since I last did software development on my laptop!

Power initially proved problematic too: while I thought I had a Swiss power adaptor, all of the adaptors I had were in fact European ones. A mad dash around the shops before I left proved fruitless, so I had to buy one at the airport. UK and EU plugs are larger than Swiss plugs, and Swiss sockets elegantly fit three plugs into a space only slightly larger than a single UK or EU plug – so when an adaptor is plugged directly into a Swiss socket, it largely blocks the other two sockets. Thankfully, a fair few people (myself included) have brought extension leads (mainly European), which others can plug into without problems.

The main aim for my time here is to get a framework for OAuth support into the account creation tool, allowing us to replace a few key pieces of functionality whose current implementation is not ideal. First, I want to replace our confirmation of the on-wiki username; then I’ll work towards making the welcome bot post as the account creator’s actual account, rather than faking their identity with a bot. Lastly, if we can get the account creation tool to actually create the accounts itself, so we don’t need to redirect users to the account creation web form, that would be awesome.

I’m hoping to get through as much of that list as possible with assistance from the extension’s development team, but overall I’m aiming to get far enough with a working framework that I can complete the rest without assistance. If I can also fix some of the extension’s outstanding issues, and learn more about MediaWiki in the process, even better.

Operation Hydra

Here are some screenshots of the Intel map showing the aftermath of the brilliant Operation Hydra, organised by Agent @Foxyss4. Large numbers of Enlightened agents converged on the city to clear blocking links, erect a field covering the centre of the city, and then smash all the Resistance portals under the field.

Unfortunately, the large field came down too quickly, so we just destroyed as many Resistance portals as we could in the time we had left. Many thanks to Foxyss4, bhands, Hereticdark and anubis619 for their instrumental parts in putting up the field, including climbing Arthur’s Seat to create the necessary links.

Screenshots: Intel map captures from 2014-05-04, at 04:54 and 04:55.

Heartbleed

Heartbleed logo

A couple of weeks ago, a security announcement (CVE-2014-0160) was made regarding a simple buffer over-read bug in a library.

This library is OpenSSL.

OpenSSL, for those who don’t know, is the library that deals with much of the encryption on the web (so, things like HTTPS). A buffer over-read works something like this – let’s say I have a variable with three letters in it:

Hi!

What you don’t see is that this is stored in the middle of memory, so it’s actually surrounded by a load of other stuff:

sdflkjasHi!kldjlsd

With non-simple variables, such as blocks of text (character strings), a computer doesn’t directly know how big they are. It is up to the programmer to keep track of this when the variable is created, which makes it possible to read back from memory a larger amount of data than is actually stored. This is almost always a bug, as the data past the end is usually meaningless, and possibly dangerous depending on what is done with it.

If I were to read 5 letters from my 3-letter variable, I’d get this:

Hi!kl
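
To make that concrete, here’s a minimal C sketch of a buffer over-read. The variable names (and the “secret” sitting next to the string) are mine for illustration, and what actually lands past the end depends entirely on the compiler and memory layout – reading out of bounds is undefined behaviour:

```c
#include <stdio.h>

int main(void)
{
    char secret[]   = "hunter2"; /* illustrative adjacent data we never meant to share */
    char greeting[] = "Hi!";     /* three letters plus a terminating NUL */
    (void)secret;                /* deliberately never printed directly */

    /* The programmer is responsible for tracking the length. Asking for
       five bytes from a three-letter string reads past the end of the
       buffer, into whatever happens to sit next to it in memory. */
    fwrite(greeting, 1, 5, stdout); /* over-read: "Hi!" plus stray bytes */
    putchar('\n');
    return 0;
}
```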

What actually happened with Heartbleed was something very similar. Part of the protocol allows one end to send a “heartbeat” to keep the connection alive. The heartbeat request lets the sender include some data to be echoed back to them, and since the server doesn’t know how long that data is, the sender has to state its length too. xkcd has a really good explanation of this in comic form:

xkcd: Heartbleed Explanation
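
In code terms, the flaw boils down to trusting the sender’s claimed length. Here’s a heavily simplified sketch in C – not OpenSSL’s actual code; the struct and function names are invented for illustration:

```c
#include <string.h>

/* A heartbeat request: a claimed payload length, plus the payload itself. */
struct heartbeat {
    unsigned short claimed_len; /* length the *sender* says the payload is */
    unsigned char  payload[1];  /* the bytes that actually arrived */
};

/* Vulnerable: echoes back claimed_len bytes without checking the claim.
   If claimed_len is larger than the real payload, memcpy reads past the
   end of the request into adjacent process memory -- the over-read. */
void heartbeat_reply_vulnerable(const struct heartbeat *hb, unsigned char *reply)
{
    memcpy(reply, hb->payload, hb->claimed_len);
}

/* Fixed: compare the claim against the number of bytes actually received,
   and silently discard malformed requests, as the official patch does. */
void heartbeat_reply_fixed(const struct heartbeat *hb, size_t received_len,
                           unsigned char *reply)
{
    if (hb->claimed_len > received_len)
        return; /* claim doesn't match reality: drop the heartbeat */
    memcpy(reply, hb->payload, hb->claimed_len);
}
```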

Of course, being able to read random bytes from a process’s memory is a bit of an issue for security, as there is usually some data in that memory that you don’t want the user to know. In a multi-user environment, this normally comes in the form of private data about other users. That’s a pretty serious security vulnerability on its own. However, this isn’t just any application – it’s OpenSSL, a widely-used public key encryption library.

Public Key Encryption

For those who don’t know what public key encryption is, here’s a quick primer.

If I want to protect some data when I send it to you, I encrypt it. With classical encryption, I would create some key which we both have or know, and use that to encrypt the data; then I could be confident that it’s not going to be intercepted en route to you. This does mean I need a different key for everybody I send data to, and a secure way of exchanging each of those keys, which could cause a lot of issues.

Public key encryption solves this by creating a “key pair”, composed of a “private” key and a “public” key. A complicated mathematical relationship between these two keys means that the public key can be used to encrypt data, and then only the private key can decrypt it. That relationship also means the private key cannot feasibly be derived from the public key, so you can freely distribute, publish, and shout your public key from the hilltops. Anyone who wants to get in contact with you can take your public key and encrypt their data with it, and then only you can decrypt it.
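
As a toy illustration, here’s the classic small-number RSA example in C. The values are tiny textbook numbers (real keys use primes hundreds of digits long, plus padding and far more care), but it shows the public/private split: anyone can encrypt with the public pair (n, e), and only the holder of d can decrypt:

```c
#include <stdio.h>

/* Square-and-multiply modular exponentiation: base^exp mod m. */
static unsigned long long modpow(unsigned long long base,
                                 unsigned long long exp,
                                 unsigned long long m)
{
    unsigned long long result = 1;
    base %= m;
    while (exp > 0) {
        if (exp & 1)
            result = (result * base) % m;
        base = (base * base) % m;
        exp >>= 1;
    }
    return result;
}

int main(void)
{
    /* Toy key pair: n = 61 * 53 = 3233, public exponent e, private exponent d. */
    const unsigned long long n = 3233, e = 17, d = 2753;
    unsigned long long message = 65;

    unsigned long long cipher = modpow(message, e, n); /* encrypt with public key */
    unsigned long long plain  = modpow(cipher, d, n);  /* decrypt with private key */

    /* Prints: message=65 cipher=2790 decrypted=65 */
    printf("message=%llu cipher=%llu decrypted=%llu\n", message, cipher, plain);
    return 0;
}
```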

Public key encryption is the basis of SSL (Secure Sockets Layer), which itself is the basis of HTTPS.

Certificates and Heartbleed

The public keys used for HTTPS are contained in the certificates that your browser shows. Combined with the server’s private key, communication between the server and the browser can be completely encrypted, protecting the transmitted data.

The Heartbleed bug, however, allows any client to request about 64 KiB of the server’s memory per heartbeat – and to perform the encryption, that memory has to include the server’s private key. If the client is lucky, they can download the private key (or the set of large prime numbers that make it up) and reconstruct the server’s encryption keys locally.

Consequences of leaked private keys

There are quite a few consequences of a private key being leaked, especially when it’s been signed by a trusted third party (as almost all SSL certificates are signed by a CA). The obvious one is that any traffic encrypted with the corresponding public key is now readable by anyone with the private key. Think along the lines of passwords, credit card numbers, etc.

What isn’t immediately obvious is that this data can be decrypted offline – if an attacker saved your encrypted traffic from years ago, it’s possible that they could now decrypt that traffic. Implementing forward secrecy would go some way towards mitigating this.

If the compromised credentials aren’t revoked (or the user doesn’t check revocation lists – not checking is the default in Google Chrome), then the attacker can use those credentials to perform a man-in-the-middle attack and intercept any future data you send to a site.

Basically, if someone gets hold of that private key, you’re in trouble. Big trouble.

Recovery

By now, you should have updated to a fixed version of OpenSSL, and made sure any services using the library have been restarted to apply the change.
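
One thing to watch is that a long-running service can still be using the old library even after the package is upgraded, until it’s restarted. A quick sanity check in C, using OpenSSL’s own version API (this assumes the 1.0.x-era API, where SSLeay_version() is the call to use), is to compare the version the binary was compiled against with the one actually loaded at run time:

```c
#include <stdio.h>
#include <openssl/opensslv.h> /* OPENSSL_VERSION_TEXT: compile-time version */
#include <openssl/crypto.h>   /* SSLeay_version(): run-time version */

int main(void)
{
    /* If these differ, the process may still be linked against an old,
       unpatched OpenSSL, even though a fixed package is installed. */
    printf("built against: %s\n", OPENSSL_VERSION_TEXT);
    printf("running with:  %s\n", SSLeay_version(SSLEAY_VERSION));
    return 0;
}
```

Compile with gcc check.c -lcrypto; for Heartbleed you’re looking for 1.0.1g or later. Bear in mind that distribution packages often backport the fix without bumping the version string, so check your distro’s security advisory too.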

You should also assume the worst-case scenario: private key data has been fully compromised and is in the wild. Certificates should be revoked and re-issued BEFORE any other secure credentials are changed, otherwise the new credentials could also be leaked.

Anything that could cause adverse effects if exposed should be invalidated. This includes credentials like passwords, security keys, API keys, and possibly even things like credit card numbers. From a security standpoint, this is the only way to be completely safe from a leak caused by this bug. Realistically, though, most people won’t be affected, and things like credit cards don’t need to be replaced – just keep a really close eye on your statements and consult your bank the moment you notice something amiss.

Now would also be a really good time to start using password manager software like KeePass, so you can use strong, unique passwords for every service without worrying about remembering them all. KeePass stores all your passwords in an encrypted container, protected by a single master password (or a key file, or both).

The future

The bug is a simple missing bounds check. It’s the sort of bug that gets introduced thousands of times a day, and fixed thousands of times a day. Sometimes one or two slip through the gaps. Usually they’re in non-security-critical code, so exploiting them doesn’t give an attacker much to work with.

Sometimes, just sometimes, one of these finds its way into a critical bit of code. Sometimes it then gets released and isn’t spotted for a while. Whose fault is it? Well, it’s really hard for a developer to see mistakes in their own code – that’s why we have code review. But not every bug will be caught by everyone at the review stage. Nobody can reasonably assign blame here, and blame probably isn’t even the right way to think about it.

We should be looking at how we can reduce the frequency of events like this. For starters, this class of bug should be added to code review checklists (if it isn’t there already!). OpenSSL should also add regression tests to make sure the same bug isn’t reintroduced.

It should be a global learning experience, not a situation where everyone points fingers.

OpenSSL is open-source software – the source code is freely available for anyone to view and edit. Anyone can review the code for security holes, and the developers supply the software “as-is” with a disclaimer, so it’s the responsibility of the software’s users to make sure it works as expected. Open-source software is meant to be more secure because anyone can review it for security holes, but in practice I feel this rarely happens.

Let’s work together – the entire internet community – to make the internet a safer and more secure place. We have done really well with things like SSL, but bugs like this have seriously dented confidence in the security of the internet. Let’s work together to rebuild that trust to a level beyond what it was at before.

This site

This site, and the other sites hosted on this server, were amongst the many thousands of websites hit by this bug. While there’s no indication that the bug was ever exploited here, I cannot guarantee that it wasn’t. Consequently, I’ve done what so many other sites have done: patched the hole, and reissued credentials and certificates as necessary.

Wikimedia Hackathon 2014 – Zurich, Switzerland

In a few weeks’ time, I’m going to be heading to the Wikimedia Hackathon in Zurich, Switzerland.

I’ve been granted a scholarship by Wikimedia UK to cover my travel costs and accommodation, so I can spend a long weekend hacking on code for Wikipedia-related things in the company of a hundred or so other developers from around the world.

I’m heading there to work on integrating the account creation tool with the Wikimedia login system, so we can replace our broken welcome bot with posts from the creator of an account. In the past we have faked this by making the bot sign as a different user, but we’re hoping to let the bot edit as other users using the new OAuth tools.

I’m also hoping to learn a lot more about the internals of MediaWiki, the software which powers sites like Wikipedia, in the hope that I can get a lot more involved in the development of this remarkable piece of software – both the core application itself and its extensions. I’ve already done a bit of work on extensions, but hardly anything with core, and I’d love to learn enough to change that.

One of the other things I’m looking forward to is the OpenPGP/GnuPG key signing party that’s been suggested, where people can get together and verify each other’s identities, then go away and sign their keys as being valid. It’ll be the first time that I’ve ever been to something like this, and it will be good to get a few more signatures on my key!

It’s going to be really good to meet people, learn, and importantly hack on code to try and get something worthwhile done!

Internet censorship doesn’t work. At all.

When David Cameron announced that he was planning to force all the ISPs to implement automatic filtering of porn on the internet, a lot of people said it was a good idea in principle. And a lot of people who know how the internet and/or filtering tech works said it was never going to work.

Let me clear something up. I don’t think kids should be looking at porn. I also don’t think there’s a damn thing (on a technical level) that can be done to stop them.

Filters generally work with blacklists, whitelists, and heuristic patterns. Broadly speaking, some sites will never be blocked, others will always be blocked, and the heuristics will likely work from keyword lists – so if a page contains a word like XXX or a phrase like “hot hard-core action”, the filter will probably block that site.

Enter the problem: a site that contains those phrases but isn’t one of the huge well-known sites could be blocked by those heuristics. Sites like LGBT sites, rape support sites, and even teen puberty help sites could find themselves blocked, while porn sites that avoid those phrases will inevitably slip through. What’s more, ISPs could find themselves breaking the law by following the law: the LibDem LGBT site was one of those caught in the crossfire, and blocking the website of a political party around election time could be seen as something like electoral fraud.
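
To illustrate just how blunt keyword heuristics are, here’s a tiny C sketch of the kind of filter described above. The blocklist and the page text are made up for illustration – real filters are fancier, but the failure mode is the same:

```c
#include <stdio.h>
#include <string.h>

/* A naive keyword blocklist, of the kind a filtering heuristic might use. */
static const char *blocklist[] = { "xxx", "hot hard-core action", "rape" };

/* Block any page whose text contains a listed keyword. */
static int is_blocked(const char *page_text)
{
    for (size_t i = 0; i < sizeof blocklist / sizeof blocklist[0]; i++)
        if (strstr(page_text, blocklist[i]))
            return 1;
    return 0;
}

int main(void)
{
    /* A support page trips the same keyword as the content it exists
       to help with -- the over-blocking problem described above. */
    printf("%d\n", is_blocked("confidential rape and abuse support line"));

    /* Meanwhile a porn site that avoids the listed phrases sails through. */
    printf("%d\n", is_blocked("totally wholesome videos, honest"));
    return 0;
}
```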

Apparently parents can override these filters, but what’s to say parents won’t let their more knowledgeable kids manage the net connection? Or won’t disable the filters because they’re oblivious to the issues? Or, even worse, what if a kid is trying to get support regarding parental sexual abuse?

There is so much that could go wrong with this (and has gone wrong already), and so much damage that could be done to vulnerable people who are trying to get support. It’s a real kick in the teeth for charitable organisations doing their best to help people when the government comes along and pushes this through.

This is one of many examples of why I feel so strongly against governmental intervention in technical matters such as internet governance. If you don’t understand the technology, don’t try to legislate for it. Learn how the technology works at a basic level – with filtering, for example, learn how filters work, what sort of things get filtered, and the advantages, disadvantages, and problems. Don’t just assume the industry will work out the problems, Mr Cameron.

…and so history repeats itself.

In 2008, two Wikipedia administrators (PeterSymonds and Chet B Long) allowed another user (Steve Crossin) to access their administrator accounts. As expected, this was eventually discovered, and trout was applied liberally across the board. While those users learnt their lesson, and a lot more users learnt from their mistake too, it seems not everyone took that lesson on board or studied the history to learn from it.

A couple of weeks ago, two high-profile users were found to have done a similar deed: Riley Huntley asked Gwickwire to make an edit from his account while he was unable to do so himself. While Riley didn’t have administrator permissions on enwiki, he did on Wikidata, and thanks to the Single-User Login system that the WMF wikis use, this would have logged Gwickwire into Riley’s Wikidata account at the same time.

Aftermath

A fair few people who have been around Wikipedia a long time have rolled their eyes, having seen this happen before and knowing the results it brought. GWickwire and Riley have both left, which I feel is an over-reaction, especially since Peter and Chet only lost their administrator privileges, and only for five months or so – and Peter is now a steward. Yes, it was a mistake, a foolish thing to do (especially given the history), but that was five years ago. I think both Riley and GWickwire will probably come back at some point, though under different names.

This whole thing is a shame, cos Wikipedia has lost two good editors who were good at what they did. ACC and #wikipedia-en-help have both suffered from their loss.

Linode NextGen

Linode have been amazing as usual: they recently increased network capacity (bandwidth up 1000%), doubled CPU capacity (from 4 cores to 8), and in the last few days they’ve doubled the RAM on every plan! This is all part of the free* upgrades that happen every now and again.

(* They’ve also adjusted the pricing plans to be simpler and friendlier, so their $19.95 plan, for example, has become $20. Technically that means the upgrades aren’t free – but for 5 cents?)

The migrations to the new systems take a while, but I’m planning to do mine in the middle of the night; downtime could be a couple of hours. The bonus is that with this reboot, the capacity of the entire cluster will be doubled. :o

In other news, I’m planning on retiring pikachu. It’s caused me nothing but trouble with its disk arrays, and the lag from Roubaix compared to the Linodes in London has been less than desirable. Plus, with the new capacity the London cluster has, there’s no real advantage to having a dedicated machine out in Roubaix (apart from the BOINC work it’s been sat doing, because it could).

If you’re interested in trying out Linode, they do a free trial! Just please mention this referral code: c88742a23087c8758c2824221626a8c1226c1736

Nagios: ?corewindow=cgi-bin/status.cgi

I’ve just found the solution to an issue which has been annoying me for months.

A bit of background: if you link to Nagios, you either link to the entire tool, with its sidebar and main page, or you link to a specific page within it and lose the sidebar. Nagios includes a nice URL parameter which lets you specify what the “main page” should be, such as the service status view:

http://localhost/nagios/?corewindow=cgi-bin/status.cgi

However, if you try and click on any of the filters, such as “All Problems”, you end up getting an empty list.

It seems that the solution is simply to append ?host=all to the corewindow value, URL-encoding the nested ? as %3F so it isn’t treated as the start of the outer URL’s own query string:

http://localhost/nagios/?corewindow=cgi-bin/status.cgi%3Fhost=all

This makes it behave as expected. :)