Monthly Archives: May 2016

Working in the video game industry

This week a Queens’ CompSci alumnus, Ben Nicholson, gave us a talk about his journey in the video game industry.

Ben started life at Oxford, graduating with a degree in Mathematics. However, he quickly saw the error of his ways and decided to read the Computer Science Diploma at Queens’. This was a postgraduate course, run until about 2007, intended as a one-year crash course in Computer Science.

Like many CompScis, Ben had grown up playing video games. He wanted to combine his knowledge of maths and physics with computer science to make games a lot more realistic when it came to the laws of nature.

Ben took us through his life in the most creative way possible. His presentation was a video game (made in Unity3D) with a metaphorical hill of life. As we climbed it, we saw more and more of what he’d done.

He left Queens’ and started work at Sony on the “This is Football” series of games. This was his first taste of working in AAA game studios, and he shared valuable insights from the experience. He walked us through one of the first problems he had to solve: making goal nets move when they come into contact with a football. There was a trade-off here between how realistic the physics was and how fast the computation ran, and he showed us some cool physics hacks and approximations.

He then moved to Rocksteady Studios where he worked on the Batman: Arkham series of games. As the physics developer on the games, he was in charge of Batman’s cape and funky physics on ropes/explosions/hair etc. We all left with a much greater knowledge of point masses.
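To give a flavour of what point-mass physics looks like in practice, here is a minimal sketch of a rope simulated as a chain of point masses, using Verlet integration and distance constraints. This is purely illustrative (it is not Rocksteady’s actual code, and all the parameters are made up), but it is the kind of technique commonly used for ropes, capes and hair:

```python
# Illustrative sketch: a rope as a chain of point masses, simulated with
# Verlet integration plus distance constraints. All constants hypothetical.

GRAVITY = -9.81
DT = 1.0 / 60.0          # fixed timestep (one frame at 60 fps)
SEGMENT_LENGTH = 0.5     # rest length between neighbouring point masses
ITERATIONS = 10          # constraint-relaxation passes per frame

def step(points, prev_points):
    """Advance the rope one frame. points/prev_points are lists of (x, y)."""
    # Verlet integration: new position from current and previous positions.
    for i in range(1, len(points)):          # point 0 is pinned in place
        x, y = points[i]
        px, py = prev_points[i]
        prev_points[i] = (x, y)
        points[i] = (2 * x - px, 2 * y - py + GRAVITY * DT * DT)
    # Relax the constraints: push neighbours back to the rest length.
    for _ in range(ITERATIONS):
        for i in range(len(points) - 1):
            (x1, y1), (x2, y2) = points[i], points[i + 1]
            dx, dy = x2 - x1, y2 - y1
            dist = (dx * dx + dy * dy) ** 0.5 or 1e-9
            diff = (dist - SEGMENT_LENGTH) / dist
            if i == 0:                       # pinned end: move only point 1
                points[i + 1] = (x2 - dx * diff, y2 - dy * diff)
            else:                            # otherwise split the correction
                points[i] = (x1 + 0.5 * dx * diff, y1 + 0.5 * dy * diff)
                points[i + 1] = (x2 - 0.5 * dx * diff, y2 - 0.5 * dy * diff)
    return points, prev_points

# A five-point rope starting out horizontal, pinned at the origin.
pts = [(i * 0.5, 0.0) for i in range(5)]
prev = [p for p in pts]
for _ in range(120):                         # simulate two seconds
    pts, prev = step(pts, prev)
```

The appeal of Verlet integration for games is that velocities are implicit (stored as the previous position), so constraints can be enforced simply by moving points, and the simulation stays stable at a fixed timestep.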

Ben also worked on destruction physics at Frontier Developments, leading the development of the destruction tech engine for Scream Ride. This is a game where you get to build a city, put a rollercoaster around it and then watch the ride smash through your creation.

After his 11 years working on large AAA games, Ben decided to form his own indie games studio. Inspired by his work on Batman, it’s called Cape Guy!

A lot of us, especially final-year computer scientists still figuring out what we want to do, have thought about starting our own companies. So a game studio sounded like the perfect opportunity. However, while Ben is loving the experience, he did provide a little bit of a reality check on the indie life. He talked about how things he hadn’t considered in AAA games were suddenly important – such as PR and marketing. He weighed the pros and cons of being on your own versus at a big studio and spoke about how we should proceed if we wanted to join the industry. The games industry certainly provides a whole host of roles based on your interests – from game mechanics, to graphics, physics engines or high-level animation.

Personally, I can’t wait for Ben’s next game – check out his Twitter and website to find out more!

The annual dinner

We had the annual Computer Science dinner last Sunday. This is our annual event for current students, supervisors and alumni.

First point of celebration was that all the Part II students successfully completed their projects and handed them in on time: well done everyone!

I’d like to thank the companies that sponsored us this year: Improbable, Palantir, Microsoft Research, Jane Street and Coherent Graphics. Your support is really appreciated.

This year we were lucky to have Eben Upton and Liz Upton of Raspberry Pi fame as our guests of honour. Eben gave us a really interesting talk about things that (almost) went wrong when they were getting Raspberry Pi off the ground. The moral of his story was that it’s never plain sailing in a startup. Imagine a swan: calm on the surface and paddling like mad underneath.

It was great to see so many people there. See you all next year.


I forgot to take any more photos than this one so if anyone has any good ones then please do send them to me!

Boundary Detection in Natural Language

Have you ever received a text saying “Let’s eat Eduard” and started wondering whether your friend has joined the University of Cambridge Cannibalism Club? Well, worry no more, because Andi’s project is going to put all the missing punctuation in and eliminate your worries. This Wednesday he gave us a presentation on the topic.

We now (hopefully) see that missing punctuation, or in more scientific terms, missing segment boundaries, can be harmful. But why is this problem important? Surely, if a human omits a crucial piece of punctuation from written text, it’s their own fault. However, there is another way of obtaining text, apart from writing, which has grown really popular recently: speech recognition. Whether you use Siri, Cortana, Alexa or “OK, Google”, they all face largely the same challenge – breaking a continuous stream of words into segments to recover the original meaning. Of course, the other way to obtain punctuation is to carefully consider pauses in the speech, but we’re going to assume we do not have that information available.

The project is split into four stages as shown below:


First, we need to gather data. For this purpose, the British National Corpus was used, containing over 100 million segments of annotated words. Some extra processing was required to remove noise unhelpful to the project. Finally, the corpus was filtered down to include only written text, omitting other types of text, like meeting minutes, which were not verified and could contain grammatical and linguistic errors.

Then, we move on to training a model. For this purpose, an n-gram model was used. As Andi explained, it is a type of probabilistic language model that predicts the next item in a sequence, in the form of an (n − 1)-order Markov model. In other words, we use the previous (n − 1) words to predict the next one.
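As a toy illustration (not the project’s actual code – the corpus here is made up), this is roughly how a bigram (n = 2) model can be estimated from counts. Punctuation marks are treated as ordinary tokens, so the model can predict them just like words:

```python
from collections import defaultdict

# Toy training corpus, already punctuated; "." is just another token.
corpus = "the cat eats . the cat sleeps . the dog eats .".split()

# Count how often each token follows each other token.
bigram_counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    bigram_counts[prev][nxt] += 1

def prob(nxt, prev):
    """Maximum-likelihood estimate of P(nxt | prev)."""
    total = sum(bigram_counts[prev].values())
    return bigram_counts[prev][nxt] / total if total else 0.0

print(prob("cat", "the"))   # 2/3: "the" is followed by "cat" 2 times out of 3
print(prob(".", "eats"))    # 1.0: "eats" is always followed by "."
```

A real system would use a much larger n, smoothing for unseen n-grams, and of course far more data, but the idea is the same.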

Now, we can perform classification with this model. Based on the training set, we can estimate the probability that a given word or punctuation mark follows the previous (n − 1) words. After that, we can use that information to predict punctuation using beam search. Essentially, we want to consider all possible branches (punctuation or next word), pruning branches with very small probability. This is beneficial, since it allows us to delay the decision point if the probabilities are similar, and only prune when the continuation of a branch is deemed improbable. You can see an example of this in the figure below, where we delay the prediction of whether “cat” is followed by “eats” or by punctuation in a bi-gram model.
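A minimal sketch of this idea might look as follows (again hypothetical, with a tiny hand-built bigram table rather than the project’s real model): at each input word, every hypothesis branches two ways – the word follows directly, or a sentence boundary comes first – and only the most probable hypotheses are kept.

```python
import math
from collections import defaultdict

# Tiny bigram table learned from the punctuated text below (hypothetical).
counts = defaultdict(lambda: defaultdict(int))
train = "the cat eats . the cat sleeps .".split()
for a, b in zip(train, train[1:]):
    counts[a][b] += 1

def prob(nxt, prev):
    """Maximum-likelihood estimate of P(nxt | prev)."""
    total = sum(counts[prev].values())
    return counts[prev][nxt] / total if total else 0.0

def beam_search(words, beam_width=3):
    """Insert '.' into an unpunctuated word stream, keeping a beam of hypotheses."""
    beams = [(0.0, [words[0]])]            # (log-probability, tokens so far)
    for word in words[1:]:
        candidates = []
        for logp, tokens in beams:
            prev = tokens[-1]
            # Branch 1: the next word follows directly.
            p = prob(word, prev)
            if p > 0:
                candidates.append((logp + math.log(p), tokens + [word]))
            # Branch 2: a sentence boundary comes before the next word.
            p_dot, p_after = prob(".", prev), prob(word, ".")
            if p_dot > 0 and p_after > 0:
                candidates.append((logp + math.log(p_dot) + math.log(p_after),
                                   tokens + [".", word]))
        # Prune: keep only the beam_width most probable hypotheses.
        beams = sorted(candidates, reverse=True)[:beam_width]
    return max(beams)[1]

print(" ".join(beam_search("the cat eats the cat sleeps".split())))
# → "the cat eats . the cat sleeps"
```

Because pruning only happens when a branch’s probability falls out of the top few, the decision of where a boundary goes is delayed until the following words make one hypothesis clearly better.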


Finally, we evaluate the model. There are two commonly used metrics – recall and precision. Recall measures what fraction of the actual punctuation marks we identified. Precision measures what fraction of the punctuation marks we predicted were actually correct. However, neither of these metrics is good on its own – it’s easy to get perfect recall by putting punctuation everywhere, and easy to get perfect precision by predicting only the really obvious punctuation. To solve this, the F-score is introduced, which combines the two metrics into one.
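Concretely, with some hypothetical predictions (the positions below are made up for illustration), the three metrics can be computed like this:

```python
# Token positions where the model placed a boundary vs. where one belongs.
predicted = {3, 7, 12}        # hypothetical model output
actual = {3, 9, 12, 15}       # hypothetical ground truth

true_positives = len(predicted & actual)
precision = true_positives / len(predicted)   # fraction of predictions that were right
recall = true_positives / len(actual)         # fraction of real boundaries we found
f_score = 2 * precision * recall / (precision + recall)  # harmonic mean of the two

print(precision, recall, f_score)   # precision ≈ 0.667, recall = 0.5, F ≈ 0.571
```

The F-score is the harmonic mean of precision and recall, so it is only high when both are – predicting punctuation everywhere, or almost nowhere, scores badly.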