A Computer Science Supervision

Here’s another video from Jake about supervision in Cambridge.  The video is of the final supervision that I give for the Concepts in Programming Languages course.  Concepts is a really fun course that covers the history of different programming languages.  The idea is that by thinking about how languages have evolved over time you might be better placed to exploit (or invent) new ones.

This has some nice overlap with my research at the moment too in which we are looking at how scientists use Fortran and how the language has evolved over time.  You can see a bit more about this on the New Approaches to Programming in the Sciences (NAPS) webpage.

The supervision took place shortly before the exams, so you’ll see me at the beginning talking about choosing exam questions.  This is unusual: Cambridge exams take place at the end of each year, so for the rest of the time we are free to focus on learning without worrying about exam strategy.

Note Jake’s comment that the supervisor will try to ask you questions that interpret the course rather than just repeat the facts.  I think this is the whole point of a university education – we don’t want students to just learn the book by rote but instead we want them to be able to link the pieces together and apply them in new ways.  In teaching terms this is called a deep approach to learning.

Cambridge Interview Video

Jake and I have made a recording of an undergraduate interview (not a real one!). Jake has edited it down and added some very useful commentary so you know what’s going on.  Hopefully it might help demystify the process for people who are applying to Cambridge.

University teaching prize

This year I was privileged to be awarded one of the University teaching prizes.  Only 12 awards are given out each year, so with around 1,600 academic staff in the university I was very pleased to receive one.

My nomination was from the Computer Laboratory for the work I’ve done on the programming lecture courses I teach.  For the first- and second-year Java courses this involved moving away from lectures to practical sessions with automated unit tests, so that students can work at their own pace and get real practice on large programming tasks.  I also teach the second-year course on Prolog, where I replaced the lectures with video recordings – the idea was to investigate how a MOOC would work in tandem with the Cambridge supervision system.  All of this work was done in collaboration with Alastair Beresford, a senior lecturer in the department and a fellow of Robinson College – he won a teaching prize this year for it too.

Here’s a picture of me getting my award from the University’s Vice Chancellor Professor Sir Leszek Borysiewicz:


You can see more details about the awards on the University news pages.

Second and third year exam results

Congratulations to the second and third years on their exam results!  Overall Queens’ has achieved five 1sts, five 2.1s and one 2.2.  Not only are these results excellent in their own right, but almost everyone has improved (relative to the rest of the year group) since last year.

Special acknowledgement goes to Tom Powell and Stephen Cook who also had their final year projects commended by the examiners.

Now we have to wait for the first year results to come out on 30th June…

Computer Science Annual Dinner

Sunday 18th May was the Computer Science Annual Dinner for Queens’ undergrads, graduates and supervisors.  One of my favourite things about the dinner is getting a chance to catch up with our graduates, and with a total of 63 people attending it was the biggest I’ve been to.  There was a very relaxed and social atmosphere and all the different groups of people got on really well.


We were especially pleased to welcome Demis as our guest of honour this year.  Demis graduated from Queens’ in 1997, went on to do a PhD and then founded DeepMind Technologies – recently acquired by Google.   Demis featured in our elevator pitching session a few weeks ago (can you spot it?) and he was in hot demand from the more entrepreneurial undergraduates.

This year’s organizers, Mistral and Eduard, did an excellent job.  Everything ran smoothly from the Champagne reception (photo below), to formal hall, to the extra food and drink in Old Hall to round the evening off.


This year I complete the takeover as Director of Studies, and so Robin Walker gave his final Computer Science Dinner speech.  Next year I’ll have to do it all by myself, although Robin admitted he might come along if we extend an invitation.

The dinner is greatly enhanced by generous sponsorship from both companies and individuals.  The companies sponsoring us this year were Jane Street, Skin Analytics, Coherent Graphics, G-Research, Bromium and Acano – all of them represented at the dinner by a Queens’ graduate or two.

James King’s AI Playlists (james.eu.org/download)

It’s an hour after the last exam and people are coming back to my place to celebrate. However, my choice in music is terrible, and I don’t want to kill the buzz by playing things no one else likes. The solution: have a computer do it for me.


James developed a music analysis platform for his Part III project, which provides a Q-learning, hierarchical-clustering, Markov-model solution to the problem of playlist creation. The idea is to separate music that sounds similar into clusters, so that if there is one track in my library that I can identify as socially acceptable, the algorithms will find others like it as well. Additionally, James used modern AI techniques to model how I interact with my music in order to produce better recommendations based on different user moods (e.g. revision vs. hacking). Here’s an overview of how it works:

Feature Extraction (Learned in Part II)

The first step is to take a song and turn it into a set of descriptive features. He takes in music files that look like:


And by computing the autocorrelation – which correlates the signal with delayed copies of itself, exaggerating repeated features and suppressing everything else – he turns them into signals like the following:


The high peaks labeled “good match” are then used to identify songs.
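To get a feel for the idea, here is a toy autocorrelation in Python (an illustration of the general technique, not James’s pipeline). Repeating structure in a signal – a steady beat, say – shows up as strong peaks at lags matching the repetition period:

```python
import numpy as np

def autocorrelation(signal):
    """Correlate a signal with delayed copies of itself.

    Peaks at non-zero lags reveal repeated structure (e.g. a
    steady beat), which is what makes this useful as a song
    descriptor.
    """
    n = len(signal)
    signal = signal - np.mean(signal)           # remove the DC offset
    full = np.correlate(signal, signal, mode="full")
    ac = full[n - 1:]                           # keep non-negative lags only
    return ac / ac[0]                           # normalise so lag 0 == 1

# A toy "song": a 4 Hz beat sampled at 100 Hz for 2 seconds,
# so the beat repeats every 25 samples.
t = np.linspace(0, 2, 200, endpoint=False)
beat = np.sin(2 * np.pi * 4 * t)
ac = autocorrelation(beat)

# Skip very small lags (every smooth signal correlates with a
# slightly shifted copy of itself); the strongest remaining peak
# sits at the beat period of 25 samples.
peak_lag = 10 + np.argmax(ac[10:])
print(peak_lag)
```

On real music the signal is far noisier, but the principle is the same: the lags with high peaks are the “good matches”.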

Clustering (Learned in Part IB)

Since the first goal of the project is to divide songs into clusters (playlists) based on similarity, these features need further processing. In particular, they are fed to a clustering algorithm, which takes a set of unorganized points and divides them up like this:


Each point here represents a song, and they are currently being clustered by two different features. However, this type of flat clustering doesn’t capture the notions of genres and subgenres, so James opted for a more refined hierarchical clustering algorithm, which produces nested clusters like this instead:
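As a rough sketch of the idea (using two made-up features, tempo and energy, rather than James’s real ones), scipy’s agglomerative clustering builds exactly this kind of tree, and cutting it at different heights yields genres versus subgenres:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Toy feature vectors: (tempo, energy) for eight songs, with two
# obvious genres and a sub-split inside each.
songs = np.array([
    [120, 0.90], [122, 0.85], [118, 0.70], [116, 0.72],   # upbeat
    [ 60, 0.20], [ 62, 0.25], [ 70, 0.40], [ 68, 0.35],   # mellow
])

# Agglomerative clustering builds a tree (dendrogram) bottom-up,
# merging the two closest clusters at each step.
tree = linkage(songs, method="ward")

# Cutting the tree into different numbers of clusters gives
# coarse genres or finer subgenres from the same structure.
genres    = fcluster(tree, t=2, criterion="maxclust")
subgenres = fcluster(tree, t=4, criterion="maxclust")
print(genres)     # two top-level playlists
print(subgenres)  # finer sub-playlists within each
```

The key property is that the subgenre clusters nest inside the genre clusters, which is what a flat algorithm like k-means cannot give you.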


What to Play Next

At this point, the algorithms have everything grouped into playlists and subplaylists, using hierarchical clustering on the characteristic features of each song. The next part of the problem is figuring out which song in the playlist to play next, given the previous song. This is done with Markov Models.

Markov Models (Learned in Part IB)

A Markov model shows how states are probabilistically linked, making the assumption that the next state is entirely dependent on the current state. So for example (probabilities are heavily adjusted because Andy can see this):

If I’ve been revising, it’s highly likely that I’ll keep revising, with only a tiny chance that I’ll end up going to a pub. And even if I make it to a pub, I definitely won’t go to another one afterwards; instead it’s straight to sleep.
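That example can be written down as a tiny Markov model in a few lines (the probabilities here are the made-up ones from the example above):

```python
import random

# Transition probabilities: for each current state, the chance of
# moving to each possible next state (rows sum to 1).
transitions = {
    "revising": {"revising": 0.9, "pub": 0.1},
    "pub":      {"sleep": 1.0},
    "sleep":    {"sleep": 1.0},
}

def next_state(state):
    """Sample the next state given only the current one -
    the Markov assumption."""
    r = random.random()
    cumulative = 0.0
    for candidate, p in transitions[state].items():
        cumulative += p
        if r < cumulative:
            return candidate
    return candidate  # guard against floating-point rounding

print(next_state("pub"))  # always "sleep"
```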

James took this and replaced the states (revising, sleep, go to pub) with clusters and subclusters in his model. The model then contains the likelihoods of switching artists, genres, and different songs within the same genre. It is initially created to make similar songs likely to be played in sequence, but then uses AI to learn a better way of doing this based on user actions.

The AI – Q-Learning (Learned in Part II)

This part is more involved, so I can only give a general overview.

In essence, Q-learning is a method of teaching a computer to perform some task by shouting at it when it gets it wrong and giving it a cookie when it gets it right, learning by reinforcement. In this case, the recommender did a good job if the user listens to a track all of the way through and a bad job if the user skips over it after the first few seconds.

With this information, the system can update the Markov model describing how the next song is chosen. For more information, here’s a link to our AI II course’s notes on the topic (page 339): http://www.cl.cam.ac.uk/teaching/1314/ArtIntII/ai2-2014.pdf  
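For a flavour of what such an update looks like, here is a minimal tabular Q-learning step (an illustration with assumed reward values and learning parameters, not James’s actual implementation). States are the cluster of the song just played, actions are the cluster to pick from next:

```python
from collections import defaultdict

ALPHA, GAMMA = 0.5, 0.9    # learning rate and discount factor (assumed values)
Q = defaultdict(float)     # Q[(state, action)] -> estimated long-term reward

def update(state, action, reward, next_state, actions):
    """One Q-learning step: nudge Q towards the observed reward
    plus the best value reachable from the next state."""
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

actions = ["indie", "techno"]  # hypothetical cluster names

# The user listened to a techno track all the way through: cookie.
update("indie", "techno", reward=+1.0, next_state="techno", actions=actions)
# Then skipped the next techno track after a few seconds: shout.
update("techno", "techno", reward=-1.0, next_state="techno", actions=actions)

print(Q[("indie", "techno")], Q[("techno", "techno")])  # 0.5 -0.5
```

Over many listens the Q values converge towards which cluster transitions the user actually enjoys, and those values then reshape the Markov model’s transition probabilities.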

Now that James has done the hard work for us, you can try out his system at james.eu.org/download to see for yourself how it works.