Deep Learning for Music Recommendation

This week, it was Andy’s turn to give a presentation about his Part II project on using deep learning to automatically tag music. Andy began by explaining that Spotify has millions of songs in its database, and that machines are needed to auto-tag them in order to form playlists.


He then explained some of the background theory that his project builds upon, such as:

  • Mel-frequency spectrograms (which represent the spectrum of frequencies on the mel scale, spacing them according to their perceptual distance)
  • Deep networks and supervised learning
  • Convolutional layers and how deep networks learn through gradient descent
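The last point above can be sketched in a few lines. This is a minimal toy example of gradient descent (not Andy’s actual code): a single weight is fitted to toy data by repeatedly stepping against the gradient of a mean-squared-error loss, which is the same principle a deep network applies to millions of weights.

```python
import numpy as np

# Toy data generated from y = 3x; the "network" is a single weight w.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = 3.0 * x

w = 0.0    # initial weight
lr = 0.02  # learning rate

for _ in range(200):
    pred = w * x
    grad = 2.0 * np.mean((pred - y) * x)  # d/dw of mean squared error
    w -= lr * grad

print(round(w, 3))  # converges towards 3.0
```

Each step nudges `w` a little way downhill on the loss surface; with a suitable learning rate it converges to the weight that best fits the data.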

A mel-spectrogram of a typical music file.

Finally, Andy described traditional recommendation algorithms such as collaborative filtering and explained how his project aimed to address their shortcomings. He described the network architecture and the research paper he was building on, as well as some preliminary results.
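To make the comparison concrete, here is a toy sketch of item-item collaborative filtering (an illustration of the general technique, not the project's code): songs are recommended when their columns of listener interactions look similar.

```python
import numpy as np

# Rows are users, columns are songs; 1 = the user played the song.
plays = np.array([
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
])

# Cosine similarity between song columns.
norms = np.linalg.norm(plays, axis=0)
sim = (plays.T @ plays) / np.outer(norms, norms)

# Most similar song to song 0 (excluding itself).
np.fill_diagonal(sim, 0.0)
print(int(np.argmax(sim[0])))  # song 1, which shares both of song 0's listeners
```

The shortcoming is visible in the matrix itself: a brand-new song has an all-zero column and can never be recommended (the cold-start problem), whereas a network that tags songs from their audio needs no play history at all.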


A diagram of a single neuron, and a deep network consisting of many neurons interconnected.
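The single neuron in the diagram can be written down directly: a weighted sum of its inputs plus a bias, passed through a nonlinearity. A minimal sketch, using a sigmoid activation and made-up weights for illustration:

```python
import numpy as np

def neuron(x, w, b):
    """One neuron: sigmoid of the weighted sum of inputs plus a bias."""
    return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))

x = np.array([0.5, -1.0, 2.0])  # inputs (hypothetical values)
w = np.array([0.4, 0.3, 0.1])   # weights, normally learned by gradient descent
b = 0.1                         # bias
print(round(neuron(x, w, b), 3))  # ≈ 0.55
```

A deep network is just many of these stacked in layers, with each layer's outputs feeding the next layer's inputs.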