This week, it was Andy’s turn to give a presentation about his Part II project on using Deep Learning to automatically tag music. Andy began by talking about how Spotify had millions of songs in its database and the need for machines to auto-tag these to form playlists.
He then explained some of the background theory that his project builds upon, such as:
- Mel-frequency spectrograms (which represent the spectrum of frequencies on the mel scale, spacing frequencies according to their perceptual distance in pitch)
- Deep networks and supervised learning
- Convolutional layers and how deep networks learn through gradient descent
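To give a flavour of the last point, here is a minimal NumPy sketch of learning by gradient descent: fitting the weights of a linear model by repeatedly stepping against the gradient of the mean-squared error. The data and model are invented for illustration and have nothing to do with Andy's actual network, which uses convolutional layers trained the same way in principle.

```python
import numpy as np

# Toy setup: recover known weights true_w from data generated by y = X @ true_w.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w

w = np.zeros(3)   # start from zero weights
lr = 0.1          # learning rate
for _ in range(500):
    grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of the MSE loss w.r.t. w
    w -= lr * grad                         # step in the direction that reduces the loss

print(np.round(w, 2))  # converges close to true_w
```

Deep networks do the same thing at scale: the gradient of the loss with respect to millions of weights is computed by backpropagation, and each update nudges every weight downhill.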
Finally, Andy described traditional approaches to tagging, such as collaborative filtering, and explained how his project aimed to address their shortcomings. He described the network architecture, the research paper he was building on, and some preliminary results.
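For readers unfamiliar with collaborative filtering, a toy item-item version can be sketched in a few lines: recommend tracks that are listened to by similar sets of users. The play-count matrix below is entirely made up for illustration; one shortcoming this illustrates is that a brand-new track (with an all-zero column) can never be recommended, which is exactly the cold-start gap that content-based tagging from audio aims to fill.

```python
import numpy as np

# Rows = users, columns = tracks, entries = play counts (invented data).
plays = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 1, 5, 4],
], dtype=float)

# Cosine similarity between every pair of track columns.
norms = np.linalg.norm(plays, axis=0)
sim = (plays.T @ plays) / np.outer(norms, norms)

# Tracks ranked by similarity to track 0 (excluding track 0 itself).
ranking = np.argsort(-sim[0])[1:]
print(ranking)
```

Track 1 ranks first here because it shares its heaviest listeners with track 0, while track 2 ranks last: no user who plays track 0 has played it, so their similarity is zero regardless of how the tracks actually sound.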