
Procedurally generating closed-circuit racetracks

In this week’s talk we had another third-year student, Jamie, explain his project to us and why he chose to spend his time on this subject.

Jamie started out by giving some examples of procedurally generated content. His main example was Minecraft, where 2D Perlin noise is used for terrain generation. However, he explained that different methods are used in procedural generation depending on the required properties of the result. He illustrated this by pointing out that Minecraft’s caves are instead generated with Perlin worms, in order to produce better-connected cave systems.
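To make the terrain example a bit more concrete, here is a minimal sketch of sampling a 2D Perlin-noise heightmap, assuming the third-party Python noise package is available; the grid size, scale and block heights are made-up values rather than Minecraft’s actual parameters.

```python
# Sketch: a 2D Perlin-noise heightmap (requires `pip install noise`)
import noise

WIDTH, HEIGHT = 64, 64
SCALE = 0.1  # smaller values give smoother, larger terrain features

# Sample 2D Perlin noise on a grid; each value is roughly in [-1, 1]
heightmap = [
    [noise.pnoise2(x * SCALE, y * SCALE, octaves=4) for x in range(WIDTH)]
    for y in range(HEIGHT)
]

# Map each noise value to an integer block height, Minecraft-style
terrain = [[int((h + 1.0) * 0.5 * 32) for h in row] for row in heightmap]
```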

[Image: an example of procedural city layout generation]

He then briefly touched upon the uses of procedural generation in audio, L-systems and city layout generation (as seen above), before moving on to the main part of his talk: “Procedurally generating closed-circuit racetracks”.

Jamie first gave us some definitions that were necessary to understand the process. Firstly, he needed to define a racetrack, which had to be a closed circuit with a start and a finish. Then he talked about the different constraints one could put on the racetracks. He chose to have a number of “hard constraints”, properties the generated racetrack had to have, and also a number of “soft constraints”, properties that were preferable but not necessary. The hard constraints were represented by a set of equality and inequality constraints, so certain properties of the track had to take exact values or stay within fixed bounds. Each soft constraint was represented by an energy function, which scores a racetrack according to how well it satisfies that constraint. The total “value” of a racetrack was then found by summing all the energy functions.
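As a rough illustration of the idea (not Jamie’s actual constraints), the sketch below assumes a track is given as a list of (x, y) control points, treats the hard constraints as simple pass/fail checks, and scores the soft constraints with a weighted sum of made-up energy functions.

```python
import math

def track_length(track):
    # Perimeter of the closed loop of control points
    return sum(math.dist(a, b) for a, b in zip(track, track[1:] + track[:1]))

def hard_constraints_ok(track):
    # Pass/fail checks, e.g. a minimum length and a minimum number of corners
    return track_length(track) >= 100 and len(track) >= 8

def length_energy(track, target=300):
    # Soft constraint: penalise deviation from a preferred total length
    return (track_length(track) - target) ** 2

def compactness_energy(track):
    # Soft constraint: penalise tracks that sprawl far from their centroid
    cx = sum(x for x, _ in track) / len(track)
    cy = sum(y for _, y in track) / len(track)
    return sum((x - cx) ** 2 + (y - cy) ** 2 for x, y in track)

def total_energy(track, weights=(1.0, 0.01)):
    # Lower total energy means a "better" track under the soft constraints
    return weights[0] * length_energy(track) + weights[1] * compactness_energy(track)
```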

In order to write his code and generate racetracks, he needed a way of representing them. He did this with an ordered list of quadratic Bézier curves, each representing one piece of the racetrack, where each curve can have at most one type of curvature. The racetracks themselves were generated using something called Voronoi diagrams (as seen below).
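For reference, a quadratic Bézier curve is defined by three control points, and evaluating it at a parameter t between 0 and 1 sweeps out one smooth piece of track. The generic sketch below is not taken from Jamie’s code.

```python
def quadratic_bezier(p0, p1, p2, t):
    """Bernstein form: B(t) = (1-t)^2 * p0 + 2(1-t)t * p1 + t^2 * p2."""
    x = (1 - t) ** 2 * p0[0] + 2 * (1 - t) * t * p1[0] + t ** 2 * p2[0]
    y = (1 - t) ** 2 * p0[1] + 2 * (1 - t) * t * p1[1] + t ** 2 * p2[1]
    return (x, y)

# Sample one track segment at ten points along the curve
segment = [quadratic_bezier((0, 0), (5, 10), (10, 0), i / 9) for i in range(10)]
```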

[Image: a Voronoi diagram]

A Voronoi diagram is defined by a set of seed points, with its edges being the loci of points that are equidistant from their two nearest seeds. Jamie then created racetracks from these diagrams by selecting a collection of edges. Each candidate racetrack is first checked against the hard constraints and is then optimised by minimising the sum of the energy functions of the soft constraints using gradient descent, which works a bit like rolling a ball down a slope (see picture below).
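As a sketch of the first step, a Voronoi diagram can be built from a set of seed points with SciPy and its finite edges collected; the random seeds below are purely illustrative, and how Jamie actually picks edges to form a circuit is not shown here.

```python
import numpy as np
from scipy.spatial import Voronoi

points = np.random.rand(20, 2)      # 20 random seed points in the unit square
vor = Voronoi(points)

edges = []
for v0, v1 in vor.ridge_vertices:
    if v0 != -1 and v1 != -1:       # -1 marks an edge that extends to infinity
        edges.append((vor.vertices[v0], vor.vertices[v1]))

print(f"{len(edges)} finite Voronoi edges to choose a circuit from")
```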

[Image: gradient descent, illustrated as a ball rolling down a slope]
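To make the rolling-ball intuition concrete, here is a minimal, generic gradient-descent loop on a stand-in one-dimensional energy function; it is not one of the project’s actual soft constraints.

```python
def gradient_descent(grad, x0, learning_rate=0.1, steps=100):
    # Repeatedly step "downhill" along the negative gradient
    x = x0
    for _ in range(steps):
        x -= learning_rate * grad(x)
    return x

# Example: minimise E(x) = (x - 3)^2, whose gradient is 2(x - 3)
minimum = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
print(minimum)  # converges towards 3.0
```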

Jamie finished his presentation by giving us a demo of his code and explaining that, to round off his project, he would run a user study to see whether the racetracks he generates really are as good as, or perhaps even better than, the content available so far.

Parallelization, Convolutional Neural Networks and other buzz-words

Last Wednesday we were given a talk by Ben, one of the third years at Queens’. His talk introduced us to some currently popular computer science topics and the way he combines them in his Part II project, “Parallelized Deep Learning for Convolutional Neural Networks on the Intel Xeon Phi”.

He started his talk by explaining how neural networks work. They aim to process information in a way similar to how biological systems, such as the brain, do it. Their structure loosely mirrors that of a brain: multiple interconnected layers, each composed of many neurons. A neuron is just a processing unit that applies a predefined function to a set of inputs. Convolutional neural networks are a particular case in which the function applied by each neuron is a convolution. The following picture shows the structure of a neural network and how one neuron applies its function.

[Image: the structure of a neural network and the function applied by a single neuron]
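As a rough illustration of what it means for the neuron’s function to be a convolution, the self-contained sketch below slides a small kernel over a 2D input and sums the element-wise products at each position; the kernel and input values are made up.

```python
def convolve2d(image, kernel):
    # "Valid" convolution: slide the kernel over every position where it fits
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    output = [[0.0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        for j in range(out_w):
            output[i][j] = sum(
                image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh)
                for dj in range(kw)
            )
    return output

# A 3x3 vertical-edge kernel applied to a tiny 5x5 "image"
image = [[1, 1, 1, 0, 0]] * 5
kernel = [[1, 0, -1],
          [1, 0, -1],
          [1, 0, -1]]
print(convolve2d(image, kernel))
```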

Ben then explained that you can train such networks to make different “intelligent” decisions – his example was recognizing handwritten digits. The training can be done by minimizing the error for each training sample – in a way similar to letting a ball roll down a hill.
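To make the per-sample idea concrete, here is a toy training loop for a single linear neuron that nudges its weights slightly downhill after every example; the data, learning rate and epoch count are arbitrary and unrelated to Ben’s digit-recognition setup.

```python
def train_neuron(samples, epochs=200, lr=0.05):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, target in samples:
            prediction = w * x + b
            error = prediction - target
            # Step down the gradient of the squared error for this one sample
            w -= lr * error * x
            b -= lr * error
    return w, b

# Learn the trivial mapping y = 2x + 1 from a handful of samples
print(train_neuron([(0, 1), (1, 3), (2, 5), (3, 7)]))  # approaches w = 2, b = 1
```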

The theory behind all of this dates back to around 1980, but it requires quite a lot of computational power, and only recent advances in technology have made using such a structure feasible. For his project, Ben uses the Intel Xeon Phi co-processor, which is specialized for parallel computing. By splitting up and parallelizing the calculation, Ben showed us how the training time can be reduced from several days to mere seconds.
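As a very rough sketch of the data-parallel idea (nothing like the actual Xeon Phi code), one could split the training samples into chunks and evaluate each chunk’s error on a separate core before combining the results.

```python
from concurrent.futures import ProcessPoolExecutor

def chunk_error(samples):
    # Stand-in for the expensive per-sample forward/backward pass
    return sum((prediction - target) ** 2 for prediction, target in samples)

def total_error(samples, workers=4):
    # Split the data into roughly equal chunks and score them in parallel
    chunk_size = (len(samples) + workers - 1) // workers
    chunks = [samples[i:i + chunk_size] for i in range(0, len(samples), chunk_size)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(chunk_error, chunks))

if __name__ == "__main__":
    data = [(0.9, 1.0), (0.2, 0.0), (0.7, 1.0), (0.1, 0.0)] * 1000
    print(total_error(data))
```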