Monthly Archives: March 2016

Easter Revision Planning: 2016

In a departure from the previous theme, we focused this week on Easter revision. Eduard collected everyone’s plans and classified them as focusing on either when to revise or what to revise. Some of us were chosen to present our plans to everyone else. Here’s a selection of the more memorable moments from the presentations.

When, Middle Level of Detail: Ben

Ben planned to do eight hours of work per day… on his dissertation alone. Not to mention revision for his final year exams and a three-day rowing trip. While he sketched out a rough work schedule – in sufficient detail to identify daily targets – he retained enough freedom to juggle sessions around as necessary.

When, High Level of Detail: Andy

Andy planned each hour of his day in great detail, segmenting it into blocks of exactly 1 hour 35 minutes. Andy’s plan should ensure that he stays on track, and he can just run on autopilot from this stage onwards.

Image: Andy’s incredibly detailed plan.


When, Low Level of Detail: Alex no. 1

Alex started off by asserting that he was a great “believer in freedom”, and so it’s no surprise that his plan left plenty of room for improvisation. This should allow him to vary his schedule on a day-to-day basis, as per his mood.

What: Alex no. 2

Everyone was dazzled by Alex’s three-page plan written in LaTeX — until he mentioned that “none of this is the actual plan.” Fortunately, he did seem to have a good alternative plan, relying heavily on tripos questions (“Java questions are jolly useful”) to revise. This is a good strategy for getting to understand the style of tripos questions while covering the material at the same time.

Image: An extract from Alex’s plan written in LaTeX.


What: Henry

Henry emphasised the importance of taking a break, planning in a week of gliding. Cambridge can be a lot of hard work, and it’s vital to take some time off to be able to focus fully during term-time.

What: Radu

Radu used fancy fitting algorithms to slot his study sessions into his timetable. Wisely, he prioritised the courses he needed most by assigning them the greatest number of sessions. The Part II dissertation was also once more a running theme.

How it can all go wrong: Eduard

To finish off, Eduard gave a demonstration of how a plan can go wrong, even for the best of us. Overall, the message was “be honest with yourself,” and keep a realistic set of goals in mind while revising over Easter.

Image: Eduard’s example (of what not to do) plan, including travel to Japan, FIFA, meeting with friends, and a worryingly small amount of time to work on his project.


Compiler Optimisation and Automatic Loop Parallelisation

This week the Queens’ Computer Scientists were given a fascinating talk by Katie regarding her Part II project on Compiler Optimisation and Automatic Loop Parallelisation.

The project looks at exploiting parallelism automatically through compiler optimisations and distributing work across multiple cores. The benefits can be quite remarkable: if 50% of a program is parallelizable, the speedup from parallelisation can be as much as 2x, but a program with a parallelizable code fraction of 95% could see a speedup of up to 20x!
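Those figures follow from Amdahl’s law, which bounds the overall speedup by the fraction of the program that must stay serial. A quick sketch of the calculation (illustrative, not from the talk):

```python
def amdahl_speedup(p, n):
    """Overall speedup when a fraction p of the work is parallelised
    perfectly across n cores (Amdahl's law)."""
    return 1.0 / ((1.0 - p) + p / n)

def max_speedup(p):
    """Limit as the core count grows without bound: 1 / (1 - p)."""
    return 1.0 / (1.0 - p)

print(round(max_speedup(0.50), 2))  # 2.0  -> at most 2x
print(round(max_speedup(0.95), 2))  # 20.0 -> at most 20x
```

However many cores are added, the serial 50% or 5% of the program caps the achievable speedup.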

To begin with, we were told about the basics of compiler optimisation.

Compiler optimisation aims to make code:

  • Smaller — e.g. dead-code elimination, unreachable-code elimination
  • Faster — e.g. auto-vectorisation, auto-parallelisation
  • Cheaper — e.g. register colouring, cache management


Image: The stages of compilation at which optimisations can be applied.

There are many types of optimisation, and each corresponds to a stage of compilation at which it can be applied. In the image above, we can see that it is possible to optimise the parse tree, the intermediate code or the target code. Katie chose to run her optimisations on the program’s intermediate code: the intermediate representation is translated from the original source code and naturally exposes basic blocks — straight-line sequences of instructions with no internal control flow. These form distinct units of execution upon which optimisations can be carried out, which saves time and space compared to running and storing the analysis per individual instruction. Intermediate code also uses a smaller set of instructions than the original source language, reducing the number of instruction combinations and structures the optimisation needs to look for.
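Basic blocks are typically found by scanning the intermediate code for “leaders”: the first instruction, every jump target, and every instruction that follows a jump. A minimal sketch, assuming a made-up three-address-style IR where each instruction is a tuple and targets are instruction indices:

```python
def find_leaders(instrs):
    """Identify basic-block leaders in a list of IR instructions.

    A leader is: the first instruction, any jump target, and any
    instruction immediately following a jump or branch. Each basic
    block runs from one leader up to (but not including) the next.
    """
    leaders = {0}
    for i, ins in enumerate(instrs):
        if ins[0] in ("jmp", "br"):       # ("jmp", target) / ("br", cond, target)
            leaders.add(ins[-1])          # the jump target starts a block
            if i + 1 < len(instrs):
                leaders.add(i + 1)        # the fall-through also starts a block
    return sorted(leaders)

# Hypothetical IR: list indices act as instruction addresses
ir = [
    ("add", "t0", "a", "b"),   # 0
    ("br",  "t0", 4),          # 1: branch to 4 if t0
    ("mul", "t1", "a", "a"),   # 2
    ("jmp", 5),                # 3
    ("sub", "t1", "a", "b"),   # 4
    ("ret", "t1"),             # 5
]
print(find_leaders(ir))  # [0, 2, 4, 5]
```

Here the blocks are instructions 0–1, 2–3, 4 and 5 — each a unit with no control flow inside it.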

Optimisation is carried out by analysis and transform passes. The transform pass changes the code written by the programmer, and the analysis pass ensures these changes are safe — that they won’t, for example, lead to unexpected behaviour or segmentation faults.

Unreachable-code elimination is one such transform: it shrinks the program by removing code that can never execute. With it, however, comes the question of whether the transform is safe — will removing that code break anything?

When the program property the analysis needs cannot be decided exactly, the optimisation must approximate cautiously. One example is overestimating reachability:

Image: Overestimating reachability.
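The cautious direction here is to treat code as reachable unless it is provably not. One way to sketch that (my own toy example, not the talk’s): traverse the control-flow graph from the entry block, and additionally assume anything that might be the target of an indirect jump is reachable; only blocks the traversal never touches may be deleted.

```python
def maybe_reachable(cfg, entry, indirect_targets=()):
    """Conservatively mark blocks that MIGHT be reachable.

    cfg maps block -> list of successor blocks. Blocks that could be
    reached via an indirect jump are assumed reachable (an overestimate):
    only blocks the analysis can PROVE unreachable are eliminated.
    """
    reachable = set()
    worklist = [entry, *indirect_targets]
    while worklist:
        b = worklist.pop()
        if b in reachable:
            continue
        reachable.add(b)
        worklist.extend(cfg.get(b, []))
    return reachable

cfg = {"entry": ["A"], "A": ["B"], "B": [], "dead": ["B"]}
print(sorted(maybe_reachable(cfg, "entry")))  # ['A', 'B', 'entry'] -- 'dead' is safe to remove
```

Overestimating means some genuinely dead code survives, but nothing live is ever deleted — the safe side of the approximation.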

Katie adopted a similar approach in her project to implement loop parallelisation in her compiler. The aim was to gain program speedup by utilising many cores.

It is possible to take a risk and try to parallelise code that is non-parallelizable, but the safer option is to underestimate parallelizability.
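That conservative attitude can be sketched as follows (a toy illustration, assuming each iteration’s read and write locations have already been collected into sets — in a real compiler these come from dependence and alias analysis): approve a loop only when no iteration touches a location another iteration writes; any doubt means answering “no”.

```python
def looks_parallelizable(writes, reads):
    """Underestimate parallelizability.

    writes[i] / reads[i] are sets of memory locations touched by
    iteration i. Approve only if no iteration writes a location that
    any other iteration reads or writes; otherwise reject.
    """
    for i, w in enumerate(writes):
        for j in range(len(writes)):
            if i == j:
                continue
            if w & writes[j] or w & reads[j]:
                return False  # possible loop-carried dependence
    return True

# Iteration i writes a[i] and reads a[i]: independent -> parallelizable
indep_writes = [{("a", i)} for i in range(4)]
indep_reads  = [{("a", i)} for i in range(4)]
print(looks_parallelizable(indep_writes, indep_reads))  # True

# Iteration i reads a[i-1]: later iterations depend on earlier ones -> reject
dep_reads = [{("a", i - 1)} for i in range(4)]
print(looks_parallelizable(indep_writes, dep_reads))  # False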

Image: The loop analysed for parallelisation.
The analysis pass first had to verify that the above loop was safe to transform:

  • No dependencies within the loop body, with later iterations independent of earlier ones
  • No array aliasing that could introduce hidden dependencies between iterations
  • Loop bounds that are known constants
  • No unexpected breaks or returns
  • Reduction variables handled on a per-thread basis (loops where this wasn’t possible were omitted)

Once these checks passed, Katie was able to optimise the code, extract loops and run them as separate threads distributed across multiple cores:

Image: The loop after parallelisation.
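A loop whose iterations have passed those checks can be farmed out in contiguous chunks to worker threads. A minimal sketch of the idea (Python threads standing in for the compiler’s generated code; `parallel_for` is an illustrative name, not from the project):

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_for(body, n, workers=4):
    """Run body(i) for i in range(n), splitting the range into
    contiguous chunks, one per worker.

    Safe only when iterations are independent -- exactly what the
    analysis pass is responsible for proving beforehand.
    """
    chunk = (n + workers - 1) // workers
    def run(start):
        for i in range(start, min(start + chunk, n)):
            body(i)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(run, range(0, n, chunk)))

# Independent iterations: out[i] depends only on a[i]
a = list(range(8))
out = [0] * len(a)
parallel_for(lambda i: out.__setitem__(i, a[i] * a[i]), len(a), workers=2)
print(out)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

Chunking keeps each thread working on a contiguous slice, which also tends to be kinder to the cache than interleaving iterations.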
To finish her presentation, Katie talked about how she ran a Mandelbrot set calculation, capped at 1000 iterations per point, to test the compiler optimisations she had created to spread the iterations across multiple cores. The graph below shows the data from the test:
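The Mandelbrot set is a natural benchmark because each point’s escape-time count is independent of every other point’s, so entire rows can be computed in parallel. A guess at the shape of such a test (not Katie’s actual code; grid size and coordinate window are made up):

```python
from concurrent.futures import ThreadPoolExecutor

def mandel_iters(cx, cy, max_iter=1000):
    """Escape-time iteration count for one point of the Mandelbrot set."""
    zx = zy = 0.0
    for n in range(max_iter):
        zx, zy = zx * zx - zy * zy + cx, 2.0 * zx * zy + cy
        if zx * zx + zy * zy > 4.0:
            return n
    return max_iter

def mandel_row(y, width=40, height=20):
    """One row of the image: every point is independent of the others."""
    return [mandel_iters(-2.0 + 3.0 * x / width, -1.2 + 2.4 * y / height)
            for x in range(width)]

# Rows are farmed out to worker threads, mirroring the extracted loop.
with ThreadPoolExecutor(max_workers=4) as pool:
    image = list(pool.map(mandel_row, range(20)))
print(len(image), len(image[0]))  # 20 40
```

Points inside the set hit the 1000-iteration cap, so the workload per row varies — which is what makes it an interesting test of how well the work is spread across cores.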