Monthly Archives: January 2015

Requirements management in large systems

Last night we had a guest talk from Bob Salmon who was an undergraduate at Queens’ in 1989.

Bob talked about how adding a single requirement can drastically change the design of a large system. We went through some examples, starting with the requirement that your system should be upgradeable. This is easy to do if it's your laptop: install updates and reboot. But if the same requirement applies to a very large system, there might not be enough downtime available (e.g. overnight) for you to shut the system down and upgrade it. So instead you might run a series of copies to a shadow system whilst leaving the original system (the system of record) running. Once all the copies are done you can flick a switch to deploy the new one. We also looked at architectural approaches to providing high availability – it was left to the audience to think about how to provide high-availability upgrades!
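As a rough illustration of the shadow-system idea, here is a hypothetical Python sketch (not anything from Bob's talk; all names are made up): writes are applied to both copies while the upgraded shadow catches up, and a single switch then makes it live.

```python
# A hypothetical sketch of a shadow-system upgrade (all names are
# illustrative, not from Bob's talk). The system of record keeps
# serving while the upgraded shadow is filled, then a single switch
# makes the shadow live.
import threading

live = {"version": 1, "data": {}}     # system of record, still serving
shadow = {"version": 2, "data": {}}   # upgraded copy being populated
switch_lock = threading.Lock()

def replicate(record_id, value):
    """Apply each write to both systems while the copy is in progress."""
    live["data"][record_id] = value
    shadow["data"][record_id] = value

def cut_over():
    """Once the shadow has caught up, flick the switch atomically."""
    global live, shadow
    with switch_lock:
        live, shadow = shadow, live

replicate("account-42", "balance: 100")
cut_over()
print("Now serving version", live["version"])
```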

Bob has worked on the UK smart meter project which is slated to install smart meters in every home in the UK by 2020. He talked about the challenges in building this system and how some of the different requirements are interacting to make things even more difficult. One interesting aspect of the Smart Meter proposal is that other devices in my house will be able to talk to the meters. For example, my washing machine might want to know when electricity is cheap. This requires firewalling within the house to ensure that consumer devices cannot interfere with the national energy distribution infrastructure – we have to watch out for trojan washing machines.

After the talk we went to dinner as usual and we talked about how the Computer Science course has changed (or not changed!) since Bob’s time.

Algorithmic Video Summarisation

With the first week of Lent term out of the way, our Wednesday meeting saw Jake present his part II project on algorithmic video summarisation.

With the volume of video footage recorded each year from 6 million CCTV cameras in the UK estimated at a dizzying 50 billion hours, it is evident that humans can no longer keep up with the task of picking out important events for police work. Jake’s solution aims to use a variety of video summarisation techniques to reduce videos to just the scenes containing interesting activity. In simple terms:

[Image: video summarisation]

He presented a variety of methods for determining just what sections of a video are ‘interesting’. We were shown an early demo implemented using the open source computer vision library OpenCV, and assured that the choice of Windows & Visual C++ was necessitated by performance alone. The demo took some CCTV-style footage (although punting on the River Cam rarely appears on Crimewatch, admittedly), and analysed the change in colour distribution between frames to determine the moments where the most action was taking place.
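To make the idea concrete, here is a minimal Python sketch of frame-to-frame colour-histogram comparison with OpenCV (this is not Jake's code; the file name and threshold are placeholders):

```python
# A minimal sketch (not Jake's implementation) of frame-to-frame
# colour-histogram comparison with OpenCV. The file name and the
# threshold are placeholders.
import cv2

cap = cv2.VideoCapture("cctv_clip.mp4")
prev_hist = None
interesting_frames = []
frame_idx = 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Colour distribution of this frame: a coarse 3D histogram over B, G, R.
    hist = cv2.calcHist([frame], [0, 1, 2], None, [8, 8, 8],
                        [0, 256, 0, 256, 0, 256])
    hist = cv2.normalize(hist, hist).flatten()
    if prev_hist is not None:
        # A large distance means a big change in colour distribution,
        # which we take as a crude proxy for "action".
        dist = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_BHATTACHARYYA)
        if dist > 0.2:  # illustrative threshold
            interesting_frames.append(frame_idx)
    prev_hist = hist
    frame_idx += 1

cap.release()
print("Frames with notable activity:", interesting_frames)
```

Coarse 8×8×8 bins keep the comparison cheap while still picking up large shifts in the scene.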

This proof of concept showed the legitimacy of an algorithmic approach for selecting regions of activity, and we were then introduced to a range of more sophisticated techniques that might be considered. These included comparisons to a median image (with interesting parallels to how modern video encoding is performed, with H.264’s use of keyframes), as well as motion and object detection, facial recognition, and the easily forgotten idea of examining audio data too.
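The median-image idea can be sketched in the same hypothetical setting (again an assumption, not the project's implementation): build a "background" as the per-pixel median of sampled frames, then score each frame by how far it departs from that background.

```python
# A sketch of the median-image idea (an assumption, not the project's
# code): build a "background" as the per-pixel median of sampled
# frames, then score frames by how far they depart from it.
import cv2
import numpy as np

cap = cv2.VideoCapture("cctv_clip.mp4")
frames = []
while len(frames) < 100:
    ok, frame = cap.read()
    if not ok:
        break
    frames.append(frame)
cap.release()

background = np.median(np.stack(frames), axis=0).astype(np.uint8)

def activity_score(frame):
    # Mean absolute difference from the median background image.
    return float(np.mean(cv2.absdiff(frame, background)))

scores = [activity_score(f) for f in frames]
print("Most active sampled frame:", int(np.argmax(scores)))
```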

Of course, there is no use comparing these approaches without some objective measure of the quality of their results. This evaluation will be a significant part of Jake’s project, as he seeks to compare the computed videos with manually produced summaries. He presented the novel concept of asking users to perform objective tasks from viewing the extracted footage, for instance counting the number of people entering a room, as a means of determining the reliability of the summaries.

Aside from utilising CCTV footage more effectively, this technology also has potential uses in video browsing and retrieval systems, and in consumer video composition apps (a step forward from auto-generated slideshows of holiday snaps, we can only hope). Best of luck, Jake.

An Electronic Trading Hackathon (eth0)

Last weekend, Jane Street organised ‘eth0’ – the first edition of their hackathon. Being a quantitative trading firm, Jane Street write smart computer programs that are constantly trading on the stock market, implementing novel algorithms and ideas. This means that they need efficient and robust code, as well as profitable strategies.

Thus, the aim of the hackathon was to provide a similar experience to computer science students by getting teams to write bots that would compete on a mock exchange. Not missing the opportunity to strut our stuff, the team from Queens’ comprised Jeppe, Eduard and Sid.

The brief was simple: an online service was provided, to which bots could send JSON requests and execute trades. There were a few stocks – FOO, BAR, BAZ, QUUZ – that every team could trade (along with a few bots written by Jane Street). There was also an additional product called CORGE which was a kind of ETF (Exchange Traded Fund). This meant that it was a composite of other stocks – 0.1 FOO and 0.8 BAR. At any point, you could convert from one form to the other.

Given only that much information, the teams were to compete in three rounds. The scores were weighted such that your worst performances counted more than your best ones. This was to simulate the real world where a big loss is far more negative (potentially causing the firm to close down) than a profit is positive.

The first part of the hackathon turned out to be more of a technical than a financial task. The team from Queens’ spent a few hours just writing code to connect to the server, receive updates on prices and trades, store and parse this data, and finally send responses.

When it finally got to the trading bits, Queens’ managed to implement two strategies.

The first is called ‘market making’. Essentially, every stock currently trading has two sets of prices. One of these is the ‘offer’ – the price someone is willing to sell the stock at. The other is the ‘bid’ – the price someone is willing to buy at. There is normally a difference between the highest bid and the lowest offer – known as the ‘spread’. The bot worked by placing a buy request just above the bid and a sell request just below the offer. When people wanted to either buy or sell, the bot’s price would be the most attractive and the trade would get executed. Thus, the bid-offer spread (albeit small) is the profit the bot makes.
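A stripped-down version of that quoting logic might look like the following Python sketch (the tick size and the order format are assumptions for illustration, not the exchange's actual API):

```python
# A stripped-down market-making sketch. TICK, the order format and
# the sizes are assumptions for illustration, not the exchange's API.
TICK = 1  # smallest allowed price increment (assumed)

def quote(symbol, best_bid, best_offer, size=10):
    """Place a buy just above the current best bid and a sell just
    below the current best offer, capturing the spread if both fill."""
    if best_offer - best_bid <= 2 * TICK:
        return []  # spread too tight to quote inside profitably
    return [
        {"symbol": symbol, "side": "BUY",  "price": best_bid + TICK,   "size": size},
        {"symbol": symbol, "side": "SELL", "price": best_offer - TICK, "size": size},
    ]

# Example: best bid 98, best offer 102 -> quote 99 to buy and 101 to sell,
# earning 2 ticks per round trip if both sides trade.
print(quote("FOO", best_bid=98, best_offer=102))
```

If both sides trade, the bot earns the spread minus the two ticks it gave up to sit at the front of the queue.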

Queens’ managed to get this working before the first round and was the only team to make money.

For the next rounds, another strategy was used – ‘ETF arbitrage’. Arbitrage means the simultaneous buying and selling of a product to make a ‘riskless profit’ from price differences. For example, if you could buy an apple for $1 at store A, and then sell it to store B for $1.01, you could technically make $0.01 without putting up any of your own money. You could then borrow money and do this ad infinitum – as long as it remained profitable.

The ETF arbitrage worked similarly with CORGE, FOO and BAR. This could be thought of as buying a basket of 10 apples for $1 and then selling each for $0.11. Since CORGE could always be converted into 0.1 FOO and 0.8 BAR, it should always be priced at 0.1 times the price of FOO plus 0.8 times the price of BAR. If it is ever priced below this (like the apples), you can buy CORGE and convert it into FOO and BAR, which are worth more; you can then sell them and pocket the difference (selling the individual apples). If the basket were ever worth more, you could do the reverse transaction. The challenge was made harder by a few extra transaction costs, but the idea was the same.
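The check itself is simple enough to sketch in a few lines of Python (the prices and the fee below are invented; the weights are the 0.1 FOO + 0.8 BAR composition described above):

```python
# A hedged sketch of the ETF-arbitrage check. The prices and the fee
# are invented; the weights are the 0.1 FOO + 0.8 BAR composition of
# CORGE described above.
WEIGHTS = {"FOO": 0.1, "BAR": 0.8}
FEE = 0.05  # assumed total transaction cost per round trip

def fair_value(prices):
    """Theoretical CORGE price implied by its components."""
    return sum(w * prices[s] for s, w in WEIGHTS.items())

def arbitrage(prices):
    """Return the profitable direction, if any, net of the fee."""
    fv = fair_value(prices)
    corge = prices["CORGE"]
    if corge < fv - FEE:
        return "buy CORGE, convert, sell FOO and BAR"   # basket underpriced
    if corge > fv + FEE:
        return "buy FOO and BAR, convert, sell CORGE"   # basket overpriced
    return None

# Fair value = 0.1 * 100 + 0.8 * 50 = 50, so CORGE at 48 is cheap.
print(arbitrage({"FOO": 100.0, "BAR": 50.0, "CORGE": 48.0}))
```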

Queens’ turned out to be pretty successful in round two – coming second, and finishing third overall due to a large loss in the final round.

All in all, the excessive code, food and (of course) Red Bull contributed to a really fun Saturday. We’re looking forward to doing more of these.

An All-Smartphone Drone

I hope you all watched the Royal Institution Christmas lectures on hacking your home. The series covered two of the most interesting interactive computational systems of the 21st Century (so far) – smartphones and drones. Both of these systems have already had a huge impact on modern society, and the potential for autonomous land, marine, and aerial vehicles to contribute hundreds of billions of pounds to the global economy is very real. Last year a couple of Andy Rice’s PhD students and I took on the challenge of bringing these two systems together and building an all-smartphone drone. This integration has been performed before to some degree, but normally using an Arduino or similar embedded processor to perform the real-time hardware control. We wanted to develop a solution which used only the smartphone for all aspects of sensing, planning, command and control.

The artificial horizon GUI of Captain Buzz

A smartphone is all you need to build an autopilot for a plane. There, I said it. It’s a bold (and italicised) statement, but it is true. A smartphone contains inertial sensors to determine roll, pitch and yaw; GPS to know global position and altitude; a wealth of radio communication options; and plenty of processing grunt. It even packs a couple of cameras. So we set about proving that it could be done.

The first challenge lay in our output options for sending control messages from the smartphone to the plane. We wanted to make use of the headphone jack of the smartphone, rather than use an On-The-Go port connection. This was for a number of reasons, including the limited number of devices with OTG ports, and the attraction of the “purity” of the headphone jack solution. The idea that we would be playing “just the right sort of music” down the headphones in order to make the phone fly the plane was quite a neat one. However, standard aircraft designs incorporate a rudder, elevator, and ailerons. This means that three independent control signals are needed to fly the aircraft. A headphone jack, however, only provides left and right channels – just two control signals. Our solution was to choose a flying wing, or delta wing, design for our aircraft. This is the iconic shape of the US B-2 stealth bomber, and by turning the entire aircraft into a single wing only two control channels are needed (elevator and aileron). Technically the control surfaces of a delta wing take on a new name (elevon) because the same moving flaps fulfil the tasks of both elevator and aileron. The right sort of music, by the way, is pulse width modulation. To you and me it just sounds like a nasty buzzing sound, but no one said the music that makes a plane fly would be symphonic. However, our smartphone UAV pilot sounding a lot like an angry bee is actually rather poetic. The smartphone pilot was christened Captain Buzz for obvious reasons…
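As a rough illustration of the idea (a hypothetical Python sketch, not the Captain Buzz source): a standard RC servo pulse repeats at 50 Hz, the pulse width between 1 ms and 2 ms encodes the deflection, and one elevon can be driven per audio channel.

```python
# A hypothetical sketch (not the Captain Buzz source) of rendering
# servo-style pulse-width modulation as stereo audio: one elevon per
# channel, with a 1-2 ms pulse at 50 Hz encoding the deflection.
import numpy as np

SAMPLE_RATE = 44100  # audio samples per second
FRAME_HZ = 50        # standard RC servo frame rate

def pwm_channel(deflection, seconds=1.0):
    """Map a deflection in [-1, 1] to a pulse width between 1 ms and 2 ms."""
    pulse_ms = 1.5 + 0.5 * deflection
    frame_len = SAMPLE_RATE // FRAME_HZ
    pulse_len = int(SAMPLE_RATE * pulse_ms / 1000.0)
    frame = np.zeros(frame_len)
    frame[:pulse_len] = 1.0                    # the "buzz"
    return np.tile(frame, int(seconds * FRAME_HZ))

# Left channel drives one elevon, right channel the other.
left = pwm_channel(+0.2)   # slight deflection one way
right = pwm_channel(-0.2)  # slight deflection the other way
stereo = np.stack([left, right], axis=1)  # ready to play out of the jack
```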

The smartphone UAV airframe

The second challenge lay in tuning the control loops for the autonomous controller. There is a good reason why most people use an Arduino or similar for the controller when attempting to build a smartphone drone – the lag from sensing to control output through the Android system is pretty high. Probably because the Android development team did not have a fixed-wing autopilot in mind when brainstorming the likely uses of this operating system. We used PID controllers within our control loops, and many of the parameters needed to be tuned gradually by hand in test flights. We could overcome the lag to some degree, but the smartphone drone will always wander sluggishly around the sky compared to an Arduino counterpart. However, in the words of JFK, we chose to do this and other things, not because they are easy but because they are hard. Admittedly, the moonshot was definitely harder than this.

It's as simple as it looks

PID control is pretty basic, but is normally fine for something as simple as autonomous flight…
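For readers who have not met it, a PID controller in its textbook form fits in a dozen lines of Python (the gains below are placeholders, not the values tuned for Captain Buzz):

```python
# A textbook PID controller sketch; the gains are placeholders, not
# the values tuned for Captain Buzz.
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# e.g. roll control: hold the wings level given a (laggy) roll estimate.
roll_pid = PID(kp=0.8, ki=0.05, kd=0.2)
elevon_demand = roll_pid.update(setpoint=0.0, measurement=5.0, dt=0.05)
print(elevon_demand)
```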

A third challenge lay in safely developing the autopilot software through live test flights. You cannot tune the parameters statically on the ground; the system has to fly for both it and us to learn the various system behaviours and parameter settings. To do this safely we made use of a “buddy box”, which is traditionally used to train novice human pilots. The buddy box is mounted in the plane and two controllers are hooked up to it, with one set of outputs from the buddy box running to the plane’s control surfaces and engine. We used standard radio equipment to fly the aircraft to a safe height and position, and then a switch on the human’s controller allowed the smartphone in the cockpit to take control of the elevons. The human pilot maintained throttle control and could always switch back to full control in an instant as needed. This system allowed the smartphone controller parameters to be determined safely over a series of test flights.

A fourth challenge lay in the mounting of the smartphone and buddy box in the airframe. A smartphone seems light and compact, but is actually a rather awkward shape to mount in the narrow confines of a standard aircraft fuselage battery bay. We struggled to maintain a good centre of gravity with a conventional off-the-shelf delta wing design, plus we wanted a slower-flying aircraft with a larger lifting area than the standard designs provide, and so turned to a laser cutter to fabricate our own aircraft. Our smartphone mounting point became an external pod running underneath the delta wing, hacked into place courtesy of a bit more laser cutting and a hot glue gun.

After quite a few test flights to tune over a dozen parameters we have a relatively stable autonomous aircraft. Captain Buzz has his own Google+ page, so you can follow more of his antics there over the coming months. There are a couple of YouTube videos of test flights linked from that page too. We will be adding some more functionality, and doing an interview for BBC Radio Cambridgeshire during a live flight once the weather picks up. At the moment Buzz can turn onto provided compass headings and maintain stable flight in turns and on the level. By providing a sequence of GPS waypoints to Buzz he can look at his current GPS position, calculate a bearing to the new waypoint, and turn onto that heading. This is the essence of waypoint following for an autonomous platform.
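The bearing calculation at the heart of that waypoint following is the standard great-circle forward azimuth; a small Python sketch (the coordinates below are made up) looks like this:

```python
# The standard great-circle forward-azimuth formula behind waypoint
# following; the coordinates below are made up.
import math

def bearing_to(lat1, lon1, lat2, lon2):
    """Initial compass bearing in degrees from (lat1, lon1) to (lat2, lon2)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return (math.degrees(math.atan2(x, y)) + 360.0) % 360.0

# Current GPS fix -> next waypoint: the autopilot turns onto this heading.
print(bearing_to(52.2053, 0.1218, 52.2100, 0.1300))
```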

Captain Buzz

If reading this and watching our videos has inspired you to fly remote control aircraft or to build a flying robot, then please read the following very carefully. The CAA regulations on flying devices must be followed at all times; read the air navigation guidelines very carefully before flying anything outdoors.

Ramsey

Captain Buzz was a fun hobby, but more importantly he does a good job of outreach and engagement with prospective students.

A new(ish) addition

Hello readers!

My name is Dr Ramsey Faragher, and I am a new Bye-Fellow in Computer Science at Queens’ College. I help Dr Rice to cover the mathematical and applied parts of the tripos and have actually been in post for a few months now, but am only just getting around to this blog. My research interests are very applied: in the broadest sense, I am interested in writing software to allow systems to understand and navigate the world around them. This encompasses sensor fusion, machine learning, autonomy, computer vision, and signal processing.

“From submarines to smartphones” is the name of a talk I have given many times covering my adventures so far in the fields of positioning and navigation. That succinct title is missing out autonomous cars, autonomous aircraft, trains, and a host of other platforms. I was even involved in the original design study for ESA on the intelligent navigation system (now called Seeker) for the EXOMARS autonomous Martian rover planned to hit the planet in 2018. My broad exposure to a wide range of projects came from spending 6 years in industry between my PhD and my current academic post. I still engage with various industrial sectors today through both academic and extra-curricular activities and continue to find myself working on new and exciting projects. Top Gear magazine wrote an article about me in 2013, based on my wide variety of projects and developments. Perhaps one day I shall write a book with the same snappy title as my favourite talk, but for now most of my stories shall have to wait for computer science dinners, talks and college feasts….

Ramsey