Monday 23 November 2009

Slides from talk: Ypnos: Declarative, Parallel Structured Grid Programming

Slides from this talk, given on Friday 20th November at the CPRG weekly seminar, can be found here. The paper accompanying the talk can be found here.

Talk abstract:

A fully automatic, compiler-driven approach to parallelisation can result in unpredictable time and space costs for compiled code. On the other hand, a fully manual approach to parallelisation can be long, tedious, prone to errors, hard to debug, and often architecture-specific. This talk presents a declarative domain-specific language, Ypnos, for expressing structured grid computations which encourages manual specification of causally sequential operations but then allows a simple, predictable, static analysis to generate optimised, parallel implementations. Ypnos is sufficiently restricted that optimisation and parallelisation are guaranteed.
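To make "structured grid computation" concrete: such a computation updates every cell of a regular grid as a function of a fixed-shape neighbourhood around it (a stencil). The sketch below is not Ypnos syntax, just plain Haskell illustrating the kind of computation Ypnos captures declaratively, here a one-dimensional three-point average with clamped boundaries.

    import Data.Array

    -- One step of a 1-D three-point averaging stencil over a grid,
    -- clamping indices at the boundaries.
    step :: Array Int Double -> Array Int Double
    step g = listArray (lo, hi) [ avg i | i <- [lo .. hi] ]
      where
        (lo, hi) = bounds g
        at i     = g ! max lo (min hi i)   -- clamp out-of-range neighbours
        avg i    = (at (i - 1) + at i + at (i + 1)) / 3

Because the stencil shape is fixed and the writes never overlap, each cell of the new grid can be computed independently, which is exactly the property a compiler can exploit for parallelisation.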

Friday 31 July 2009

Message Relays, Parrot Fashion

Lately I have been thinking a lot about pirates. Pirates are ubiquitous in programming language design but they've had a bad press. This post hopes to redress the balance a little.

I was recently chatting to my pirate buddy Elfonso, and he related to me a problem that he'd recently encountered concerning his four crewmates: Albert, Blackbeard, Cuthbert and Dread. The five of them had taken to living on the desert islands shown on the delightful map below.



The pirates like their privacy and each insists on having one island to himself. Following a dispute over some buried treasure, Elfonso only really talks to Dread; the other four all talk to each other, except Dread and Blackbeard who've never really seen eye to eye (they both have patches). Even when chillaxing on their private islands, the pirates like to send messages to each other, and do so by the classic medium of string and tin cans. Due to a shortage of string, the islands are now wired together somewhat haphazardly, as the map shows. The pirates will forward messages for each other if no direct route is available between the two islands, but they'd really rather not. Elfonso's question was, "Where should each pirate live so that they can all send messages to their friends with the minimum amount of forwarding?"

Let's try a simple arrangement (see left) and see what happens. Let's say Elfonso takes the middle island, and the other four pirates take the four islands that have a direct connection with him. Now when Elfonso and Dread want to exchange messages, they can do it directly; each of the other pairings (Albert with Blackbeard, Cuthbert, and Dread; Cuthbert with Dread; and Blackbeard with Cuthbert) will require Elfonso to forward the message. So if each pair of pirates who are on speaking terms exchange one message, that will require five forwards in total. We'll say that this arrangement has a score of 5, remembering of course that a lower score is better. Is it possible to improve on this arrangement?
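For concreteness, here is how the score can be computed. The original map is not reproduced in this post, so the wiring below is a hypothetical five-island star matching the arrangement just described (island 0 in the middle); the friendships are the ones given above.

    import Data.List (nub)

    type Island      = Int
    type Arrangement = [(String, Island)]   -- which pirate lives where

    -- Hypothetical wiring: island 0 is the hub, connected to 1..4.
    wires :: [(Island, Island)]
    wires = [(0,1), (0,2), (0,3), (0,4)]

    -- The pairs of pirates who are on speaking terms.
    friends :: [(String, String)]
    friends = [ ("Elfonso","Dread")
              , ("Albert","Blackbeard"), ("Albert","Cuthbert"), ("Albert","Dread")
              , ("Blackbeard","Cuthbert"), ("Cuthbert","Dread") ]

    -- Hop distance between two islands, by breadth-first search.
    distance :: Island -> Island -> Int
    distance from to = go 0 [from] []
      where
        nbrs i = [ b | (a, b) <- wires, a == i ] ++ [ a | (a, b) <- wires, b == i ]
        go d frontier seen
          | to `elem` frontier = d
          | null frontier      = error "islands not connected"
          | otherwise          = go (d + 1) next (seen ++ frontier)
          where next = nub [ j | i <- frontier, j <- nbrs i
                               , j `notElem` seen, j `notElem` frontier ]

    -- A message over d hops needs d - 1 forwards; the score totals
    -- the forwards over all friendly pairs.
    score :: Arrangement -> Int
    score home = sum [ distance (at p) (at q) - 1 | (p, q) <- friends ]
      where at p = head [ i | (s, i) <- home, s == p ]

On this toy wiring, `score (zip ["Elfonso","Albert","Blackbeard","Cuthbert","Dread"] [0..4])’ gives 5, matching the count above.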

There is a simple but very time-consuming way to find the best possible arrangement; we simply try every possible combination of pirates and islands, and work out the score for each one, then take the best of the bunch. This is quite feasible for five pirates and eight islands, but (planning ahead) we'd like a method that works for thousands of islands and hundreds of pirates! Even the fastest computer would take hours to solve a problem that large with this "brute force" method, and pirates are notoriously technophobic in any case. Clearly we need something different.
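Continuing the sketch above (and reusing its `score’ and island definitions), the brute-force method is only a few lines: enumerate every way of giving each pirate a distinct island and keep the cheapest.

    import Data.List (minimumBy)
    import Data.Ord (comparing)

    pirates :: [String]
    pirates = ["Albert", "Blackbeard", "Cuthbert", "Dread", "Elfonso"]

    -- Every injective assignment of pirates to islands.
    assignments :: [Island] -> [Arrangement]
    assignments isles = [ zip pirates sel | sel <- select (length pirates) isles ]
      where
        select 0 _  = [[]]
        select n xs = [ x : rest | x <- xs
                                 , rest <- select (n - 1) (filter (/= x) xs) ]

    bruteForce :: [Island] -> Arrangement
    bruteForce isles = minimumBy (comparing score) (assignments isles)

With k pirates and n islands there are n(n-1)...(n-k+1) assignments to try: a mere 6720 for five pirates and eight islands, but astronomically many for hundreds of pirates and thousands of islands, which is why the method does not scale.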

One solution is to use a method which will hopefully find a "fairly good" arrangement very quickly. This allows us to trade off the quality of the arrangement with how long we are willing to spend looking for it. This is where I felt the CPRG could maybe help Elfonso out.


A simple approximate method is to put the pirate with the most friends on the island with the most wires, the second most popular pirate on the second best-connected island, and so on. In our case this leads to the arrangement on the left. The real weakness here is that Dread and Elfonso end up at opposite ends of the map, so every message between them must be forwarded by Albert and Blackbeard (or Albert and Cuthbert). But overall this isn't too bad; it has a score of 6, which is clearly not ideal, but it was quick to work out, and in some contexts that would be good enough.
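Again reusing the earlier definitions, the heuristic is a one-liner in spirit: sort the pirates by how many friends they have, sort the islands by how many wires they have, and pair the two lists off.

    import Data.List (sortBy)
    import Data.Ord (comparing, Down (..))

    greedy :: [Island] -> Arrangement
    greedy isles = zip byChattiness byDegree
      where
        friendsOf p  = length [ () | (a, b) <- friends, a == p || b == p ]
        wiresOf i    = length [ () | (a, b) <- wires,   a == i || b == i ]
        byChattiness = sortBy (comparing (Down . friendsOf)) pirates
        byDegree     = sortBy (comparing (Down . wiresOf)) isles

Note that ties (and there are ties among our five pirates) are broken arbitrarily, which is part of why this method can land some way from the optimum.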

My current work is in finding a method somewhere between these two: still fast enough to solve examples involving large numbers of pirates and islands, but producing better solutions than the naive method of giving the chattiest pirates the best-connected islands. How can we do this? There are many possible approaches. One weakness of the simple approach above is that it ignores the structure of the islands and of the pirates' communications; even if two pirates talk to each other a lot, they won't necessarily be given nearby islands. I am investigating methods which take this structure into account. Early results suggest that it's often possible to get within 10-20% of optimal with approaches that run ten or twenty thousand times faster than brute force.
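The post does not spell out these structure-aware methods, but to give a flavour of the middle ground, here is one classic refinement (not necessarily the method under investigation): start from a cheap arrangement and repeatedly swap the islands of two pirates whenever the swap lowers the score, stopping when no swap helps.

    -- Hill-climbing by pairwise swaps, reusing `score' from earlier.
    improve :: Arrangement -> Arrangement
    improve arr =
      case [ arr' | arr' <- swaps arr, score arr' < score arr ] of
        (better : _) -> improve better      -- take the first improving swap
        []           -> arr                 -- local optimum reached
      where
        swaps a = [ swapHomes p q a | (p, _) <- a, (q, _) <- a, p < q ]
        swapHomes p q a = [ (r, move r i) | (r, i) <- a ]
          where move r i | r == p    = at q
                         | r == q    = at p
                         | otherwise = i
                at s = head [ i | (t, i) <- a, t == s ]

For example, `improve (greedy [0..4])’ polishes the heuristic's answer on the toy wiring. The search terminates because every step strictly lowers the score, but it can get stuck in a local optimum, so it trades solution quality for speed, just as described above.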

Having read this far, you may now be scratching your head and wondering how on Earth this could pass for computer science. Maybe you figured it out already. The islands are analogous to computers joined together in a network, or perhaps to processors joined together on a chip. The pirates are programs which need to run and, possibly, communicate with each other. At this point the simple model of "communicate or don't communicate" breaks down; instead we need to know how often each program needs to send data to, or receive data from, each other program, and how fast the links between the computers can deliver that data. But in fact this extra structure usually makes it easy to find a "reasonably good" solution, even if it complicates finding optimal solutions.
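Under those assumptions the score generalises naturally: weight each pair by its traffic. A minimal sketch, with made-up traffic figures, reusing `distance’ from earlier:

    -- Hypothetical message counts between pairs of programs ("pirates").
    traffic :: [((String, String), Int)]
    traffic = [ (("Elfonso", "Dread"), 10)
              , (("Albert", "Cuthbert"), 3) ]   -- and so on

    -- Total forwards, weighted by how chatty each pair is. Modelling
    -- link speeds as well would mean weighting each hop individually.
    weightedScore :: Arrangement -> Int
    weightedScore home =
      sum [ n * (distance (at p) (at q) - 1) | ((p, q), n) <- traffic ]
      where at p = head [ i | (s, i) <- home, s == p ]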

The final task, of finding an optimal solution for Elfonso and his piratical buddies, is left as an exercise to the reader.

Tuesday 19 May 2009

Dictionaries: lazy or eager type class witnesses

I gave a talk at the Cambridge Computer Laboratory on May 15, 2009:

Type classes are Haskell’s acclaimed solution to ad-hoc overloading. This talk gives an introductory overview of type classes and their runtime witnesses, dictionaries. It asks whether dictionaries should abide by Haskell’s default lazy evaluation strategy.

Conceptually, a type class is a type-level predicate: a type is an instance of a type class iff the type provides an implementation for the class's overloaded functions. For instance, `Eq a’ declares that type `a’ implements a function `(==) :: a → a → Bool’ for checking equality.
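Written out in standard Haskell (hiding the Prelude's own version so the file compiles on its own), the class and an example instance look like this:

    import Prelude hiding (Eq (..))

    -- The type class: a predicate on types, with one overloaded function.
    class Eq a where
      (==) :: a -> a -> Bool

    -- Bool is an instance of Eq: it provides an implementation of (==).
    instance Eq Bool where
      True  == True  = True
      False == False = True
      _     == _     = False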

Type classes are used as constraints on type variables in so-called constrained polymorphic functions. E.g. `sort :: Ord a => [a] → [a]’ sorts a list whose element type `a’ is an instance of the Ord type class, i.e. provides implementations for comparison.

Witnesses for type class constraints are necessary to select the appropriate implementation of the overloaded functions at runtime. For instance, if `sort’ is called with Int elements, the Int comparison must be used, as opposed to, say, the Float comparison for Float elements.

Two forms of witnesses have been considered in the literature: runtime type representations and so-called dictionaries, of which the latter is the most common implementation technique, used for example in GHC. Haskell implementations treat dictionaries just like all other data, as lazy values that may potentially consist of non-terminating computations. In this way, part of the type checker's work (it has already made sure that the dictionaries do exist) is simply forgotten. Is this really necessary? Can performance be gained by exploiting the strict nature of dictionaries?
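To make the dictionary story concrete, here is a minimal sketch of the translation (GHC's actual Core representation is more elaborate): a class becomes a record of functions, an instance becomes a record value, and an overloaded function becomes an ordinary function taking the record as an extra, and by default lazy, argument.

    -- The class `Eq a' as a record of its methods.
    data EqDict a = EqDict { eq :: a -> a -> Bool }

    -- The witness for `Eq Int'.
    dEqInt :: EqDict Int
    dEqInt = EqDict { eq = (==) }

    -- The witness for `Eq a => Eq [a]': a dictionary transformer.
    dEqList :: EqDict a -> EqDict [a]
    dEqList d = EqDict { eq = go }
      where
        go (x : xs) (y : ys) = eq d x y && go xs ys
        go []       []       = True
        go _        _        = False

    -- `elem :: Eq a => a -> [a] -> Bool' after translation: the
    -- constraint has become an ordinary, lazily passed value.
    elemD :: EqDict a -> a -> [a] -> Bool
    elemD d x = any (eq d x)

Making such dictionaries strict, as the talk asks, would amount to promising that `d’ above is always a fully constructed record, which in principle lets the compiler skip the thunk check each time a method is extracted.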

You can get the slides here.

Wednesday 8 April 2009

Compiler research centres

As Mary Hall, David Padua and Keshav Pingali observed in a February 2009 Commun. ACM article:
There are few large compiler research groups anywhere in the world today. At universities, such groups typically consist of a senior researcher and a few students, and their projects tend to be short term, usually only as long as a Ph.D. project.
This is certainly the case in the UK. Imperial's Software Performance Optimisation group fits that description exactly (OK, with a post-doc sandwiched between the senior researcher and the students). Whilst other research groups in the UK may appear to be larger, compiler researchers typically make up only a small subset of researchers working on programming languages or computer architecture. For example, in Cambridge the Programming group is part of the Programming, Logic, and Semantics group (and a recent article in the IET magazine suggests that a senior member of the Programming group also belongs to the Computer Architecture group); in Oxford the Programming Tools group is part of the Programming Languages group; in Manchester and Edinburgh compiler researchers work in the Advanced Processor Technologies group and the Compiler and Architecture Design group, respectively. Note that only the Edinburgh group presents itself primarily as compiler researchers (at least judging by the group's name).

In the same Commun. ACM article we read the following recommendation:
...researchers must attempt radical new solutions that are likely to be lengthy and involved. The compiler research community (university and industry) must work together to develop a few large centers where long-term research projects with the support of a stable staff are carried out. Industry and funding agencies must work together to stimulate and create opportunities for the initiation of these centers.
The situation is definitely getting better in the US. In April 2009, the Defense Advanced Research Projects Agency (DARPA) awarded $16 million to the Platform-Aware Compilation Environment (PACE) project at Rice University, as part of its Architecture Aware Compiler Environment (AACE) programme. In March 2008, Intel and Microsoft announced a combined $20 million grant to the Parallel Computing Laboratory at UC Berkeley and to the Universal Parallel Computing Research Center at Illinois (with the universities reportedly applying for additional funding of $7 million and $8 million, respectively, to match the industry grant). No doubt, these parallel computing centres will spend big on compiler projects (and PACE is explicitly focused on compilers).

What about compiler funding in the UK? From what I see (by looking at the number of PhD studentships, post-doctoral fellowships and permanent positions), UK spending on compiler research seems to have been monotonically increasing over the past couple of years. However, this growth does not have as much support as in the US, so the UK risks losing its competitiveness in this field.

Thursday 12 February 2009

First Post

Welcome to the Cambridge Programming Research Group's blog. The CPRG is a research group in the Computer Laboratory at the University of Cambridge. The group is also part of a larger group at the Computer Lab known as PLS, or Programming, Logic, and Semantics.

This blog is authored by members of the CPRG and contains posts related to the research of the CPRG, as well as interesting, relevant research from elsewhere. This includes research relating to: programming language design, compilers, interpreters, virtual machines, portable target codes, program analysis, program transformations, and optimisation. Other related topics include: fast typical-case solutions to NP problems, algebraic manipulation by computer, and compiler/hardware trade-offs.