The next Scottish Programming Languages Seminar will be held on Wednesday 14th January 2009 at the Informatics Forum, University of Edinburgh. The organisers are Bob Atkey and Sam Lindley.
The seminar will be held at the new Informatics Forum, 10 Crichton Street, Edinburgh, EH8 9AB. A map showing the Forum is available. The entrance is opposite that of Appleton Tower. Once inside the building, there will be signs directing you to room 4.31, where the talks will be held, and room 4.40 “Mini Forum 2”, where lunch and coffee breaks will be held.
A buffet lunch will be provided before the meeting at 12 noon. The meeting is expected to finish at around 5.30pm. We will probably head to a local pub afterwards.
If you intend to attend, please email one of the organisers so that we can get an idea of numbers.
Update (2009-01-12): different title and abstract for Ezra's talk.
Update (2009-01-13): Dominic Mulligan taking the place of Jamie Gabbay.
1200  Lunch

Session 1

1300
A programmer's interface to a relational database is rarely comfortable.
But I will show you how a compiler can translate ordinary functional
program code into SQL queries. Somewhat surprisingly, the technique
works for any appropriately-typed pure functional expression, albeit for
a strong notion of "purity." Thus, unlike in Hollywood where a
screenwriter can never be sure a movie sequel will be popular, here you
can be sure that your SQL--written in your own native programming
language--will succeed (in translating to SQL).
1330
Building an optimising compiler is a difficult and time consuming
process. Moreover this must be repeated for each new generation of a
microprocessor. When changes occur in the microarchitecture the compiler
must be retuned to optimise specifically for that new target. Several
releases of a compiler might be necessary to fully exploit a processor's
performance potential.
We address this challenge by developing a portable optimising compiler. Our approach makes use of machine learning to automatically learn the best optimisations to apply for any new program on a new microarchitectural configuration.
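To give a loose sketch of the flavour of such an approach (my own illustration; the feature representation and learner here are invented, not those used in the project), one can map a new program's features to the optimisation settings that worked best on the most similar program seen during training:

    import Data.List (minimumBy)
    import Data.Ord (comparing)

    -- Program features (e.g. loop counts, arithmetic density) and the
    -- optimisation flags found to work well for a training program.
    type Features = [Double]
    type Flags    = [String]

    distanceSq :: Features -> Features -> Double
    distanceSq xs ys = sum [ (x - y) * (x - y) | (x, y) <- zip xs ys ]

    -- 1-nearest-neighbour prediction: reuse the flags of the closest
    -- training program (assumes a non-empty training set).
    predictFlags :: [(Features, Flags)] -> Features -> Flags
    predictFlags trained newProg =
      snd (minimumBy (comparing (distanceSq newProg . fst)) trained)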
1400  Coffee break

Session 2

1430
Nominal terms are a syntax which extends first-order term syntax with facilities for describing and manipulating binding. They have served as the basis of a unification algorithm, a rewriting framework, and a universal algebra. Nominal terms retain the flavour of first-order terms (likewise for the unification algorithm and the notions of rewriting and algebra), but with binders.
In this talk, I will describe permissive nominal terms. These
generalise nominal terms, with interesting effects on behaviour and
meta-theory. I will show what permissive nominal terms look like, how
they behave, and what I think they are good for.
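For readers unfamiliar with the syntax, here is a small sketch of my own (in Haskell, not notation from the talk) of what nominal terms add to first-order terms:

    -- A rough model of nominal term syntax: object-level atoms, atom
    -- abstraction, unknowns carrying a suspended permutation of atoms,
    -- and function symbols applied to lists of terms.
    type Atom    = String
    type Unknown = String
    type Perm    = [(Atom, Atom)]       -- a permutation given by swappings

    data Term
      = TAtom    Atom                   -- a
      | TAbs     Atom Term              -- [a]t, binding atom a in t
      | TUnknown Perm Unknown           -- pi . X, a moderated unknown
      | TApp     String [Term]          -- f(t1, ..., tn)
      deriving Show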
More information, including papers and slides and videos of past talks, is available on my homepage.
(Abstract originally written for a talk by Jamie Gabbay.)
1500
Many predominantly lazy languages now incorporate strictness-enforcing primitives, for example a strict let or sequential composition seq. The primitives are introduced for a variety of reasons, e.g., to gain time or space efficiencies or to control parallel evaluation. This talk develops a theory for proving equivalences between programs in such languages with selective strictness, specifically for a core lazy functional language with seq. The research contributions of the full paper are as follows:
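For readers who have not met seq: it evaluates its first argument to weak head normal form before returning its second. A minimal illustration of my own (not material from the talk) of using it to enforce strictness:

    -- seq forces its first argument before returning its second, so the
    -- accumulator below is evaluated at each step instead of building up
    -- a chain of suspended additions.
    sumStrict :: [Int] -> Int
    sumStrict = go 0
      where
        go acc []       = acc
        go acc (x : xs) = let acc' = acc + x in acc' `seq` go acc' xs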
1530
Hadoop (aka map/reduce) is an open-source framework for distributing
computation across a large cluster of machines. By decomposing
computations into idempotent 'map' and 'reduce' steps, the programmer is
largely freed from the vagaries of distributed systems. Hadoop presents
the programmer with the abstraction of a single massively scalable,
smart and reliable computer. I will present an overview of how Hadoop
achieves these goals, some real world examples of Hadoop usage, and also
look at some of the exciting new developments in this area.
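To show the shape of the programming model (a sketch in Haskell rather than Hadoop's Java API; the word-count example and names are my own), a job is written as a map step that emits key/value pairs and a reduce step that combines the values grouped under each key:

    import qualified Data.Map.Strict as M

    -- Word count: mapStep emits (word, 1) pairs for each input line, the
    -- framework groups pairs by key, and reduceStep sums each group.
    mapStep :: String -> [(String, Int)]
    mapStep line = [ (w, 1) | w <- words line ]

    reduceStep :: String -> [Int] -> Int
    reduceStep _word counts = sum counts

    wordCount :: [String] -> M.Map String Int
    wordCount input =
      M.mapWithKey reduceStep                   -- reduce each group
        (M.fromListWith (++)                    -- group emitted pairs by key
          [ (k, [v]) | line <- input, (k, v) <- mapStep line ])

In Hadoop itself the grouping, distribution across the cluster, and re-execution of failed steps are handled by the framework, which is what frees the programmer from the distributed-systems details mentioned above.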
1600  Coffee break

Session 3

1630
My talk will be an introduction to our project to improve the computing power of high-level programming languages. Our plan is to develop data-parallel and partitioning techniques that harness the promise of multicore designs. We are basing our approach on two key points:
1700
I will describe a system developed at Codeplay for offloading parts of
application source code to run on accelerator processors with scratchpad
memories (e.g. the SPE cores in the Cell BE processor). The idea is
that code enclosed in a "sievethread" block is compiled to run with
standard semantics on an accelerator. Versions of all functions
associated with call-graphs rooted in a sievethread block are also
compiled for offloading to the accelerator.
The advantage of such a system is that it allows heterogeneous multi-core processors to be programmed in C++ using a paradigm similar to traditional POSIX threads, and allows homogeneous and heterogeneous versions of an application to coexist in one source base.

The research challenge is in the duplication of methods. This raises many interesting issues, including: separation of pointers to data in host memory from pointers to data in local store, duplication and offloading of methods called via function pointers, and complex C++ features such as virtual methods.

Most of the talk will focus on these issues: I will describe the solutions we have devised and implemented in a version of the system for a major semiconductor company. I expect members of the SPLS community have been thinking about how to compile high-level programs for heterogeneous multi-core processors, and am keen to get their feedback on the topic. In particular, I am interested in the extent to which analogous challenges arise when compiling functional programs for heterogeneous multi-core platforms.