Complexity (30 January – 3 February)
Talk list
Lectures

Martin Hofmann (LMU, Munich, Germany),
Pure pointer programs (implicit computational complexity) with an abstract datatype of pointers.
Pointer programs are a model of structured computation within LOGSPACE (logarithmic space on a Turing machine). They capture the common description of LOGSPACE algorithms as programs that take as input some structured data, e.g. a graph, and that store in memory only a constant number of pointers to the input, e.g. to the graph nodes.
We define a formalised language for pointer programs (Purple) and show that some LOGSPACE algorithms can be programmed in Purple while others cannot (e.g. reachability in undirected graphs). This yields a better understanding of the structure of LOGSPACE and also sheds new light on finite model theory; indeed, since formulas in deterministic transitive closure logic (DTC) can be evaluated in Purple, it can be deduced that reachability in undirected graphs cannot be defined by a DTC formula, which was hitherto unknown.
This also, somewhat trivially, separates Purple from PTIME. In order to get a more meaningful such separation we would like to strengthen Purple while still remaining strictly below PTIME. Possible such extensions are nondeterminism, iterators in the style of Java, and various recursion patterns.
The course will give an overview of Purple and related systems and results, in particular Graph Automata and DTC logic, and present some of the extensions to Purple that we are currently investigating.
This is joint work with Ulrich Schoepp and Ramyaa Ramyaa.
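As a concrete illustration (our own, not taken from the course) of the style of algorithm such pointer programs capture, Floyd's cycle detection on a functional graph uses only two pointers into the read-only input and no other unbounded storage, which is exactly the constant-pointer, LOGSPACE discipline described above:

```python
# Illustrative sketch: a "pointer program" in the informal sense above.
# The input (a functional graph: every node has one out-edge) is read-only,
# and the algorithm keeps only a constant number of pointers (node indices),
# so it runs in logarithmic space.

def cycle_entry(succ, start):
    """Return the first node of the cycle reached from `start` in the
    functional graph `succ`, where succ[v] is the unique successor of v."""
    slow = fast = start              # two pointers into the input
    while True:
        slow = succ[slow]            # advance one step
        fast = succ[succ[fast]]      # advance two steps
        if slow == fast:
            break                    # the pointers met inside the cycle
    slow = start                     # reset one pointer to the start;
    while slow != fast:              # they now meet at the cycle entry
        slow = succ[slow]
        fast = succ[fast]
    return slow
```

The function and variable names are ours; the point is only that the whole computation is carried by a constant number of pointers to input nodes.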
 Tuesday 31, 9:00 — part 1
 Wednesday 1, 9:00 — part 2

Yiannis N. Moschovakis (UCLA, USA),
Relative complexity in arithmetic and algebra.
The main aim of these lectures is to show how ideas from the study of recursive programs and algorithms can be used to derive lower complexity bounds for mathematical problems which are robust (with respect to the choice of computation model) and plausibly "absolute", i.e., they restrict "all algorithms". An alternative title for these lectures would be "Recursion and Complexity".
There will be four 40-minute lectures, as follows:
 Recursive (McCarthy) programs. Mostly well-known introductory material. One possibly novel idea is a new approach to the foundational problem of justifying the Church-Turing Thesis, which comes from examining the connection between recursion and computation.
 Uniform processes. A simple, axiomatic approach to the theory of algorithms from specified primitives in the style of "abstract model theory". Uniform processes capture the "uniformity" of algorithms (that they apply "the same procedure" to all inputs) but not their effectiveness. They carry natural complexity measures and support a simple "Homomorphism Test" which can be used to derive absolute lower bounds for algorithms from specified primitives. This is the main, new material in these lectures.
 Lower bounds in arithmetic. Strong versions of results obtained jointly with Lou van den Dries, mostly about the complexity of coprimeness from various primitives and relative to various computation models.
 Lower bounds in algebra. Strong versions of results on "0-testing" for polynomials over various fields, mostly due to Peter Buergisser (with others) for algebraic decision trees.
 Monday 30, 11:00 — part 1
 Tuesday 31, 11:00 — part 2

Stefan Szeider (TU Wien, Austria),
Parameterized complexity.
Parameterized Complexity is a new theoretical framework for the analysis and solution of hard computational problems. In virtually every conceivable context we know more about the problem input than just its size in bytes. The key idea of parameterized complexity is to represent this additional information in terms of a parameter, and to study computational problems in a two-dimensional setting formed by the input size and the parameter. This setting gives rise to a new theory of algorithms and complexity that allows a more fine-grained complexity analysis than the classical one-dimensional setting. Central to the theory is the notion of fixed-parameter tractability, which relaxes the classical notion of tractability by admitting algorithms whose runtime is exponential, but only in terms of the parameter of the problem instance. In recent years, ideas from parameterized complexity theory have found their way into various areas of computer science, such as artificial intelligence, database theory, computational logic, computational social choice, computational geometry, and computational biology.
In the first part of this tutorial we will discuss algorithmic methods for establishing fixed-parameter tractability, including the method of bounded search trees, reductions to a problem kernel, and algorithmic meta-theorems. In the second part we will discuss the main concepts of parameterized intractability, which are analogous to the theory of NP-completeness and allow one to provide strong evidence that a parameterized problem is not fixed-parameter tractable.
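The bounded search tree method can be illustrated by a standard textbook example (ours, not necessarily the tutorial's): Vertex Cover parameterized by the solution size k. For any edge, at least one endpoint must be in the cover, so branching on the two endpoints gives a search tree of depth at most k, hence a runtime of roughly 2^k times a polynomial in the input size:

```python
# Bounded search tree for Vertex Cover, parameterized by k.
# Pick any remaining edge (u, v): one of u, v must be in the cover,
# so branch on both choices. The recursion depth is at most k, giving
# at most 2^k leaves -- a fixed-parameter tractable algorithm in k.

def vertex_cover_le_k(edges, k):
    """Decide whether the graph (given as a list of edge pairs) has a
    vertex cover of size at most k."""
    if not edges:
        return True            # every edge is covered
    if k == 0:
        return False           # edges remain but no budget left
    u, v = edges[0]
    return (vertex_cover_le_k([e for e in edges if u not in e], k - 1)
            or vertex_cover_le_k([e for e in edges if v not in e], k - 1))
```

On a triangle, for example, the algorithm rejects k = 1 and accepts k = 2.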
 Thursday 2, 11:00 — part 1
 Friday 3, 9:00 — part 2

Heribert Vollmer (Leibniz Universität, Hannover, Germany),
Circuit complexity.
In these introductory lectures we will cover two topics from the area of circuit complexity. In the first lecture we will talk about arithmetic circuits of small depth. We will mainly concentrate on the classes #NC1 and GapNC1 and show how they provide useful characterizations of counting complexity classes, how they capture the computational complexity of some problems in linear algebra, and how they shed new light on the relation between logarithmic-depth circuits and logarithmic-space Turing machines. In the second lecture we will turn to the area of proof complexity and use very small NC0 circuit families as proof checkers. Alternatively, one might say that we use NC0 circuit families to enumerate languages (allowing repetitions). We will show how, on the one hand, even NP-complete languages can be enumerated in this way, but, on the other hand, some very simple languages lack this property.
 Thursday 2, 9:00 — Arithmetic circuits of small depth
 Friday 3, 11:00 — Proof complexity
Invited talks

Emmanuel Hainry (Nancy),
Computable Analysis: Computability and complexity over the reals.
Computing over continuous domains, in particular over the reals, is quite important, for example to simulate physical, biological and mathematical phenomena. Various models and various machines for computing on such domains exist, but there is no such thing as a Church-Turing thesis for computing on the reals: some models stay within the Turing-computable class, while others use reals to answer the halting problem. We will concentrate on one model, Recursive Analysis, which is well accepted and has a long history: it was already present in Turing's 1936 paper and even hinted at by Borel in 1912.
We will present the recursive analysis model, including some explanations on why some choices were made and introduce fundamental results on computability in recursive analysis. We will also show recent results on the characterization of classes of computable real functions.
Then, we will enter the complexity field. What does complexity mean when the size of the input is infinite and the computation is non-terminating? We will answer this question, present basic tools for analysing the complexity of real functions, and describe a framework that allows us to translate characterizations of discrete complexity classes into characterizations of the analogous real complexity classes; we use it to give a characterization of the class of real functions computable in polynomial time.
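As a rough sketch of the model (ours, not material from the talk): in recursive analysis a real x is computable when a machine can output, for each n, a rational within 2^-n of x, and operations on reals must account for how much precision they request from their arguments. The function names below are our own illustrative choices:

```python
from fractions import Fraction

def sqrt2(n):
    """A computable real as an approximation function: returns a rational q
    with |q - sqrt(2)| <= 2^-n, computed by bisection."""
    lo, hi = Fraction(1), Fraction(2)          # invariant: lo^2 <= 2 <= hi^2
    while hi - lo > Fraction(1, 2 ** n):
        mid = (lo + hi) / 2
        if mid * mid <= 2:
            lo = mid
        else:
            hi = mid
    return lo

def add(x, y):
    """Sum of two computable reals: to reach precision 2^-n, query each
    argument at precision 2^-(n+1), so the two errors sum to at most 2^-n."""
    return lambda n: x(n + 1) + y(n + 1)
```

The precision bookkeeping in `add` is the discrete shadow of the complexity questions the talk addresses: cost is measured as a function of the output precision n.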
 Tuesday 31, 14:30

Neil Jones (Copenhagen),
Alan Turing and 75 years of research in models of computation.
From a programming perspective, Alan Turing’s epochal 1936 paper on computable functions introduced several new concepts, including what are today known as self-interpreters and programs-as-data, and originated a great many now-common programming techniques.
We begin by reviewing Turing’s contribution from a programming perspective, and then systematise and mention some of the many ways that later developments in models of computation (MOCs) have interacted with computability theory and programming language research. Next, we describe the “blob” MOC: a recent stored-program computational model without pointers. Novelties of the blob model: programs are truly first-class citizens, capable of being automatically executed, compiled or interpreted. The model is Turing complete in a strong sense: a universal interpretation algorithm exists, able to run any program in a natural way and without arcane data encodings. The model appears closer to being physically realisable than earlier computation models. In part this owes to strong finiteness due to early binding, and to a strong adjacency property: the active instruction is always adjacent to the piece of data on which it operates. Next, some of the best-known among the numerous existing MOCs are overviewed and classified by qualitative rather than quantitative features, paying attention to two factors of prime importance to programmability and physical realisability: finiteness (and with respect to what), and binding times (of what to what, at which point in a computation’s time). We attempt to establish a list of traits an “ideal” MOC should possess. Finally, we describe how the blob model differs from an “ideal” MOC, and identify some natural next steps towards achieving such a model.
Keywords: programming, recursion theory, models of computation
 Wednesday 1, 11:00

Virgile Mogbil (LIPN, Paris Nord),
Parallel computation with boolean proof nets.
 Thursday 2, 14:30
Contributed talks

Clément Aubert (LIPN, Univ. Paris 13),
Proof circuits and other parallel models of computation.
Proof circuits [Aubert, 11] are a clear and intuitive presentation of the Boolean proof nets [Terui, 04] in a uniform framework [Mogbil-Rahli, 07]: we define pieces as sets of links and edges of an unbounded variant of Multiplicative Linear Logic representing Boolean constants, n-ary disjunctions and conjunctions, negation, and mechanisms such as deletion and duplication. Those pieces may be "plugged" together to obtain proof circuits: MLLu uniform Boolean proof nets whose size and depth are implicitly bounded and whose parallel normalization matches up with evaluation in Boolean circuits. This light presentation allows sublogarithmic translation and simulation between Boolean circuits and proof circuits, lightens the size of the latter and preserves all the good properties concerning complexity. We conclude by giving the first hints toward a full correspondence between proof circuits and alternating Turing machines, enlarging the correspondence between parallel models of computation.
 Thursday 2, 16:00

Reinhard Kahle (Universidade Nova de Lisboa),
Computational Complexity and Applicative Theories (joint work with Isabel Oitavem).
By work of Strahm and Cantini, it was already shown that Applicative Theories, the first-order part of Feferman's system of Explicit Mathematics, provide a very handy formal framework to characterize classes of computational complexity. In this talk, we present an applicative theory for FPH and its levels. The presentation will also include some general considerations concerning the setup of induction principles corresponding to recursion principles for complexity classes.
 Monday 30, 16:30

Martin Lackner (Vienna University of Technology),
Fixed-Parameter Algorithms for Finding Minimal Models (joint work with Andreas Pfandler).
Computing minimal models is an important task in AI and Reasoning that appears in formalisms such as circumscription, diagnosis and answer set programming. Deciding whether there is a minimal model containing a given variable is known to be Sigma_2^P-complete. In this talk I present a study of this problem from the viewpoint of parameterized complexity theory that has been undertaken together with Andreas Pfandler. We performed an extensive complexity analysis of this problem with respect to eleven parameters. We identified tractable fragments based on combinations of these parameters by giving several fixed-parameter algorithms. Furthermore, for the remaining combinations we showed parameterized hardness results and thus proved that no further fixed-parameter algorithms exist for these parameters (under usual complexity-theoretic assumptions). In particular, we proved W[2]-completeness when parameterizing by the maximum cardinality of the model. This answered an open question posed in (Gottlob, Scarcello, and Sideri 2002).
 Friday 3, 14:00

Ugo Dal Lago (University of Bologna),
Higher-order Interpretations and Program Complexity (joint work with Patrick Baillot).
Polynomial interpretations and their generalizations like quasi-interpretations have been used in the setting of first-order functional languages to design criteria ensuring statically some complexity bounds on programs. This fits in the area of implicit computational complexity, which aims at giving machine-free characterizations of complexity classes. Here we extend this approach to the higher-order setting. For that we consider the notion of simply typed term rewriting systems, we define higher-order polynomial interpretations (HOPI) for them and give a criterion based on HOPIs to ensure that a program can be executed in polynomial time. In order to obtain a criterion which is flexible enough to validate some interesting programs using higher-order primitives, we introduce a notion of polynomial quasi-interpretations, coupled with a simple termination criterion.
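For background, here is a minimal first-order example of a polynomial interpretation (our illustration; the talk concerns the higher-order generalization). Each function symbol is interpreted as a monotone polynomial, and if every rewrite rule strictly decreases the interpretation, then the interpretation of a term bounds the number of rewrite steps from it:

```python
# Illustrative first-order polynomial interpretation for unary addition:
#     add(0, y)    -> y
#     add(s(x), y) -> s(add(x, y))
# Interpretation: [0] = 1, [s](a) = a + 1, [add](a, b) = 2*a + b.
# Both rules strictly decrease the interpretation, so [t] bounds the
# number of rewrite steps from any term t.

def interp_zero():        return 1
def interp_s(a):          return a + 1
def interp_add(a, b):     return 2 * a + b

# Rule 1 decreases: [add(0, y)] = 2 + b  >  b = [y]
assert all(interp_add(interp_zero(), b) > b for b in range(1, 50))
# Rule 2 decreases: [add(s(x), y)] = 2a + b + 2  >  2a + b + 1 = [s(add(x, y))]
assert all(interp_add(interp_s(a), b) > interp_s(interp_add(a, b))
           for a in range(1, 50) for b in range(1, 50))
```

Since the interpretation of a term is polynomial in its size, this certifies a polynomial bound on the length of reductions, which is the kind of static guarantee the talk lifts to higher-order programs.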
 Monday 30, 15:00

Isabel Oitavem (FCTUNL and CMAFUL),
NP with tier 0 pointers.
We give a characterization of NP using a recursion scheme with tier 0 pointers. This extends the Bellantoni-Cook characterization of Ptime and, simultaneously, it is a restriction of a recursion-theoretic characterization of Pspace.
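For context, a sketch of the Bellantoni-Cook scheme that this result extends (the Python encoding is ours and purely illustrative). Arguments split into normal (tier 1, may drive recursion) and safe (tier 0, data only); a recursive value may be passed only in safe position, which is what confines everything definable to polynomial time:

```python
# Safe recursion on notation (Bellantoni-Cook), as a generic combinator:
#     f(0; a)      = g(a)
#     f(2x + i; a) = h_i(x; a, f(x; a))    for i in {0, 1}
# The recursion variable x loses one bit per step (so the depth is |x|),
# and the recursive value f(x; a) occurs only in safe position, so it can
# never drive a further recursion -- the key to the Ptime bound.

def safe_rec(g, h0, h1):
    def f(x, a):
        if x == 0:
            return g(a)
        h = h1 if x % 2 else h0          # branch on the lowest bit of x
        return h(x // 2, a, f(x // 2, a))
    return f

# Example: binary length |x|, defined by safe recursion.
length = safe_rec(lambda a: 0,
                  lambda x, a, r: r + 1,
                  lambda x, a, r: r + 1)
```

For instance, length(5, None) evaluates to 3, the number of bits of 5.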
 Monday 30, 16:00

Matthieu Perrinel (ENS Lyon),
Strong bound for light linear logic by levels.
The light linear logic by levels, introduced by Baillot and Mazza, is an extension of Girard's light linear logic. The aim of these logics is to characterize some complexity classes like FPtime and elementary time in the proofs-as-programs approach. Weak bounds (i.e. bounds for specific reduction strategies) were proven for two versions of light linear logic by levels (ml4 and ml4_0). However, the corresponding strategies are complex. Thus, it was difficult to transform those logics into true programming languages with explicit reduction strategy. Extending Dal Lago's context semantics to light linear logic by levels, we show that ml4 and ml4_0 both enjoy a strong polynomial bound (valid for any reduction strategy).
 Tuesday 31, 17:00

Johannes Schmidt (Université de Marseille),
On the Parameterized Complexity of Default Logic and Autoepistemic Logic.
(joint work with A. Meier, M. Thomas, and H. Vollmer). We investigate the application of Courcelle's Theorem and the logspace version of Elberfeld et al. in the context of the implication problem for propositional sets of formulae, the extension existence problem for default logic, as well as the expansion existence problem for autoepistemic logic, and obtain fixed-parameter time- and space-efficient algorithms for these problems. On the other hand, we exhibit, for each of the above problems, families of instances of a very simple structure that, for a wide range of different parameterizations, do not have efficient fixed-parameter algorithms (even in the sense of the large class XPnu), unless P=NP.
 Friday 3, 14:30

Marco Solieri (Université Paris 13),
Deep into optimality: complexity and correctness of the shared implementation of bounded logics.
(joint work with Stefano Guerrini and Thomas Leventis) Sharing graphs are an implementation both of Lévy-optimal reduction on the lambda calculus and of linear logic proof nets. Since the proof of the inadequacy of Lévy families as a cost model for the lambda calculus, studies on the complexity of shared reduction have remained far from complete. Indeed, the only comparative result considers the restricted case of bounded logics. Because of their well-known complexity, it was possible to provide semantically, via the geometry of interaction, a corresponding bound on their shared implementation. In a similarly restricted case, where the abstract algorithm is sufficient, we present a complete and stronger comparison between the cost of cut elimination on proof nets and the cost of the corresponding shared reduction. Then, for the first time, the expected benefits of sharing and of avoiding duplication in the reduction are made explicit. The proof exploits an intermediate graph rewriting system, which permits us to give a precise account of the complexity of the former and to establish a simulation of the latter. Such a simulation implies the main complexity result for the shared implementation, as well as its correctness.
 Tuesday 31, 16:30

Pierre Valarcher (Université Paris Est Créteil),
From intensional behavior to algorithmic completeness.
I propose to present problems concerning the algorithmic expressiveness of some programming languages (imperative and functional) that compute extensionally the same set of functions.
 Monday 30, 14:30