LICS 2007: abstracts of accepted papers
- Invited lectures (in order of appearance)
- Regular papers (in order of appearance)
- Short presentations
LICS invited lectures
- The Kepler Conjecture and the Flyspeck Project
- Thomas C. Hales
-
Many elementary problems in geometry arise as part of the proof of
the Kepler conjecture on sphere packings. In the original proof, most of
these problems were solved by hand. This article investigates the
methods that were used in the original proof and describes a number of
other methods that might be used to automate the proofs of these
problems. A companion article presents the collection of elementary
problems in geometry for which automated proofs are sought. This article
is a contribution to the Flyspeck project, which aims to give a complete
formal proof of the Kepler conjecture.
- Reflections on Finite Model Theory
- Phokion G. Kolaitis
-
Advances in finite model theory have appeared in LICS proceedings
since the very beginning of the LICS Symposium. The goal of this paper
is to reflect on finite model theory by highlighting some of its
successes, examining obstacles that were encountered, and discussing
some open problems that have stubbornly resisted solution.
Joint LICS/ICALP invited lectures
- The Algebraic Theory of Effects
- Gordon Plotkin
-
- Highly Efficient Secrecy-Preserving Proofs of Correctness of Computations and Applications
- Michael O. Rabin, Rocco A. Servedio and Christopher Thorpe
-
We present a highly efficient method for proving correctness of
computations while preserving secrecy of the input values. This is done
in an Evaluator-Prover model which can also be realized by a secure
processor. We describe an application to secure auctions.
Joint LICS/LC invited lectures
- Combinatorics of Proofs
- Martin Hyland
-
Ideally, interpretations of proofs should exhibit some essential
combinatorial features in an interesting and appealing way. As a case
study, one can consider the notion of innocent strategy which is the
basis for a game semantical interpretation of proofs and programmes.
Some combinatorial content of this notion is sketched in the joint LICS
paper accompanying this talk, whose abstract reads as follows.
We show how to construct the category of games and innocent
strategies from a more primitive category of games. On that category we
define a comonad and monad with the former distributing over the latter.
Innocent strategies are the maps in the induced two-sided Kleisli
category. Thus the problematic composition of innocent strategies
reflects the use of the distributive law. The composition of simple
strategies, and the combinatorics of pointers used to give the comonad
and monad are themselves described in categorical terms. The notions of
view and of legal play arise naturally in the explanation of the
distributivity. The category-theoretic perspective provides a clear
discipline for the necessary combinatorics.
There are other instances of a kind of categorical combinatorics of
proofs, but in this talk I shall restrict myself to this one
instance.
- Skolemization in Constructive Theories
- Rosalie Iemhoff
-
It has long been known that Skolemization is sound but not complete for
intuitionistic logic. We will show that by slightly extending the
expressive power of the logic one can define a translation that removes
strong quantifiers from predicate formulas and that is related but not
equal to Skolemization. Since the extended logic is constructive, the
translation can be considered as an alternative to Skolemization for
constructive settings. The result easily implies an analogue of
Herbrand's theorem. We will apply the method to various constructive
theories and compare it to other Skolemization methods and related
translations like the Dialectica Interpretation.
- Non-well-founded Proofs
- Alex Simpson
-
I will discuss various situations, arising in computer science,
mathematics and logic, in which one is naturally led to consider
associated proof systems involving interesting forms of non-well-founded
proof.
- Higher-Order Matching, Games and Automata
- Colin Stirling
-
We describe a particular case where methods such as model-checking as
used in verification are transferred to simply typed lambda calculus.
Higher-order matching is the following problem: given t = u,
where t and u are terms of the simply typed lambda-calculus and
u is closed, is there a substitution θ such that tθ
and u have the same normal form with respect to
beta-eta-equality? In other words, can t be pattern-matched to u? In the
talk we consider the question: can we characterize the set of all
solution terms to a matching problem? We provide an automata-theoretic
account that is relative to resource: given a matching problem and a
finite set of variables and constants, the (possibly infinite) set of
terms that are built from those components and that solve the problem
is regular. The characterization uses standard bottom-up tree automata.
However, the technical proof uses a game-theoretic characterization of
matching.
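A minimal illustration (not taken from the abstract itself): for the
matching problem x c = c, where c : o is a constant and x : o → o is the
unknown, the solution terms built from c alone are, up to beta-eta-equality,
θ(x) = λz.z and θ(x) = λz.c, a finite and hence regular set of terms, in
line with the automata-theoretic characterization above.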
- Can Logic Tame Systems Programs?
- Cristiano Calcagno
-
We report on our experience in designing and implementing tools for
automatic reasoning about safety of systems programs using separation
logic. We highlight some of the fundamental obstacles that need to be
overcome, such as the complexity of data structures and scalability of
the methods, on the path to realistic systems programs.
- Infinite sets that admit exhaustive
search I
- Martín Escardó
- (The second talk: Infinite sets that admit exhaustive
search II is included in session Topology
and Computable Mathematics.)
-
Perhaps surprisingly, there are infinite sets that admit mechanical
exhaustive search in finite time. We investigate three related questions:
(1) What kinds of infinite sets admit exhaustive search? (2) How
do we systematically build such sets? (3) How fast can exhaustive
search over infinite sets be performed?
We give answers to them in the realm of Kleene-Kreisel higher-type
computation: (1) involves the topological notion of compactness,
(2) amounts
to the usual closure properties of compact sets, including the Tychonoff
theorem, (3) provides some fast algorithms and a conjecture.
These two talks include my contributed LICS paper, but go beyond it in two
respects: a general introduction to the role of topology in computation is
given, and a few new results are included, such as an Arzelà-Ascoli type
characterization of exhaustible sets.
Type Theory
- Dependent Types with Universes via Normalization-by-Evaluation
- Andreas Abel, Thierry Coquand and Peter Dybjer
-
A dependently typed language with universes and
conversion-as-judgment is introduced. The main technical devices used to
establish its meta-theoretical properties are a
normalization-by-evaluation algorithm, a PER model of the semantics and
a logical relation between syntax and semantics. Central properties
like the injectivity of the dependent function space constructor are proven
with these semantical tools. The method extends to systems with
singleton types and first-class modules, thus showing decidability of
equality and type-checking for a combination of universes and singleton
types for the first time.
- Strong Normalization as Safe Interaction
- Colin Riba
-
The elimination rule of union types (denoted by (UE)) can break
strong normalization (i.e. is unsafe) in the presence of non-determinism.
This is quite disturbing, since union types are often required in the
presence of non-deterministic computations. Surprisingly, there are
also confluent systems for which this rule is unsafe. It appears that
studying the safety of (UE) amounts to characterizing, within a term,
the safe interactions between some of its subterms.
In this paper, we study the safety of the elimination rule of union
types for an extension of the lambda calculus with simple rewrite rules.
We prove that the union and intersection types discipline is complete
w.r.t. strong normalization. This allows us to show that (UE) is safe iff
an interpretation of types based on biorthogonals is sound for it.
We then provide two syntactic sufficient conditions for the safety of
(UE). The first arises from the closure under union of reducibility
candidates, while the second arises from the closure under union of a
biorthogonality relation. We show that closure under union of reducibility
candidates implies completeness of intersection type assignment.
Finally, we discuss an alternative biorthogonality relation, based on
the observation of the least reducibility candidate.
- A Dependent Set Theory
- Wojciech Moczydłowski
-
Set theories are traditionally based on first-order logic. We
show that in a constructive setting, basing a set theory on a dependent
logic yields many benefits. To this end, we introduce a dependent
impredicative constructive set theory which we call
IZF_D. Using realizability, we prove that the
underlying lambda calculus weakly normalizes, thus enabling program
extraction from IZF_D proofs. We also show that
IZF_D can interpret IZF with Collection. By a
well-known result of Friedman, this establishes IZF_D
as a remarkably strong theory, with proof-theoretical power equal to
that of ZFC. We further demonstrate that IZF_D
provides a natural framework to interpret first-order definitions, thus
removing a longstanding barrier to implementing constructive set
theories. Finally, we prove that IZF_D extended with
excluded middle is consistent, thus paving the way to using our
framework in the classical setting.
Security
- A Complete Axiomatization of Knowledge and Cryptography
- Mika Cohen and Mads Dam
-
The combination of first-order epistemic logic and formal
cryptography offers a potentially very powerful framework for security
protocol verification. In this article, we address two main challenges
towards such a combination. First, the expressive power, specifically
the epistemic modality, needs to receive concrete computational
justification. Second, the logic must be shown to be, in some sense,
formally tractable. Addressing the first challenge, we provide a
generalized Kripke semantics that uses permutations on the underlying
domain of cryptographic messages to reflect agents' limited
computational power. Using this approach, we obtain logical
characterizations of important concepts of knowledge in the security
protocol literature, specifically Dolev-Yao style message deduction and
static equivalence. Answering the second challenge, we exhibit an
axiomatization which is sound and complete relative to the underlying
theory of cryptographic terms, and to an omega rule for quantifiers. The
axiomatization uses largely standard axioms and rules from first-order
modal logic. In addition, there are some novel axioms for the
interaction between knowledge and cryptography. To illustrate the
usefulness of the logic we consider protocol examples using mixes, a
Crowds style protocol, and electronic payments. Furthermore, we provide
embedding results for BAN and SVO.
Computational Proof Theory
- Principles of Superdeduction
- Paul Brauner, Clément Houtmann and Claude Kirchner
-
In predicate logic, the proof that a theorem P holds in a theory Th
is typically conducted in natural deduction or in the sequent calculus
using all the information contained in the theory in a uniform way.
Introduced ten years ago, Deduction modulo allows us to make use of the
computational part of the theory Th for true computations modulo which
deductions are performed.
Focussing on the sequent calculus, this paper presents and studies the
dual concept where the theory is used to enrich the deduction system
with new deduction rules in a systematic, correct and complete way. We
call such a new deduction system "superdeduction".
We introduce a proof-term language and a cut-elimination procedure
both based on Christian Urban's work on classical sequent
calculus. Strong normalisation is proven under appropriate and natural
hypotheses, therefore ensuring the consistency of the deduction
system.
The proofs obtained in this new system are much closer to human
intuition and practice. We consequently show how superdeduction along
with deduction modulo can be used to ground the formal foundations of
new extensible proof assistants. We finally present lemuridae, our
current implementation of superdeduction modulo.
- Complete Sequent Calculi for Induction and Infinite Descent
- James Brotherston and Alex Simpson
-
This paper compares two different styles of reasoning with
inductively defined predicates, each of the styles being encapsulated by
a corresponding sequent calculus proof system.
The first system supports traditional proof by induction, with
induction rules formulated as sequent rules for introducing inductively
defined predicates on the left of sequents. We show this system to be
cut-free complete with respect to a natural class of Henkin models; the
eliminability of cut follows as a corollary.
The second system uses infinite (non-well-founded) proofs to
represent arguments by infinite descent. In this system, the left rules
for inductively defined predicates are simple case-split rules, and an
infinitary, global condition on proof trees is required to ensure
soundness. We show this system to be cut-free complete with respect to
standard models, and again infer the eliminability of cut.
The second infinitary system is unsuitable for formal reasoning.
However, it has a natural restriction to proofs given by regular trees,
i.e. to those proofs representable by finite graphs. This restricted
"cyclic" system subsumes the first system for proof by induction. We
conjecture that the two systems are in fact equivalent, i.e., that proof
by induction is equivalent to regular proof by infinite descent.
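To give a schematic sense of the contrast for the simplest inductive
predicate, the natural numbers N (notation indicative only; the precise
rules are those given in the paper): in the first system the left rule for
N is an induction rule of roughly the shape

  Γ ⊢ F(0), Δ    Γ, F(x) ⊢ F(s x), Δ    Γ, F(t) ⊢ Δ
  ---------------------------------------------------
                     Γ, N t ⊢ Δ

with F a chosen induction hypothesis and x fresh, whereas in the second
system this rule is replaced by a simple case split on whether t is zero or
a successor, and soundness is recovered only through the global condition
on infinite proof trees.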
Timed and Stochastic Systems
- Limits of Multi-Discounted Markov Decision Processes
- Hugo Gimbert and Wiesław Zielonka
-
Markov decision processes (MDPs) are natural models of controllable
discrete event systems with stochastic transitions. Performance of MDPs
can be measured in different ways. The controller of a mean-payoff MDP
seeks to optimize average performance, whereas in a discounted MDP a
discount factor is used to give more weight to early steps of the play.
Another example is provided by parity MDPs which are used to encode
logical specifications.
A classical result of game theory establishes that mean-payoff MDPs
are limits of discounted MDPs whose discount factor tends to 0. De
Alfaro et al. considered the more general class of discounted MDPs with
multiple discount factors (multi-discounted MDPs), and established that
parity MDPs are limits of certain sequences of multi-discounted MDPs
whose discount factors converge to 0 one after another.
We introduce the class of priority weighted MDPs, which are a
generalization of both parity MDPs and mean-payoff MDPs. We prove that
priority weighted MDPs admit optimal pure stationary strategies. As a
corollary we show that priority weighted MDPs are limits of
multi-discounted MDPs whose discount factors converge to 0 simultaneously
but with different speeds.
- Game Relations and Metrics
- Luca de Alfaro, Rupak Majumdar, Vishwanath Raman and Marielle Stoelinga
-
We study notions of equivalences and metrics on concurrent game
structures that call two states equivalent if the values obtained by
player 1 for winning objectives in a quantitative fixpoint calculus
at the two states are equal. We study two properties of the relations:
logical characterization and reciprocity (the equivalence of the
player 1 version and the player 2 version of the relations).
We extend alternating bisimulation to games where the players are
allowed randomized strategies and the transition relation may be
probabilistic. We show that the resulting game relations are
incomparable to the (non-probabilistic) alternating relations.
Since game equivalences may be too strict in the presence of
numerical probabilities, we study their metric analogues which associate
a distance to each pair of states such that two states are close in the
metric if the values obtained by player 1 for any winning objective
from these states are close. We study two metrics on games and show
that the two coincide on Markov decision processes (MDPs). For games,
we show the two notions are distinct, and perhaps surprisingly, the
natural extension of the MDP definition does not characterize the
quantitative μ-calculus nor satisfy reciprocity. For the second
formulation, we prove a continuity theorem w.r.t. the quantitative
μ-calculus and show reciprocity holds.
Our results provide compositional reasoning principles for concurrent
games with probabilistic transition relations and randomized
strategies.
- The Cost of Punctuality
- Patricia Bouyer, Nicolas Markey, Joel Ouaknine and James Worrell
-
In an influential paper titled “The Benefits of Relaxing Punctuality”
(JACM 1996), Alur, Feder, and Henzinger introduced Metric Interval
Temporal Logic (MITL) as a fragment of the real-time logic Metric
Temporal Logic (MTL) in which exact or punctual timing constraints are
banned. Their main result showed that model checking and satisfiability
for MITL are both EXPSPACE-complete.
Until recently, it was widely believed that admitting even the
simplest punctual specifications in any linear-time temporal logic would
automatically lead to undecidability. Although this was recently
disproved, until now no punctual fragment of MTL was known to have even
primitive recursive complexity (with certain decidable fragments having
provably non-primitive recursive complexity).
In this paper we identify a subset of MTL that is capable of
expressing a large class of punctual specifications and for which model
checking (although not satisfiability) has no complexity cost over MITL.
Our logic is moreover qualitatively different from MITL in that it can
express properties that are not timed regular. Correspondingly, our
decision procedures do not involve translating formulas into
finite-state automata, but rather into certain kinds of reversal-bounded
Turing machines. Using this translation we show that the model checking
problem for our logic is EXPSPACE-complete, and is even PSPACE-complete
if timing constraints are encoded in unary.
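To make “punctual” concrete (an illustration, not drawn from the paper):
the MTL formula G(p → F_[1,1] q), requiring every p to be followed by a q
exactly one time unit later, uses a singular interval and is therefore not
an MITL formula, whereas G(p → F_[1,2] q), which only asks for q within one
to two time units, is allowed in MITL.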
Verification
- First-order and Temporal Logics for Nested Words
- Rajeev Alur, Marcelo Arenas, Pablo Barcelo, Kousha Etessami, Neil Immerman and Leonid Libkin
-
Nested words are a structured model of execution paths in procedural
programs, reflecting their call and return nesting structure. Finite
nested words also capture the structure of parse trees and other
tree-structured data, such as XML.
We provide new temporal logics for finite and infinite nested words,
which are natural extensions of LTL, and prove that these logics are
first-order expressively-complete. One of them is based on adding a
“within” modality, evaluating a formula on a subword, to a logic CARET
previously studied in the context of verifying properties of recursive
state machines. The other logic is based on the notion of a program path
that combines the linear and nesting structures. For that logic, both
model-checking and satisfiability are shown to be EXPTIME-complete.
Finally, we prove that first-order logic over nested words has the
three-variable property, and we present a temporal logic for nested
words which is complete for the two-variable fragment of
first-order logic.
- A Robust Class of Context-Sensitive Languages
- Salvatore La Torre, Madhusudan Parthasarathy and Gennaro Parlato
-
We define a new class of languages defined by multi-stack automata
that forms a robust subclass of context-sensitive languages, with
decidable emptiness and closure under boolean operations.
This class, called multi-stack visibly pushdown languages (MVPLs), is
defined using multi-stack pushdown automata with two restrictions:
- the pushdown automaton is visible, i.e. the input
letter determines the operation on the stacks, and
- any computation of the machine can be split into k stages,
where in each stage, there is at most one stack that is
popped.
MVPLs are an extension of visibly pushdown languages that captures
non-context-free behaviours and has applications in analyzing
abstractions of multithreaded recursive programs, significantly
enlarging the search space that can be explored for them.
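For example (an illustration consistent with the two restrictions above,
not quoted from the abstract): the language { a^n b^m c^n d^m : n, m ≥ 0 },
a classical non-context-free language, is accepted by a two-stack automaton
in which a pushes on the first stack, b pushes on the second, c pops the
first and d pops the second; every accepting computation then splits into
k = 2 stages, the first popping only the first stack and the second popping
only the second.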
We show that MVPLs are closed under boolean operations, and problems
such as emptiness and inclusion are decidable. We characterize MVPLs
using monadic second-order logic over appropriate structures, and
exhibit a Parikh theorem for them.
- A New Efficient Simulation Equivalence Algorithm
- Francesco Ranzato and Francesco Tapparo
-
It is well known that simulation equivalence is an appropriate
abstraction to be used in model checking because it strongly preserves
ACTL* and provides a better space reduction than bisimulation
equivalence. However, computing simulation equivalence is harder than
computing bisimulation equivalence. A number of algorithms for
computing simulation equivalence exist. Let Σ denote the state
space, → the transition relation and P_sim the partition induced by
simulation equivalence. The algorithms by Henzinger, Henzinger, Kopke
and by Bloom and Paige run in O(|→| |Σ|) time and, as far as
time complexity is concerned, they are the best available algorithms.
However, these algorithms have the drawback of a quadratic space
complexity that is bounded from below by Ω(|Σ|²). The
algorithm by Gentilini, Piazza and Policriti appears to be the best
algorithm when both time and space complexities are taken into account.
Gentilini et al.'s algorithm runs in
O(|P_sim|² |→|) time while the
space complexity is in
O(|P_sim|² + |Σ| log |P_sim|).
We present here a new efficient simulation equivalence algorithm that is
obtained as a modification of Henzinger et al.'s algorithm and whose
correctness is based on some techniques used in recent applications of
abstract interpretation to model checking. Our algorithm runs in
O(|P_sim| |→|) time and
O(|P_sim| |Σ|) space. Thus, while retaining
a space complexity which is lower than quadratic, our algorithm improves
on the best known time bound.
- Infinite State AMC-Model Checking for Cryptographic Protocols
- Detlef Kaehler, Ralf Kuesters and Tomasz Truderung
-
Very little is known about the automatic analysis of
cryptographic protocols for game-theoretic security properties. In this
paper, we therefore study decidability and complexity of the model
checking problem for AMC-formulas over infinite state concurrent game
structures induced by cryptographic protocols and the Dolev-Yao
intruder. We show that the problem is NEXPTIME-complete when making
reasonable assumptions about protocols and for an expressive fragment of
AMC, which contains, for example, all properties formulated by Kremer
and Raskin in fair ATL for contract-signing and non-repudiation
protocols. We also prove that our assumptions on protocols are necessary
to obtain decidability.
- Two-way unary temporal logic over trees
- Mikołaj Bojańczyk
-
We consider a temporal logic TL[EF,F⁻¹] for unranked, unordered
finite trees. The logic has two operators: EF φ, which says
“in some proper descendant φ holds”, and F⁻¹ φ, which
says “in some proper ancestor φ holds”. We present an algorithm for
deciding if a regular language of unranked finite trees can be expressed
in TL[EF,F⁻¹]. The algorithm uses a characterization expressed in terms
of equations in forest algebras.
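A small illustration (not part of the abstract): the formula EF(a ∧ F⁻¹ b)
holds at a node x exactly when some proper descendant y of x is labelled a
and some proper ancestor of y (possibly x itself, or a node between x and y,
or an ancestor of x) is labelled b.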
- Alternation-Free Modal Mu-Calculus for Data Trees
- Marcin Jurdziński and Ranko Lazic
-
An alternation-free modal mu-calculus over data trees is introduced
and studied. A data tree is an unranked ordered tree whose every node
is labelled by a letter from a finite alphabet and an element (“datum”)
from an infinite set. For expressing data-sensitive properties, the
calculus is equipped with freeze quantification. A freeze quantifier
stores in a register the datum labelling the current tree node, which
can then be accessed for equality comparisons deeper in the formula.
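As an indicative example, in the freeze notation used in the related
literature (the paper's own syntax may differ): a formula such as
↓r F(a ∧ ↑r) stores the datum of the current node in register r and asserts
that some node reachable by the forward modalities carries the label a
together with that very same datum.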
The main results in the paper are that, for the fragment with forward
modal operators and one register, satisfiability over finite data trees
is decidable but not primitive recursive, and that for the subfragment
consisting of safety formulae, satisfiability over countable data trees
is 2EXPTIME-complete. The proofs use alternating tree automata which
have registers, and establish correspondences with nondeterministic tree
automata which have faulty counters. Allowing backward modal operators
or two registers causes undecidability.
As consequences of the main results, decidability and
3EXPTIME-membership for two data-sensitive fragments of the XPath
query language are obtained.
The paper shows that, for reasoning about data trees, the forward
fragment of the calculus with one register is a powerful alternative to
a recently proposed first-order logic with two variables.
- A Contraction Method to Decide MSO Theories of Trees
- Angelo Montanari and Gabriele Puppis
-
In this paper we generalize the contraction method, originally
proposed by Elgot and Rabin and later extended by Carton and Thomas,
from labeled linear orderings to colored deterministic trees. First, we
reduce the model checking problem for MSO logic over deterministic
colored trees to the acceptance problem for tree automata. Then, we
exploit a suitable notion of indistinguishability of trees with respect
to tree automata to reduce a number of instances of the latter problem
to decidable instances over regular trees. We prove that such a method
works effectively for a large class of trees, which is closed under
noticeable operations and includes all the deterministic trees of the
Caucal hierarchy obtained via unfoldings and inverse finite mappings as
well as several trees outside such a hierarchy.
Constraints
- Symmetric Datalog and Constraint Satisfaction Problems in Logspace
- Laszlo Egri, Benoit Larose and Pascal Tesson
-
We introduce symmetric Datalog, a syntactic restriction of
linear Datalog and show that the expressive power of symmetric Datalog
is exactly that of restricted symmetric Krom SNP. The deep result of
Reingold on the complexity of undirected connectivity suffices to show
that symmetric Datalog queries can always be evaluated in logarithmic
space. We show that for a number of constraint languages Γ, the
complement of the constraint satisfaction problem CSP(Γ) can be
expressed in symmetric Datalog. In particular, we show that if
CSP(Γ) is first-order definable and Λ is a finite subset of
the relational clone generated by Γ then ¬ CSP(Λ) is
definable in symmetric Datalog. We also show that over the two-element
domain and under a standard complexity-theoretic assumption,
expressibility of ¬ CSP(Γ) in symmetric Datalog corresponds
exactly to the class of CSPs solvable in logarithmic space. Finally,
we describe a fairly general subclass of implicational (or 0/1/all)
constraints for which the complement of the corresponding CSP is also
definable in symmetric Datalog. These all provide preliminary evidence
that symmetric Datalog might provide a unifying explanation for
families of CSPs lying in logarithmic space.
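For a flavour of the restriction (a sketch only; the precise syntactic
definition of symmetric Datalog is the one given in the paper): a linear
program is symmetric when each recursive rule is accompanied by its reversed
companion, as in the following reachability-style program over an edge
relation E, a source predicate S and a target predicate T:
  R(x) ← S(x)
  R(y) ← R(x), E(x,y)
  R(x) ← R(y), E(x,y)      (the reversed companion of the previous rule)
  goal ← R(x), T(x)
By the result of Reingold mentioned above, such programs can be evaluated
in logarithmic space.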
- Quantified Equality Constraints
- Manuel Bodirsky and Hubie Chen
-
An equality template (also equality constraint language) is a
relational structure with infinite universe whose relations can be
defined by boolean combinations of equalities. We prove a complete
complexity classification for quantified constraint satisfaction
problems (QCSPs) over equality templates: these problems are in L
(decidable in logarithmic space), NP-complete, or PSPACE-complete. To
establish our classification theorem we combine methods from universal
algebra with concepts from model theory.
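A tiny example of an equality template (for illustration only): take the
single binary relation x ≠ y over an infinite universe. A QCSP instance over
this template is a quantified conjunction of such atoms, for instance
∀x ∃y (x ≠ y), which is true because the universe has more than one element.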
- Tractability and learnability arising from algebras with few subpowers
- Pawel Idziak, Petar Markovic, Ralph McKenzie, Matt Valeriote and Ross Willard
-
A wide variety of combinatorial problems can be expressed within the
framework of the Constraint Satisfaction Problem (CSP). While the class
of all CSPs forms an NP-complete class of problems, there are many
naturally defined subclasses that are tractable.
One common way to define a subclass of the CSP is to restrict the
relations that appear in the constraints of an instance to a specified
set of relations over some fixed domain, called a constraint language.
The Dichotomy Conjecture of Feder and Vardi states that for any
constraint language G, the corresponding subclass of the CSP,
denoted CSP(G), is NP-complete or tractable.
A polymorphism of a set of relations G over some set A
is a finitary operation on A that preserves all of the relations
in G. Groundbreaking work of Jeavons and his co-authors has shown
that the tractability of a constraint language is determined by the set
of its polymorphisms.
A k-edge operation t on a finite set A is a
(k+1)-ary operation that satisfies the identities
t(x,x,y,...,y) = t(x,y,x,y,...,y) = y
and
t(y,y,y,x,y,y,...,y) = t(y,y,y,y,x,y,...,y) = ... = t(y,y,y,...,y,x) = y.
We prove that any constraint language G that, for some
k > 1, has a k-edge operation as a
polymorphism is globally tractable. We also show that the set of
relations definable over G using quantified generalized formulas is
polynomially exactly learnable using improper equivalence queries.
Special instances of k-edge operations are Mal'cev and
near-unanimity operations and so this class of constraint languages
includes many well known examples.
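To see how Mal'cev operations fit this scheme (a standard observation,
recalled here only as an illustration): for k = 2 the identities above read
t(x,x,y) = t(x,y,x) = y, the second group being vacuous at arity 3, and if
m is a Mal'cev operation, i.e. m(x,x,y) = y and m(x,y,y) = x, then
t(x1,x2,x3) = m(x2,x1,x3) satisfies them, since t(x,x,y) = m(x,x,y) = y and
t(x,y,x) = m(y,x,x) = y.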
Proof Complexity
- Examining The Fragments of G
- Steven James Perron
-
We prove a witnessing theorem for G_1^*
similar in style to the KPT witnessing theorem for
T_2^i. This theorem is then used to show that
S_2^i proves G_1^*
is sound with respect to prenex
Σ^q_{i+1} formulas. The witnessing
theorem is also used to show that G_1^* is
p-equivalent to a quantified version of extended-Frege. We
finish by proving that S_2 can be axiomatized by
S_2^1 plus axioms stating that the cut-free
version of G* is sound.
- Separating DAG-Like and Tree-Like Proof Systems
- Phuong Nguyen
-
We show that tree-like PK (Gentzen's propositional sequent calculus) where
all cut-formulas have depth at most a constant d does not simulate cut-free
PK. More generally, we exhibit a family of sequents that have polynomial size
cut-free proofs but require super-polynomial tree-like proofs even when
the cut rule is allowed on a class of cut-formulas that satisfies some
plausible hardness assumption. This gives (in some cases,
conditionally) negative answers to several questions from a recent work
of Maciel and Pitassi (LICS 2006).
- The Complexity of Proving the Discrete Jordan Curve Theorem
- Phuong Nguyen and Stephen A. Cook
-
We formalize and prove the Jordan Curve Theorem in the context of
grid graphs, under different input settings, in theories of bounded
arithmetic that correspond to the complexity classes AC⁰ and
AC⁰(2). One consequence is that the Hex tautologies and the
st-Connectivity tautologies have polynomial size
AC⁰(2)-Frege proofs. This improves results of Buss, who gave
similar upper bounds for these tautologies for the stronger proof system
TC⁰-Frege.
Finite Model Theory
- Locally Excluding a Minor
- Anuj Dawar, Martin Grohe and Stephan Kreutzer
-
We introduce the concept of locally excluded minors. Roughly speaking, a
class of graphs locally excludes a minor if for every radius r
there is a graph H(r) so that every r-neighbourhood
in a graph of this class excludes H(r). Graph classes
locally excluding a minor generalise not only excluded-minor
classes but also graph classes with bounded local tree-width and even
graph classes with bounded expansion.
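For instance (a simple illustration, not from the abstract): planar graphs
exclude K_5 as a minor outright, so they locally exclude a minor with the
constant choice H(r) = K_5 for every radius r; the point of the local notion
is that H(r) may also grow with r.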
We show that any first-order definable decision problem becomes
fixed-parameter tractable on any class of graphs with locally excluded
minors. This strictly generalises analogous results by Flum and Grohe on
excluded minor classes and Frick and Grohe on classes with bounded local
tree-width.
As an important consequence of the proof we obtain fixed-parameter
algorithms for problems such as dominating or independent set on graph
classes excluding a minor, where now the parameter is the size of the
dominating set and the excluded minor. Up to now, the question of whether
these problems are fixed-parameter tractable when the excluded minor is
part of the parameter had been open.
We also study graph classes with excluded minors, where the minor may
grow slowly with the size of the graphs, and show that, again, any
first-order definable decision problem is fixed-parameter tractable on
any such class of graphs.
- Lindström theorems for fragments of first-order logic
- Balder ten Cate, Jouko Väänänen and Johan van Benthem
-
Lindström theorems characterize logics in terms of model-theoretic
conditions such as Compactness and the Löwenheim-Skolem property. Most
existing Lindström theorems characterize extensions of first-order
logic. On the other hand, many logics relevant to computer science are
fragments or extensions of fragments of first-order logic, for example
k-variable logics and various modal logics. Finding Lindström
theorems for these languages can be a challenging problem, because most
techniques used in the past rely on coding arguments that seem to
require the full expressive power of first-order logic.
In this paper, we provide Lindström characterizations for a number of
fragments of first-order logic. These include the k-variable
fragments for k>2, Tarski's relation algebra, graded modal
logic, and the binary guarded fragment. We use two different proof
techniques. One is a modification of the original Lindström proof, but
with a new twist. The other involves the modal concepts of
bisimulation, tree unraveling, and finite depth. Our results can also be
used to derive semantic preservation theorems for these fragments of
first-order logic.
Characterizing the 2-variable fragment and the full guarded fragment
remain open problems.
Concurrency and Process Calculus
- Environment Bisimulations for Higher-Order Languages
- Naoki Kobayashi, Davide Sangiorgi and Eijiro Sumii
-
Developing a theory of bisimulation in higher-order languages can be
hard. Particularly challenging can be the proof of congruence and,
related to this, enhancements of the bisimulation proof method with
“up-to context” techniques. Also, it is challenging to obtain methods
that are robust (definitions and results that can be transported onto
languages with different features).
We present environment bisimulations, a form of bisimulation for
higher-order languages, and its basic theory. We consider four
representative calculi: pure λ-calculi (call-by-name and
call-by-value), call-by-value lambda-calculus with higher-order store,
and Higher-Order π-calculus. In each case: we present the basic
properties of environment bisimilarity, including congruence; we show
that it coincides with contextual equivalence; we develop some up-to
techniques, including up-to context, as examples of possible
enhancements of the associated bisimulation method.
Unlike previous approaches (such as applicative bisimulations,
logical relations, Sumii-Pierce-Koutavas-Wand), our method is direct in
the sense that it does not require induction or indices on evaluation
derivations or steps (which may complicate the proofs of congruence,
transitivity, and the combination with up-to techniques), or
sophisticated methods such as Howe's for proving congruence. It is also
robust in the sense that definitions and results scale from the pure
lambda-calculi to the richer calculi without essential difficulty,
and with simple congruence proofs.
- π-Calculus in Logical Form
- Marcello Bonsangue and Alexander Kurz
-
Abramsky's logical formulation of domain theory is extended to
encompass the domain theoretic model for π-calculus processes of
Stark and of Fiore, Moggi and Sangiorgi. This is done by defining a
logical counterpart of categorical constructions including dynamic
name allocation and name exponentiation, and showing that they are
dual to standard constructs in functor categories. We show that
initial algebras of functors defined in terms of these constructs give
rise to a logic that is sound, complete, and characterises
bisimilarity. The approach is modular, and we apply it to derive a
logical formulation of π-calculus. The resulting logic is a modal
calculus with primitives for input, free output and bound output.
- Characterising Testing Preorders for Finite Probabilistic Processes
- Yuxin Deng, Rob van Glabbeek, Matthew Hennessy, Carroll Morgan and Chenyi Zhang
-
In 1992 Larsen and Wang extended the may- and must preorders of De
Nicola and Hennessy to processes featuring probabilistic as well as
nondeterministic choice. They concluded with two problems that have
remained open throughout the years, namely to find complete
axiomatisations and alternative characterisations for these preorders.
This paper solves both problems for finite processes with silent moves.
It characterises the may preorder in terms of simulation, and the must
preorder in terms of failure simulation. It also gives a
characterisation of both preorders using a modal logic. Finally it
axiomatises both preorders over a probabilistic version of CSP.
Semantics of Programming Languages
- Bialgebraic operational semantics and modal logic
- Bartek Klin
-
A novel, general approach is proposed for proving compositionality of
process equivalences on languages defined by Structural Operational
Semantics (SOS). The approach, based on modal logic, is inspired by the
simple observation that if the set of formulas satisfied by a process
can be derived from the corresponding sets for its subprocesses, then
the logical equivalence is a congruence. Striving for generality, SOS
rules are modeled categorically as bialgebraic distributive laws for
some notions of process syntax and behaviour, and modal logic is modeled
via coalgebraic polyadic modal logic. Compositionality is proved by
providing a suitable notion of behaviour for the logic together with a
dual distributive law, coherent with the one modeling the SOS
specification. Concretely, the dual laws may appear as SOS-like rules
where logical formulas play the role of processes, and their behaviour
models logical decomposition over process syntax. The approach can be
used either to prove compositionality for specific languages or to
define SOS congruence formats.
- Relational Parametricity for Computational Effects
- Rasmus Ejlers Mogelberg and Alex Simpson
-
According to Strachey, a polymorphic program is parametric if it
applies a uniform algorithm independently of the type instantiations at
which it is applied. The notion of relational parametricity, introduced
by Reynolds, is one possible mathematical formulation of this idea.
Relational parametricity provides a powerful tool for establishing data
abstraction properties, proving equivalences of datatypes, and
establishing equalities of programs. Such properties have been well
studied in a pure functional setting. Real programs, however, exhibit
computational effects. In this paper, we develop a framework for
extending the notion of relational parametricity to languages with
effects.
- Static Name Control for FreshML
- Francois Pottier
-
FreshML extends ML with constructs for declaring and manipulating
abstract syntax trees that involve names and statically scoped binders.
It is impure: name generation is an observable side effect. In practice,
this means that FreshML allows writing programs that create fresh names
and unintentionally fail to bind them. Following in the steps of early
work by Pitts and Gabbay, this paper defines Pure FreshML, a subset of
FreshML equipped with a static proof system that guarantees purity. Pure
FreshML relies on a rich binding specification language, on
user-provided assertions, expressed in a logic that allows reasoning
about values and about the names that they contain, and on a
conservative, automatic decision procedure for this logic. It is argued
that Pure FreshML can express non-trivial syntax-manipulating
algorithms.
- Local Action and Abstract Separation Logic
- Cristiano Calcagno, Peter O'Hearn and Hongseok Yang
-
Separation logic is an extension of Hoare's logic which supports a
local way of reasoning about programs that mutate memory. We present a
study of the semantic structures lying behind the logic. The core idea
is of a local action, a state transformer that mutates the state in a
local way. We formulate local actions for a general class of models
called separation algebras, abstracting from the RAM and other specific
concrete models used in work on separation logic. Local actions provide
a semantics for a generalized form of (sequential) separation logic. We
also show that our conditions on local actions provide sufficient
circumscription to allow a general soundness proof for a separation
logic for concurrency, interpreted over arbitrary separation
algebras.
Game Semantics
- Resource Modalities in Game Semantics
- Paul-André Melliès and Nicolas Tabareau
-
The description of resources in game semantics has never achieved the
simplicity and precision of linear logic, because of a misleading
conception: the belief that linear logic is more primitive than game
semantics.
We advocate instead the contrary: that game semantics is conceptually
more primitive than linear logic.
Starting from this revised point of view, we define a categorical
model of resources in game semantics, and construct an arena game model
where the usual notion of bracketing is extended to multi-bracketing, or
sessions, in order to capture various resource policies: linear, affine
and exponential.
- Full abstraction for nominal general references
- Nikos Tzevelekos
-
Game semantics has been used with considerable success in formulating
fully abstract semantics for languages with higher-order procedures and
a wide range of computational effects. Recently, nominal games have
been proposed for modeling functional languages with names. These are
ordinary games cast in the theory of nominal sets developed by Pitts and
Gabbay. Here we take nominal games one step further, by developing a
fully abstract semantics for a language with nominal general
references.
Linear Logic
- Stratified Bounded Affine Logic for Logarithmic Space
- Ulrich Schoepp
-
A number of complexity classes, most notably PTIME, have been
characterised by restricted versions of linear logic. In this paper we
show that the functions computable in logarithmic space can also be
characterised by a version of linear logic. We introduce Stratified
Bounded Affine Logic (BAL), a restricted version of Bounded Linear
Logic, in which not only the modality ! but also the universal
quantifier is bounded by a resource polynomial. We show that the proofs
of certain sequents in BAL represent exactly the functions computable in
logarithmic space. The proof that BAL-proofs can be compiled to LOGSPACE
functions rests on modelling computation by interaction dialogues in the
style of game semantics. We formulate the compilation of BAL-proofs to
space-efficient programs as an interpretation in a realisability model,
in which realisers are taken from a Geometry of Interaction
situation.
- Light Logics and Optimal Reduction: Completeness and Complexity
- Patrick Baillot, Paolo Coppola and Ugo Dal Lago
-
We study the applications of Elementary Affine Logic (EAL) and Light
Affine Logic (LAL) typing to optimal reduction. This setting allows us to
use the abstract version of Lamping's algorithm, without the delicate
oracle decision part. We consider the extensions of these type systems
with fixpoints, define translations of typed lambda-terms into sharing
graphs and prove the soundness of Lamping's abstract algorithm with
respect to beta-reduction. This is done using a suitable context
semantics. We then show our main result, establishing intrinsic
complexity bounds for the optimal reduction of these sharing graphs: for
sharing graphs coming from LAL (respectively, EAL) typed terms and at a
given depth, optimal reduction is performed in polynomial (respectively,
elementary) time.
- Modified Realizability Interpretation of Classical Linear Logic
- Paulo Oliva
-
This paper presents a modified realizability interpretation of
classical linear logic. Our interpretation is based on work of de Paiva
(1989), Blass (1995), and Shirahata (2006) on categorical models of
classical linear logic using Gödel's Dialectica interpretation. Whereas
the Dialectica categories provide models of linear logic, our
interpretation is presented as an endo-interpretation of proofs, which
does not leave the realm of classical linear logic. The advantage is
that we can obtain stronger versions of the disjunction and existence
properties, and new conservation results for certain choice principles.
Of particular interest is a simple form of branching quantifier, dubbed
“simultaneous” quantifier, which is introduced in order to obtain a
completeness result for the modified realizability interpretation.
Topology and Computable Mathematics
- Infinite sets that admit fast exhaustive search II
- Martín Escardó
-
Perhaps surprisingly, there are infinite sets that admit mechanical
exhaustive search in finite time, for any decidable property. A first
example, the Cantor set of infinite sequences of booleans, goes back to
Gandy and Berger independently.
Our primary contribution is a comprehensive investigation of such
sets from the point of view of computability theory. We develop tools
for systematically building them and a compelling characterization. They
include the finite sets, and are closed under intersections with
decidable sets, and under the formation of computable images, and finite
and countably infinite products. In total higher-type computation over
the natural numbers, they are precisely the computable images of the
Cantor set. These results assume that one works with Kleene-Kreisel
functionals; the Cantor set fails to be exhaustible from the point of
view of hereditarily effective functionals.
The specifications of our algorithms can be understood without much
background, but an understanding of the algorithms themselves requires a
fair amount of topology and domain theory. The closure properties and
characterization of exhaustibility resemble those of compactness in
topology. This is no accident: exhaustible sets are to compact sets as
computable functions are to continuous maps. This plays a crucial role
in the correctness proofs of the algorithms, and, indeed, in their very
construction.
Our secondary contribution is a preliminary investigation of
efficiency and complexity. We have promising experimental results,
implemented in the higher-type programming language Haskell, and
tentative theoretical explanations.
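The abstract mentions an implementation in Haskell; the following is a
minimal, standard sketch of exhaustive search over the Cantor space of
infinite boolean sequences, one well-known way of realizing such a search
(it is an illustration, not necessarily the paper's own code):

  -- A point of the Cantor space is an infinite sequence of booleans.
  type Cantor = Integer -> Bool

  -- cons b a is the sequence that starts with b and continues as a.
  cons :: Bool -> Cantor -> Cantor
  cons b a n = if n == 0 then b else a (n - 1)

  -- find p returns a sequence satisfying p whenever one exists;
  -- p is assumed total, so it inspects only finitely many entries.
  find :: (Cantor -> Bool) -> Cantor
  find p =
    if p (cons True (find (p . cons True)))
      then cons True  (find (p . cons True))
      else cons False (find (p . cons False))

  -- exists decides whether some infinite sequence satisfies p.
  exists :: (Cantor -> Bool) -> Bool
  exists p = p (find p)

For example, exists (\a -> a 3 == a 7 && not (a 3)) terminates and returns
True, even though the search space is infinite.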
- On Noetherian Spaces
- Jean Goubault-Larrecq
-
Call a topology well if and only if every open is compact. The
starting point of this note is that this notion generalizes that of
well-quasi order, in the sense that an Alexandroff topology is well if
and only if its specialization quasi-ordering is well. For more general
topologies, this opens the way to verifying infinite transition systems
based on non-well-quasi-ordered sets, but where the Pre operator
satisfies an additional continuity assumption. The technical
development rests heavily on techniques arising from topology and domain
theory, including sobriety and the de Groot dual of a stably compact
space. Finally, we note that if X is well topologized, then the set of
all (even infinite) subsets of X is again well topologized, a result
that fails for well-quasi orders and whose failure is usually taken as
justification for moving to the theory of better quasi-orderings. Well
topologies therefore offer an alternative to the latter in
verification.
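A simple illustration of how this goes beyond well-quasi-orders (an example
in the spirit of the note, not quoted from it): the cofinite topology on the
natural numbers is well, since every non-empty open is cofinite and hence
compact, yet its specialization ordering is equality, which has infinite
antichains and so is not a well-quasi-ordering; there is no conflict with
the statement above, because the cofinite topology is not Alexandroff.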
- A Computable Approach to Measure and Integration Theory
- Abbas Edalat
-
We introduce, for the first time, a computable framework for
Lebesgue's measure and integration theory. For an effectively given
locally compact second countable Hausdorff space and an effectively
given locally finite Borel measure on the space, we define the notion of
a computable measurable set with respect to the given measure, provide a
simple characterization of computable open sets with respect to the
measure and show that the set of computable measurable subsets is closed
under complementation, finite unions and finite intersections. We then
introduce the bi-omega-complete partial order of extended real
interval-valued measurable functions and show that they are precisely
those functions whose upper and lower parts are measurable as extended
real-valued functions. We then develop the notion of computable
measurable functions using interval-valued simple functions. This leads
us to the interval versions of the main results of the theory of
Lebesgue integration which provide a computable framework for measure
and integration theory. This includes the notion of a computable
Lebesgue integrable function. The Lebesgue integral of a computable
integrable function with respect to an effectively given
(σ-)finite Borel measure on an effectively given (locally)
compact second countable Hausdorff space can be computed up to any
required accuracy. We show finally that, with respect to a metric
induced from the L¹ norm, the set of Scott continuous
interval-valued functions is dense in the set of interval-valued
integrable functions.
Short presentations
- The common fragment of CTL and LTL
needs existential modalities
- Mikołaj Bojańczyk
-
- A Bisimulation-based Proof System for
the Equational Theory of Kleene Algebra
- James Worthington
-
- On the effect of bad variables
- Andrzej Murawski
-
- Descriptive characterization of
quantum circuits (Abstract)
- Manas Patra
-