# Publications

The following is a list of recent publications by the Resource Reasoning group:

TACAS 2016

We present the open-source tool T2, the first public release from the TERMINATOR project [9]. T2 has been extended over the past decade to support automatic temporal-logic proving techniques and to handle a general class of user-provided liveness and safety properties. Input can be provided in a native format and in C, via the support of the LLVM compiler framework. We briefly discuss T2’s architecture, its underlying techniques, and conclude with an experimental illustration of its competitiveness and directions for future extensions.

ESOP 2016

We present Total-TaDA, a program logic for verifying the total correctness of concurrent programs: that such programs both terminate and produce the correct result. With Total-TaDA, we can specify constraints on a thread’s concurrent environment that are necessary to guarantee termination. This allows us to verify total correctness for non-blocking algorithms, e.g. a counter and a stack. Our specifications can express lock- and wait-freedom. More generally, they can express that one operation cannot impede the progress of another, a new non-blocking property we call non-impedance. Moreover, our approach is modular. We can verify the operations of a module independently, and build up modules on top of each other.

@inproceedings{TotalTaDA, author = {da Rocha Pinto, Pedro and Dinsdale-Young, Thomas and Gardner, Philippa and Sutherland, Julian}, title = {Modular Termination Verification for Non-blocking Concurrency}, booktitle = {ESOP}, year = {2016}, }

APLAS 2015

Separation logic has been successful at verifying that programs do not crash due to illegal use of resources. The underlying assumption, however, is that machines do not fail. In practice, machines can fail unpredictably for various reasons, e.g. power loss, corrupting resources. Critical software, e.g. file systems, employs recovery methods to mitigate these effects. We introduce an extension of the Views framework to reason about such methods. We use concurrent separation logic as an instance of the framework to illustrate our reasoning, and explore programs using write-ahead logging, e.g. an ARIES recovery algorithm.

@inproceedings{Faults, author = {Ntzik, Gian and da Rocha Pinto, Pedro and Gardner, Philippa}, title = {Fault-tolerant Resource Reasoning}, booktitle = {APLAS}, year = {2015}, pages={169-188}, }

RAMICS 2015

We prove strong completeness of a range of substructural logics with respect to their relational semantics by completeness-via-canonicity. Specifically, we use the topological theory of canonical (in)equations in distributive lattice expansions to show that distributive substructural logics are strongly complete with respect to their relational semantics. By formalizing the problem in the language of coalgebraic logics, we develop a modular theory which covers a wide variety of different logics under a single framework, and lends itself to further extensions.

@Inbook{Dahlqvist2015, author="Dahlqvist, Fredrik and Pym, David", editor="Kahl, Wolfram and Winter, Michael and Oliveira, Jos{\'e}", title="Completeness via Canonicity for Distributive Substructural Logics: A Coalgebraic Perspective", bookTitle="Relational and Algebraic Methods in Computer Science: 15th International Conference, RAMiCS 2015, Braga, Portugal, September 28 - October 1, 2015, Proceedings", year="2015", publisher="Springer International Publishing", address="Cham", pages="119--135", isbn="978-3-319-24704-5", doi="10.1007/978-3-319-24704-5_8", url="http://dx.doi.org/10.1007/978-3-319-24704-5_8" }

CAV 2015

In this paper we introduce the first known fully automated tool for symbolically proving CTL∗ properties of (infinite-state) integer programs. The method uses an internal encoding which facilitates reasoning about the subtle interplay between the nesting of path and state temporal operators that occurs within CTL∗ proofs. A precondition synthesis strategy is then used over a program transformation which, when necessary, trades nondeterminism in the transition relation for nondeterminism made explicit in variables that predict future outcomes. We show the viability of our approach in practice using examples drawn from device drivers and other industrial sources.

LICS 2015

I chart a line of development from category-theoretic models of programs and logics to automatic program verification/analysis techniques that are in deployment at Facebook. Our journey takes in a number of concepts from the computer science logician’s toolkit – including categorical logic and model theory, denotational semantics, the Curry-Howard isomorphism, substructural logic, Hoare Logic and Separation Logic, abstract interpretation, compositional program analysis, the frame problem, and abductive inference.

MFPS 2015

The specification of a concurrent program module is a difficult problem. The specifications must be strong enough to enable reasoning about the intended clients without reference to the underlying module implementation. We survey a range of verification techniques for specifying concurrent modules, in particular highlighting four key concepts: auxiliary state, interference abstraction, resource ownership and atomicity. We show how these concepts combine to provide powerful approaches to specifying concurrent modules.

@inproceedings{spec-concurrency, author = {da Rocha Pinto, Pedro and Dinsdale-Young, Thomas and Gardner, Philippa}, title = {Steps in Modular Specifications for Concurrent Modules (Invited Tutorial Paper)}, booktitle = {MFPS}, year = {2015}, pages={3-18}, }

Journal of Logic and Computation 2015

Understanding the boundaries of trust is a key aspect of accurately modelling the structure and behaviour of multi-agent systems with heterogeneous motivating factors. Reasoning about these boundaries in highly interconnected, information-rich ecosystems is complex, and dependent upon modelling at the correct level of abstraction. Building on an established mathematical systems modelling framework that captures the classical view of distributed systems, we develop a modelling framework that incorporates both logical and cost-based descriptions of systems. This allows us to define an agent’s trust domain in terms of the satisfaction of logical properties at an acceptable cost of verification (utility, handled here simply as cost) to the agent. In addition to the technical properties of the modelling framework itself, we establish a theory of logical combinators, including substitution, for composing trust domains to form relatively complex models of trust. We illustrate the ideas with examples throughout.

TinyToCS 3

We embrace overapproximation to prove nontermination: a live abstraction’s closed recurrence implies the input program’s nontermination.

ESOP 2015

We propose SplInter, a new technique for proving properties of heap-manipulating programs that marries (1) a new separation logic–based analysis for heap reasoning with (2) an interpolation-based technique for refining heap-shape invariants with data invariants. SplInter is property directed, precise, and produces counterexample traces when a property does not hold. Using the novel notion of spatial interpolants modulo theories, SplInter can infer complex invariants over general recursive predicates, e.g., of the form “all elements in a linked list are even” or “a binary tree is sorted”. Furthermore, we treat interpolation as a black box, which gives us the freedom to encode data manipulation in any suitable theory for a given program (e.g., bit vectors, arrays, or linear arithmetic), so that our technique immediately benefits from any future advances in SMT solving and interpolation.

NFM 2015

For organisations like Facebook, high quality software is important. However, the pace of change and increasing complexity of modern code makes it difficult to produce error-free software. Available tools are often lacking in helping programmers develop more reliable and secure applications. Formal verification is a technique able to detect software errors statically, before a product is actually shipped. Although this aspect makes this technology very appealing in principle, in practice there have been many difficulties that have hindered the application of software verification in industrial environments. In particular, in an organisation like Facebook where the release cycle is fast compared to more traditional industries, the deployment of formal techniques is highly challenging. This paper describes our experience in integrating a verification tool based on static analysis into the software development cycle at Facebook.

Theoretical Computer Science 614:63-96, 2016

Mathematical modelling and simulation modelling are fundamental tools of engineering, science, and social sciences such as economics, and provide decision-support tools in management. Mathematical models are essentially deployed at all scales, all levels of complexity, and all levels of abstraction. Models are often required to be executable, as a simulation, on a computer. We present some contributions to the process-theoretic and logical foundations of discrete-event modelling with resources and processes. Building on previous work in resource semantics, process calculus, and modal logic, we describe a process calculus with an explicit representation of resources in which processes and resources co-evolve. The calculus is closely connected to a substructural modal logic that may be used as a specification language for properties of models. In contrast to earlier work, we formulate the resource semantics, and its relationship with process calculus, in such a way that we obtain soundness and completeness of bisimulation with respect to logical equivalence for the full range of logical connectives and modalities. We give a range of examples of the use of the process combinators and logical structure to describe system structure and behaviour.

SIMUTools 2015

Security managers face the challenge of designing security policies that deliver the objectives required by their organizations. We explain how a rigorous modelling framework and methodology—grounded in semantically justified mathematical systems modelling, the economics of decision-making, and simulation—can be used to explore the operational consequences of their design choices and help security managers to make better decisions. The methodology is based on constructing executable system models that illustrate the effects of different policy choices. Models are compositional, allowing complex systems to be expressed as combinations of smaller, complete models. They capture the logical and physical structure of systems, the choices and behaviour of agents within the system, and the security managers’ preferences about outcomes. Utility theory is used to describe the extent to which security managers’ policies deliver their security objectives. Models are parametrized based on data obtained from observations of real-world systems that correspond closely to the examples described.

Tools and Algorithms for the Construction and Analysis of Systems, 2015

In this paper we introduce the first known tool for symbolically proving Fair-CTL properties of (infinite-state) integer programs. Our solution is based on a reduction to existing techniques for fairness-free CTL model checking via the use of infinite non-deterministic branching to symbolically partition fair from unfair executions. We show the viability of our approach in practice using examples drawn from device drivers and algorithms utilizing shared resources.

A Logic of Separating Modalities. Submitted to a journal

We present a logic of separating modalities, LSM, that is based on Boolean BI. LSM’s modalities, which generalize those of S4, combine, within a quite general relational semantics, BI’s resource semantics with modal accessibility. We provide a range of examples from computer science illustrating their use. We then give a proof system based on the method of labelled tableaux with countermodel extraction, establishing its soundness and completeness with respect to the resource semantics.

FMCAD 2014

When disproving termination using known techniques (e.g. recurrence sets), abstractions that overapproximate the program’s transition relation are unsound. In this paper we introduce live abstractions, a natural class of abstractions that can be combined with the recent concept of closed recurrence sets to soundly disprove termination. To demonstrate the practical usefulness of this new approach we show how programs with nonlinear, nondeterministic, and heap-based commands can be shown nonterminating using linear overapproximations.
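As a rough illustration of the closed-recurrence-set idea (our own sketch, not the paper's implementation), consider the loop `while x != 0: x = x + 1` with candidate set G = {x | x ≥ 1}: every state in G has a successor, and all of its successors stay in G, which is the "closed" condition that remains sound when the transition relation is overapproximated. A finite sanity check of this condition might look like:

```python
# Hypothetical finite check of the closed-recurrence-set condition for the
# loop "while x != 0: x = x + 1" with candidate set G = {x | x >= 1}.
# A closed recurrence set requires every state in G to have at least one
# successor, with ALL successors remaining in G -- the property that stays
# sound under overapproximation of the transition relation.

def successors(x):
    """All transition-relation successors of state x (deterministic here)."""
    return [x + 1] if x != 0 else []   # the loop exits when x == 0

def in_G(x):
    """Membership in the candidate recurrence set G = {x | x >= 1}."""
    return x >= 1

def closed_recurrence_on(sample):
    """Check the closed-recurrence condition on a finite sample of states."""
    for x in filter(in_G, sample):
        succs = successors(x)
        if not succs or not all(in_G(s) for s in succs):
            return False
    return True

print(closed_recurrence_on(range(-5, 100)))  # True: witnesses nontermination from x = 1
```

A full tool would of course discharge the closure condition symbolically over all of G rather than on a finite sample.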

SAS 2014

We introduce cyclic abduction: a new method for automatically inferring safety and termination preconditions of heap-manipulating while programs, expressed as inductive definitions in separation logic. Cyclic abduction essentially works by searching for a cyclic proof of the desired property, abducing definitional clauses of the precondition as necessary in order to advance the proof search process. We provide an implementation, CABER, of our cyclic abduction method, based on a suite of heuristically guided tactics. It is often able to automatically infer preconditions describing lists, trees, cyclic and composite structures which, in other tools, previously had to be supplied by hand.

ECOOP 2014

To avoid data races, concurrent operations should either be at distinct times or on distinct data. Atomicity is the abstraction that an operation takes effect at a single, discrete instant in time, with linearisability being a well-known correctness condition which asserts that concurrent operations appear to behave atomically. Disjointness is the abstraction that operations act on distinct data resources, with concurrent separation logics enabling reasoning about threads that appear to operate independently on disjoint resources. We present TaDA, a program logic that combines the benefits of abstract atomicity and abstract disjointness. Our key contribution is the introduction of atomic triples, which offer an expressive approach to specifying program modules. By building up examples, we show that TaDA supports elegant modular reasoning in a way that was not previously possible.

@inproceedings{TaDA, author = {da Rocha Pinto, Pedro and Dinsdale-Young, Thomas and Gardner, Philippa}, title = {{TaDA}: A Logic for Time and Data Abstraction}, booktitle = {ECOOP}, year = {2014}, pages={207-231}, }
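As a rough illustration (our notation only approximates the paper's), an atomic triple for a counter module might be written as follows: the triple says that the increment takes effect atomically, updating the abstract counter value from whatever n the environment has left it at to n + 1.

```latex
% Sketch of a TaDA-style atomic triple for a counter increment
% (notation approximated; TaDA uses a dedicated pseudo-quantifier binder).
\[
  \forall n \in \mathbb{N}.\;
  \langle\, \mathsf{counter}(x, n) \,\rangle\;
  \mathtt{incr}(x)\;
  \langle\, \mathsf{counter}(x, n + 1) \,\rangle
\]
```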

Formal Aspects of Computing

John Reynolds (1935-2013) was a pioneer of programming languages research. In this paper we pay tribute to the man, his ideas, and his influence.

POPL 2014

JavaScript is the most widely used web language for client-side applications. Whilst the development of JavaScript was initially just led by implementation, there is now increasing momentum behind the ECMA standardisation process. The time is ripe for a formal, mechanised specification of JavaScript, to clarify ambiguities in the ECMA standards, to serve as a trusted reference for high-level language compilation and JavaScript implementations, and to provide a platform for high-assurance proofs of language properties.

We present JSCert, a formalisation of the current ECMA standard in the Coq proof assistant, and JSRef, a reference interpreter for JavaScript extracted from Coq to OCaml. We give a Coq proof that JSRef is correct with respect to JSCert and assess JSRef using test262, the ECMA conformance test suite. Our methodology ensures that JSCert is a comparatively accurate formulation of the English standard, which will only improve as time goes on. We have demonstrated that modern techniques of mechanised specification can handle the complexity of JavaScript.

@inproceedings{jscert, author = {Martin Bodin and Arthur Chargu\'eraud and Daniele Filaretti and Philippa Gardner and Sergio Maffeis and Daiva Naud\v{z}i\={u}nien\.{e} and Alan Schmitt and Gareth Smith}, title = {A Trusted Mechanised {JavaScript} Specification}, booktitle = {POPL}, year = {2014}, }

POPL 2014

In this paper, we close the logical gap between provability in the logic BBI, which is the propositional basis for separation logic, and validity in an intended class of separation models, as employed in applications of separation logic such as program verification. An intended class of separation models is usually specified by a collection of axioms describing the specific model properties that are expected to hold, which we call a separation theory.

Our main contributions are as follows. First, we show that several typical properties of separation theories are not definable in BBI. Second, we show that these properties become definable in a suitable hybrid extension of BBI, obtained by adding a theory of naming to BBI in the same way that hybrid logic extends normal modal logic. The binder-free extension HyBBI captures most of the properties we consider, and the full extension HyBBI(↓) with the usual binder of hybrid logic covers all these properties. Third, we present an axiomatic proof system for our hybrid logic whose extension with any set of “pure” axioms is sound and complete with respect to the models satisfying those axioms. As a corollary of this general result, we obtain, in a parametric manner, a sound and complete axiomatic proof system for any separation theory from our considered class. To the best of our knowledge, this class includes all separation theories appearing in the published literature.

@inproceedings{BV-popl14, author = {Brotherston, James and Villard, Jules}, booktitle = {POPL}, publisher = {ACM}, title = {Parametric Completeness for Separation Theories}, year = {2014}, ee = {http://dx.doi.org/10.1145/2535838.2535844}, }

LNCS 8533: 233--245

Security managers face the challenge of formulating and implementing policies that deliver their desired system security postures — for example, their preferred balance of confidentiality, integrity, and availability — within budget (monetary and otherwise). In this paper, we describe a security modelling methodology, grounded in rigorous mathematical systems modelling and economics, that captures the managers’ policies and the behavioural choices of agents operating within the system. Models are executable, so allowing systematic experimental exploration of the system-policy co-design space, and compositional, so managing the complexity of large-scale systems.

Formal Methods in Computer-Aided Design, 2014

In this paper, we describe a new symbolic model checking procedure for CTL verification of infinite-state programs. Our procedure exploits the natural decomposition of the state space given by the control-flow graph in combination with the nesting of temporal operators to optimize reasoning performed during symbolic model checking. An experimental evaluation against competing tools demonstrates that our approach not only gains orders-of-magnitude performance improvement, but also allows for scalability of temporal reasoning for larger programs.

Logical Methods in Computer Science

Linearizability is a commonly accepted notion of correctness for libraries of concurrent algorithms. Unfortunately, it assumes a complete isolation between a library and its client, with interactions limited to passing values of a given data type. This is inappropriate for common programming languages, where libraries and their clients can communicate via the heap, transferring the ownership of data structures, and can even run in a shared address space without any memory protection. In this paper, we present the first definition of linearizability that lifts this limitation and establish an Abstraction Theorem: while proving a property of a client of a concurrent library, we can soundly replace the library by its abstract implementation related to the original one by our generalisation of linearizability. This allows abstracting from the details of the library implementation while reasoning about the client. We also prove that linearizability with ownership transfer can be derived from the classical one if the library does not access some of the data structures transferred to it by the client.

@article{GotsmanYang-LMCS13, author = {Alexey Gotsman and Hongseok Yang}, title = {Linearizability with Ownership Transfer}, journal = {Logical Methods in Computer Science}, volume = {9}, number = {3}, year = {2013} }

CAV 2013

One of the difficulties of proving program termination is managing the subtle interplay between the finding of a termination argument and the finding of the argument’s supporting invariant. In this paper we propose a new mechanism that facilitates better cooperation between these two types of reasoning. In an experimental evaluation we find that our new method leads to dramatic performance improvements.

@inproceedings{bro:coo:fuh:13, author = {Marc Brockschmidt and Byron Cook and Carsten Fuhs}, title = {Better termination proving through cooperation}, booktitle = {Proc.\ CAV~'13}, series = {LNCS}, year = {2013}, volume = {8044}, pages = {413-429}, }

SAS 2013

We present a formal framework for static specification mining. The main idea is to represent partial temporal specifications as symbolic automata – automata where transitions may be labeled by variables, and a variable can be substituted by a letter, a word, or a regular language. Using symbolic automata, we construct an abstract domain for static specification mining, capturing both the partialness of a specification and the precision of a specification. We show interesting relationships between lattice operations of this domain and common operators for manipulating partial temporal specifications, such as building a more informative specification by consolidating two partial specifications.

@inproceedings{PelegSYY13, author = {Hila Peleg and Sharon Shoham and Eran Yahav and Hongseok Yang}, title = {Symbolic Automata for Static Specification Mining}, booktitle = {SAS}, year = {2013}, pages = {63-83}, ee = {http://dx.doi.org/10.1007/978-3-642-38856-9_6} }

PLDI 2013

We propose a technique to efficiently search a large family of abstractions in order to prove a query using a parametric dataflow analysis. Our technique either finds the cheapest such abstraction or shows that none exists. It is based on counterexample-guided abstraction refinement but applies a novel meta-analysis on abstract counterexample traces to efficiently find abstractions that are incapable of proving the query. We formalize the technique in a generic framework and apply it to two analyses: a type-state analysis and a thread-escape analysis. We demonstrate the effectiveness of the technique on a suite of Java benchmark programs.

@inproceedings{ZhangNY13, author = {Xin Zhang and Mayur Naik and Hongseok Yang}, title = {Finding optimum abstractions in parametric dataflow analysis}, booktitle = {PLDI}, year = {2013}, pages = {365-376}, ee = {http://doi.acm.org/10.1145/2462156.2462185} }

ESOP 2013

Memory management is one of the most complex aspects of modern concurrent algorithms, and various techniques proposed for it—such as hazard pointers, read-copy-update and epoch-based reclamation—have proved very challenging for formal reasoning. In this paper, we show that different memory reclamation techniques actually rely on the same implicit synchronisation pattern, not clearly reflected in the code, but only in the form of assertions used to argue its correctness. The pattern is based on the key concept of a grace period, during which a thread can access certain shared memory cells without fear that they get deallocated. We propose a modular reasoning method, motivated by the pattern, that handles all three of the above memory reclamation techniques in a uniform way. By explicating their fundamental core, our method achieves clean and simple proofs, scaling even to realistic implementations of the algorithms without a significant increase in proof complexity. We formalise the method using a combination of separation logic and temporal logic and use it to verify example instantiations of the three approaches to memory reclamation.

@inproceedings{GotsmanRY13, author = {Alexey Gotsman and Noam Rinetzky and Hongseok Yang}, title = {Verifying Concurrent Memory Reclamation Algorithms with Grace}, booktitle = {ESOP}, year = {2013}, pages = {249-269}, ee = {http://dx.doi.org/10.1007/978-3-642-37036-6_15} }

ESOP 2013

Multiprocessors implement weak memory models, but program verifiers often assume Sequential Consistency (SC), and thus may miss bugs due to weak memory. We propose a sound transformation of the program to verify, enabling SC tools to perform verification w.r.t. weak memory. We present experiments for a broad variety of models (from x86-TSO to Power) and a vast range of verification tools, quantify the additional cost of the transformation and highlight the cases when we can drastically reduce it. Our benchmarks include work-queue management code from PostgreSQL.
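The classic store-buffering litmus test shows concretely why SC-only verification can miss weak-memory bugs. The brute-force enumerator below is our own illustration (not the paper's transformation): under SC the outcome r0 = r1 = 0 is unreachable, but once each thread's store may be delayed past its later load (a TSO-style store buffer), it becomes reachable.

```python
from itertools import permutations

# Store-buffering litmus test: thread 0 runs "x = 1; r0 = y",
# thread 1 runs "y = 1; r1 = x".  We enumerate all interleavings.

def run(schedule, programs):
    """Execute one interleaving; schedule is a list of thread ids."""
    mem = {"x": 0, "y": 0}
    regs = {}
    pcs = [0, 0]
    for t in schedule:
        op = programs[t][pcs[t]]
        pcs[t] += 1
        if op[0] == "st":                # ("st", var, value)
            mem[op[1]] = op[2]
        else:                            # ("ld", var, reg)
            regs[op[2]] = mem[op[1]]
    return (regs["r0"], regs["r1"])

def outcomes(programs):
    """All (r0, r1) results over every interleaving of the two threads."""
    scheds = set(permutations([0, 0, 1, 1]))
    return {run(list(s), programs) for s in scheds}

sc = [[("st", "x", 1), ("ld", "y", "r0")],
      [("st", "y", 1), ("ld", "x", "r1")]]

# TSO-style behaviour modelled crudely by also allowing each thread's store
# to be reordered after its load (the effect of a store buffer).
tso = {o for p0 in (sc[0], sc[0][::-1]) for p1 in (sc[1], sc[1][::-1])
       for o in outcomes([p0, p1])}

print((0, 0) in outcomes(sc))   # False: unreachable under SC
print((0, 0) in tso)            # True: reachable with store buffering
```

This is exactly the gap the paper's program transformation closes: it makes the weak-memory behaviours visible to a verifier that only explores SC interleavings.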

Mathematical Structures in Computer Science, 2013

Frame and anti-frame rules have been proposed as proof rules for modular reasoning about programs. Frame rules allow one to hide irrelevant parts of the state during verification, whereas the anti-frame rule allows one to hide local state from the context.

We discuss the semantic foundations of frame and anti-frame rules, and present the first sound model for Charguéraud and Pottier’s type and capability system including both of these rules. The model is a possible worlds model based on the operational semantics and step-indexed heap relations, and the worlds are given by a recursively defined metric space. We also extend the model to account for Pottier’s generalized frame and anti-frame rules, where invariants are generalized to families of invariants indexed over preorders. This generalization enables reasoning about some well-bracketed as well as (locally) monotone uses of local state.

@article{SchwinghammerBPRSY13, author = {Jan Schwinghammer and Lars Birkedal and Fran\c{c}ois Pottier and Bernhard Reus and Kristian St{\o}vring and Hongseok Yang}, title = {A step-indexed Kripke model of hidden state}, journal = {Mathematical Structures in Computer Science}, volume = {23}, number = {1}, year = {2013}, pages = {1-54}, ee = {http://journals.cambridge.org/action/displayAbstract?aid=8782519}, bibsource = {DBLP, http://dblp.uni-trier.de} }

POPL 2013

Compositional abstractions underlie many reasoning principles for concurrent programs: the concurrent environment is abstracted in order to reason about a thread in isolation; and these abstractions are composed to reason about a program consisting of many threads. For instance, separation logic uses formulae that describe part of the state, abstracting the rest; when two threads use disjoint state, their specifications can be composed with the separating conjunction. Type systems abstract the state to the types of variables; threads may be composed when they agree on the types of shared variables.

In this paper, we present the “Concurrent Views Framework”, a metatheory of concurrent reasoning principles.
The theory is parameterised by an abstraction of state with a notion of composition, which we call *views*.
The metatheory is remarkably simple, but highly applicable: the rely-guarantee method, concurrent separation logic, concurrent abstract predicates, type systems for recursive references and for unique pointers, and even an adaptation of the Owicki-Gries method can all be seen as instances of the Concurrent Views Framework.
Moreover, our metatheory proves each of these systems is sound without requiring induction on the operational semantics.

@inproceedings{Dinsdale-Young:2013:VCR:2429069.2429104, author = {Dinsdale-Young, Thomas and Birkedal, Lars and Gardner, Philippa and Parkinson, Matthew and Yang, Hongseok}, title = {Views: compositional reasoning for concurrent programs}, booktitle = {Proceedings of the 40th annual ACM SIGPLAN-SIGACT symposium on Principles of programming languages}, series = {POPL '13}, year = {2013}, isbn = {978-1-4503-1832-7}, location = {Rome, Italy}, pages = {287--300}, numpages = {14}, url = {http://doi.acm.org/10.1145/2429069.2429104}, doi = {10.1145/2429069.2429104}, acmid = {2429104}, publisher = {ACM}, address = {New York, NY, USA}, keywords = {axiomatic semantics, compositional reasoning, concurrency}, }
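A toy instance of the kind of composition the Views framework abstracts over (our own illustration, not the paper's formalisation): views as partial heaps, composed only when their footprints are disjoint, forming a partial commutative monoid in the style of separation logic's separating conjunction.

```python
# Views as partial heaps (dicts), with disjoint composition: a partial
# commutative monoid.  Composition is undefined (None) when footprints
# overlap -- two threads may be composed only if their views are compatible.

def compose(v1, v2):
    """Disjoint composition of two partial heaps; None when undefined."""
    if v1.keys() & v2.keys():
        return None          # overlapping footprints: composition undefined
    return {**v1, **v2}

thread_a = {"x": 1}          # thread A's view: it owns cell x
thread_b = {"y": 2}          # thread B's view: it owns cell y

print(compose(thread_a, thread_b))   # {'x': 1, 'y': 2}
print(compose(thread_a, {"x": 7}))   # None: both views claim cell x
# Composition is commutative (up to dict equality):
print(compose(thread_a, thread_b) == compose(thread_b, thread_a))  # True
```

The framework's soundness result is parametric in exactly such a composition structure, which is how it can instantiate to separation logic, rely-guarantee, and the type systems listed above.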

POPL 2013

Programs manipulating mutable data structures with intrinsic sharing present a challenge for modular verification. Deep aliasing inside data structures dramatically complicates reasoning in isolation over parts of these objects because changes to one part of the structure (say, the left child of a dag node) can affect other parts (the right child or some of its descendants) that may point into it. The result is that finding intuitive and compositional proofs of correctness is usually a struggle. We propose a compositional proof system that enables local reasoning in the presence of sharing.

While the AI “frame problem” elegantly captures the reasoning required to verify programs without sharing, we contend that natural reasoning about programs with sharing instead requires an answer to a different and more challenging AI problem, the “ramification problem”: reasoning about the indirect consequences of actions. Accordingly, we present a RAMIFY proof rule that attacks the ramification problem head-on and show how to reason with it. Our framework is valid in any separation logic and permits sound compositional and local reasoning in the context of both specified and unspecified sharing. We verify the correctness of a number of examples, including programs that manipulate dags, graphs, and overlaid data structures in nontrivial ways.

@inproceedings{Hobor:2013:RSD:2429069.2429131, author = {Hobor, Aquinas and Villard, Jules}, title = {The ramifications of sharing in data structures}, booktitle = {Proceedings of the 40th annual ACM SIGPLAN-SIGACT symposium on Principles of programming languages}, series = {POPL '13}, year = {2013}, isbn = {978-1-4503-1832-7}, location = {Rome, Italy}, pages = {523--536}, numpages = {14}, url = {http://doi.acm.org/10.1145/2429069.2429131}, doi = {10.1145/2429069.2429131}, acmid = {2429131}, publisher = {ACM}, address = {New York, NY, USA}, keywords = {aliasing, heap/shape, modularity, separation logic}, }

International Conference on Logic for Programming, Artificial Intelligence and Reasoning, 2013

Logical Methods in Computer Science, 2012

Recently, data abstraction has been studied in the context of separation logic, with noticeable practical successes: the developed logics have enabled clean proofs of tricky, challenging programs, such as subject-observer patterns, and they have become the basis of efficient verification tools for Java (jStar), C (VeriFast) and Hoare Type Theory (Ynot). In this paper, we give a new semantic analysis of such logic-based approaches using Reynolds’s relational parametricity. The core of the analysis is our lifting theorems, which give a sound and complete condition for when a true implication between assertions in the standard interpretation entails that the same implication holds in a relational interpretation. Using these theorems, we provide an algorithm for identifying abstraction-respecting client-side proofs; the proofs ensure that clients cannot distinguish two appropriately-related module implementations.

@article{ThamsborgBY-LMCS12, author = {Jacob Thamsborg and Lars Birkedal and Hongseok Yang}, title = {Two for the Price of One: Lifting Separation Logic Assertions}, journal = {Logical Methods in Computer Science}, volume = {8}, number = {3}, year = {2012}, bibsource = {DBLP, http://dblp.uni-trier.de} }

FSE 2012

We present an algorithm and a system for generating input events to exercise smartphone apps. Our approach is based on concolic testing and generates sequences of events automatically and systematically. It alleviates the path-explosion problem by checking a condition on program executions that identifies subsumption between different event sequences. We also describe our implementation of the approach for Android, the most popular smartphone app platform, and the results of an evaluation that demonstrates its effectiveness on five Android apps.

@inproceedings{AnandNHY12, author = {Saswat Anand and Mayur Naik and Mary Jean Harrold and Hongseok Yang}, title = {Automated concolic testing of smartphone apps}, booktitle = {SIGSOFT FSE}, year = {2012}, pages = {59}, ee = {http://doi.acm.org/10.1145/2393596.2393666}, bibsource = {DBLP, http://dblp.uni-trier.de} }

Studia Logica: Special Issue on Recent Developments related to Residuated Lattices and Substructural Logics, 2012

We formulate a unified display calculus proof theory for the four principal varieties of bunched logic by combining display calculi for their component logics. Our calculi satisfy cut-elimination, and are sound and complete with respect to their standard presentations. We show how to constrain applications of display-equivalence in our calculi in such a way that an exhaustive proof search need be only finitely branching, and establish a full deduction theorem for the bunched logics with classical additives, BBI and CBI. We also show that the standard sequent calculus for BI can be seen as a reformulation of its display calculus, and argue that analogous sequent calculi for the other varieties of bunched logic are very unlikely to exist.

@Article{Brotherston:12, author = "James Brotherston", title = "Bunched Logics Displayed", journal = "Studia Logica: Special Issue on Recent Developments related to Residuated Lattices and Substructural Logics", volume = 100, number = 6, pages = "1223--1254", publisher = "Springer", year = 2012 }

DISC 2012

Modern programming languages, such as C++ and Java, provide a sequentially consistent (SC) memory model for well-behaved programs that follow a certain synchronisation discipline, e.g., for those that are data-race free (DRF). However, performance-critical libraries often violate the discipline by using low-level hardware primitives, which have a weaker semantics. In such scenarios, it is important for these libraries to protect their otherwise well-behaved clients from the weaker memory model.

In this paper, we demonstrate that a variant of linearizability can be used to reason formally about the interoperability between a high-level DRF client and a low-level library written for the Total Store Order (TSO) memory model, which is implemented by x86 processors. Namely, we present a notion of linearizability that relates a concrete library implementation running on TSO to an abstract specification running on an SC machine. A client of this library is said to be DRF if its SC executions calling the abstract library specification do not contain data races. We then show how to compile a DRF client to TSO such that it only exhibits SC behaviours, despite calling into a racy library.

@inproceedings{GotsmanMY12, author = {Alexey Gotsman and Madanlal Musuvathi and Hongseok Yang}, title = {Show No Weakness: Sequentially Consistent Specifications of TSO Libraries}, booktitle = {DISC}, year = {2012}, pages = {31-45}, ee = {http://dx.doi.org/10.1007/978-3-642-33651-5_3}, bibsource = {DBLP, http://dblp.uni-trier.de} }

PPDP 2012

There exist many powerful techniques to analyze termination and complexity of term rewrite systems (TRSs). Our goal is to use these techniques for the analysis of other programming languages as well. For instance, approaches to prove termination of definite logic programs by a transformation to TRSs have been studied for decades. However, a challenge is to handle languages with more complex evaluation strategies (such as Prolog, where predicates like the cut influence the control flow). In this paper, we present a general methodology for the analysis of such programs. Here, the logic program is first transformed into a symbolic evaluation graph which represents all possible evaluations in a finite way. Afterwards, different analyses can be performed on these graphs. In particular, one can generate TRSs from such graphs and apply existing tools for termination or complexity analysis of TRSs to infer information on the termination or complexity of the original logic program.

@inproceedings{gie:str:sch:emm:fuh:12, author = {J\"urgen Giesl and Thomas Str\"oder and Peter {Schneider-Kamp} and Fabian Emmes and Carsten Fuhs}, title = {Symbolic Evaluation Graphs and Term Rewriting: A General Methodology for Analyzing Logic Programs}, booktitle = {Proc.\ PPDP~'12}, editor = {Danny De Schreye and Gerda Janssens and Andy King}, pages = {1-12}, year = {2012}, publisher = {ACM Press}, }

CONCUR 2012

Linearizability is a commonly accepted notion of correctness for libraries of concurrent algorithms. Unfortunately, it assumes a complete isolation between a library and its client, with interactions limited to passing values of a given data type. This is inappropriate for common programming languages, where libraries and their clients can communicate via the heap, transferring the ownership of data structures, and can even run in a shared address space without any memory protection.

In this paper, we present the first definition of linearizability that lifts this limitation and establish an Abstraction Theorem: while proving a property of a client of a concurrent library, we can soundly replace the library by its abstract implementation related to the original one by our generalisation of linearizability. We also prove that linearizability with ownership transfer can be derived from the classical one if the library does not access some of the data structures transferred to it by the client.

@inproceedings{GotsmanY12, author = {Alexey Gotsman and Hongseok Yang}, title = {Linearizability with Ownership Transfer}, booktitle = {CONCUR}, year = {2012}, pages = {256-271}, ee = {http://dx.doi.org/10.1007/978-3-642-32940-1_19}, bibsource = {DBLP, http://dblp.uni-trier.de} }

Annals of Pure and Applied Logic, 2012

Starting from Girard’s seminal paper on light linear logic (LLL), a number of works investigated systems derived from linear logic to capture polynomial time computation within the *computation-as-cut-elimination* paradigm.

The original syntax of LLL is too complicated, mainly because one has to deal with sequents which do not just consist of formulas but also of ‘blocks’ of formulas.

We circumvent the complications of ‘blocks’ by introducing a new modality *∇* which is exclusively in charge of ‘additive blocks’. One of the most interesting features of this purely multiplicative *∇* is the possibility of the second-order encodings of additive connectives.

The resulting system (with the traditional syntax), called Easy-LLL, is still powerful enough to represent any deterministic polynomial-time computation in purely logical terms.

Unlike the original LLL, Easy-LLL admits polynomial time strong normalization together with the Church–Rosser property, namely, cut elimination terminates in a *unique* way in polytime by any choice of cut reduction strategies.

@article{DBLP:journals/apal/Kanovich12, author = {Max I. Kanovich}, title = {Light linear logics with controlled weakening: Expressibility, confluent strong normalization}, journal = {Ann. Pure Appl. Logic}, volume = {163}, number = {7}, year = {2012}, pages = {854-874}, ee = {http://dx.doi.org/10.1016/j.apal.2011.09.012}, bibsource = {DBLP, http://dblp.uni-trier.de} }

IHI 2012

Before a drug can be made available to the general public, its effectiveness has to be experimentally evaluated. Experiments that involve human subjects are called Clinical Investigations (CIs). Since human subjects are involved, procedures for CIs are elaborated so that data required for validating the drug can be collected while ensuring the safety of subjects. Moreover, CIs are heavily regulated by public agencies, such as the Food and Drug Administration (FDA). Violations of regulations or deviations from procedures should be avoided as they may incur heavy penalties and more importantly may compromise the health of subjects. However, CIs are prone to human error, since CIs are carried out by the study team, which might be overloaded with other tasks, such as hospital and/or pharmacy duties, other trials, etc. In order to avoid discrepancies, we propose developing an automated assistant for helping all the parties to correctly carry out CIs as well as to detect and prevent discrepancies as early as possible. This way the proposed automated assistant would minimize error, and therefore increase the safety of the involved subjects. This paper takes the first steps towards that direction. In particular, we propose a model for collaborative systems with explicit time, called Timed Local State Transition Systems (TLSTS), and argue that it can be used for specifying procedures and regulations for CIs, which mention time explicitly. Finally we show how to implement a TLSTS specification using Maude, an existing computational tool based on rewriting.

@inproceedings{DBLP:conf/ihi/NigamKSTKP12, author = {Vivek Nigam and Tajana Ban Kirigin and Andre Scedrov and Carolyn L. Talcott and Max I. Kanovich and Ranko Perovic}, title = {Towards an automated assistant for clinical investigations}, booktitle = {IHI}, year = {2012}, pages = {773-778}, ee = {http://doi.acm.org/10.1145/2110363.2110456}, crossref = {DBLP:conf/ihi/2012}, bibsource = {DBLP, http://dblp.uni-trier.de} } @proceedings{DBLP:conf/ihi/2012, editor = {Gang Luo and Jiming Liu and Christopher C. Yang}, title = {ACM International Health Informatics Symposium, IHI '12, Miami, FL, USA, January 28-30, 2012}, booktitle = {IHI}, publisher = {ACM}, year = {2012}, isbn = {978-1-4503-0781-9}, ee = {http://dl.acm.org/citation.cfm?id=2110363}, bibsource = {DBLP, http://dblp.uni-trier.de} }

RTA 2012

Activities such as clinical investigations or financial processes are subject to regulations to ensure quality of results and avoid negative consequences. Regulations may be imposed by multiple governmental agencies as well as by institutional policies and protocols. Due to the complexity of both regulations and activities there is great potential for violation due to human error, misunderstanding, or even intent. Executable formal models of regulations, protocols, and activities can form the foundation for automated assistants to aid planning, monitoring, and compliance checking. We propose a model based on multiset rewriting where time is discrete and is specified by timestamps attached to facts. Actions, as well as initial, goal and critical states may be constrained by means of relative time constraints. Moreover, actions may have non-deterministic effects, that is, they may have different outcomes whenever applied. We demonstrate how specifications in our model can be straightforwardly mapped to the rewriting logic language Maude, and how one can use existing techniques to improve performance. Finally, we also determine the complexity of the plan compliance problem, that is, finding a plan that leads from an initial state to a desired goal state without reaching any undesired critical state. We consider all actions to be balanced, that is, their pre and post-conditions have the same number of facts. Under this assumption on actions, we show that the plan compliance problem is PSPACE-complete when all actions have only deterministic effects and is EXPTIME-complete when actions may have non-deterministic effects.

@inproceedings{DBLP:conf/rta/KanovichKNSTP12, author = {Max I. Kanovich and Tajana Ban Kirigin and Vivek Nigam and Andre Scedrov and Carolyn L. Talcott and Ranko Perovic}, title = {A Rewriting Framework for Activities Subject to Regulations}, booktitle = {RTA}, year = {2012}, pages = {305-322}, ee = {http://dx.doi.org/10.4230/LIPIcs.RTA.2012.305}, crossref = {DBLP:conf/rta/2012}, bibsource = {DBLP, http://dblp.uni-trier.de} } @proceedings{DBLP:conf/rta/2012, editor = {Ashish Tiwari}, title = {23rd International Conference on Rewriting Techniques and Applications (RTA'12) , RTA 2012, May 28 - June 2, 2012, Nagoya, Japan}, booktitle = {RTA}, publisher = {Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik}, series = {LIPIcs}, volume = {15}, year = {2012}, isbn = {978-3-939897-38-5}, bibsource = {DBLP, http://dblp.uni-trier.de} }

LICS 2012 (short paper)

We present ribbon proofs, a diagrammatic proof system for separation logic. Inspired by an eponymous system due to Bean, ribbon proofs emphasise the structure of a proof, so are intelligible and hence useful pedagogically. Because they contain less redundancy than proof outlines, and allow each proof step to be checked locally, they are highly scalable (and we illustrate this with a ribbon proof of the Version 7 Unix memory manager). Where proof outlines are cumbersome to modify, ribbon proofs can be visually manoeuvred to yield proofs of variant programs. This paper introduces the ribbon proof system, proves its soundness and completeness, and outlines a prototype tool for validating the diagrams in Isabelle.

RTA 2012

The termination method of weakly monotonic algebras, which has been defined for higher-order rewriting in the HRS formalism, offers a lot of power, but has seen little use in recent years. We adapt and extend this method to the alternative formalism of algebraic functional systems, where the simply-typed lambda-calculus is combined with algebraic reduction. Using this theory, we define higher-order polynomial interpretations, and show how the implementation challenges of this technique can be tackled. A full implementation is provided in the termination tool WANDA.

@inproceedings{fuh:kop:12, author = {Carsten Fuhs and Cynthia Kop}, title = {Polynomial Interpretations for Higher-Order Rewriting}, booktitle = {Proc.\ RTA~'12}, editor = {Ashish Tiwari}, series = {LIPIcs}, publisher = {Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik}, volume = {15}, year = {2012}, pages = {176-192}, }

ESOP 2012

Linearizability is a commonly accepted notion of correctness for libraries of concurrent algorithms. Unfortunately, it is only appropriate for sequentially consistent memory models, while the hardware and software platforms that algorithms run on provide weaker consistency guarantees. In this paper, we present the first definition of linearizability on a weak memory model, Total Store Order (TSO), implemented by x86 processors. We establish that our definition is a correct one in the following sense: while proving a property of a client of a concurrent library, we can soundly replace the library by its abstract implementation related to the original one by our generalisation of linearizability. This allows abstracting from the details of the library implementation while reasoning about the client. We have developed a tool for systematically testing concurrent libraries against our definition and applied it to several challenging algorithms.
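The paper's testing tool checks libraries against a TSO-aware definition; the underlying idea of checking a history against a definition of linearizability can be sketched with a toy, sequentially consistent check in the classic Wing–Gong style (this illustrates plain linearizability only, not the paper's TSO generalisation; the history format and `counter_spec` below are made up for illustration):

```python
from itertools import permutations

def linearizable(history, run_spec):
    """Naive linearizability check in the style of Wing and Gong:
    try every ordering of the completed operations that respects
    real-time precedence, and replay it against a sequential spec.
    Each history entry is a tuple (start, end, op, result)."""
    for order in permutations(history):
        # Real-time order: if b finished before a started, then a
        # cannot be linearized before b.
        respects_time = all(not (b[1] < a[0])
                            for i, a in enumerate(order)
                            for b in order[i + 1:])
        if respects_time and run_spec(order):
            return True
    return False

def counter_spec(order):
    """Sequential counter: 'inc' returns nothing, 'read' returns the count."""
    count = 0
    for _, _, op, result in order:
        if op == "inc":
            count += 1
        elif result != count:  # a 'read' must see the current value
            return False
    return True

# An 'inc' over [0,3] overlapping a 'read' of 1 over [1,2] is linearizable:
# the read can be ordered after the increment takes effect.
history = [(0, 3, "inc", None), (1, 2, "read", 1)]
```

The enumeration is exponential in the history length, so this is only viable for the short histories typical of systematic testing.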

@inproceedings{BurckhardtGMY-ESOP12, author = {Sebastian Burckhardt and Alexey Gotsman and Madanlal Musuvathi and Hongseok Yang}, title = {Concurrent Library Correctness on the TSO Memory Model}, booktitle = {ESOP}, year = {2012}, pages = {87-107} }

POPL 2012

JavaScript has become the most widely used language for client-side web programming. The dynamic nature of JavaScript makes understanding its code notoriously difficult, leading to buggy programs and a lack of adequate static-analysis tools. We believe that logical reasoning has much to offer JavaScript: a simple description of program behaviour, a clear understanding of module boundaries, and the ability to verify security contracts.

We introduce a program logic for reasoning about a broad subset of JavaScript, including challenging features such as prototype inheritance and *with*. We adapt ideas from separation logic to provide tractable reasoning about JavaScript code: reasoning about easy programs is easy; reasoning about hard programs is possible. We prove a strong soundness result. All libraries written in our subset and proved correct with respect to their specifications will be well-behaved, even when called by arbitrary JavaScript code.

@inproceedings{DBLP:conf/popl/GardnerMS12, author = {Philippa Gardner and Sergio Maffeis and Gareth David Smith}, title = {Towards a Program Logic for JavaScript}, booktitle = {POPL}, year = {2012}, pages = {31--44}, ee = {http://doi.acm.org/10.1145/2103656.2103663}, bibsource = {DBLP, http://dblp.uni-trier.de} }

POPL 2012

We present a framework for leveraging dynamic analysis to find good abstractions for static analysis. A static analysis in our framework is parametrised. Our main insight is to directly and efficiently compute, from a concrete trace, a necessary condition on the parameter configurations to prove a given query, and thereby prune the space of parameter configurations that the static analysis must consider. We provide constructive algorithms for two instance analyses in our framework: a flow- and context-sensitive thread-escape analysis and a flow- and context-insensitive points-to analysis. We show the efficacy of these analyses, and our approach, on six Java programs comprising two million bytecodes: the thread-escape analysis resolves 80% of queries on average, disproving 28% and proving 52%; the points-to analysis resolves 99% of queries on average, disproving 29% and proving 70%.

@inproceedings{NaikYCS-POPL12, author = {Mayur Naik and Hongseok Yang and Ghila Castelnuovo and Mooly Sagiv}, title = {Abstractions from tests}, booktitle = {POPL}, year = {2012}, pages = {373-386}, ee = {http://doi.acm.org/10.1145/2103656.2103701} }

Formal Methods in System Design

We present a static program analysis for overlaid data structures, in which a node in the structure includes links for multiple data structures and these links are intended to be used at the same time. These overlaid data structures are frequently used in systems code, in order to impose multiple types of indexing structures over the same set of nodes. Our analysis implements two main ideas. The first is to run multiple sub-analyses that track information about non-overlaid data structures, such as lists. The second idea is to control the communication among the sub-analyses using ghost states and ghost instructions. The purpose of this control is to achieve a high level of efficiency by allowing only necessary information to be transferred among sub-analyses and at as few program points as possible. Our analysis has been successfully applied to prove the memory safety of the Linux deadline IO scheduler and AFS server.

@article{LeeYP-FMSD12, author = {Oukseh Lee and Hongseok Yang and Rasmus Petersen}, title = {A divide-and-conquer approach for analysing overlaid data structures}, journal = {Formal Methods in System Design}, volume = {41}, number = {1}, year = {2012}, pages = {4-24} }

Journal of the ACM

The accurate and efficient treatment of mutable data structures is one of the outstanding problem areas in automatic program verification and analysis. Shape analysis is a form of program analysis that attempts to infer descriptions of the data structures in a program, and to prove that these structures are not misused or corrupted. It is one of the more challenging and expensive forms of program analysis, due to the complexity of aliasing and the need to look arbitrarily deeply into the program heap. This paper describes a method of boosting shape analyses by defining a compositional method, where each procedure is analyzed independently of its callers. The analysis algorithm uses a restricted fragment of separation logic, and assigns a collection of Hoare triples to each procedure; the triples provide an over-approximation of data structure usage. Our method brings the usual benefits of compositionality – increased potential to scale, ability to deal with incomplete programs, graceful way to deal with imprecision – to shape analysis, for the first time.

The analysis rests on a generalized form of abduction (inference of explanatory hypotheses) which we call bi-abduction. Bi-abduction displays abduction as a kind of inverse to the frame problem: it jointly infers anti-frames (missing portions of state) and frames (portions of state not touched by an operation), and is the basis of a new analysis algorithm. We have implemented our analysis and we report case studies on smaller programs to evaluate the quality of discovered specifications, and larger code bases (e.g. sendmail, an imap server, a Linux distribution) to illustrate the level of automation and scalability that we obtain from our compositional method. The paper makes a number of specific technical contributions on proof procedures and analysis algorithms, but in a sense its more important contribution is holistic: the explanation and demonstration of how a massive increase in automation is possible using abductive inference.
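In a toy model where symbolic heaps are just multisets of abstract resource atoms (a drastic simplification of the separation-logic fragment used in the paper; the predicate names are made up for illustration), the joint inference of anti-frame and frame amounts to multiset subtraction in the two directions:

```python
from collections import Counter

def biabduce(state, pre):
    """Toy bi-abduction over multisets of abstract heap atoms.

    Given the caller's symbolic state and a callee's precondition,
    infer an anti-frame M and a frame F such that, reading * as
    multiset union:  state * M = pre * F.
    """
    state, pre = Counter(state), Counter(pre)
    anti_frame = pre - state   # missing portions of state
    frame = state - pre        # portions not touched by the operation
    return anti_frame, frame

# Caller holds list(x) * tree(y); callee needs list(x) * lock(l).
anti, frame = biabduce(["list(x)", "tree(y)"], ["list(x)", "lock(l)"])
# The missing lock(l) is abduced as the anti-frame; tree(y) is framed off.
```

Real bi-abduction works over separation-logic formulae with entailment rather than plain multiset equality, but the frame/anti-frame symmetry is the same.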

@article{DBLP:journals/jacm/CalcagnoDOY11, author = {Cristiano Calcagno and Dino Distefano and Peter W. O'Hearn and Hongseok Yang}, title = {Compositional Shape Analysis by Means of Bi-Abduction}, journal = {J. ACM}, volume = {58}, number = {6}, year = {2011}, pages = {26} }

Journal of Logic and Computation

This paper formalises and compares two different styles of reasoning with inductively defined predicates, each style being encapsulated by a corresponding sequent calculus proof system.

The first system, LKID, supports traditional proof by induction, with induction rules formulated as rules for introducing inductively defined predicates on the left of sequents. We show LKID to be cut-free complete with respect to a natural class of Henkin models; the eliminability of cut follows as a corollary.

The second system, LKID^{ω}, uses infinite (non-well-founded) proofs to represent arguments by infinite descent. In this system, the left-introduction rules for inductively defined predicates are simple case-split rules, and an infinitary, global condition on proof trees is required in order to ensure soundness. We show LKID^{ω} to be cut-free complete with respect to standard models, and again infer the eliminability of cut.

The infinitary system LKID^{ω} is unsuitable for formal reasoning. However, it has a natural restriction to proofs given by regular trees, i.e. to those proofs representable by finite graphs, which is so suited. We demonstrate that this restricted “cyclic” system, CLKID^{ω}, subsumes LKID, and conjecture that CLKID^{ω} and LKID are in fact equivalent, i.e., that proof by induction is equivalent to regular proof by infinite descent.

@Article{Brotherston-Simpson:10, author = "James Brotherston and Alex Simpson", title = "Sequent Calculi for Induction and Infinite Descent", journal = "Journal of Logic and Computation", volume = 21, number = 6, year = 2011 }

LMCS 2011

Separation logic is a Hoare-style logic for reasoning about programs with heap-allocated mutable data structures. As a step toward extending separation logic to high-level languages with ML-style general (higher-order) storage, we investigate the compatibility of nested Hoare triples with several variations of higher-order frame rules.

The interaction of nested triples and frame rules can be subtle, and the inclusion of certain frame rules is in fact unsound. A particular combination of rules can be shown consistent by means of a Kripke model where worlds live in a recursively defined ultrametric space. The resulting logic allows us to elegantly prove programs involving stored code. In particular, using recursively defined assertions, it leads to natural specifications and proofs of invariants required for dealing with recursion through the store.

@Article(deepframe-LMCS11, Author = "Jan Schwinghammer and Lars Birkedal and Bernhard Reus and Hongseok Yang", Title = "Nested Hoare Triples and Frame Rule for Higher-order Store", Journal = "Logical Methods in Computer Science", Year = "2011", Volume = "7", Number = "3" )

ICFP 2011

Most major OS kernels today run on multiprocessor systems and are preemptive: it is possible for a process running in kernel mode to get descheduled. Existing modular techniques for verifying concurrent code are not directly applicable in this setting: they rely on scheduling being implemented correctly, and in a preemptive kernel, the correctness of the scheduler is interdependent with the correctness of the code it schedules. This interdependency is even stronger in mainstream kernels, such as Linux, FreeBSD or XNU, where the scheduler and processes interact in complex ways.

We propose the first logic that is able to decompose the verification of preemptive multiprocessor kernel code into verifying the scheduler and the rest of the kernel separately, even in the presence of complex interdependencies between the two components. The logic hides the manipulation of control by the scheduler when reasoning about preemptable code and soundly inherits proof rules from concurrent separation logic to verify it thread-modularly. This is achieved by establishing a novel form of refinement between an operational semantics of the real machine and an axiomatic semantics of OS processes, where the latter assumes an abstract machine with each process executing on a separate virtual CPU. The refinement is local in the sense that the logic focuses only on the relevant state of the kernel while verifying the scheduler. We illustrate the power of our logic by verifying an example scheduler, modelled on the one from Linux 2.6.11.

@InProceedings{scheduler-icfp11, author = "Alexey Gotsman and Hongseok Yang", title = "Modular verification of preemptive OS kernels", booktitle = "Proceedings of the 16th ACM SIGPLAN International Conference on Functional Programming", year = "2011", month = "September", pages = "404--417", publisher = "ACM", ee = "http://doi.acm.org/10.1145/2034773.2034827", isbn = "978-1-4503-0865-6", address = "Tokyo, Japan" }

SAS 2011

Abduction, the problem of discovering hypotheses that support a conclusion, has mainly been studied in the context of philosophical logic and Artificial Intelligence. Recently, it was used in a compositional program analysis based on separation logic that discovers (partial) pre/post specifications for un-annotated code which approximates memory requirements. Although promising practical results have been obtained, completeness issues and the computational hardness of the problem have not been studied. We consider a fragment of separation logic that is representative of applications in program analysis, and we study the complexity of searching for feasible solutions to abduction. We show that standard entailment is decidable in polynomial time, while abduction ranges from NP-complete to polynomial time for different subproblems.

@inproceedings{Gorogiannis2011b, author = {Nikos Gorogiannis and Max Kanovich and Peter W. O'Hearn}, title = {The Complexity of Abduction for Separated Heap Abstractions}, booktitle = {The 18th International Static Analysis Symposium, SAS 2011}, year = {2011}, pdf = {publications/2011-sas.pdf}, series = {Lecture Notes in Computer Science}, publisher = {Springer}, volume = {6887}, pages = {25--42}, doi = {10.1007/978-3-642-23702-7_7} }

CONCUR 2011

In 2004, Berdine, Calcagno and O’Hearn introduced a fragment of separation logic that allows for reasoning about programs with pointers and linked lists. They showed that entailment in this fragment is in coNP, but the precise complexity of this problem has remained open ever since. In this paper, we show that the problem can actually be solved in polynomial time. To this end, we represent separation logic formulae as graphs and show that every satisfiable formula is equivalent to one whose graph is in a particular normal form. Entailment between two such formulae then reduces to a graph homomorphism problem. We also discuss natural syntactic extensions that render entailment intractable.
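The combinatorial core of the reduction, homomorphism between small directed graphs, can be sketched by brute force (a toy illustration only: the paper's point is that its normal-form construction makes the whole entailment check polynomial, whereas the naive search below is exponential):

```python
from itertools import product

def homomorphism_exists(edges_b, edges_a):
    """Brute-force test for a homomorphism h from graph B to graph A:
    every edge x -> y of B must map to an edge h(x) -> h(y) of A."""
    nodes_b = sorted({v for e in edges_b for v in e})
    nodes_a = sorted({v for e in edges_a for v in e})
    # Try every assignment of B's nodes to A's nodes.
    for image in product(nodes_a, repeat=len(nodes_b)):
        h = dict(zip(nodes_b, image))
        if all((h[x], h[y]) in edges_a for (x, y) in edges_b):
            return True
    return False

# A two-edge path admits an image for a single edge, but offers no
# self-loop for a looping graph to map onto.
path = {("x", "y"), ("y", "z")}
```

In the paper, nodes and edges additionally carry labels derived from the normal form of the formulae; the bare structure above is just the skeleton of that check.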

@inproceedings{DBLP:conf/concur/CookHOPW11, author = {Byron Cook and Christoph Haase and Jo{\"e}l Ouaknine and Matthew J. Parkinson and James Worrell}, title = {Tractable Reasoning in a Fragment of Separation Logic}, booktitle = {22nd CONCUR, Springer LNCS 6901}, year = {2011}, pages = {235-249}, ee = {http://dx.doi.org/10.1007/978-3-642-23217-6_16}, crossref = {DBLP:conf/concur/2011}, bibsource = {DBLP, http://dblp.uni-trier.de} }

CONCUR 2011

This paper studies algebraic models for concurrency, in light of recent work on Concurrent Kleene Algebra and Separation Logic. It clarifies that there is a strong connection between the Concurrency and Frame Rules of Separation Logic and variants of the exchange law of Category Theory. The algebraic laws admit two standard models: one uses sets of traces, and the other is state-based, using assertions and weakest preconditions. We relate the latter to standard models of the heap as a partial function. We exploit the power of algebra to unify models and classify their variations.

@inproceedings{DBLP:conf/concur/HoareHMOPS11, author = {C. A. R. Hoare and Akbar Hussain and Bernhard M{\"o}ller and Peter W. O'Hearn and Rasmus Lerchedahl Petersen and Georg Struth}, title = {On Locality and the Exchange Law for Concurrent Processes}, booktitle = {22nd CONCUR, Springer LNCS 6901}, year = {2011}, pages = {250-264} }

CAV 2011

We call a data structure overlaid, if a node in the structure includes links for multiple data structures and these links are intended to be used at the same time. In this paper, we present a static program analysis for overlaid data structures. Our analysis implements two main ideas. The first is to run multiple sub-analyses that track information about non-overlaid data structures, such as lists. Each sub-analysis infers shape properties of only one component of an overlaid data structure, but the results of these sub-analyses are later combined to derive the desired safety properties about the whole overlaid data structure. The second idea is to control the communication among the sub-analyses using ghost states and ghost instructions. The purpose of this control is to achieve a high level of efficiency by allowing only necessary information to be transferred among sub-analyses and at as few program points as possible. Our analysis has been successfully applied to prove the memory safety of the Linux deadline IO scheduler and AFS server.

@InProceedings{multiview-cav11, author = "Oukseh Lee and Hongseok Yang and Rasmus Petersen", title = "Program Analysis for Overlaid Data Structures", booktitle = "Proceedings of the 23rd International Conference on Computer Aided Verification", volume = "6806", series = "Lecture Notes in Computer Science", year = "2011", month = "July", pages = "592--608", publisher = "Springer-Verlag", isbn = "978-3-642-22109-5", address = "Utah, USA" }

CAV 2011

We describe a reduction from temporal property verification to a program analysis problem. We produce an encoding which, with the use of recursion and nondeterminism, enables off-the-shelf program analysis tools to naturally perform the reasoning necessary for proving temporal properties (e.g. backtracking, eventuality checking, tree counterexamples for branching-time properties, abstraction refinement, etc.). Using examples drawn from the PostgreSQL database server, Apache web server, and Windows OS kernel, we demonstrate the practical viability of our work.

@inproceedings{DBLP:conf/cav/CookKV11, author = {Byron Cook and Eric Koskinen and Moshe Y. Vardi}, title = {Temporal Property Verification as a Program Analysis Task}, booktitle = {23rd CAV, Springer LNCS 6806}, year = {2011}, pages = {333-348}, ee = {http://dx.doi.org/10.1007/978-3-642-22110-1_26} }

TABLEAUX 2011

We give a general proof-theoretic method for proving Craig interpolation for displayable logics, based on an analysis of the individual proof rules of their display calculi. Using this uniform method, we prove interpolation for a spectrum of display calculi differing in their structural rules, including those for multiplicative linear logic, multiplicative additive linear logic and ordinary classical logic. Our analysis of proof rules also provides new insights into why interpolation fails, or seems likely to fail, in many substructural logics. Specifically, contraction appears particularly problematic for interpolation except in special circumstances.

@InProceedings{Brotherston-Gore:11, author = "James Brotherston and Rajeev Gor\'e", title = "Craig Interpolation in Displayable Logics", booktitle = "Proceedings of {TABLEAUX-20}", publisher = "Springer", series = "LNAI", pages = "88--103", year = 2011 }

ICALP 2011

Modern concurrent algorithms are usually encapsulated in libraries, and complex algorithms are often constructed using libraries of simpler ones. We present the first theorem that allows harnessing this structure to give compositional liveness proofs to concurrent algorithms and their clients. We show that, while proving a liveness property of a client using a concurrent library, we can soundly replace the library by another one related to the original library by a generalisation of a well-known notion of linearizability. We apply this result to show formally that lock-freedom, an often-used liveness property of non-blocking algorithms, is compositional for linearizable libraries, and provide an example illustrating our proof technique.

@InProceedings{livlinear-icalp11, author = "Alexey Gotsman and Hongseok Yang", title = "Liveness-preserving Atomicity Abstraction", booktitle = "Proceedings of the 38th International Colloquium on Automata, Languages and Programming", volume = "6756", series = "Lecture Notes in Computer Science", year = "2011", month = "July", pages = "453--465", publisher = "Springer-Verlag", isbn = "978-3-642-22011-1", address = "Z{\"u}rich, Switzerland" }

Theoretical Computer Science, 2011

The typical AI problem is that of making a plan of the actions to be performed by a controller so that it can reach one of a set of *final* situations, starting from a given *initial* situation.

The plans, and related winning strategies, happen to be finite in the case of a finite number of states and a finite number of *instant* actions.

The situation becomes much more complex when we deal with planning under *temporal uncertainty* caused by actions with *delayed effects*.

Here we introduce a tree-based formalism to express plans, or winning strategies, in finite state systems in which actions may have *quantitatively delayed effects*. Since the delays are non-deterministic and continuous, we need an infinite branching to display all possible delays. Nevertheless, under reasonable assumptions, we show that infinite winning strategies which may arise in this context can be captured by finite plans.

The above planning problem is specified in logical terms within a Horn fragment of affine logic. Among other things, the advantage of the linear logic approach is that we can easily capture ‘preemptive/anticipative’ plans (in which a new action *β* may be taken at some moment within the running time of an action *α* being carried out, in order to be prepared before completion of action *α*).

In this paper we propose a comprehensive and adequate logical model of strong planning under temporal uncertainty which addresses infinity concerns. In particular, we establish a direct correspondence between linear logic proofs and plans, or winning strategies, for the actions with quantitative delayed effects.

@article{DBLP:journals/tcs/KanovichV11, author = {Max I. Kanovich and Jacqueline Vauzeilles}, title = {Linear logic as a tool for planning under temporal uncertainty}, journal = {Theor. Comput. Sci.}, volume = {412}, number = {20}, year = {2011}, pages = {2072-2092}, ee = {http://dx.doi.org/10.1016/j.tcs.2010.12.027}, bibsource = {DBLP, http://dblp.uni-trier.de} }

Journal of Automated Reasoning

Collaboration among organizations or individuals is common. While these participants are often unwilling to share all their information with each other, some information sharing is unavoidable when achieving a common goal. The need to share information and the desire to keep it confidential are two competing notions which affect the outcome of a collaboration. This paper proposes a formal model of collaboration which addresses confidentiality concerns. We draw on the notion of a plan which originates in the AI literature. We use data confidentiality policies to assess confidentiality in transition systems whose actions have an equal number of predicates in their pre- and post-conditions. Under two natural notions of policy compliance, we show that it is PSPACE-complete to schedule a plan leading from a given initial state to a desired goal state while simultaneously deciding compliance with respect to the agents’ policies.

@article{DBLP:journals/jar/KanovichRS11, author = {Max I. Kanovich and Paul Rowe and Andre Scedrov}, title = {Collaborative Planning with Confidentiality}, journal = {J. Autom. Reasoning}, volume = {46}, number = {3-4}, year = {2011}, pages = {389-421}, ee = {http://dx.doi.org/10.1007/s10817-010-9190-1}, bibsource = {DBLP, http://dblp.uni-trier.de} }

POPL 2011

Over the last decade, there has been extensive research on modelling challenging features in programming languages and program logics, such as higher-order store and storable resource invariants. A recent line of work has identified a common solution to some of these challenges: Kripke models over worlds that are recursively defined in a category of metric spaces. In this paper, we broaden the scope of this technique from the original domain-theoretic setting to an elementary, operational one based on step indexing. The resulting method is widely applicable and leads to simple, succinct models of complicated language features, as we demonstrate in our semantics of Chargueraud and Pottier’s type-and-capability system for an ML-like higher-order language. Moreover, the method provides a high-level understanding of the essence of recent approaches based on step indexing.

@InProceedings{recursiveworlds-popl11, Author = "Lars Birkedal and Bernhard Reus and Jan Schwinghammer and Kristian St{\o}vring and Jacob Thamsborg and Hongseok Yang", Title = "Step-Indexed Kripke Models over Recursive Worlds", Booktitle = "Proceedings of the 38th ACM Symposium on Principles of Programming Languages", Address = "Austin, USA", Publisher = "ACM", Month = "January", Year = "2011" }

POPL 2011

We describe a new algorithm for proving temporal properties expressed in LTL of infinite-state programs. Our
approach takes advantage of the fact that LTL properties can often be proved more efficiently using techniques
usually associated with the branching-time logic CTL than they can with native LTL algorithms. The caveat is that,
in certain instances, nondeterminism in the system’s transition relation can cause CTL methods to report
counterexamples that are spurious with respect to the original LTL formula. To address this problem we describe an
algorithm that, as it attempts to apply CTL proof methods, finds and then removes problematic nondeterminism via an
analysis on the potentially spurious counterexamples. Problematic nondeterminism is characterized using *decision
predicates*, and removed using a partial, symbolic determinization procedure which introduces new prophecy variables
to predict the future outcome of these choices. We demonstrate—using examples taken from the PostgreSQL database
server, Apache web server, and Windows OS kernel—that our method can yield enormous performance improvements in
comparison to known tools, allowing us to automatically prove properties of programs where we could not prove them
before.

@inproceedings{CookK11, author = {Byron Cook and Eric Koskinen}, title = {Making prophecies with decision predicates}, pages = {399-410}, Booktitle = "Proceedings of the 38th ACM Symposium on Principles of Programming Languages", Address = "Austin, USA", Publisher = "ACM", Month = "January", Year = "2011" }

VMCAI 2011

We describe an efficient procedure for proving stabilization of biological systems modeled as qualitative networks or genetic regulatory networks. For scalability, our procedure uses modular proof techniques, where state-space exploration is applied only locally to small pieces of the system rather than the entire system as a whole. Our procedure exploits the observation that, in practice, the form of modular proofs can be restricted to a very limited set. For completeness, our technique falls back on a non-compositional counterexample search. Using our new procedure, we have solved a number of challenging published examples, including: a 3-D model of the mammalian epidermis; a model of metabolic networks operating in type-2 diabetes; a model of fate determination of vulval precursor cells in the C. elegans worm; and a model of pair-rule regulation during segmentation in the Drosophila embryo. Our results show many orders of magnitude speedup in cases where previous stabilization proving techniques were known to succeed, and new results in cases where tools had previously failed.

@inproceedings{DBLP:conf/vmcai/CookFKP11, author = {Byron Cook and Jasmin Fisher and Elzbieta Krepska and Nir Piterman}, title = {Proving Stabilization of Biological Systems}, booktitle = {12th VMCAI, Springer LNCS 6538}, year = {2011}, pages = {134-149} }

FAST 2010

This paper extends existing models for collaborative systems. We investigate how much damage can be done by insiders alone, without collusion with an outside adversary. In contrast to traditional intruder models, such as in protocol security, all the players inside our system, including potential adversaries, have similar capabilities. They have bounded storage capacity, that is, they can only remember at any moment a bounded number of facts. This is technically imposed by only allowing balanced actions, that is, actions that have the same number of facts in their pre and post conditions. On the other hand, the adversaries inside our system have many capabilities of the standard Dolev-Yao intruder, namely, they are able, within their bounded storage capacity, to compose, decompose, overhear, and intercept messages as well as update values with fresh ones. We investigate the complexity of the decision problem of whether or not an adversary is able to discover secret data. We show that this problem is PSPACE-complete when all actions are balanced and can update values with fresh ones. As an application we turn to security protocol analysis and demonstrate that many protocol anomalies, such as the Lowe anomaly in the Needham-Schroeder public key exchange protocol, can also occur when the intruder is one of the insiders with bounded memory.

@inproceedings{DBLP:conf/ifip1-7/KanovichKNS10, author = {Max I. Kanovich and Tajana Ban Kirigin and Vivek Nigam and Andre Scedrov}, title = {Bounded Memory Dolev-Yao Adversaries in Collaborative Systems}, booktitle = {Formal Aspects in Security and Trust}, year = {2010}, pages = {18-33}, ee = {http://dx.doi.org/10.1007/978-3-642-19751-2_2}, crossref = {DBLP:conf/ifip1-7/2010}, bibsource = {DBLP, http://dblp.uni-trier.de} } @proceedings{DBLP:conf/ifip1-7/2010, editor = {Pierpaolo Degano and Sandro Etalle and Joshua D. Guttman}, title = {Formal Aspects of Security and Trust - 7th International Workshop, FAST 2010, Pisa, Italy, September 16-17, 2010. Revised Selected Papers}, booktitle = {Formal Aspects in Security and Trust}, publisher = {Springer}, series = {Lecture Notes in Computer Science}, volume = {6561}, year = {2011}, isbn = {978-3-642-19750-5}, ee = {http://dx.doi.org/10.1007/978-3-642-19751-2}, bibsource = {DBLP, http://dblp.uni-trier.de} }

Theoretical Computer Science, 2010

Concurrent data structures are usually designed to satisfy correctness conditions such as sequential consistency or
linearizability. In this paper, we consider the following fundamental question: what guarantees are provided by these
conditions for client programs? We formally show that these conditions can be *characterized* in terms of
observational refinement. Our study also provides a new understanding of sequential consistency and linearizability
in terms of abstraction of dependency between computation steps of client programs.

@article{FilipovicORY:tcs10, author = {Ivana Filipovic and Peter W. O'Hearn and Noam Rinetzky and Hongseok Yang}, title = {Abstraction for concurrent objects}, journal = {Theor. Comput. Sci.}, volume = {411}, number = {51-52}, year = {2010}, pages = {4379-4398}, ee = {http://dx.doi.org/10.1016/j.tcs.2010.09.021} }

APLAS 2010

We present a framework for defining abstract interpreters for liveness properties, in particular program termination. The framework makes use of the theory of metric spaces to define a concrete semantics, relates this semantics with the usual order-theoretic semantics of abstract interpretation, and identifies a set of conditions for determining when an abstract interpreter is sound for analysing liveness properties. Our soundness proof of the framework is based on a novel relationship between unique fixpoints in metric semantics and post-fixpoints computed by abstract interpreters. We illustrate the power of the framework by providing an instance that can automatically prove the termination of programs with general (not necessarily tail) recursion.

@inproceedings{ChawdharyYang:APLAS10, author = {Aziem Chawdhary and Hongseok Yang}, title = {Metric Spaces and Termination Analyses}, booktitle = {APLAS}, year = {2010}, pages = {156-171}, ee = {http://dx.doi.org/10.1007/978-3-642-17164-2_12} }

FSE 2010

Statement *st* transitively depends on statement *st*_{seed} if the execution of *st*_{seed} may affect the execution of *st*. Computing transitive program dependences is a fundamental operation in many automatic software analysis tools. Existing tools find it challenging to compute transitive dependences for programs manipulating large aggregate structure variables, and their limitations adversely affect analysis of certain important classes of software systems, e.g., large-scale *enterprise resource planning (ERP)* systems.

This paper presents an efficient conservative interprocedural static analysis algorithm for computing field-sensitive transitive program dependences in the presence of large aggregate structure variables. Our key insight is that program dependences coming from operations on whole substructures can be precisely (i.e., field-sensitively) represented at the granularity of substructures instead of individual fields. Technically, we adapt the interval domain to concisely record dependences between *multiple* pairs of fields of aggregate structure variables by exploiting the fields’ spatial arrangement.

We *prove* that our algorithm is as precise as any algorithm which works at the granularity of individual fields, the most-precise known approach for this problem. Our empirical study, in which we analyzed industrial *ERP* programs with over 100,000 lines of code on average, shows significant improvements in both the running times and memory consumption over existing approaches: The baseline is an efficient field-insensitive *whole-structure* analysis that incurs a 62% false error rate. An *atomization*-based algorithm, which disassembles every aggregate structure variable into the collection of its individual fields, can remove all these false errors at the cost of doubling the average analysis time, from 30 to 60 minutes. In contrast, our new precise algorithm removes all false errors by increasing the time only to 35 minutes. In terms of memory consumption, our algorithm increases the footprint by less than 10%, compared to 50% overhead of the atomizing algorithm.

@INPROCEEDINGS{FSE:LDBRS10, Title="Field-Sensitive Program Dependence Analysis", Author= "S. Litvak and N. Dor and R. Bodik and N. Rinetzky and M. Sagiv", booktitle="ACM SIGSOFT 18th International Symposium on the Foundations of Software Engineering (FSE)", year = {2010}, pages = {287--296}, publisher = {ACM}, }

Formal Aspects of Computing, 2010

Data refinement is a common approach to reasoning about programs, based on establishing that a concrete program indeed satisfies all the required properties imposed by an intended abstract pattern. Reasoning about programs in this setting becomes complex when use of pointers is assumed and, moreover, a well-known method for proving data refinement, namely the forward simulation method, becomes unsound in the presence of pointers. The reason for unsoundness is the failure of the “lifting theorem” for simulations: that a simulation between abstract and concrete modules can be lifted to all client programs. The result is that simulation does not imply that a concrete module can replace an abstract module in all contexts.

Our diagnosis of this problem is that unsoundness is due to interference from the client programs. Rather than blaming a module for the unsoundness of lifting simulations, our analysis places the blame on the client programs which cause the interference: when interference is not present, soundness is recovered. Technically, we present a novel instrumented semantics which is capable of detecting interference between a module and its client. With use of special simulation relations, namely growing relations, and interpreting the simulation method using the instrumented semantics, we obtain a lifting theorem. We then show situations under which simulation does indeed imply refinement.

@article{FilipovicOTY:fac10, author = {Ivana Filipovic and Peter W. O'Hearn and Noah Torp-Smith and Hongseok Yang}, title = {Blaming the client: on data refinement in the presence of pointers}, journal = {Formal Asp. Comput.}, volume = {22}, number = {5}, year = {2010}, pages = {547-583}, ee = {http://dx.doi.org/10.1007/s00165-009-0125-8} }

VSTTE 2010

Local reasoning has become a well-established technique in program verification, which has been shown to be useful at many different levels of abstraction. In separation logic, we use a low-level abstraction that is close to how the machine sees the program state. In context logic, we work with high-level abstractions that are close to how the clients of modules see the program state. We apply program refinement to local reasoning, demonstrating that high-level local reasoning is sound for module implementations. We consider two approaches: one that preserves the high-level locality at the low level; and one that breaks the high-level “fiction” of locality.

@INPROCEEDINGS{DGW10, author = {Dinsdale-Young, Thomas and Gardner, Philippa and Wheelhouse, Mark}, title = {Abstraction and Refinement for Local Reasoning}, booktitle = {Proceedings of the Third international conference on Verified software: theories, tools, experiments}, year = {2010}, pages = {199--215}, address = {Berlin, Heidelberg}, publisher = {Springer-Verlag} }

PODC 2010

We present a proof of safety and linearizability of a highly-concurrent optimistic set algorithm. The key step in our proof is the Hindsight Lemma, which allows a thread to infer the existence of a global state in which its operation can be linearized based on limited local atomic observations about the shared state. The Hindsight Lemma allows us to avoid one of the most complex and non-intuitive steps in reasoning about highly concurrent algorithms: considering the linearization point of an operation to be in a different thread than the one executing it.

The Hindsight Lemma assumes that the algorithm maintains certain simple invariants which are resilient to interference, and which can themselves be verified using purely thread-local proofs. As a consequence, the lemma allows us to unlock a perhaps-surprising intuition: a high degree of interference makes non-trivial highly-concurrent algorithms in some cases much easier to verify than less concurrent ones.

@INPROCEEDINGS{ORV+10, author = {O'Hearn, Peter W. and Rinetzky, Noam and Vechev, Martin T. and Yahav, Eran and Yorsh, Greta}, title = {Verifying Linearizability with Hindsight}, booktitle = {Proceedings of the 29th ACM SIGACT-SIGOPS symposium on Principles of Distributed Computing}, year = {2010}, pages = {85--94}, address = {New York, NY, USA}, publisher = {ACM} }

Logical Methods in Computer Science, 2010

We present *Classical* BI (CBI), a new addition to the family of *bunched logics*
which originates in O’Hearn and Pym’s logic of bunched implications BI. CBI differs from
existing bunched logics in that its multiplicative connectives behave classically rather than
intuitionistically (including in particular a multiplicative version of classical negation). At
the semantic level, CBI-formulas have the normal bunched logic reading as declarative
statements about resources, but its resource models necessarily feature more structure
than those for other bunched logics; principally, they satisfy the requirement that every
resource has a unique dual. At the proof-theoretic level, a very natural formalism for CBI
is provided by a display calculus *à la* Belnap, which can be seen as a generalisation of the
bunched sequent calculus for BI. In this paper we formulate the aforementioned model
theory and proof theory for CBI, and prove some fundamental results about the logic,
most notably completeness of the proof theory with respect to the semantics.

@Article{Brotherston-Calcagno:10, author = "James Brotherston and Cristiano Calcagno", title = "Classical {BI}: {I}ts Semantics and Proof Theory", journal = "Logical Methods in Computer Science", volume = 6, number = 3, year = 2010 }

LICS 2010

Separation logic has proven an effective formalism for the analysis of memory-manipulating programs.

We show that the purely propositional fragment of separation
logic is undecidable. In fact, for *any* choice of concrete
heap-like model of separation logic, validity in that model
remains undecidable. Besides its intrinsic technical interest,
this result also provides new insights into the nature of
decidable fragments of separation logic.

In addition, we show that a number of propositional systems which approximate separation logic are undecidable as well. In particular, these include both Boolean BI and Classical BI.

All of our undecidability results are obtained by means of a single direct encoding of Minsky machines.

@InProceedings{Brotherston-Kanovich:10, author = "James Brotherston and Max Kanovich", title = "Undecidability of propositional separation logic and its neighbours", booktitle = "Proceedings of {LICS-25}", pages = "137--146", year = 2010 }

TOOLS Europe 2010

Modern object-oriented languages support higher-order implementations through function objects such as delegates in C#, agents in Eiffel, or closures in Scala. Function objects bring a new level of abstraction to the object-oriented programming model, and require a comparable extension to specification and verification techniques. We introduce a verification methodology that extends function objects with auxiliary side-effect free (pure) methods to model logical artifacts: preconditions, postconditions and modifies clauses. These pure methods can be used to specify client code abstractly, that is, independently from specific instantiations of the function objects. To demonstrate the feasibility of our approach, we have implemented an automatic prover, which verifies several non-trivial examples.

@INPROCEEDINGS{NCM+10, author = {Martin Nordio and Cristiano Calcagno and Bertrand Meyer and Peter M{\"u}ller and Julian Tschannen}, title = {Reasoning about Function Objects}, booktitle = {Proceedings of the 48th international conference on Objects, Models, Components and Patterns}, year = {2010}, pages = {79--96}, address = {Berlin, Heidelberg}, publisher = {Springer-Verlag} }

ECOOP 2010

Abstraction is key to understanding and reasoning about large computer systems. Abstraction is simple to achieve
if the relevant data structures are disjoint, but rather difficult when they are partially shared, as is often the
case for concurrent modules. We present a program logic for reasoning abstractly about data structures that
provides a fiction of disjointness and permits compositional reasoning. The internal details of a module are
completely hidden from the client by *concurrent abstract predicates*. We reason about a module’s
implementation using separation logic with permissions, and provide abstract specifications for use by client
programs using concurrent abstract predicates. We illustrate our abstract reasoning by building two
implementations of a lock module on top of hardware instructions, and two implementations of a concurrent set
module on top of the lock module.

@INPROCEEDINGS{DDG+10, author = {Dinsdale-Young, Thomas and Dodds, Mike and Gardner, Philippa and Parkinson, Matthew J. and Vafeiadis, Viktor}, title = {Concurrent Abstract Predicates}, booktitle = {Proceedings of the 24th European conference on Object-oriented programming}, year = {2010}, pages = {504--528}, address = {Berlin, Heidelberg}, publisher = {Springer-Verlag} }

ECOOP 2010

Specifications of Object-Oriented programs conventionally
employ Boolean expressions of the programming language for assertions.
Programming errors can be discovered by checking at runtime whether
an assertion, such as a precondition or class invariant, holds. In this work,
we show how separation logic can be used to verify that these executable
specifications will always hold at runtime. Both the program and its executable
assertions are verified with respect to separation logic specifications.
A novel notion called *relative purity* embraces historically problematic
side-effects in executable specifications, and verification boils down
to proving *connecting implications*. Even model-based specifications can
be verified. The framework is also well-suited to separation logic proof
tools and now implemented in jStar. Numerous automatically verified
examples illustrate the framework’s use and utility.

@INPROCEEDINGS{vSCM10, author = {Stephan van Staden and Cristiano Calcagno and Bertrand Meyer}, title = {Verifying Executable Object-Oriented Specifications with Separation Logic}, booktitle = {Proceedings of the 24th European conference on Object-oriented programming}, year = {2010}, pages = {151--174}, address = {Berlin, Heidelberg}, publisher = {Springer-Verlag} }

MFPS 2010

We formulate a unified display calculus proof theory for the four principal varieties of bunched logic by combining display calculi for their component logics. Our calculi satisfy cut-elimination, and are sound and complete with respect to their standard presentations. We show that the standard sequent calculus for BI can be seen as a reformulation of its display calculus, and argue that analogous sequent calculi for the other varieties of bunched logic seem very unlikely to exist.

@InProceedings{Brotherston:10, author = "James Brotherston", title = "A Unified Display Proof Theory for Bunched Logic", booktitle = "Proceedings of {MFPS-26}", pages = "197--211", series = "ENTCS", volume = 265, publisher = "Elsevier B.V.", year = 2010 }

ESOP 2010

We propose a new formalisation of stability for Rely-Guarantee, in which an assertion’s stability is encoded into its syntactic form. This allows two advances in modular reasoning. Firstly, it enables Rely-Guarantee, for the first time, to verify concurrent libraries independently of their clients’ environments. Secondly, in a sequential setting, it allows a module’s internal interference to be hidden while verifying its clients. We demonstrate our approach by verifying, using RGSep, the Version 7 Unix memory manager, uncovering a twenty-year-old bug in the process.

@inproceedings{expstab:esop10, author = {John Wickerson and Mike Dodds and Matthew Parkinson}, title = {Explicit {S}tabilisation for {M}odular {R}ely-{G}uarantee {R}easoning}, booktitle = {19th European Symposium on Programming (ESOP 2010), Paphos, Cyprus}, editor = {Andrew D. Gordon}, pages = {611-630}, year = {2010}, month = {mar}, series = {Lecture Notes in Computer Science}, volume = {6012}, publisher = {Springer-Verlag} }

FASE 2010

This paper describes a compositional analysis algorithm for statically detecting leaks in Java programs. The algorithm is based on separation logic and exploits the concept of bi-abductive inference for identifying the objects which are reachable but no longer used by the program.

@INPROCEEDINGS{DF10, author = {Dino Distefano and Ivana Filipovi{\'c}}, title = {Memory Leaks Detection in Java by Bi-abductive Inference}, booktitle = {Proceedings of the 13th international conference on Fundamental Approaches to Software Engineering}, year = {2010}, pages = {278--292}, address = {Berlin, Heidelberg}, publisher = {Springer-Verlag} }

TACAS 2010

Ranking function synthesis is a key aspect to the success of modern termination provers for imperative programs. While it is well-known how to generate linear ranking functions for relations over (mathematical) integers or rationals, efficient synthesis of ranking functions for machine-level integers (bit-vectors) is an open problem. This is particularly relevant for the verification of low-level code. We propose several novel algorithms to generate ranking functions for relations over machine integers: a complete method based on a reduction to Presburger arithmetic, and a template-matching approach for predefined classes of ranking functions based on reduction to SAT- and QBF-solving. The utility of our algorithms is demonstrated on examples drawn from Windows device drivers.

@inproceedings{CookKRW10, author = {Byron Cook and Daniel Kroening and Philipp R{\"u}mmer and Christoph M. Wintersteiger}, title = {Ranking Function Synthesis for Bit-Vector Relations}, booktitle = {Tools and Algorithms for the Construction and Analysis of Systems, 16th International Conference, TACAS 2010, Held as Part of the Joint European Conferences on Theory and Practice of Software, ETAPS 2010, Paphos, Cyprus, March 2010. Springer LNCS 6015}, year = {2010}, pages = {236-250}, ee = {http://dx.doi.org/10.1007/978-3-642-12002-2_19} }

TACAS 2010

Heap-Hop is a program prover for concurrent heap-manipulating programs
that use Hoare monitors and message-passing synchronization. Programs
are annotated with pre and post-conditions and loop invariants, written in a fragment
of separation logic. Communications are governed by a form of session
types called *contracts*. Heap-Hop can prove safety and race-freedom and, thanks
to contracts, absence of memory leaks and deadlock-freedom. It has been used
in several case studies, including concurrent programs for copyless list transfer,
service provider protocols, and load-balancing parallel tree disposal.

@INPROCEEDINGS{VLC10, author = {Jules Villard and {\'E}tienne Lozes and Cristiano Calcagno}, title = {Tracking Heaps That Hop with Heap-Hop}, booktitle = {Proceedings of the 16th international conference on Tools and Algorithms for the Construction and Analysis of Systems}, year = {2010}, pages = {275--279}, address = {Berlin, Heidelberg}, publisher = {Springer-Verlag} }

FoSSaCS 2010

We present the first complete soundness proof of the antiframe rule, a recently proposed proof rule for capturing information hiding in the presence of higher-order store. Our proof involves solving a non-trivial recursive domain equation, and it helps identify some of the key ingredients for soundness.

@INPROCEEDINGS{SYB+10, author = {Jan Schwinghammer and Hongseok Yang and Lars Birkedal and Fran\c{c}ois Pottier and Bernhard Reus}, title = {A Semantic Foundation for Hidden State}, booktitle = {Proceedings of the 13th international conference on Foundations of Software Science and Computational Structures}, year = {2010}, pages = {2--17}, address = {Berlin, Heidelberg}, publisher = {Springer-Verlag} }

POPL 2010

Serializability is a commonly used correctness condition in concurrent programming. When a concurrent module is serializable, certain other properties of the module can be verified by considering only its sequential executions. In many cases, concurrent modules guarantee serializability by using standard locking protocols, such as tree locking or two-phase locking. Unfortunately, according to the existing literature, verifying that a concurrent module adheres to these protocols requires considering concurrent interleavings.

In this paper, we show that adherence to a large class of locking protocols (including tree locking and two-phase locking) can be verified by considering only sequential executions. The main consequence of our results is that in many cases, the (manual or automatic) verification of serializability can itself be done using sequential reasoning.
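As a concrete instance of checking a locking protocol on a single sequential execution, the sketch below tests a transaction's trace for two-phase locking: every acquire must precede the first release. The event encoding is invented for illustration.

```python
# Illustrative sketch: two-phase locking on one (sequential) trace.
# A trace is a list of ("acquire" | "release", lock_name) events.

def is_two_phase(trace):
    """True iff no lock is acquired after any lock has been released,
    i.e. a growing phase followed by a shrinking phase."""
    released = False
    for op, _lock in trace:
        if op == "release":
            released = True
        elif op == "acquire" and released:
            return False       # growing phase resumed after shrinking began
    return True

print(is_two_phase([("acquire", "a"), ("acquire", "b"),
                    ("release", "a"), ("release", "b")]))   # True
print(is_two_phase([("acquire", "a"), ("release", "a"),
                    ("acquire", "b")]))                     # False
```

Note that this check inspects one thread's trace in isolation, which is exactly the kind of sequential reasoning the paper shows is sufficient for a large class of protocols.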

@INPROCEEDINGS{POPL:ARR10, author = {H. Attiya and G. Ramalingam and N. Rinetzky}, title = {Sequential Verification of Serializability}, booktitle = {37th Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL)}, year = {2010}, pages = {31--42} }

POPL 2010

Traditional transactional memory systems suffer from overly conservative conflict detection, yielding so-called false conflicts, because they are based on fine-grained, low-level read/write conflicts. In response, the recent trend has been toward integrating various abstract data-type libraries using ad-hoc methods of high-level conflict detection. These proposals have improved performance, but the absence of a unified theory has led to confusion in the literature.

We clarify these recent proposals by defining a generalization of transactional memory in which a transaction consists of coarse-grained (abstract data-type) operations rather than simple memory read/write operations. We provide semantics for both pessimistic (e.g. transactional boosting) and optimistic (e.g. traditional TMs and recent alternatives) execution. We show that both are included in the standard atomic semantics, yet find that the choice imposes different requirements on the coarse-grained operations: pessimistic requires operations be left-movers, optimistic requires right-movers. Finally, we discuss how the semantics applies to numerous TM implementation details discussed widely in the literature.
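The reason coarse-grained conflict detection can be less conservative is that abstract operations may commute even when their low-level reads and writes conflict. The sketch below (invented names, not the paper's formalism) tests commutativity of two operations over a sample of states; operations that commute in both orders are movers in both directions and need not be flagged as conflicting.

```python
# Illustrative sketch: two abstract operations on a counter commute on a
# state if applying them in either order yields the same result. Increment
# and decrement commute (no high-level conflict) even though, at the memory
# level, both read and write the same location.

def commute(op1, op2, states):
    return all(op1(op2(s)) == op2(op1(s)) for s in states)

inc  = lambda s: s + 1          # counter increment
dec  = lambda s: s - 1          # counter decrement
zero = lambda s: 0              # reset: order relative to inc matters

states = range(-5, 6)           # small sample of counter states
print(commute(inc, dec, states))    # True: never a high-level conflict
print(commute(inc, zero, states))   # False: inc(zero(s)) != zero(inc(s))
```

A read/write-level detector would report a conflict between every pair of these operations; the commutativity check is what a coarse-grained TM exploits instead.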

@inproceedings{DBLP:conf/popl/KoskinenPH10, author = {Eric Koskinen and Matthew J. Parkinson and Maurice Herlihy}, title = {Coarse-grained transactions}, booktitle = {POPL}, year = {2010}, pages = {19-30}, ee = {http://doi.acm.org/10.1145/1706299.1706304}, crossref = {DBLP:conf/popl/2010} }

@proceedings{DBLP:conf/popl/2010, editor = {Manuel V. Hermenegildo and Jens Palsberg}, title = {Proceedings of the 37th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, POPL 2010, Madrid, Spain, January 17-23, 2010}, booktitle = {POPL}, publisher = {ACM}, year = {2010}, isbn = {978-1-60558-479-9} }

To request a paper be added to this list, please contact us.