Separation Logic II: recursive predicates
I discuss separation logic basics some more, as presented in the seminal paper by John C. Reynolds. An important idea is describing data structures using separating conjunction and recursive predicates.
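As a concrete instance of the idea, here is my rendering of the standard singly-linked-list predicate from Reynolds' paper: the empty sequence describes the empty heap, and a nonempty sequence describes a two-cell record separated (by `*`) from the rest of the list.

```latex
\begin{align*}
\mathrm{list}\,\epsilon\,(i) &\iff \mathbf{emp} \wedge i = \mathbf{nil} \\
\mathrm{list}\,(a \cdot \alpha)\,(i) &\iff \exists j.\; (i \mapsto a, j) \,*\, \mathrm{list}\,\alpha\,(j)
\end{align*}
```

The separating conjunction is what makes the recursion work: it guarantees each cell of the list occupies its own disjoint piece of the heap, so the list cannot be cyclic or share cells with its own tail.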
I discuss separation logic, as presented in this seminal paper by the great John C. Reynolds. I did not go very far into the topic, so please expect a follow-up episode.
In this episode, I talk briefly about Rust, which uses compile-time analysis to ensure that code is memory-safe (and also free of data races in concurrent code) without using a garbage collector. Fantastic! The language draws on, and richly develops, ideas about ownership that originated in academic research.
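To make the ownership idea concrete, here is a minimal sketch of my own (not from the episode): a function reads a vector through a shared borrow, so the caller keeps ownership, while a use-after-move is rejected at compile time.

```rust
// Borrowing: the function reads the slice without taking ownership.
fn sum_borrowed(v: &[i32]) -> i32 {
    v.iter().sum()
}

fn main() {
    let v = vec![1, 2, 3];
    let total = sum_borrowed(&v); // shared borrow; `v` is still usable after
    println!("total = {}, len = {}", total, v.len());

    let w = v; // ownership of the vector moves to `w` ...
    // println!("{:?}", v); // ... so this use of `v` would not compile
    println!("{:?}", w);
}
```

The point is that these checks happen entirely at compile time: the executable carries no reference counts or collector, yet the compiler has proven there is no use-after-free.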
I discuss the idea of statically typed region-based memory management, proposed by Tofte and Talpin. The idea is to allow programmers to declare explicitly the region from which to satisfy individual allocation requests. Regions are created in a statically scoped way, so that after execution leaves the body of the region-creation construct, the entire region may be deallocated. Type inference is used to make sure that no pointer into a region is dereferenced after that region has been deallocated. This journal paper about the idea is not easy reading, but it has a lot of good explanations in and around all the technicalities. Even better, I found, is this paper about region-based memory management in Cyclone. There are lots of intuitive explanations of the ideas, and not so much gory technicality.
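The scoping discipline can be mimicked in Rust terms (a loose analogy of my own, not Tofte and Talpin's calculus): a lexical scope plays the role of a region, everything allocated inside it is freed together when the scope ends, and the borrow checker rejects any pointer into the "region" that outlives it, much as the region type system rejects dangling dereferences.

```rust
// A lexical scope standing in for a region: everything allocated
// inside is deallocated together when the scope ends.
fn region_demo() -> i32 {
    let first;
    {
        let region: Vec<i32> = vec![10, 20, 30]; // "allocate in the region"
        let p: &i32 = &region[0];                // pointer into the region
        first = *p;                              // copy the value out in time
    } // the entire "region" is freed here, all at once
    // Using `p` here would be a dangling dereference; the compiler rejects
    // it statically, as region inference does in the Tofte-Talpin system.
    first
}

fn main() {
    println!("{}", region_demo());
}
```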
In this episode, I start a new chapter (we are up to Chapter 16, here), about verifying safe manual management of memory. I have personally gotten pretty interested in this topic, having seen through some simple experiments with Haskell how much time can go into garbage collection for seemingly simple benchmarks. I also talk about why verifying memory-usage properties of programs is challenging in proof assistants.
I laud the Metamath proof checker and its excellent book. I am also looking for suggestions on what to discuss next, as I am ready to wrap up this chapter on proof assistants.
In this episode I share my initial impressions -- very positive! -- of the Metamath system. Metamath lets you develop theorems from axioms that you state yourself. Typing and other syntactic requirements of axioms and theorems are also expressed axiomatically. The system exhibits an elegant, coherent vision for how such a tool should work, and was super easy to download and try out.
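For flavor, here is a tiny toy database of my own in Metamath's style (not from the episode), showing how even "x is a term" is just another hypothesis or axiom rather than built-in syntax:

```metamath
$c term 0 S $.               $( declare the constant symbols $)
$v x $.                      $( declare a variable $)
tx  $f term x $.             $( typing of x, stated as a hypothesis $)
t0  $a term 0 $.             $( axiom: 0 is a term $)
tS  $a term S x $.           $( axiom: the successor of a term is a term $)
tS0 $p term S 0 $= t0 tS $.  $( a theorem, with its two-step proof $)
```

The proof of `tS0` is just the two labels `t0 tS`: push "term 0", then apply `tS` with x substituted by 0. The verifier needs to know nothing about terms, numbers, or logic beyond this substitution mechanism.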
I discuss a book edited by Freek Wiedijk (pronounced "Frake Weedike"), which describes the solutions he received in response to a call for formalized proofs of the irrationality of the square root of 2. The book was published in 2006, and made an impression on me then. The provers we have discussed so far all have a solution in the book, except for Lean, which was created after that year. Happily, you can find a PDF of the book here, on Wiedijk's web site.
I talk about my positive experience trying out the tools for Lean, specifically the 'lean' executable and lean-mode in emacs.
In this episode, I talk about what I have learned so far about the Lean prover, especially from an excellent (if somewhat advanced) Master's thesis, "The Type Theory of Lean" by Mario Carneiro.
I talk about my attempts to use Isabelle as a newbie, and reflect a little on the complexity of both Isabelle and Coq.
The Isabelle theorem prover supports different logics, but its most developed logic seems to be higher-order logic (HOL). In this episode, I talk about the logic and approach of Isabelle/HOL, as far as I have understood them.
I talk a bit more about the Agda proof assistant.
In this episode I talk a bit about the Agda proof assistant.
I talk about a couple of good resources for learning Coq, the problem of having too many ways to do things in type theory, and the difficulty of explaining and documenting a very complex language.
I discuss Coq, a widely used proof assistant based on a constructive type theory. One episode definitely cannot do justice to the complexity of a tool like this -- but I take a first try at covering its features at a high level.
This is the start of Chapter 15, about interactive theorem provers (ITPs). In this episode, I talk about the difference between fully automatic and interactive provers, and my plan to discuss and compare several different ITPs, in future episodes of this chapter.
In this episode, I outline the argument for why the proof-theoretic ordinal of Peano Arithmetic (in the sense of Rathjen, as presented last episode) is epsilon-0. My explanation has something of a hole: how one would go about deriving induction for ordinals strictly below epsilon-0 within Peano Arithmetic. To help paper over this hole a little, I discuss a really nice recent exposition of encoding ordinals in Agda.
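The key fact behind such encodings is Cantor normal form: every nonzero ordinal below epsilon-0 can be written uniquely as

```latex
\alpha \;=\; \omega^{\beta_1}\cdot n_1 + \cdots + \omega^{\beta_k}\cdot n_k,
\qquad \beta_1 > \cdots > \beta_k, \quad n_i \ge 1,
```

and, crucially, when \(\alpha < \varepsilon_0\) each exponent \(\beta_i\) is strictly smaller than \(\alpha\). Iterating the decomposition therefore terminates, yielding a finite tree for each such ordinal, and hence a natural-number code that Peano Arithmetic can manipulate.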
Ordinal analysis seeks to determine the strength of a logical theory by assigning an ordinal to it. Which one? In this episode I describe a definition of the proof-theoretic ordinal of a logical theory from a paper by proof theorist Michael Rathjen. It is basically a measure of how strong an induction principle is derivable in the theory. (The first parts of the paper are pretty accessible, but the rest gets hard, at least for me.)
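In symbols, one common formulation of the definition (my paraphrase, hedging the details) takes the proof-theoretic ordinal of a theory T to be the supremum of the order types of the primitive recursive well-orderings that T proves well-founded:

```latex
|T| \;=\; \sup \bigl\{\, \mathrm{otype}(\prec) \;:\; {\prec} \text{ primitive recursive and } T \vdash \mathrm{WF}(\prec) \,\bigr\}
```

Proving \(\mathrm{WF}(\prec)\) amounts to deriving transfinite induction along \(\prec\), which is why this measures the strength of the induction principles available in the theory.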
Ordinal analysis is an important branch of proof theory, which seeks to compare, quantitatively, the strengths of different proof systems. The quantities in question are ordinals, which extend the ordering character of natural numbers into the infinite. In this episode, I discuss these ideas a bit further, and also review a little the ordinals up to epsilon-0.
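Concretely, epsilon-0 is the first fixed point of ordinal exponentiation with base omega, the limit of the tower of omega-powers:

```latex
\varepsilon_0 \;=\; \sup\,\{\,\omega,\; \omega^{\omega},\; \omega^{\omega^{\omega}},\; \dots\,\},
\qquad \text{equivalently, the least ordinal } \alpha \text{ with } \omega^{\alpha} = \alpha.
```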