Proving Confluence for Untyped Lambda Calculus II
Discussion of the basic idea of the Tait–Martin-Löf proof of confluence for untyped lambda calculus. Let me know any requests for what to discuss in Chapter 8!
Explore " metatheory " with insightful episodes like "Proving Confluence for Untyped Lambda Calculus II", "Proving Confluence for Untyped Lambda Calculus I", "Confluence, and its use for conversion checking", "Normalization and logical consistency" and "Normalization in type theory: where it is needed, and where not" from podcasts like ""Iowa Type Theory Commute", "Iowa Type Theory Commute", "Iowa Type Theory Commute", "Iowa Type Theory Commute" and "Iowa Type Theory Commute"" and more!
Discussion of the basic idea of the Tait--Martin-Loef proof of confluence for untyped lambda calculus. Let me know any requests for what to discuss in Chapter 8!
Start of discussion on how to prove confluence for untyped lambda calculus. Also some discussion about the research community interested in confluence.
The basic property of confluence of a nondeterministic reduction semantics: if from starting term t you can reach t1 and also t2 (by two finite reduction sequences), then there is some t3 to which t1 and t2 both reduce in a finite number of steps. The use of confluence for ensuring completeness of the conversion-checking algorithm that tests conversion of t1 and t2 by normalizing both terms and checking for alpha-equivalence (or maybe alpha,eta-equivalence).
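The conversion-checking algorithm described here can be sketched concretely. A minimal, hypothetical Python sketch (not code from the episode), using de Bruijn indices so that plain syntactic equality of normal forms coincides with alpha-equivalence:

```python
# Hypothetical sketch of conversion checking by normalization.
# Terms in de Bruijn notation: ("var", n) | ("lam", body) | ("app", f, a)

def shift(t, d, cutoff=0):
    """Shift the free variables of t (indices >= cutoff) by d."""
    if t[0] == "var":
        return ("var", t[1] + d) if t[1] >= cutoff else t
    if t[0] == "lam":
        return ("lam", shift(t[1], d, cutoff + 1))
    return ("app", shift(t[1], d, cutoff), shift(t[2], d, cutoff))

def subst(t, s, j=0):
    """Substitute s for variable j in t, adjusting indices (beta-substitution)."""
    if t[0] == "var":
        return s if t[1] == j else ("var", t[1] - 1) if t[1] > j else t
    if t[0] == "lam":
        return ("lam", subst(t[1], shift(s, 1), j + 1))
    return ("app", subst(t[1], s, j), subst(t[2], s, j))

def step(t):
    """One leftmost-outermost beta step; None if t is in normal form."""
    if t[0] == "app":
        f, a = t[1], t[2]
        if f[0] == "lam":                # beta-redex: (\x. b) a
            return subst(f[1], a)
        r = step(f)
        if r is not None:
            return ("app", r, a)
        r = step(a)
        if r is not None:
            return ("app", f, r)
    elif t[0] == "lam":
        r = step(t[1])
        if r is not None:
            return ("lam", r)
    return None

def normalize(t):
    """Reduce to normal form (may diverge on terms without one)."""
    while (r := step(t)) is not None:
        t = r
    return t

def convertible(t1, t2):
    # Soundness is easy; completeness is exactly where confluence is used:
    # beta-convertible normalizing terms must share a common normal form.
    return normalize(t1) == normalize(t2)
```

For example, `(\x. x) (\y. y)` and `\y. y` both normalize to the same de Bruijn term, so `convertible` accepts them. Since equal de Bruijn terms are exactly alpha-equivalent terms, no separate alpha-equivalence check is needed (eta is not handled in this sketch).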
Discussion of the connection between normalization and logical consistency. One approach is to prove normalization and type preservation, to show (in proof-theoretic terms) that all detours can be eliminated from proofs (this is normalization) and that the resulting proof still proves the same theorem (this is type preservation). I mention an alternative I use for Cedille, which is to use a realizability semantics (often used for normalization proofs) directly to prove consistency.
Normalization (every term reaches a normal form via some reduction sequence) is essential in type theory due to the Curry-Howard isomorphism: diverging programs become unsound proofs. Traditionally, type theorists have also desired normalization or even termination (every term reaches a normal form no matter what reduction sequence is explored in a nondeterministic operational semantics) for conversion checking. This is the process of confirming that types are equivalent during type checking, which, due to dependent types, can require checking program equivalence. The latter is usually restricted to just beta-equivalence (where beta-reduction is substitution of the argument for the input variable when applying a function), because richer notions of program equivalence are usually undecidable. I have a mini-rant in this episode explaining why this usual requirement of normalization for conversion checking is not sensible.
Also I note that you can find the episodes of the podcast organized by chapter on my web page.
Discussion of normalization (there is some way to reach a normal form) versus termination (no matter how you execute the term you reach a normal form). A little more discussion of strong FP. For type theory, the need for normalization due to Curry-Howard and due to conversion checking.
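The gap between normalization and termination shows up on a concrete term. A hypothetical sketch (assuming de Bruijn-indexed terms; not code from the episode): the term `(\x. \y. y) Omega`, where `Omega = (\x. x x) (\x. x x)`, reaches a normal form under leftmost-outermost (normal-order) reduction but diverges under a strategy that reduces arguments first, so it normalizes without terminating.

```python
# Hypothetical sketch: normalization vs. termination.
# Terms in de Bruijn notation: ("var", n) | ("lam", body) | ("app", f, a)

def shift(t, d, c=0):
    """Shift free variables (indices >= c) by d."""
    if t[0] == "var":
        return ("var", t[1] + d) if t[1] >= c else t
    if t[0] == "lam":
        return ("lam", shift(t[1], d, c + 1))
    return ("app", shift(t[1], d, c), shift(t[2], d, c))

def subst(t, s, j=0):
    """Substitute s for variable j in t (beta-substitution)."""
    if t[0] == "var":
        return s if t[1] == j else ("var", t[1] - 1) if t[1] > j else t
    if t[0] == "lam":
        return ("lam", subst(t[1], shift(s, 1), j + 1))
    return ("app", subst(t[1], s, j), subst(t[2], s, j))

def outermost(t):
    """Leftmost-outermost (normal-order) step; None at normal form."""
    if t[0] == "app":
        if t[1][0] == "lam":
            return subst(t[1][1], t[2])
        r = outermost(t[1])
        if r is not None:
            return ("app", r, t[2])
        r = outermost(t[2])
        if r is not None:
            return ("app", t[1], r)
    if t[0] == "lam":
        r = outermost(t[1])
        if r is not None:
            return ("lam", r)
    return None

def innermost(t):
    """Leftmost-innermost step (reduce arguments first); None at normal form."""
    if t[0] == "lam":
        r = innermost(t[1])
        return ("lam", r) if r is not None else None
    if t[0] == "app":
        r = innermost(t[1])
        if r is not None:
            return ("app", r, t[2])
        r = innermost(t[2])
        if r is not None:
            return ("app", t[1], r)
        if t[1][0] == "lam":
            return subst(t[1][1], t[2])
    return None

def reduce(t, step, fuel=100):
    """Iterate step; return the normal form, or None if fuel runs out."""
    for _ in range(fuel):
        r = step(t)
        if r is None:
            return t
        t = r
    return None

omega = ("lam", ("app", ("var", 0), ("var", 0)))
Omega = ("app", omega, omega)                 # (\x. x x) (\x. x x), diverges
discard = ("lam", ("lam", ("var", 0)))        # \x. \y. y, ignores x
term = ("app", discard, Omega)
```

Here `reduce(term, outermost)` finds the normal form `\y. y` in one step, while `reduce(term, innermost)` keeps rewriting `Omega` to itself and exhausts its fuel: the term is normalizing but not terminating.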
Type safety proofs are big confirmations requiring consideration of all your operational and typing rules. So they rarely contain much deep insight, but are needed to confirm your language's type system is correct. Looking ahead, this episode also talks about the difference between normalization and termination when your language is nondeterministic, and the property of confluence.
We review the metatheoretic property of type safety, decomposed into two properties called type preservation and progress. Discussion of progress in the context of type theory, where adding axioms can lead to a failure of progress.
Type safety is a basic property of both statically typed programming languages and type theories. It has traditionally (past few decades) been decomposed into type preservation and progress. Type preservation says that if a program expression e has some type T, then running e a bit will give a result that still has type T (and type preservation would apply again to that result, to preserve the type T indefinitely along the execution of e). Progress says that well-typed expressions cannot get stuck computationally: they cannot reduce to a form where the operational semantics is then undefined. This is how we model the idea that the type system is preventing certain kinds of failures: make those failures correspond to undefined behavior.
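Preservation and progress can be seen on a tiny language. A minimal hypothetical sketch (following the standard textbook arithmetic language of booleans and numerals, not code from the episode): a type checker, a small-step semantics, and a run that checks both properties at every step.

```python
# Hypothetical sketch of type preservation and progress for a tiny language.
# Terms: "true" | "false" | "zero" | ("succ", t) | ("pred", t)
#        | ("iszero", t) | ("if", c, a, b)

def typeof(t):
    """Return 'Bool' or 'Nat', or None if t is ill-typed."""
    if t in ("true", "false"):
        return "Bool"
    if t == "zero":
        return "Nat"
    if isinstance(t, tuple):
        if t[0] in ("succ", "pred"):
            return "Nat" if typeof(t[1]) == "Nat" else None
        if t[0] == "iszero":
            return "Bool" if typeof(t[1]) == "Nat" else None
        if t[0] == "if":
            ty = typeof(t[2])
            if typeof(t[1]) == "Bool" and ty is not None and ty == typeof(t[3]):
                return ty
    return None

def is_num(t):
    """Numeric values: zero, succ zero, succ (succ zero), ..."""
    return t == "zero" or (isinstance(t, tuple) and t[0] == "succ"
                           and is_num(t[1]))

def is_value(t):
    return t in ("true", "false") or is_num(t)

def step(t):
    """One small-step reduction; None if t is a value or stuck."""
    if not isinstance(t, tuple):
        return None
    if t[0] == "if":
        if t[1] == "true":
            return t[2]
        if t[1] == "false":
            return t[3]
        r = step(t[1])
        return ("if", r, t[2], t[3]) if r is not None else None
    if t[0] == "succ":
        r = step(t[1])
        return ("succ", r) if r is not None else None
    if t[0] == "pred":
        if t[1] == "zero":
            return "zero"
        if isinstance(t[1], tuple) and t[1][0] == "succ" and is_num(t[1][1]):
            return t[1][1]
        r = step(t[1])
        return ("pred", r) if r is not None else None
    if t[0] == "iszero":
        if t[1] == "zero":
            return "true"
        if isinstance(t[1], tuple) and t[1][0] == "succ" and is_num(t[1][1]):
            return "false"
        r = step(t[1])
        return ("iszero", r) if r is not None else None
    return None

# Run a well-typed term, checking preservation and progress at each step.
t = ("if", ("iszero", "zero"), ("succ", "zero"), "zero")
assert typeof(t) == "Nat"
while not is_value(t):
    r = step(t)
    assert r is not None            # progress: well-typed non-value steps
    assert typeof(r) == "Nat"       # preservation: the type is unchanged
    t = r
```

By contrast, the ill-typed term `("succ", "true")` is neither a value nor able to step: it is stuck, modeling a runtime failure as undefined behavior. Progress promises exactly that this cannot happen to well-typed terms.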
Metatheory is concerned with proving properties about theories, in this case type theories or programming languages.