Logic for Computable Functions

Logic for Computable Functions (LCF) is an interactive automated theorem prover developed at Stanford and Edinburgh by Robin Milner and collaborators in the early 1970s, based on the theoretical foundation of the logic of computable functions previously proposed by Dana Scott. Work on the LCF system introduced the general-purpose programming language ML, which allows users to write theorem-proving tactics and supports algebraic data types, parametric polymorphism, abstract data types, and exceptions.

Basic idea

Theorems in the system are terms of a special "theorem" abstract data type. ML's general mechanism of abstract data types ensures that theorems can be derived only through the inference rules, which are the operations of the theorem abstract type. Users can write arbitrarily complex ML programs to compute theorems; the validity of a theorem does not depend on the complexity of such programs, but follows from the soundness of the abstract data type implementation and the correctness of the ML compiler.
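
The following is a minimal sketch of this idea in OCaml, an ML-family language. The names (Kernel, thm, assume, intro, mp) and the toy implicational logic are purely illustrative and do not correspond to the API of any actual LCF system; the point is only that the module signature hides the representation of thm, so the sole way to obtain a theorem is to apply the exported inference rules.

    (* Formulas of a toy implicational logic. *)
    type formula =
      | Var of string                     (* propositional variable *)
      | Imp of formula * formula          (* implication *)

    (* The kernel: the signature hides the representation of [thm], so
       clients can build theorems only through the inference rules below. *)
    module Kernel : sig
      type thm                               (* abstract theorem type *)
      val assume : formula -> thm            (* A |- A *)
      val intro : formula -> thm -> thm      (* from G |- B infer G - {A} |- A ==> B *)
      val mp : thm -> thm -> thm             (* from |- A ==> B and |- A infer |- B *)
      val concl : thm -> formula
      val hyps : thm -> formula list
    end = struct
      (* Internally, a theorem is a sequent: hypotheses |- conclusion. *)
      type thm = { hyps : formula list; concl : formula }

      let assume a = { hyps = [ a ]; concl = a }

      let intro a th =
        { hyps = List.filter (fun h -> h <> a) th.hyps;
          concl = Imp (a, th.concl) }

      let mp th_imp th_arg =
        match th_imp.concl with
        | Imp (a, b) when a = th_arg.concl ->
            { hyps = th_imp.hyps @ th_arg.hyps; concl = b }
        | _ -> failwith "mp: rule does not apply"

      let concl th = th.concl
      let hyps th = th.hyps
    end

    (* Client code can only combine the exported rules, e.g. deriving |- A ==> A. *)
    let identity_theorem = Kernel.intro (Var "A") (Kernel.assume (Var "A"))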

Advantages

The LCF approach provides trustworthiness similar to that of systems which generate explicit proof certificates, but without the need to keep proof objects in memory. The theorem data type can easily be implemented so that proof objects are stored optionally, depending on the system's run-time configuration, so it generalizes the basic proof-generating approach. Because theorems are developed in a general-purpose programming language, the same language can be used to write step-by-step proofs, decision procedures, or complete theorem provers, depending on the complexity of the programs involved.
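
For example, continuing the hypothetical toy kernel sketched under "Basic idea" (again illustrative, not any real system's API), an ordinary recursive ML function can serve as a small proof-producing program: however its control flow is structured, every theorem it returns was built by the kernel's inference rules alone.

    (* A recursive ML function acting as a small proof-producing program.
       It returns the theorem |- A ==> A ==> ... ==> A with [n] antecedents
       by repeatedly invoking kernel rules, so whatever its control flow,
       the result is still certified by the kernel alone. *)
    let rec nested_implication (a : formula) (n : int) : Kernel.thm =
      if n <= 1 then Kernel.intro a (Kernel.assume a)
      else Kernel.intro a (nested_implication a (n - 1))

    (* For instance, [nested_implication (Var "A") 3] yields |- A ==> A ==> A ==> A. *)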

Disadvantages

Trusted computing base

The implementation of the underlying ML compiler adds to the trusted computing base. Work on CakeML[1] resulted in a formally verified ML compiler, alleviating some of these concerns.

Efficiency and complexity of proof procedures

Theorem proving often benefits from decision procedures and theorem-proving algorithms whose correctness has been extensively analyzed. A straightforward way of implementing these procedures in the LCF approach requires them always to derive outcomes from the axioms, lemmas, and inference rules of the system, as opposed to computing the outcome directly. A potentially more efficient alternative is to use reflection to prove, once and for all, that a function operating on formulas always gives a correct result.[2]
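
The trade-off can be illustrated with the hypothetical toy kernel from the sketches above (not the API of any real system): a direct check merely computes a boolean, whereas an LCF-style procedure must reconstruct the same fact through kernel inferences, one step per implication. A reflective approach would instead prove once that the direct check is sound and then accept its verdict.

    (* Direct computation (no evidence produced): decide whether [f] has
       the shape A ==> A ==> ... ==> A for the given [a]. *)
    let rec is_nested_self_imp (a : formula) (f : formula) : bool =
      match f with
      | Imp (x, rest) -> x = a && (rest = a || is_nested_self_imp a rest)
      | _ -> false

    (* LCF-style procedure: establish the same fact as a kernel theorem
       |- f, performing one inference step per implication, so its cost
       tracks the size of the derivation rather than of the check above. *)
    let rec prove_nested_self_imp (a : formula) (f : formula) : Kernel.thm =
      match f with
      | Imp (x, rest) when x = a ->
          if rest = a then Kernel.intro a (Kernel.assume a)
          else Kernel.intro a (prove_nested_self_imp a rest)
      | _ -> failwith "formula is not of the expected shape"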

Influences

Among subsequent implementations is Cambridge LCF. Later systems simplified the logic to use total instead of partial functions, leading to HOL, HOL Light, and the Isabelle proof assistant, which supports various logics. As of 2019, the Isabelle proof assistant still contains an implementation of an LCF logic, Isabelle/LCF.

Notes

  1. ^ "CakeML". Retrieved 2 November 2019.
  2. ^ Boyer, Robert S.; Moore, J Strother. Metafunctions: Proving Them Correct and Using Them Efficiently as New Proof Procedures (PDF) (Report). Technical Report CSL-108, SRI Projects 8527/4079. pp. 1–111. Archived (PDF) from the original on 2 November 2019. Retrieved 2 November 2019.
