1 Introduction
Hoare’s logic is a formalism allowing us to reason about program correctness. It was introduced fifty years ago in the seminal article [Hoa69] of Tony Hoare that focused on a small class of while programs, and was soon extended by him in [Hoa71a] to programs allowing local variables and recursive procedures. This approach became the most influential method of verifying programs, mainly because its syntax-oriented style made it possible to extend it to almost any type of program. Also, thanks to parallel developments in program semantics, this approach lends itself naturally to a rigorous analysis based on the methods of mathematical logic. Since then, several books have appeared that discuss Hoare’s logic, or at least have a chapter on it: [dB80, LS87, TZ88, Fra92, Win93, AFPdS11, Ben12], to name a few.
More than thirty years ago two surveys of Hoare’s logic appeared, [Apt81], concerned with deterministic programs, and [Apt84], concerned with nondeterministic programs. At the beginning of the nineties an extensive survey [Cou90] was published that also included an account of verification of parallel programs and a discussion of alternative approaches to program verification.
A systematic exposition of Hoare’s logics for deterministic, nondeterministic and parallel programs appeared in our book [AO91]. The last edition of it, [AdBO09], written jointly with F.S. de Boer, extended the presentation to recursive procedures and objectoriented programs. In this paper we occasionally rely on the material presented in this book, notably to structure the presentation, but we also analyze various matters omitted there, for example the issues concerning local variables, parameter mechanisms, auxiliary rules, the full power of Algol 60, and the problem of completeness. We also discuss various alternative approaches.
The literature on the subject is really vast. In particular, according to Google Scholar, the original article [Hoa69] has been cited more than 7000 times. This forced us to make some selection in the presented material. Some omissions, such as the treatment of the now hardly used goto statement or coroutines, or logical analysis of issues related to completeness, were dictated by our effort to trace and explain the developments that withstood the test of time.
Further, we did not introduce any program semantics. Consequently, we do not establish any soundness or completeness results. Instead, we focus on a systematic account of the established results combined with an explanation of the reasons some concepts were introduced, and on a discussion of some, occasionally subtle, ways Hoare’s logic differs from customary logics.
We begin the exposition by discussing in the next section the contributions to program verification by Alan Turing and Robert Floyd that preceded those of Hoare. Then, in Section 3, we discuss Hoare’s initial contributions that focused on the while programs and programs with recursive procedures, though we extend the exposition by an account of program termination. Next, we discuss in Section 4 the soundness and completeness of the discussed proof systems. An essential difference between Hoare’s logic and first-order logic has to do with the features specific to programming languages, such as subscripted variables, local variables, and parameter mechanisms. We discuss these matters in Section 5. This provides a natural starting point for an account of verification of programs with arbitrary procedures, notably procedures that allow procedures as parameters. This forms the subject of Section 6.
In Section 7 we discuss verification of nondeterministic programs, the corresponding issue of fairness, and verification of probabilistic programs. Then, in Section 8 we focus on the verification of parallel and distributed programs. Next, in Section 9, we provide an account of verification of objectoriented programs. The final two sections, 10 and 11, shed light on alternative approaches to program verification and attempt to explain and assess the impact of Hoare’s logic.
2 Precursors
2.1 Turing
The concern about correctness of computer programs is as old as computers themselves. In 1949, Alan Turing gave a presentation entitled “Checking a Large Routine” at a conference in Cambridge, U.K., on the occasion of the launch of the EDSAC (Electronic Delay Storage Automatic Calculator) computer, published as [Tur49]. F.L. Morris and C.B. Jones recovered [MJ84] the original typescript of Turing’s presentation and made it available to a wider audience, thereby correcting several typing errors.
Turing started by asking
“How can one check a routine in the sense of making sure that it is right?”
and proposed that
“… the programmer should make a number of definite assertions which can be checked individually, and from which the correctness of the whole programme easily follows.”
Turing demonstrated his ideas on a flowchart program with nested loops computing the factorial of a given natural number $n$, where multiplication is achieved by repeated addition; see Figure 1. Note that the effect of a command in the flowchart is represented by an equation in which a primed variable $v'$ denotes the value of the variable $v$ after the execution of the command. Today, this notation is still in use in logical representations of computation steps, for instance in the specification language Z (see, e.g., [Spi92]) and in bounded model checking.
Turing already referred to assertions. In the example he presented them in the form of a table referring to the numbers of the storage locations holding the variables; see Figure 2. From today’s viewpoint these assertions are admittedly very specific and difficult to read.
Turing was not only concerned with delivering correct values, but also with termination. He wrote
“Finally the checker has to verify that the process comes to an end. Here again he should be assisted by the programmer giving a further definite assertion to be verified. This may take the form of a quantity which is asserted to decrease continually and vanish when the machine stops.”
This refers already to the concept of a termination function. Turing stated a global termination function for the example program, i.e., an integer expression yielding a nonnegative value that decreases with every step of the program.
Summarizing, Turing introduced the notions of assertions and termination functions but did not state loop invariants and local termination functions for the two loops of the program. Still, as we explain in the Appendix, his approach can be represented within the framework of Hoare’s logic.
2.2 Floyd
Robert Floyd was the first to propose, in [Flo67], a fully formal method for proving the correctness of flowchart programs, known as the inductive assertions method. Here the assertions are logical formulas in terms of the variables appearing in the flowcharts. The beginning of the flowchart is annotated with an assertion stating the assumptions under which the flowchart is supposed to work. The end of the flowchart is annotated with an assertion specifying the desired result. To verify that these input-output annotations are correct, each loop of the flowchart needs to be cut and annotated with an assertion that should hold whenever the control reaches this cut point. The assertion should thus be an invariant at the cut point. Floyd stated rules for verifying this by completing the flowchart so that there is at least one assertion between any two subsequent statements. The rules explain how to modify a given assertion when passing a test statement and when passing an assignment statement. When two assertions are adjacent to the same arc, the logical implication has to hold in the direction of the arc.
In Figure 3 we show Turing’s example as a flowchart with annotations according to Floyd’s method. At the beginning of the flowchart the annotation states the assumption of the computation; at the end the annotation specifies the desired result that should hold whenever the computation reaches that point. To verify that this annotation is correct, every loop has to be cut and annotated with an invariant. In this example, we cut the loops at the bullet points and annotate them with the invariant assertions shown in Figure 3.
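Floyd’s method can be mimicked in executable form by checking an assertion at each cut point during a run. The sketch below is ours, not Floyd’s or Turing’s: it computes the factorial by repeated addition, with an invariant of our own choosing checked at the cut point of each of the two loops.

```python
import math  # math.factorial serves only as an oracle for the assertions

def factorial_by_addition(n):
    u, r = 1, 0
    while r < n:
        assert u == math.factorial(r)   # cut point of the outer loop: u = r!
        v, s = u, 1                     # multiply u by (r + 1) via repeated addition
        while s < r + 1:
            assert u == s * v           # cut point of the inner loop: u = s * v
            u, s = u + v, s + 1
        r = r + 1                       # here u = (r + 1) * v, the next factorial
    assert u == math.factorial(n)       # the desired result at the end
    return u

print(factorial_by_addition(5))  # -> 120
```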
3 Hoare’s Contributions
3.1 Reasoning about while programs
To reason about programs, Hoare introduced in [Hoa69] a new notation, $p\ \{S\}\ q$,
with the interpretation
“If the assertion $p$ is true before initiation of a program $S$, then the assertion $q$ will be true on its completion.”
Nowadays one rather writes $\{p\}\ S\ \{q\}$,
so that additional assertions can be freely inserted in the program text by putting the brackets $\{$ and $\}$ around them. Such a possibility will turn out to be especially important when reasoning about parallel programs. In what follows we shall use the latter notation. In this context $p$ is referred to as a precondition and $q$ as a postcondition.
Subsequently Hoare introduced an axiom to reason about the assignment statement and proof rules to reason about program composition and the while statement. He also introduced two consequence rules, now combined into one, that allow one to strengthen the precondition and to weaken the postcondition. He then used these axioms and rules to establish correctness of the following simple program, let us call it DIV, that finds “the quotient $q$ and remainder $r$ obtained on dividing $x$ by $y$”:

DIV $\equiv$ $r := x;\ q := 0;$ while $y \le r$ do $r := r - y;\ q := 1 + q$ od.
All variables are assumed to range over the nonnegative integers.
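To make the role of the loop invariant concrete, here is a small executable sketch (ours, not from [Hoa69]) of DIV that checks the invariant $x = r + y \cdot q$ before the loop and after every iteration:

```python
def div(x, y):
    """DIV: quotient q and remainder r of dividing x by y (x >= 0, y > 0)."""
    r, q = x, 0
    assert x == r + y * q              # the invariant holds initially
    while y <= r:
        r, q = r - y, q + 1
        assert x == r + y * q          # the invariant is preserved by the body
    assert r < y and x == r + y * q    # invariant plus negated guard: postcondition
    return q, r

print(div(17, 5))  # -> (3, 2)
```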
In what follows we review these steps. The assignment axiom has the form

ASSIGNMENT

$\{p[u := t]\}\ u := t\ \{p\}$

where $u$ is a variable, $t$ is an expression, and $p[u := t]$ is the result of substituting $t$ for all free occurrences of $u$ in $p$.
The already mentioned consequence rule has the following form:

CONSEQUENCE

$\dfrac{p \rightarrow p_1,\quad \{p_1\}\ S\ \{q_1\},\quad q_1 \rightarrow q}{\{p\}\ S\ \{q\}}$

Here it is assumed that the mentioned implications can be established in some further unspecified proof system exclusively concerned with the assertions. (Hoare just referred to $\rightarrow$ as ‘logical implication’.)
The final two rules were:
COMPOSITION

$\dfrac{\{p\}\ S_1\ \{r\},\quad \{r\}\ S_2\ \{q\}}{\{p\}\ S_1;\ S_2\ \{q\}}$
and
WHILE

$\dfrac{\{p \wedge B\}\ S\ \{p\}}{\{p\}\ \textbf{while } B \textbf{ do } S \textbf{ od}\ \{p \wedge \neg B\}}$

Nowadays one refers to the assertion $p$ that satisfies the premise of this rule as a loop invariant.
Hoare’s correctness proof of the DIV program is presented in Figure 4. (Hoare wrote the postcondition of the conclusion of the WHILE rule as $\neg B \wedge p$ and this is how it is recorded in Figure 4.) It yields the desired conclusion that $q$ is the quotient and $r$ the remainder resulting from dividing $x$ by $y$. The crucial step in this proof is line 10, which clarifies the role played by the assertion $x = r + y \cdot q$. This line establishes that $x = r + y \cdot q$ is a loop invariant of the considered while statement, and its discovery is essential for the proof to succeed.
Line number | Formal proof | Justification

1 | $true \rightarrow x = x + y \cdot 0$ | logic

2 | $\{x = x + y \cdot 0\}\ r := x\ \{x = r + y \cdot 0\}$ | ASSIGNMENT

3 | $\{x = r + y \cdot 0\}\ q := 0\ \{x = r + y \cdot q\}$ | ASSIGNMENT

4 | $\{true\}\ r := x\ \{x = r + y \cdot 0\}$ | CONSEQUENCE (1,2)

5 | $\{true\}\ r := x;\ q := 0\ \{x = r + y \cdot q\}$ | COMPOSITION (4,3)

6 | $x = r + y \cdot q \wedge y \le r \rightarrow x = (r - y) + y \cdot (1 + q)$ | logic

7 | $\{x = (r - y) + y \cdot (1 + q)\}\ r := r - y\ \{x = r + y \cdot (1 + q)\}$ | ASSIGNMENT

8 | $\{x = r + y \cdot (1 + q)\}\ q := 1 + q\ \{x = r + y \cdot q\}$ | ASSIGNMENT

9 | $\{x = (r - y) + y \cdot (1 + q)\}\ r := r - y;\ q := 1 + q\ \{x = r + y \cdot q\}$ | COMPOSITION (7,8)

10 | $\{x = r + y \cdot q \wedge y \le r\}\ r := r - y;\ q := 1 + q\ \{x = r + y \cdot q\}$ | CONSEQUENCE (6,9)

11 | $\{x = r + y \cdot q\}$ while $y \le r$ do $r := r - y;\ q := 1 + q$ od $\{\neg(y \le r) \wedge x = r + y \cdot q\}$ | WHILE (10)

12 | $\{true\}$ DIV $\{\neg(y \le r) \wedge x = r + y \cdot q\}$ | CONSEQUENCE (5,11)
The arguments given with the rule names in the right column refer to the line numbers to which the rules were applied, and ‘logic’ indicates that the relevant formulas are true (Hoare referred to specific axioms of Peano arithmetic).
As pointed out in [JR10], the assignment axiom was originally proposed in [Kin69], the PhD thesis of J. King. From [Flo67] one can distill a more complex assignment axiom

ASSIGNMENT I

$\{p\}\ u := t\ \{\exists y\, (p[u := y] \wedge u = t[u := y])\}$

where $y$ is a fresh variable, that reasons “forward”, starting from the precondition $p$.
The striking simplicity of the ASSIGNMENT axiom reveals a close relation between the assignment statement and the substitution operation. This is achieved, in contrast to Floyd’s approach, by reasoning ‘backwards’, starting from the postcondition $p$. The adoption of this axiom by Hoare probably influenced, a couple of years later, Edsger W. Dijkstra to propose the weakest precondition semantics, which adopted this ‘backward’ reasoning for all program statements. We shall discuss this alternative approach to program verification in Section 10. From the mathematical point of view Hoare’s proof rules and axioms form an unusual mix: the assignment axiom adopts the ‘backward’ reasoning, while all the proof rules embrace the ‘forward’ reasoning.

Hoare’s paper turned out to be the beginning of a far-reaching change in reasoning about programs, resulting from the move from flowcharts to programs expressed in the customary textual form. This opened the way to reasoning about programs that cannot readily be expressed as flowcharts, for example recursive procedures or programs with variable declarations. It also made it possible to adopt a syntax-directed style of reasoning about programs, using their structure as guidance in organizing the proof.
A related, implicit, feature of the proof system proposed by Hoare is that it encourages program development by allowing one to first specify the desired preconditions and postconditions of a program component and subsequently to look for a program fragment for which the corresponding correctness statement can be established. Hoare took a lead in this novel view of program correctness by publishing in [Hoa71b] a correctness proof of the FIND program, the purpose of which is to find the $f$-th largest element of an array $A[1:N]$ by rearranging its elements so that upon termination

$\forall p, q\ (1 \le p \le f \le q \le N \rightarrow A[p] \le A[f] \le A[q]).$
The program is very subtle —it uses a triply nested while loop— and as a result its correctness proof is highly nontrivial. The proof is not carried out in the proof system of [Hoa69] but from the way it is written it is clear that it can be done so. In fact, Hoare refers in a number of places to invariants that he defines as formulas that remain true throughout the execution of the program independently of the values of the program variables.
In [Hoa71b], Hoare also showed termination of the program. Since this property is not captured by his proof system [Hoa69], he used informal arguments. Nowadays, one talks of partial correctness, which refers to the conditional statement ‘if the program terminates starting from a given precondition, then it satisfies the desired postcondition’ and this is precisely what Hoare’s proof system allows one to accomplish. A more demanding property is total correctness, which stipulates that all program computations starting from a given precondition terminate and satisfy the desired postcondition. We shall formalize these notions in the next section.
According to this terminology, Hoare established total correctness of the program FIND. He noticed that the termination proof required invariants in addition to those needed for proving partial correctness. However, he did not introduce the concept of a termination function (sometimes called a bound function or a variant) with a corresponding proof rule for total correctness of while programs.
Hoare expressed already in [Hoa71b] the desire for computer support in “formulating the lemmas, and perhaps even checking the proofs.” Only much later, Filliâtre [Fil07] published a mechanized proof of FIND using the theorem prover Coq and following Hoare’s proof as closely as possible. Filliâtre noticed that Hoare’s informal termination proof does not meet the requirements of a termination function in the sense that the additional invariants used by Hoare are not real invariants.
A contribution similar in style is [Hoa72b], in which a correctness proof was given of a program encoding the sieve of Eratosthenes. The difference was that the program was developed together with its correctness proof and presented using non-recursive procedures and classes, drawing on the contemporary works of E.W. Dijkstra on structured programming and O.J. Dahl on the object-oriented programming language SIMULA 67, which appeared as chapters in [DDH72]. These two contributions of Hoare, [Hoa71b] and [Hoa72b], showed that his original logic could be seen not only as a tool to verify programs but also as a guide for designing correct programs. These ideas were further developed by Dijkstra, notably in his book [Dij76a].
All approaches to proving program termination formalize Floyd’s [Flo67] observation that
“Proofs of termination are dealt with by showing that each step of a program decreases some entity which cannot decrease indefinitely.”
The challenge is to incorporate such reasoning into Hoare’s framework in a simple way. The first extension of Hoare’s proof system to total correctness was proposed in [MP74], but the proposed strengthening of the WHILE rule was somewhat elaborate. In [Har79] the appropriate rule took a simpler form:

WHILE I

$\dfrac{\{p(z+1)\}\ S\ \{p(z)\},\quad p(z+1) \rightarrow B,\quad p(0) \rightarrow \neg B}{\{\exists z\, p(z)\}\ \textbf{while } B \textbf{ do } S \textbf{ od}\ \{p(0)\}}$

where $p(z)$ is an assertion with a free variable $z$ that does not appear in $S$ and ranges over natural numbers.
Still, a disadvantage of this rule is that it requires finding a parameterized loop invariant $p(z)$ such that the value of $z$ decreases exactly by 1 with each loop iteration. Such precise information is not needed to establish termination and is sometimes difficult to come up with. Additionally, as witnessed by Hoare’s correctness proof of the FIND program, it is often inconvenient to reason about partial correctness and termination at the same time. These concerns were addressed in the following proof rule introduced in [OG76a] that adds two new premises to the original WHILE rule:
WHILE II

$\dfrac{\{p \wedge B\}\ S\ \{p\},\quad \{p \wedge B \wedge t = z\}\ S\ \{t < z\},\quad p \rightarrow t \ge 0}{\{p\}\ \textbf{while } B \textbf{ do } S \textbf{ od}\ \{p \wedge \neg B\}}$

where $t$ is an integer expression, called a termination function, and $z$ is an integer variable that does not appear in $p$, $B$, $t$, or $S$.
This proof rule corresponds to Dijkstra’s modification of his weakest precondition semantics proposed in [Dij76b] and reproduced as [Dij82]. Returning to the above DIV program, note that it does not terminate when $y = 0$. To prove its termination one needs to assume that initially $y > 0$ and use a stronger loop invariant, namely $x = r + y \cdot q \wedge y > 0$. The termination function is particularly simple here: it is just $r$. The relevant claims, so

$\{x = r + y \cdot q \wedge y > 0 \wedge y \le r \wedge r = z\}\ r := r - y;\ q := 1 + q\ \{r < z\}$

and

$x = r + y \cdot q \wedge y > 0 \rightarrow r \ge 0,$

are straightforward to prove.
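The role of the termination function can also be checked at run time. In the sketch below (ours, not from the cited papers), the value of the termination function $r$ is frozen in an auxiliary variable z before each iteration; we then assert that it strictly decreases and that, given the strengthened invariant, it stays nonnegative.

```python
def div_total(x, y):
    assert x >= 0 and y > 0                # precondition; y > 0 ensures termination
    r, q = x, 0
    while y <= r:
        z = r                              # freeze the termination function t = r
        r, q = r - y, q + 1
        assert r < z                       # t strictly decreases in every iteration
        assert x == r + y * q and r >= 0   # strengthened invariant keeps t >= 0
    return q, r

print(div_total(22, 7))  # -> (3, 1)
```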
3.2 Reasoning about recursive procedures
Let us continue with another milestone in the history of Hoare’s logic. In [FH71] Foley and Hoare established correctness of the program Quicksort, originally proposed by Hoare in [Hoa61]. Foley and Hoare stated:
“The purpose of the program Quicksort is to sort the elements $A[m]$ to $A[n]$ of an array into ascending order, while leaving untouched those below $A[m]$ and above $A[n]$.”
The main difficulty was that Quicksort uses recursion. (Actually it was the first nontrivial example of a successful use of recursion.) This required appropriate proof rules that were introduced by Hoare in [Hoa71a].
In what follows, given a program $S$, we denote by $change(S)$ the set of variables that are subject to change in it. Further, we use $P(\bar{u}:\bar{v}) :: S$ to denote the declaration of a procedure $P$ with the body $S$ and two sorts of formal parameters: $\bar{u}$ is the list of all global variables of $S$ which are subject to change by $S$, i.e., the variables in $change(S)$, and $\bar{v}$ is the list of all other global variables of $S$ (read-only variables). (Hoare actually used a slightly different notation that is now obsolete.)

Legal procedure calls are of the form $P(\bar{x}:\bar{t})$, where

$\bar{x}$ is a list of distinct variables of the same length as $\bar{u}$ that are substituted for $\bar{u}$,

$\bar{t}$ is a list of expressions not containing any variable of $\bar{u}$, of the same length as $\bar{v}$, that are substituted for $\bar{v}$.
The following proof rule dealt with a ‘generic’ procedure call $P(\bar{u}:\bar{v})$:

RECURSION

$\dfrac{\{p\}\ P(\bar{u}:\bar{v})\ \{q\}\ \vdash\ \{p\}\ S\ \{q\}}{\{p\}\ P(\bar{u}:\bar{v})\ \{q\}}$

where the procedure $P$ is declared by $P(\bar{u}:\bar{v}) :: S$.
(Hoare actually included the procedure declaration as an additional premise of the rule.) What is the intuition behind this rule? Hoare states in [Hoa71a] that it permits
“the use of the desired conclusion as a hypothesis in the proof of the body itself.”
More specifically, the symbol $\vdash$ in the premise denotes the provability relation. So this rule is actually a metarule. According to [FH71] the premise of this rule
“permits $\{p\}\ P(\bar{u}:\bar{v})\ \{q\}$ to be assumed as a hypothesis in the proof of $\{p\}\ S\ \{q\}$.”
This proof is supposed to be carried out using the remaining axioms and proof rules. The conclusion of the rule then coincides with this hypothesis.
To transfer a result established by the recursion rule to any other procedure call with actual parameters, say the lists $\bar{x}$ and $\bar{t}$, the following substitution rule was introduced:
SUBSTITUTION
where the following holds for the substitutions applied to and :

is a list of free variables of or that do not occur in or , but which occur in or . Then is a list of fresh variables of the same length as that are substituted for ,

and are such that the call is legal.
So the substitution of the formal parameters by the actual ones is carried out together with an appropriate renaming of the ‘potentially conflicting’ variables in $p$ and $q$.
Hoare noted that the above two rules are not sufficient to reason about recursive procedures. To have a more powerful proof method, he introduced the following rule, where $free(p)$ stands for the set of free variables in an assertion $p$, and similarly for lists of expressions:
ADAPTATION
where is a list of variables with .
The precondition of the conclusion of this rule looks complicated. What does it express? Hoare explained in [Hoa71a]:
“If is the desired result of executing a procedure call, , and is already given, what is the weakest precondition such that is universally valid? It turns out that this precondition is .”
To deal with the declarations of local variables Hoare introduced the following rule:

DECLARATION

$\dfrac{\{p\}\ S[x := y]\ \{q\}}{\{p\}\ \textbf{begin new } x;\ S\ \textbf{end}\ \{q\}}$

where $S[x := y]$ is the result of replacing in $S$ every occurrence of the local variable $x$ by $y$, and $y$ does not appear in $p$ or $q$ unless the variables $x$ and $y$ are the same.
Additionally, the following proof rule, originally proposed in [Lau71], was used to reason about the conditional statement:

CONDITION

$\dfrac{\{p \wedge B\}\ S_1\ \{q\},\quad \{p \wedge \neg B\}\ S_2\ \{q\}}{\{p\}\ \textbf{if } B \textbf{ then } S_1 \textbf{ else } S_2 \textbf{ fi}\ \{q\}}$
The correctness proof of Quicksort by Foley and Hoare in [FH71] was carried out using the above proof rules for partial correctness, originally presented in [Hoa71a]. The authors formulated two correctness criteria that should hold upon termination of Quicksort:

Sorted: the output array should be sorted within the given bounds $m$ and $n$.

Perm: the output array should be a permutation of the original input array within the given bounds $m$ and $n$ but untouched outside these bounds.
The proof established these properties simultaneously, using appropriate assertions. Only a few remarks were devoted to the termination of Quicksort. Since the recursive procedure Quicksort calls the non-recursive procedure Partition, the correctness of Partition was also shown. Partition is an instantiation of a part of the while program FIND (see Subsection 3.1).
In [AdBO09] a detailed modular proof of total correctness of Quicksort was given. Modular means that first the property Perm was proved and next, based on this result, the property Sorted. Also, termination was proved separately. These proofs relied on corresponding results for Partition that were established first. In particular, the termination proof of Partition required a more subtle invariant for the termination function of the outer loop than anticipated by Hoare in his termination proof of FIND [Hoa71b]. This agrees with the observation made by Filliâtre [Fil07].
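The criteria Sorted and Perm, together with the requirement that elements outside the bounds stay untouched, can be stated directly as runtime checks. The code below is our own sketch of an in-place recursive quicksort of $A[m..n]$, not the program of [FH71]:

```python
def quicksort(A, m, n):
    """Sort A[m..n] in place, leaving A[0..m-1] and A[n+1..] untouched."""
    if m < n:
        pivot, i, j = A[m], m, n
        while i <= j:                      # partition A[m..n] around the pivot
            while A[i] < pivot:
                i += 1
            while A[j] > pivot:
                j -= 1
            if i <= j:
                A[i], A[j] = A[j], A[i]
                i, j = i + 1, j - 1
        quicksort(A, m, j)                 # recursively sort the two parts
        quicksort(A, i, n)

A = [9, 3, 7, 1, 8, 2, 5]
before, m, n = list(A), 1, 5
quicksort(A, m, n)
assert A[m:n+1] == sorted(before[m:n+1])                # Sorted
assert sorted(A[m:n+1]) == sorted(before[m:n+1])        # Perm, within the bounds
assert A[:m] == before[:m] and A[n+1:] == before[n+1:]  # untouched outside
print(A)  # -> [9, 1, 2, 3, 7, 8, 5]
```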
4 Soundness and Completeness Matters
4.1 Soundness
In mathematical logic a standard way to judge the adequacy of a proof system is by means of the soundness and completeness concepts. It is then natural to address these matters for the proof systems introduced in the previous section. This requires some care since the CONSEQUENCE rule also uses customary formulas as premises, the WHILE II rule refers to integer variables and expressions, while the RECURSION rule refers in its premise to the provability relation.
For these considerations one needs to define some semantics with respect to which the introduced axioms and proof rules can be assessed. The first step is to define semantics of the underlying programming concepts. This can be done in a number of ways. The common denominator of all approaches is the concept of a state, a function that assigns appropriate values to all variables. In the case of simple variables these values should be taken from the domain corresponding to the variable type. In the case of array variables such a value should be a function from the domain of the array to the range type. Using such a function we can then assign the values to subscripted variables. As the complexity of the considered programming language grows, the concept of the state gets more complex. At this stage we limit ourselves to the notion of a state that assigns values to all simple and array variables.
In Hoare’s logic the types of the variables in the considered programs, for instance in the program DIV in Figure 4, are usually omitted and one simply assumes that all variables are typed and that the considered programs are correctly typed.
The second step is to define semantics of the programs. Several approaches were proposed in the literature. Their discussion and comparison is beyond the scope of this paper. For the sake of the subsequent discussion we assume a semantics of the programs that allows us to define computations of each considered program, which are identified here with the sequences of states that can be generated by it.
The final step is to define when a state satisfies an assertion and when the implications used in the premises of the CONSEQUENCE rule are true. To proceed in a systematic way we need to recall some basic notions from mathematical logic. Assume a first-order language $L$. An interpretation $I$ for $L$ consists of

a nonempty domain $D$,

an assignment of an $n$-ary function over $D$ to each $n$-ary function symbol of $L$,

an assignment of an $n$-ary relation over $D$ to each $n$-ary predicate symbol of $L$.
Given an interpretation $I$, each state is just a function from the set of variables to the domain $D$. This definition disregards our assumption that all variables are typed. However, it is easy to amend it by replacing the domain $D$ by a set of typed domains and by stipulating that each variable ranges over the domain associated with its type. Another natural adjustment can be made to include array variables in this framework.
The next step is to define when, given an interpretation $I$, a state $\sigma$ satisfies a formula $\phi$ of $L$, written as $\sigma \models_I \phi$, a definition we omit. We then say that a formula $\phi$ is true in $I$, written as $\models_I \phi$, if for all states $\sigma$ we have $\sigma \models_I \phi$.
Let us return now to assertions and programs. Suppose that all assertions are formulas in a given first-order language $L$ and that all considered programs use function and predicate symbols of $L$. Each interpretation $I$ for $L$ then determines the set of states and thus allows us for each program $S$ to define the set of its computations over $I$. This in turn allows us to introduce the following notions.
Fix an interpretation $I$. We say that the correctness formula $\{p\}\ S\ \{q\}$ is true in $I$ in the sense of partial correctness if the following holds:

every terminating computation of $S$ over $I$ that starts
in a state that satisfies $p$ ends in a state that satisfies $q$.

Further, we say that the correctness formula $\{p\}\ S\ \{q\}$ is true in $I$ in the sense of total correctness if the following holds:

every computation of $S$ over $I$ that starts in a state that
satisfies $p$ terminates and ends in a state that satisfies $q$.
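The difference between the two notions can be illustrated with a toy example (ours): with precondition true and postcondition $x = 0$, the loop below is partially correct, since every terminating computation ends with $x = 0$, but not totally correct, since it diverges for negative $x$. The step bound is just a crude stand-in for divergence detection.

```python
def countdown(x, max_steps=10**6):
    steps = 0
    while x != 0:
        x -= 1
        steps += 1
        if steps >= max_steps:   # apparently diverging: no final state exists
            return None
    return x                     # every final state satisfies x == 0

assert countdown(5) == 0         # terminating runs establish the postcondition
assert countdown(-1) is None     # a diverging run: total correctness fails
```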
Consider now the original proof system of Hoare presented in Subsection 3.1, as well as the proof system obtained from it by replacing the WHILE rule by the WHILE II rule. The following two results capture the crucial properties of these proof systems.
Soundness Theorem 1 Consider a proof of a correctness formula $\{p\}\ S\ \{q\}$ in Hoare’s original proof system that uses a set $A$ of assertions for the CONSEQUENCE rule. Consider an interpretation $I$ in which all assertions from $A$ are true. Then $\{p\}\ S\ \{q\}$ is true in $I$ in the sense of partial correctness.
This property of the proof system is called soundness in the sense of partial correctness. It was first established in [HL74] w.r.t. the relational semantics in which programs were represented as binary relations on the sets of states.
The following counterpart of it justifies the reasoning about termination. It is, however, important to read it in conjunction with the qualifications that follow.
Soundness Theorem 2 Consider a proof of a correctness formula $\{p\}\ S\ \{q\}$ in the proof system with the WHILE II rule that uses a set $A$ of assertions for the CONSEQUENCE rule. Consider an interpretation $I$ in which all assertions from $A$ are true. Then $\{p\}\ S\ \{q\}$ is true in $I$ in the sense of total correctness.
This property of the proof system is called soundness in the sense of total correctness. The first proof was given in [Har79] and referred to the proof system in which the WHILE I rule was used instead of the WHILE II rule. In this rule the assertion $p(z)$ refers to a free variable $z$ that ranges over natural numbers. To guarantee the correct interpretation of such assertions one needs to ensure that in each state such a variable is interpreted as a variable of type ‘natural number’. In [Har79] this is achieved by considering assertion languages that extend the language of Peano arithmetic and by limiting one’s attention to arithmetic interpretations. These are interpretations that extend the standard model of arithmetic. Additionally one stipulates that there is a formula in the assertion language that, when interpreted, encodes finite sequences of domain elements by one element. (This requirement is only needed for completeness.)
In the case of the WHILE II rule similar considerations are needed to ensure the correct interpretation of the integer expression $t$ and the integer variable $z$. The corresponding result was given in [AO91] and reproduced in the subsequent two editions of the book. Since in [AO91] all variables are assumed to be typed, $t$ and $z$ are correctly interpreted and the need for arithmetic interpretations disappears.
4.2 Completeness
The completeness of these proof systems aims at establishing some form of converse of the Soundness Theorems. It is a subtle matter and requires a careful analysis. Let us start with the proof system for partial correctness. It is incomplete for an obvious reason. Consider for instance the correctness formula $\{true\}\ x := 1\ \{x = 1\}$. By the ASSIGNMENT axiom we get $\{1 = 1\}\ x := 1\ \{x = 1\}$. To conclude the proof we need to establish the obvious implication $true \rightarrow 1 = 1$ and apply the CONSEQUENCE rule. However, we have no proof rules and axioms that allow us to derive this implication.
A way out is to augment the proof system by a proof system allowing us to prove all true implications between the assertions. Unfortunately, in general such proof systems do not exist. This is a consequence of two results in mathematical logic. The first one states that the set of theorems in a proof system with recursive sets of axioms and finitary rules is recursively enumerable. The second one is Tarski’s undefinability theorem of [Tar36]. It implies that the set of formulas of Peano arithmetic that are true in the standard model of arithmetic is not arithmetically definable, so in particular not recursively enumerable. This means that completeness of the proof system cannot be established even if we add to it a proof system concerned with the assertions.
A natural solution is to try to establish completeness relative to the set of true assertions, that is, to use the set of true assertions as an ‘oracle’ that can be freely consulted in the correctness proof. However, even then a problem arises because the assertion language can fail to be sufficiently expressive. Namely, [Wan78] exhibited a true correctness formula that cannot be proved because the necessary intermediate assertions cannot be expressed in the considered assertion language. Simpler examples of such assertion languages were provided in [BT82].
A solution to these complications was proposed by S.A. Cook in [Coo78]. To explain it we need to introduce some additional notions. We call a set of states $\Sigma$ definable in an interpretation $I$ iff for some formula $\phi$ we have $\Sigma = \{\sigma \mid \sigma \models_I \phi\}$.
Next, we assign to each program $S$ its meaning $\mathcal{M}_I(S)$ relative to $I$, defined by

$\mathcal{M}_I(S) = \{(\sigma, \tau) \mid$ a computation of $S$ over $I$ starts in $\sigma$ and terminates in $\tau\}.$

At this moment the set $\{\tau \mid (\sigma, \tau) \in \mathcal{M}_I(S)\}$ has at most one element, which will no longer be the case when nondeterministic or parallel programs are considered. Then, given an assertion $p$ and a program $S$, we define

$SP_I(p, S) = \{\tau \mid (\sigma, \tau) \in \mathcal{M}_I(S)$ for some $\sigma$ satisfying $p\}.$

So $SP_I(p, S)$ is the set of states that can be reached by executing $S$ over $I$ starting in a state satisfying $p$; ‘$SP$’ stands for the strongest postcondition.
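Over a finite state space the strongest postcondition is simply a relational image, which can be computed directly. The program and precondition below are arbitrary toy choices of ours, used only to illustrate the definition:

```python
STATES = range(8)                          # a state is the value of one variable x
M = {(s, (s * 2) % 8) for s in STATES}     # meaning of the program x := (x * 2) % 8
p = lambda x: x < 4                        # a precondition

sp = {t for (s, t) in M if p(s)}           # SP(p, S): states reachable from p
print(sorted(sp))  # -> [0, 2, 4, 6]
```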
We then say that the language $L$ is expressive relative to an interpretation $I$ and a class of programs if for every assertion $p$ and program $S$ the set of states $SP_I(p, S)$ is definable. Finally, given a first-order language $L$, a proof system $PS$ for a class of programs is called complete in the sense of Cook if for every interpretation $I$ such that $L$ is expressive relative to $I$ and this class of programs the following holds:

every correctness formula true in $I$ in the sense of partial
correctness can be proved in $PS$ assuming all formulas of $L$ true in $I$.
In other words, completeness in the sense of Cook is a restricted form of relative completeness mentioned above, in which we limit ourselves to the class of interpretations w.r.t. which the underlying language is expressive.
The result presented in [Coo78] shows in particular that the proof system for partial correctness of while programs is complete in the sense of Cook. The main difficulty in the proof, which proceeds by induction on the program structure, consists in finding the loop invariants. A simpler argument was provided in [Cla79], where a dual definition of expressiveness was used. Instead of the strongest postcondition it relied on the so-called weakest liberal precondition which, given an interpretation I, an assertion q and a program S, is defined by

wlp(S, q) = the set of states σ such that every computation of S over I starting in σ that terminates does so in a state satisfying q.
So wlp(S, q) is the set of states from which all terminating computations of S over I end in a state satisfying q. The qualification 'liberal' refers to the fact that termination is not guaranteed. The assumption that the set of states wlp(S, q) is definable makes it possible to find a very simple loop invariant. Namely, assuming that the correctness formula for the considered while loop is true in an interpretation I such that the assertion language is expressive relative to it in this revised sense, it turns out that a loop invariant is simply an assertion defining the weakest liberal precondition of the loop with respect to its postcondition. Additionally, the implication from the loop's precondition to this invariant and the implication from the invariant conjoined with the negation of the loop condition to the postcondition are both true in I, which allows one to establish the correctness formula by the WHILE and CONSEQUENCE rules.
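Clarke's construction can likewise be illustrated in the toy finite setting: the weakest liberal precondition of a loop arises as a greatest fixpoint, and the computed set is indeed an invariant that is preserved by the loop body and implies the postcondition on exit (a sketch assuming a deterministic, everywhere-defined body):

```python
def wlp_loop(cond, body, q, states):
    """wlp of `while cond do body od` w.r.t. postcondition q, computed
    as a greatest fixpoint: start from all states and shrink to stability."""
    r = set(states)
    while True:
        r2 = {s for s in states
              if (body(s) in r if cond(s) else s in q)}
        if r2 == r:
            return r
        r = r2

states = set(range(-2, 8))
cond = lambda s: s > 0
body = lambda s: s - 2          # loop: while x > 0 do x := x - 2 od
q = {0}                         # postcondition: x = 0
inv = wlp_loop(cond, body, q, states)
assert inv == {0, 2, 4, 6}
# the invariant properties behind the completeness argument:
assert all(body(s) in inv for s in inv if cond(s))   # preserved by the body
assert all(s in q for s in inv if not cond(s))       # implies q on exit
```

States from which the loop diverges would also belong to wlp, reflecting that termination is not guaranteed by a liberal precondition.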
In that context it is useful to mention a proposal put forward in [BG87] by A. Blass and Y. Gurevich. They suggested using a different assertion language than first-order logic (or its multisorted variants dealing with subscripted variables or typed variables). The proposed assertion language is a fragment of second-order logic, called existential fixed-point logic (EFL). EFL extends a fragment of first-order logic, in which negation is applied only to atomic formulas and the universal quantifier is absent, by a fixed-point operator. The authors showed that EFL is sufficient for proving relative completeness of the proof system without any expressiveness assumption. The reason is that both the strongest postconditions and the weakest liberal preconditions of while programs (also in the presence of recursive parameterless procedures) are definable in EFL.
Consider now the proof system for total correctness. To establish its completeness in the appropriate sense we encounter the same complications as in the partial correctness case, but additionally we have to deal with the problem of definability of the termination functions used in the WHILE II rule. In [Har79] completeness was established for assertion languages that extend the language of Peano arithmetic and for the arithmetic interpretations defined in the previous subsection, but the paper considered the WHILE I rule, in which the termination functions are absent. In [AO91] and the subsequent two editions of the book relative completeness of the proof system for total correctness was established. To this end, it was assumed that the underlying assertion language is expressive, which here meant that for every while loop there exists an integer expression whose value in any state from which the loop terminates equals the number of loop iterations performed. In the adopted setup the assumption that all variables are typed automatically ensures that the considered interpretations include the standard model of Peano arithmetic and that this value is a natural number.
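This expressiveness assumption on termination functions can be illustrated on a concrete loop: for while x > 0 do x := x - 2 od an integer expression of the start state, here max(0, (x + 1) div 2), yields exactly the number of iterations performed (a hypothetical instance chosen for illustration, not an example taken from [AO91]):

```python
def iterations(x):
    """Run `while x > 0 do x := x - 2 od` and count its iterations."""
    n = 0
    while x > 0:
        x -= 2
        n += 1
    return n

# the termination expression: an arithmetic function of the start state
bound = lambda x: max(0, (x + 1) // 2)
assert all(iterations(x0) == bound(x0) for x0 in range(-5, 20))
```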
5 Fine-tuning the Approach
The matters discussed until now gloss over certain issues that have to do with the adjustment of preconditions and postconditions, various uses of variables, and procedure parameters. In this section we discuss these matters closely, as they reveal some differences between customary logics and Hoare's logic and show the subtleties of reasoning about various uses of variables in the context of programs.
5.1 Adaptation rules
In Hoare's logic we see two types of rules. First, for each programming construct there is at least one axiom or rule dealing with its correctness. Together, they make possible syntax-directed reasoning about program correctness. Second, there are proof rules in which the same program S is considered in the premise and the conclusion. These rules allow us to adapt an already established correctness formula about S to another proof context. Most prominent is the CONSEQUENCE rule that, given {p} S {q}, allows us to strengthen the precondition p to a precondition p1 with p1 → p and to weaken the postcondition q to a postcondition q1 with q → q1, thus arriving at the conclusion {p1} S {q1}. Another one is Hoare's ADAPTATION rule dealing with procedure calls. Hoare stated in [Hoa71a] that in the absence of recursion, i.e., in his proof system for while programs, his ADAPTATION rule is a derived rule. So the power of this rule is only noticeable in the context of recursion.
Other rules can be conceived that are concerned with the same program in the premise and conclusion. For example, the following rules were used in various proof systems in the literature. Here and elsewhere we denote the set of variables of a program S by var(S).
INVARIANCE

{r} S {r}

where no variable free in r occurs in var(S).
INTRODUCTION

{p} S {q}
─────────
{∃x : p} S {q}

where x does not occur in S or in q.
SUBSTITUTION I

{p} S {q}
─────────
{p[z := t]} S {q[z := t]}

where the variable z and the variables of t do not occur in S.
We shall return to these rules shortly. But first, following [Old83b], let us discuss the ADAPTATION rule in a more general setting of programs. We say that a program S is based on a finite set of variables X if var(S) ⊆ X holds. Now we can recast Hoare's ADAPTATION rule as follows:
ADAPTATION I
where and are lists of variables, is based on , and .
Following Hoare, the precondition in the conclusion of this rule is intended to express the weakest precondition such that the conclusion holds (in the sense of partial correctness), assuming that the postcondition of the conclusion is the desired result of executing the program and the premise is already established. This intention can be phrased as follows: find the weakest assertion that, taken as precondition, yields the desired postcondition for all programs based on the given set of variables that satisfy the premise. In [Old83b] this precondition was calculated as follows:
Comparing the calculated precondition with the precondition used in the conclusion of the ADAPTATION I rule shows that Hoare's precondition implies the calculated one, but the converse implication is false. Thus Hoare's precondition is sound but stronger than necessary. This suggests the following variant of the rule:
ADAPTATION II
where the precondition is the one calculated above and the remaining components are as in the ADAPTATION I rule.
To compare the power of different adaptation rules, S. de Gouw and J. Rot [dGR16] used the following notion due to [Kle99]. A set R of proof rules for a class of programs is called adaptation complete if for all assertions and all finite sets of variables X:

whenever for all programs S based on X the truth of the given correctness formula about S implies the truth of the desired correctness formula about S in the sense of partial correctness,

then for all programs S based on X there is a derivation of the desired correctness formula from the given one using only rules of R.
By the result of [Old83b], the set consisting of the ADAPTATION II and CONSEQUENCE rules is adaptation complete. Further, this set enjoys two properties, as noted in [dGR16]:

1. Other adaptation rules, like INVARIANCE, INTRODUCTION and SUBSTITUTION I, are derivable from it.

2. Any derivation in it can be replaced by a single application of each of its two rules.
What about Hoare's adaptation rule? Consider the set consisting of the ADAPTATION I and CONSEQUENCE rules. From a counterexample given in [Old83b] it follows that this set is not adaptation complete. Nevertheless, it enjoys property 1, though not property 2, of the set discussed above.
The paper [Old83b] also investigated three other adaptation rules proposed in the literature. An adaptation rule introduced in [GL80] turned out to be sound but not adaptation complete when grouped together with the CONSEQUENCE rule. In turn, an adaptation rule for the programming language Euclid given in [LGH78] is not even sound, while an adaptation rule introduced in [CO81] is both sound and adaptation complete when grouped together with the CONSEQUENCE rule.
5.2 Subscripted and local variables
Subscripted variables In both [Hoa71b] and [FH71] the ASSIGNMENT axiom was applied to subscripted variables, by implicitly assuming that the definition of substitution is obvious for such variables. This is indeed the case when the subscripts are simple expressions, for example a constant or a simple variable, which was indeed the case for both programs analyzed there. However, in the case of more complex subscripts difficulties may arise, as the following example discussed in [dB80] shows. In the case of an assignment to a simple variable every instance of the ASSIGNMENT axiom is true. However, the correctness formula

{a[1] = 2 ∧ a[2] = 2} a[a[2]] := 1 {a[a[2]] = 1},

which naive substitution in the postcondition would justify (the textual occurrence of a[a[2]] becomes 1, yielding the trivially true 1 = 1), is false. Indeed, given the precondition, the execution of the assignment a[a[2]] := 1 amounts to executing the assignment a[2] := 1, after which the expression a[a[2]] evaluates to a[1], that is to 2 and not 1. This shows that the ASSIGNMENT axiom cannot be used for arbitrary subscripted variables.
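The failure can be replayed concretely, modelling the array as a dictionary and choosing the start state of the example (an assumed instance with a[1] = a[2] = 2):

```python
a = {1: 2, 2: 2}      # precondition: a[1] = 2 and a[2] = 2
a[a[2]] = 1           # the subscript a[2] evaluates to 2, so this is a[2] := 1
assert a[a[2]] == 2   # a[a[2]] is now a[1], i.e. 2, not 1
```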
This complication was clarified and solved in a systematic way in [dB80], by extending the definition of substitution to an arbitrary subscripted variable. The crucial step in the inductive definition of the substitution of t for a[s] deals with the case of a subscripted variable a[u], for which one defines the result as the conditional expression

if s = u' then t else a[u'] fi, where u' is the result of applying the substitution to u.

So in the if case one checks whether, after performing the substitution on the subscript u, the subscripts s and u' are aliases, and substitutes a[u] by t in that case, while in the else case one applies the substitution inductively to the subscript of a[u].
J.W. de Bakker showed that with this extended definition of substitution the ASSIGNMENT axiom remains sound for subscripted variables. Different axioms for assignment to subscripted variables are given in [HW73, Gri78, Apt81].
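The clause can be sketched for a single array a with a toy expression encoding (this encoding and the function names are ours, not the notation of [dB80]); the final assertion checks that the substituted expression, read in the initial state, predicts the value of the original expression after the assignment:

```python
def eval_(e, a):
    """Evaluate an expression in a store a mapping indices to values."""
    tag = e[0]
    if tag == "const":
        return e[1]
    if tag == "sub":                       # ("sub", e) encodes a[e]
        return a[eval_(e[1], a)]
    if tag == "eq":
        return eval_(e[1], a) == eval_(e[2], a)
    if tag == "cond":                      # if-then-else expression
        return eval_(e[2], a) if eval_(e[1], a) else eval_(e[3], a)

def subst(e, s, t):
    """Substitute t for the subscripted variable a[s] in expression e."""
    tag = e[0]
    if tag == "const":
        return e
    if tag == "sub":
        u1 = subst(e[1], s, t)             # substitute inside the subscript
        # alias check: if the subscripts coincide the result is t,
        # otherwise the subscripted variable with the substituted subscript
        return ("cond", ("eq", s, u1), t, ("sub", u1))
    if tag in ("eq", "cond"):
        return (tag,) + tuple(subst(x, s, t) for x in e[1:])

# the assignment a[a[2]] := 1 in the store with a[1] = 2 and a[2] = 2
s, t = ("sub", ("const", 2)), ("const", 1)
e = ("sub", ("sub", ("const", 2)))         # the expression a[a[2]]
before = {1: 2, 2: 2}
after = dict(before)
after[eval_(s, before)] = eval_(t, before)

# soundness of the extended substitution on this instance: the value of
# the substituted expression initially equals the value of e finally (2)
assert eval_(subst(e, s, t), before) == eval_(e, after) == 2
```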
Local variables Consider now local variables. They can be viewed as a counterpart of bound variables in logical formulas. However, the situation is more complicated because of the dynamic character of variables in programming languages and the presence of procedures.
We already discussed completeness in the sense of Cook of the proof system given in [Coo78]. Cook actually considered an extension of this proof system by axioms and proof rules for a small programming language that allows variable declarations and nonrecursive procedures, and proved its completeness in the above sense. However, the semantics of the block statement made the corresponding completeness result invalid. It is useful to discuss this matter more closely.
Local variables were already dealt with in the DECLARATION rule mentioned in Subsection 3.2. This rule was slightly adjusted in [Coo78] so that one could reason about variable declarations in the context of nonrecursive procedures. But even without this adjustment a possible problem arises. Consider a program consisting of two consecutive blocks, say

begin new y; y := 1 end; begin new z; x := z end
In many programming languages it would yield an error, because the right-hand side of the second assignment refers to the value of the uninitialized local variable. However, according to the semantics proposed in [Coo78] such assignments were allowed. Local variables were modelled using a stack on which the last used value was kept and implicitly assigned to the next local variable. As a result a correctness formula stating that upon termination x equals the final value of the first local variable was true according to the semantics, though there is no way to prove it.
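The stack-based treatment of locals can be mimicked as follows (a sketch of the idea, not Cook's formal semantics); the last retired value silently becomes the initial value of the next local:

```python
class Store:
    """Locals drawn from a stack whose slots keep their last value, so a
    newly declared variable is implicitly initialised with whatever value
    the previously exited block left behind."""
    def __init__(self):
        self.g = {"x": 0}     # global variables
        self.free = []        # values left behind by exited blocks

    def new_local(self):
        # a fresh local silently picks up the last retired value
        return self.free.pop() if self.free else 0

    def end_block(self, value):
        self.free.append(value)

st = Store()
y = st.new_local(); y = 1; st.end_block(y)   # begin new y; y := 1 end
z = st.new_local(); st.g["x"] = z            # begin new z; x := z end
assert st.g["x"] == 1   # x "sees" y's final value through the reused slot
```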
[Coo81] provided a corrigendum in which two possible fixes were suggested. One was to modify the semantics so that the proposed proof system is still complete. This can be achieved by assigning to each newly declared variable a register that has not been used before and modifying the notion of a state accordingly.
Another fix was to require all newly declared variables to be initialized to some fixed value, say 0. This option, first used in [Gor75], results in the following rule:
BLOCK
where .
To correct the proof of the relative completeness result given in [Coo78] one should then replace the DECLARATION rule by the BLOCK rule. Yet another option is to require all newly declared variables to be explicitly initialized to some arbitrary expression. This approach was taken in [AdBO09], where the following more general version of the corresponding rule was used, which allowed a declaration of a list of new variables:
BLOCK I

{p} x̄ := t̄; S {q}
─────────────────
{p} begin new x̄ := t̄; S end {q}

where no variable of x̄ occurs free in q.
Here x̄ := t̄, where x̄ is a list of different variables and t̄ a corresponding list of expressions, is a parallel assignment, introduced in [Dij75] and further discussed in Section 10.
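Python's tuple assignment happens to have exactly this semantics: every right-hand side is evaluated in the initial state before any variable is updated, which is what distinguishes a parallel assignment from a sequence of ordinary ones:

```python
x, y = 1, 2
x, y = y, x                 # parallel: both right-hand sides read old values
assert (x, y) == (2, 1)     # the values are swapped

# a sequential simulation of x := y; y := x gives a different result
x, y = 1, 2
x = y
y = x
assert (x, y) == (2, 2)
```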
It is natural to postulate in the BLOCK I rule that the variables declared in the block do not appear in the initializing expressions. However, this is a syntactic condition concerning program formation that is not needed to reason about partial correctness. Further, as we shall soon see, putting no restrictions on the declared variables and the initializing expressions turns out to be useful for modelling parameter passing in a subtle situation when some formal parameters happen to coincide with the global variables that are used in actual parameters.
An observant reader will notice that in the discussed rules substitution is used differently. In the DECLARATION rule the substitution is applied to the programs, in the BLOCK rule it is applied to the assertions, while, interestingly, in the BLOCK I rule it is not used at all. The resulting proof systems yield different results when applied to programs that use procedures. To illustrate the problem consider a parameterless procedure whose body assigns to a global variable, a program that calls it inside a block declaring that same variable locally,
and the correctness formula
(1) 
To reason about the procedure call we add to the proof system the following degenerated version of the RECURSION rule:
COPY

{p} S {q}
─────────
{p} call P {q}

assuming the declaration P :: S of a parameterless nonrecursive procedure P.
In our case it allows us to derive from . This in turn allows us to derive
Now, applying the DECLARATION rule we get (1).
However, using the BLOCK rule we get a different conclusion. Namely, we first establish
from which
(2) 
follows.
Finally, if we use the BLOCK I rule, and therefore consider a slightly modified program
we get .
These differences have to do with the way local variables are interpreted in the presence of procedures. According to static scope, procedures should be evaluated in the environment in which they were declared, while according to dynamic scope they should be evaluated in the environment in which they are called. So according to static scope, which is adopted in most imperative languages, we should conclude (1) and not (2).
In [AdBO09] static scope is achieved by ensuring that the local variables are first renamed so that they differ from global variables. In the above example one thus considers the statement
instead of . Then we get , as desired.
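The two scoping disciplines can be contrasted with a small simulation (a sketch with an assumed procedure body x := 1 declared at the global level, called inside a block that declares a local x):

```python
def run(scope):
    """Run `begin new x; call P end` under the given scoping discipline
    and return the final value of the *global* x."""
    env_global = {"x": 0}

    # procedure body: x := 1, declared at the global level
    def call_p(current_env):
        # static scope: the declaration environment; dynamic: the caller's
        target = env_global if scope == "static" else current_env
        target["x"] = 1

    env_local = {"x": 7}          # the local x shadows the global one
    call_p(env_local)
    return env_global["x"]

assert run("static") == 1         # the call updates the global x
assert run("dynamic") == 0        # the call updates only the local x
```

This mirrors the renaming trick: once the local variable is renamed apart from the global one, even a naive treatment of the block yields the static-scope outcome.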
5.3 Parameter mechanisms and procedure calls
The call-by-name parameter mechanism was originally proposed in Algol 60. It was used in [Hoa71a] and [Coo78] and adopted in all subsequently discussed papers on procedures, unless stated otherwise. It boils down to a simultaneous substitution of the actual parameters for the formal ones, so it is natural that it was modelled in the SUBSTITUTION rule by a straightforward substitution.
However, the most commonly used parameter mechanism is call-by-value. According to its semantics the actual parameters are evaluated first and their values are subsequently assigned to the formal parameters. Some other parameter mechanisms were occasionally used. For example, the programming language Pascal (see [JW75]) also allows the call-by-variable mechanism (also called call-by-reference), which is a mixture of call-by-name and call-by-value. The actual parameter has to be a variable. In case it is a subscripted variable, its index is evaluated first and the resulting subscripted variable is substituted for the formal parameter.
In [AdB77] it was proposed to model these two parameter mechanisms of Pascal by means of a 'syntactic application'. In what follows we use in the procedure declaration the qualification val to indicate call-by-value and var to indicate call-by-variable. Given a procedure declaration P(val x, var y) :: S, so with x called by value and y called by variable, the call P(t, u), where t is an expression and u a, possibly subscripted, variable, was modelled by a block program that declares a fresh simple variable, initializes it with t, and executes the body S with the fresh variable substituted for x and u substituted for y; here the fresh variable does not appear in S, t or u.
This naturally leads to the following generalization of the COPY rule from the previous subsection:
CALL-BY-VALUE/CALL-BY-VARIABLE
where the nonrecursive procedure P is declared by P(val x, var y) :: S.
In [AdBO09] this approach to call-by-value was slightly simplified by noticing that no variable renaming is needed to model it. The resulting rule, which needs to be used in conjunction with the BLOCK I rule, became:
CALL-BY-VALUE
where the nonrecursive procedure P is declared by P(val x) :: S.
To see how this rule correctly handles a subtle situation when a formal parameter coincides with a global variable used in an actual parameter, consider a procedure declared by
Using the BLOCK I rule we can then establish the correctness formula
from which
follows by the CALL-BY-VALUE rule. This agrees with the semantics of the call-by-value parameter mechanism. (The stronger postcondition can be established using the axioms and proof rules introduced in the next section.) So the assignment in the procedure body refers on the left-hand side to the formal parameter and on the right-hand side to the actual parameter that contains the global variable.
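The subtlety can be simulated with explicit environments (a sketch assuming a hypothetical procedure P(val x) with body x := x + 1, called with the value of the global x as actual parameter):

```python
def call_by_value(globals_, body, formal, actual_value):
    """Model `call P(t)` as a block: bind the formal to the value of the
    actual, run the body locally, and propagate all changes except those
    to the formal itself."""
    local = dict(globals_)
    local[formal] = actual_value     # actual evaluated in the caller's state
    body(local)
    for k, v in local.items():
        if k != formal:              # the formal is local: discard it on exit
            globals_[k] = v

g = {"x": 3}
# body of P(val x): x := x + 1 (updates the formal parameter only)
call_by_value(g, lambda env: env.update(x=env["x"] + 1), "x", g["x"])
assert g["x"] == 3   # the global x, used in the actual parameter, is untouched
```

On exit only the changes to variables other than the formal persist, which is exactly why the global variable occurring in the actual parameter stays unchanged even though it shares the formal parameter's name.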
An obvious drawback of these two proof rules is that each procedure call has to be dealt with separately. It would be preferable if we had at our disposal a counterpart of the SUBSTITUTION rule that would allow us to establish a desired property for a 'generic call' just once, from which the needed properties of all specific calls would follow. In [CO81] it was shown that this can be done under some assumptions that in particular imply that static and dynamic scopes coincide. More recently, in [AdBO09] it was observed that the same can be achieved for calls in which no actual parameter happens to coincide with a global variable.
These two rules can be modified in a natural way to deal with recursive procedures. For the first one we then get the following rule to which we shall return in the next section:
RECURSION I
where the procedure is declared by and , , are the procedure calls that appear in and .
In [Apt81] it was suggested that other parameter mechanisms can be modelled by syntactic application and subsequently reasoned about within Hoare's logic. An example is the call-by-result parameter mechanism of Algol W, a precursor of Pascal (see [WH66]). According to it, the actual parameter is either a simple or a subscripted variable. Upon termination of the call the value of the formal parameter is assigned to the actual parameter. In case the actual parameter is a subscripted variable, its index is evaluated first. This parameter mechanism is used in conjunction with call-by-value.
6 Reasoning about Arbitrary Procedures
6.1 Completeness results for recursive procedures
Partial correctness The relative completeness result established in [Coo78] dealt with the language considered in [Hoa71a] and Subsection 3.2, except that recursion was disallowed. To ensure soundness Cook stipulated that in the procedure calls no variable of the actual parameters different from the formal parameters occurs globally in the procedure body.
This result was extended to the language in which recursive procedures are allowed in the Master's thesis of G.A. Gorelick, written under the supervision of Cook. The details are only available as a technical report [Gor75]. We present the essentials for the case of a single recursive procedure, in line with the presentation in Subsection 3.2.
The conceptual contribution of Gorelick is the introduction of most general formulas. He wrote:
“The completeness result for recursive programs is then obtained by exhibiting, for each recursive procedure , a “most general formula” such that , and for all true formulas about .”
Given a procedure declaration P :: S, a most general formula for the procedure P is a correctness formula of the following shape. Let x̄ be the list of variables that appear in the formal parameters or have a global occurrence in S, and let z̄ be a list of fresh variables (not occurring in P, S or x̄), of the same length as x̄, that serves to freeze the initial values of the variables in x̄ before they are changed by S. The postcondition of the most general formula is taken to express the strongest postcondition introduced in Subsection 4.2. Since the variables in x̄ and z̄ may appear in it as free variables, it describes the relationship between the initial and final values of the variables in x̄ computed by the procedure body S.
The crucial properties of most general formulas are captured by the following lemmas due to [Gor75].
Lemma G1 If a correctness formula about a procedure call is true in the considered interpretation in the sense of partial correctness, then it can be derived from the most general formula and the set of all true formulas using "suitable adaptation rules".
Lemma G2 For each procedure call the most general formula can be derived from the set of all true formulas using Lemma G1 and the RECURSION rule.
The proof of Lemma G1 is based on the following axiom and adaptation rules, proposed in [Gor75] for the case where the considered program is a procedure call:
INVARIANCE

{p} S {p}

where no variable free in p occurs in S.
CONJUNCTION

{p1} S {q1}, {p2} S {q2}
────────────────────────
{p1 ∧ p2} S {q1 ∧ q2}
VARIABLE SUBSTITUTION

{p} S {q}
─────────
{p[z̄ := t̄]} S {q[z̄ := t̄]}

where

the variables of the list z̄ do not occur in S,

and if a component of the list z̄ occurs freely in q, then the corresponding component of the list t̄ contains no variable occurring in S.
Using Lemmas G1 and G2, Gorelick established the following result.
Completeness Theorem For programs with recursive procedures as defined in Subsection 3.2, the proof system extended by the RECURSION and SUBSTITUTION rules of Subsection 3.2 and the above axiom and adaptation rules is complete in the sense of Cook.
The restrictions imposed on the actual parameters in the procedure calls were partly taken care of in [CO81]. In [Gor75] the call-by-name parameter mechanism was used. Analogous work was carried out for the call-by-value and call-by-variable parameter mechanisms. In [dB80] soundness and relative completeness were proved for a proof system in which the RECURSION I rule was used instead of the RECURSION rule and in which, in addition to various rules mentioned so far, a proof rule dealing with the renaming of variables in programs was added. However, the proof was established only for the special case of a single recursive procedure, given the combinatorial explosion of the cases concerned with the relation between the actual and formal parameters. The main ideas of this proof were discussed in [Apt81].
Total correctness To deal with total correctness of recursive procedures the following analogue of the WHILE I rule was proposed independently in [Cla76] and [Sok77]:
RECURSION II
given the procedure declaration, and where the assertion used in the premises has a free counter variable that does not appear in the procedure and ranges over natural numbers.
In [Apt81] it was stated without a proof that the proof system corresponding to the one used in [Gor75] (more precisely, the one in which the INVARIANCE axiom is dropped, since it is not sound for total correctness, the procedures have no parameters, and the RECURSION rule is replaced by the RECURSION II rule) is sound in the sense of total correctness. However, it was discovered in [AdB90a] that this claim is false. The problem has to do with the fact that the counter variable can be subject to quantifier elimination in the INTRODUCTION rule and to substitution in the SUBSTITUTION rule.
For example, given the declaration of an obviously nonterminating procedure whose body consists solely of a recursive call to itself, one can establish the premises of the above rule and then conclude, by the RECURSION II and CONSEQUENCE rules, a correctness formula wrongly asserting termination of the call.
The solution proposed in [AdB90a] was to stipulate that the counter variables are treated as constants in the INTRODUCTION and SUBSTITUTION rules. This allowed the authors to prove both soundness and relative completeness of the resulting proof system for total correctness of recursive procedures without parameters w.r.t. the arithmetic interpretations introduced in Subsection 4.1.
Having in mind the above complications, in [AdBO09] the following analogue of the WHILE II rule was used for recursive procedures with the callbyvalue parameter mechanism:
RECURSION III