[TYPES] What's a program? (Seriously)

Tadeusz Litak tadeusz.litak at gmail.com
Wed May 19 13:32:48 EDT 2021


This controversy (between Gabriel and Thomas + Jason) is an instructive example. One should perhaps distinguish between
a "logical axiom" understood as a purely propositional scheme and its specific instances in more expressive languages. A
purely propositional scheme may admit a computational interpretation in some domains even when specific instances of it
fail in concrete domains of foundational significance.
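
(To make this concrete: the control-operator reading of excluded middle that Gabriel describes in the quoted message
below can be sketched in a few lines of Haskell, using the Cont monad from the transformers package. Reading "not A" as
a function from A into an arbitrary answer type is my own shorthand for this sketch, not anything from the thread.)

   import Control.Monad.Trans.Cont (Cont, cont, runCont)

   -- "not A" read as: a proof of A lets you reach any conclusion b
   -- within the current continuation.
   type Not r a b = a -> Cont r b

   -- Excluded middle: first answer the "ask" with a fake refutation of A;
   -- if that refutation is ever fed an actual proof of A, jump back to the
   -- original "ask" and answer it again, this time with Left a.
   excludedMiddle :: Cont r (Either a (Not r a b))
   excludedMiddle = cont (\k -> k (Right (\a -> cont (\_ -> k (Left a)))))

   -- The "time travel" in action: the Right branch is served first, but
   -- feeding it 42 restarts the computation, which then sees Left 42.
   demo :: String
   demo = runCont go id
     where
       go = do
         em <- excludedMiddle
         case em of
           Left n     -> pure ("the real Int was " ++ show n)
           Right notA -> do
             _ <- notA (42 :: Int) :: Cont String ()
             pure "unreachable"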

But then, of course, the question arises whether such a scheme deserves the name of a *logical* axiom at all.
Philosophers of logic tend to insist that such an axiom should be valid across all possible worlds. This contrasts
rather funnily with the fact that most of them still seem to prefer classical logic. Then again, they tend to be brought
up on set theory and have little awareness that structures refuting excluded middle are actively explored and used in
modern CS (and math).

(When it comes to the logical character of Choice and suchlike, yet another terminological problem arises. There is a
long-standing discussion concerning the demarcation lines between logic, set/type theory and the rest of mathematics.
Clearly, AC cannot even be stated if the ambient language is not sufficiently rich. Thus, even many philosophers who
believe in ZFC as a foundation might refuse to call Choice a *logical* axiom.)

To come back to the question of whether an inconsistent logic is a logic at all: there are several issues here. The
first one is the validity of the principle of explosion (ex falso quodlibet). It has been argued that Brouwer himself
did not believe in this law. It also wasn't included in Kolmogorov's first attempt at axiomatizing intuitionistic logic
(1925); he added it in a 1932 paper, after Heyting. Ingebrigt Johansson explicitly discarded this law in his minimal
logic of 1937. More broadly, paraconsistent/relevance logicians do study systems admitting some contradictions while
retaining a non-degenerate notion of theoremhood.

However, this is clearly of little help when discussing Turing-complete languages admitting general recursion, where 
*all* types are inhabited and the notion of theoremhood does collapse. To restore some connection with logic, one would 
need to claim that logic is concerned with notions subtler than theoremhood.
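
(A minimal Haskell illustration of this collapse, just to fix intuitions: with unrestricted general recursion every
type is inhabited, so inhabitation can no longer play the role of theoremhood.)

   import Data.Void (Void)

   -- With general recursion, a single looping definition is a "proof" of
   -- anything, including falsity.
   anything :: a
   anything = anything

   bottom :: Void
   bottom = anything

   -- In particular every implication A -> B is "provable", so neither
   -- theoremhood (inhabitation) nor a turnstile defined from it can tell
   -- valid formulas from invalid ones.
   fakeImplication :: a -> b
   fakeImplication _ = anything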

One possible choice, going back to Tarski and underlying the entire enterprise of Abstract Algebraic Logic (AAL), is to
shift focus to the consequence relation (entailment, or the turnstile). I am not sure this is a viable route here. One
would need to come up with a natural turnstile such that A |- B may fail despite A -> B being inhabited and despite
\emptyset |- B being valid. Otherwise, the consequence relation remains as degenerate as the notion of theoremhood.

So one would have to go even more fine-grained and claim that logic is (or should be) concerned with criteria of
identity of proofs, i.e., with the question of what makes two proofs of the same proposition identical. This has been
suggested by some logicians focusing on proof theory and/or categorical logic, but most would probably find it an
extravagant view.
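
(A toy instance of the proof-identity question, in Haskell for concreteness, with names of my own choosing: already the
proposition A -> A -> A has two beta-eta-distinct proofs, and one has to decide whether such differences are logic's
business.)

   -- Two proofs of A -> A -> A. They are not identified by beta/eta
   -- conversion, and as programs they behave differently, yet they prove
   -- the same proposition.
   proof1 :: a -> a -> a
   proof1 x _ = x

   proof2 :: a -> a -> a
   proof2 _ y = y

   -- By contrast, this one beta-reduces to proof1, so most proof-theoretic
   -- criteria of identity would count it as "the same proof" as proof1.
   proof1' :: a -> a -> a
   proof1' x = \y -> const x y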

Best,
t.



On 19/5/21 5:26 PM, Gabriel Scherer wrote:
> [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ]
>
> I am not convinced by the example of Jason and Thomas, which suggests that
> I am missing something.
>
> We can interpret the excluded middle in classical abstract machines (for
> example Curien-Herbelin-family mu-mutilde, or Parigot's earlier classical
> lambda-calculus), or in presence of control operators (classical abstract
> machines being nicer syntax for non-delimited continuation operators). If
> you ask for a proof of (A \/ not A), you get a "fake" proof of (not A);
> if you ever manage to build a proof of A and try to use it to get a
> contradiction using this (not A), it will "cheat" by traveling back in time
> to your "ask",  and serve you your own proof of A.
>
> This gives a computational interpretation of (non-truncated) excluded
> middle that seems perfectly in line with Talia's notion of "program". Of
> course, what we don't get that we might expect is a canonicity property: we
> now have "fake" proofs of (A \/ B) that cannot be distinguished from "real"
> proofs by normalization alone, you have to interact with them to see where
> they take you. (Or, if you see those classical programs through a
> double-negation translation, they aren't really at type (A \/ B), but
> rather at its double-negation translation, which has weirder normal forms.)
>
>
>
>
> On Wed, May 19, 2021 at 5:07 PM Jason Gross <jasongross9 at gmail.com> wrote:
>
>> [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list
>> ]
>>
>> Non-truncated Excluded Middle (that is, the version that returns an
>> informative disjunction) cannot have a computational interpretation in
>> Turing machines, for it would allow you to decide the halting problem.
>> More generally, some computational complexity theory is done with reference
>> to oracles for known-undecidable problems.  Additionally, I'd be suspicious
>> of a computational interpretation of the consistency of ZFC or PA ----
>> would having a computational interpretation of these mean having a type
>> theory that believes that there are ground terms of type False in the
>> presence of a contradiction in ZFC?
>>
>> On Wed, May 19, 2021, 07:38 Talia Ringer <tringer at cs.washington.edu>
>> wrote:
>>
>>> [ The Types Forum,
>> http://lists.seas.upenn.edu/mailman/listinfo/types-list
>>> ]
>>>
>>> Somewhat of a complementary question, and proof to the world that I'm up at
>>> 330 AM still thinking about this:
>>>
>>> Are there interesting or commonly used logical axioms that we know for sure
>>> cannot have computational interpretations?
>>>
>>> On Wed, May 19, 2021, 3:24 AM Neel Krishnaswami <
>>> neelakantan.krishnaswami at gmail.com> wrote:
>>>
>>>> [ The Types Forum,
>>> http://lists.seas.upenn.edu/mailman/listinfo/types-list
>>>> ]
>>>>
>>>> Dear Sandro,
>>>>
>>>> Yes, you're right -- I didn't answer the question, since I was too
>>>> taken by the subject line. :)
>>>>
>>>> Anyway, I do think that HoTT with a non-reducible univalence axiom is
>>>> still a programming language, because we can give a computational
>>>> interpretation to that language: for example, you could follow the
>>>> strategy of Angiuli, Harper and Wilson's POPL 2017 paper,
>>>> *Computational Higher-Dimensional Type Theory*.
>>>>
>>>> Another, simpler example comes from Martin Escardo's example upthread
>>>> of basic Martin-Löf type theory with the function extensionality
>>>> axiom. You can give a very simple realizability interpretation to the
>>>> equality type and extensionality axiom, which lets every compiled
>>>> program compute.
>>>>
>>>> What you lose in both of these cases is not the ability to give a
>>>> computational model to the language, but rather the ability to
>>>> identify normal forms and to use an oriented version of the equational
>>>> theory of the language as the evaluation mechanism.
>>>>
>>>> This is not an overly shocking phenomenon: it occurs even in much
>>>> simpler languages than dependent type theories. For example, once you
>>>> add the reference type `ref a` to ML, it is no longer the case that
>>>> the language has normal forms, because the ref type does not have
>>>> introduction and elimination rules with beta- and eta- rules.
>>>>
>>>> Another way of thinking about this is that often, we *aren't sure*
>>>> what the equational theory of our language is or should be. This is
>>>> because we often derive a language by thinking about a particular
>>>> semantic model, and don't have a clear idea of which equations are
>>>> properly part of the theory of the language, and which ones are
>>>> accidental features of the concrete model.
>>>>
>>>> For example, in the case of name generation – i.e., ref unit – our
>>>> intuitions for which equations hold come from the concrete model of
>>>> nominal sets. But we don't know which of those equations should hold
>>>> in all models of name generation, and which are "coincidental" to
>>>> nominal sets.
>>>>
>>>> Another, more practical, example comes from the theory of state. We
>>>> all have the picture of memory as a big array which is updated by
>>>> assembly instructions a la the state monad. But this model incorrectly
>>>> models the behaviour of memory on modern multicore systems. So a
>>>> proper theory of state for this case should have fewer equations
>>>> than what the folk model of state validates.
>>>>
>>>>
>>>> Best,
>>>> Neel
>>>>
>>>> On 19/05/2021 09:03, Sandro Stucki wrote:
>>>>> Talia: thanks for a thought-provoking question, and thanks everyone else
>>>>> for all the interesting answers so far!
>>>>>
>>>>> Neel: I love your explanation and all your examples!
>>>>>
>>>>> But you didn't really answer Talia's question, did you? I'd be curious
>>>>> to know where and how HoTT without a computation rule for univalence
>>>>> would fit into your classification. It would certainly be a language,
>>>>> and by your definition it has models (e.g. cubical ones) which, if I
>>>>> understand correctly, can be turned into an abstract machine (either a
>>>>> rewriting system per your point 4 or whatever the Agda backends compile
>>>>> to). So according to your definition of programming language (point 3),
>>>>> this version of HoTT would be a programming language simply because
>>>>> there is, in principle, an abstract machine model for it? Is that what
>>>>> you had in mind?
>>>>>
>>>>> Cheers
>>>>> /Sandro
>>>>>
>>>>>
>>>>> On Wed, May 19, 2021 at 6:21 AM Neel Krishnaswami
>>>>> <neelakantan.krishnaswami at gmail.com> wrote:
>>>>>
>>>>>      [ The Types Forum,
>>>>>      http://lists.seas.upenn.edu/mailman/listinfo/types-list ]
>>>>>
>>>>>      Dear Talia,
>>>>>
>>>>>      Here's an imprecise but useful way of organising these ideas that I
>>>>>      found helpful.
>>>>>
>>>>>      1. A *language* is a (generalised) algebraic theory. Basically, think
>>>>>           of a theory as a set of generators and relations in the style of
>>>>>           abstract algebra.
>>>>>
>>>>>           You need to beef this up to handle variables (e.g., see the work
>>>>>           of Fiore and Hamana) but (a) I promised to be imprecise, and (b)
>>>>>           the core intuition that a language is a set of generators for
>>>>>           terms, plus a set of equations these terms satisfy is already
>>>>>           totally visible in the basic case.
>>>>>
>>>>>           For example:
>>>>>
>>>>>           a) the simply-typed lambda calculus
>>>>>           b) regular expressions
>>>>>           c) relational algebra
>>>>>
>>>>>      2. A *model* of a language is literally just any old mathematical
>>>>>           structure which supports the generators of the language and
>>>>>           respects the equations.
>>>>>
>>>>>           For example:
>>>>>
>>>>>           a) you can model the typed lambda calculus using sets
>>>>>              for types and mathematical functions for terms,
>>>>>           b) you can model regular expressions as denoting particular
>>>>>              languages (ie, sets of strings)
>>>>>           c) you can model relational algebra expressions as sets of
>>>>>              tuples
>>>>>
>>>>>      2. A *model of computation* or *machine model* is basically a
>>>>>           description of an abstract machine that we think can be
>>>>>           implemented with physical hardware, at least in principle. So
>>>>>           these are things like finite state machines, Turing machines,
>>>>>           Petri nets, pushdown automata, register machines, circuits, and
>>>>>           so on. Basically, think of models of computation as the things
>>>>>           you study in a computability class.
>>>>>
>>>>>           The Church-Turing thesis bounds which abstract machines we think
>>>>>           it is possible to physically implement.
>>>>>
>>>>>      3. A language is a *programming language* when you can give at least
>>>>>           one model of the language using some machine model.
>>>>>
>>>>>           For example:
>>>>>
>>>>>           a) the types of the lambda calculus can be viewed as partial
>>>>>              equivalence relations over Gödel codes for some universal
>>>>>              Turing machine, and the terms of a type can be assigned to
>>>>>              equivalence classes of the corresponding PER.
>>>>>
>>>>>           b) Regular expressions can be interpreted into finite state
>>>>>              machines quotiented by bisimulation.
>>>>>
>>>>>           c) A set in relational algebra can be realised as equivalence
>>>>>              classes of B-trees, and relational algebra expressions as
>>>>>              nested for-loops over them.
>>>>>
>>>>>          Note that in all three cases we have to quotient the machine model
>>>>>          by a suitable equivalence relation to preserve the equations of
>>>>>          the language's theory.
>>>>>
>>>>>          This quotient is *very* important, and is the source of a lot of
>>>>>          confusion. It hides the equivalences the language theory wants to
>>>>>          deny, but that is not always what the programmer wants – e.g., is
>>>>>          merge sort equal to bubble sort? As mathematical functions, they
>>>>>          surely are, but if you consider them as operations running on an
>>>>>          actual computer, then we will have strong preferences!
>>>>>
>>>>>      4. A common source of confusion arises from the fact that if you have
>>>>>           a nice type-theoretic language (like the STLC), then:
>>>>>
>>>>>           a) the term model of this theory will be the initial model in the
>>>>>              category of models, and
>>>>>           b) you can turn the terms into a machine model by orienting some
>>>>>              of the equations the lambda-theory satisfies and using them
>>>>>              as rewrites.
>>>>>
>>>>>           As a result we abuse language to talk about the theory of the
>>>>>           simply-typed calculus as "being" a programming language. This is
>>>>>           also where operational semantics comes from, at least for purely
>>>>>           functional languages.
>>>>>
>>>>>      Best,
>>>>>      Neel
>>>>>
>>>>>      On 18/05/2021 20:42, Talia Ringer wrote:
>>>>>       > [ The Types Forum,
>>>>>       > http://lists.seas.upenn.edu/mailman/listinfo/types-list ]
>>>>>       >
>>>>>       > Hi friends,
>>>>>       >
>>>>>       > I have a strange discussion I'd like to start. Recently I was
>>>>>       > debating with someone whether Curry-Howard extends to arbitrary
>>>>>       > logical systems---whether all proofs are programs in some sense.
>>>>>       > I argued yes, he argued no. But after a while of arguing, we
>>>>>       > realized that we had different axioms, if you will, modeling what
>>>>>       > a "program" is. Is any term that can be typed a program? I assumed
>>>>>       > yes, he assumed no.
>>>>>       >
>>>>>       > So then I took to Twitter, and I asked the following questions (some
>>>>>       > informal language here, since audience was Twitter):
>>>>>       >
>>>>>       > 1. If you're working in a language in which not all terms compute
>>>>>       > (say, HoTT without a computational interpretation of univalence, so
>>>>>       > not cubical), would you still call terms that mostly compute but
>>>>>       > rely on axioms "programs"?
>>>>>       >
>>>>>       > 2. If you answered no, would you call a term that does fully
>>>>>       > compute in the same language a "program"?
>>>>>       >
>>>>>       > People actually really disagreed here; there was nothing resembling
>>>>>       > consensus. Is a term a program if it calls out to an oracle? Relies
>>>>>       > on an unrealizable axiom? Relies on an axiom that is realizable, but
>>>>>       > not yet realized, like univalence before cubical existed? (I suppose
>>>>>       > some reliance on axioms at some point is necessary, which makes this
>>>>>       > even weirder to me---what makes univalence different to people who
>>>>>       > do not view terms that invoke it as an axiom as programs?)
>>>>>       >
>>>>>       > Anyways, it just feels strange to get to the last three weeks of my
>>>>>       > programming languages PhD, and realize I've never once asked what
>>>>>       > makes a term a program 😅. So it'd be interesting to hear your
>>>>>       > thoughts.
>>>>>       >
>>>>>       > Talia
>>>>>       >
>>>>>


