[TYPES] decidability of BCCCs?

Gabriel Scherer gabriel.scherer at gmail.com
Sat Oct 8 07:13:46 EDT 2016


> Hence your construction must exploit simple-typedness.
> Can you comment on this?

The idea of saturation is to make all possible deductions from a
context, that is, to introduce all possible neutral terms of positive
type (all left-focusing phases); by doing this as early as possible,
we make sure that each neutral is introduced as early as it can be and
that the result of the case analysis on it is shared by all subsequent
subterms. (One can think of the general algorithm as a mix of backward
and forward search, with invertible phases doing backward,
goal-directed search and saturation phases doing forward,
context-directed search.)
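
As a hedged illustration (in OCaml-like syntax, not the formal syntax
of the article), the two terms below case on the same sum-typed
variable at different depths; saturating as early as possible yields
the second form, where the case analysis sits at the top and its
result is shared by both components of the pair:

    type ('a, 'b) sum = L of 'a | R of 'b

    (* two case analyses on x, buried inside each pair component *)
    let deep (x : (unit, unit) sum) =
      ( (match x with L () -> true  | R () -> false),
        (match x with L () -> false | R () -> true) )

    (* saturated-style form: the case on x happens as early as
       possible, and both components share its result *)
    let shared (x : (unit, unit) sum) =
      match x with
      | L () -> (true, false)
      | R () -> (false, true)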

Of course, in general the set of neutral terms to introduce from the
context may be infinite: consider for example the context
(z : X+, s : X+ -> X+), where "X+" denotes an atomic type that is
positive. If we asked for full saturation, any saturated term in this
context would be infinite. We relax this requirement by allowing
saturation on only a subset of the possible neutrals. But if we
allowed arbitrary subsets, we would lose canonicity: if you stop
saturation too early you may "miss" a proof of 0 and thus not see that
two terms are actually equal. We need a compromise where we introduce
enough neutrals to eventually find any possible proof of 0.
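
To make the problem concrete before describing the compromise: in the
context (z : X+, s : X+ -> X+) above, the candidate neutrals already
form an infinite family z, s z, s (s z), and so on. A hedged OCaml
sketch (rendering X+ as an abstract type variable) just lists the
first few:

    (* full saturation would have to case-split on every one of these *)
    let neutrals : 'x -> ('x -> 'x) -> 'x list =
     fun z s -> [ z; s z; s (s z); s (s (s z)) (* ... unboundedly *) ]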

Our compromise is the following. We ask that the selected subset be
complete from a provability point of view: for any type that could be
introduced by saturation at this point, saturation does not stop
before this type already exists in the context. (In the example above,
saturation can stop immediately, as (z : X+) is already in the
context. For general positive types the notion of "exists in the
context" becomes "provable by a strong neutral"; see Section 4.2 of
the article.) This is where we use simple-typedness: we rely on the
subformula property (in cut-free proofs, the types that appear in
sub-derivations of a bigger derivation are subformulas of the types of
the bigger derivation's judgment; this does not hold in the presence
of polymorphism) to argue that there always exist finite subsets that
satisfy this "complete for provability" restriction.

In System F or with Pi-types, when instantiating a polymorphic type or
a dependent abstraction, there is a potentially infinite space of
possible instantiation choices that all result in different types, and
we can't know in advance which will or will not be useful to the rest
of the computation.
On the other hand, focusing itself extends to richer type systems or
logics, so I hope that some of the ideas in this work (in particular
on the initial question of "which types have a unique inhabitant?")
can be extended to richer systems, to get semi-decidable procedures.

> isn't the only thing you get for the empty type exactly this: that is that any two terms
> are equal iff their context is logically inconsistent?

Not quite: for example, in the empty (consistent) context, any two
terms of type (0 -> Bool) are equal. If you do goal-directed proof
search, it is obvious that you should introduce a lambda-abstraction
here and thus discover an inconsistent context. But a given
equivalence algorithm works from the two terms to compare, and it may
or may not decide to look under that function type.
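
A hedged OCaml rendering of that example (writing the empty type as
a variant with no constructors, as OCaml allows since 4.07):

    type empty = |

    (* both are closed terms of type empty -> bool; since no closed
       value of type empty exists, no context can apply them to an
       argument and tell them apart, so they are contextually equal
       even though the ambient context is consistent *)
    let f (_ : empty) : bool = true
    let g (_ : empty) : bool = false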

Thanks for the questions!

On Sat, Oct 8, 2016 at 4:05 AM, Thorsten Altenkirch
<Thorsten.Altenkirch at nottingham.ac.uk> wrote:
> Hi Gabriel,
>
> this is certainly an impressive achievement: congratulations.
>
> We know that your construction also contains an algorithm to decide
> logical consistency in intuitionistic propositional logic, that is G |-
> true = false iff G is logically inconsistent. That shows in particular
> that decidability can only hold for simple types, but not for dependent
> types because here we would decide logical consistency for intuitionistic
> predicate logic which is undecidable. Hence your construction must exploit
> simple-typedness. Can you comment on this?
>
> I also wonder about the following: if you have an algorithm to decide
> equality for non-empty sums, isn't the only thing you get for the empty
> type exactly this: that is that any two terms are equal iff their context
> is logically inconsistent? That would provide a way to extend our
> algorithm to empty types but I suspect proving this proposition is
> non-trivial.
>
> Cheers,
> Thorsten
>
> On 06/10/2016, 19:58, "Types-list on behalf of Gabriel Scherer"
> <types-list-bounces at lists.seas.upenn.edu on behalf of
> gabriel.scherer at gmail.com> wrote:
>
>>[ The Types Forum,
>>http://lists.seas.upenn.edu/mailman/listinfo/types-list ]
>>
>>Hi all,
>>
>>I'm coming back to this old types-list thread to announce the article
>>
>>  Deciding equivalence with sums and the empty type
>>  Gabriel Scherer, 2016
>>  https://arxiv.org/abs/1610.01213
>>
>>which proposes a proof of decidability of contextual equivalence for
>>the simply-typed lambda-calculus with sums and the empty type, which
>>corresponds to decidability in the free bicartesian closed category. Any
>>feedback is of course warmly welcome.
>>
>>
>># Context of the work
>>
>>The proof builds on an extension of the proof-theoretic technique of
>>focusing. Focusing is a technique to make the proofs of a logic more
>>canonical by reasoning on the invertibility of proof rules, and by
>>imposing an ordering between invertible phases (automatic) and
>>non-invertible phases (interesting). In previous work with Didier
>>Rémy, we proposed a refinement of focused intuitionistic logic that
>>is "saturated" (non-invertible phases have to perform all possible
>>non-invertible deductions at once, modulo some restrictions to
>>preserve termination), which gives quasi-normal forms that are quite
>>close to the normal forms in previous work on equivalence in the
>>presence of sums. The goal was to answer the question, in a type
>>system with sums but no empty type (we knew the empty type is hard,
>>in large part due to the discussion in this very types-list thread):
>>"which types have a unique inhabitant?", which requires a type system
>>with as few redundant (equivalent) terms as possible.
>>
>>While experimenting with the implementation of an algorithm that
>>computes an answer to the following question (given a type, explore
>>its saturated derivations to check whether there is a unique one), we
>>noticed that adding the empty type seemed to work perfectly, although
>>our theory did not account for it. While wrapping up my PhD manuscript
>>( https://hal.inria.fr/tel-01309712/ ; it contains a detailed
>>introduction to focusing and saturation), I tried to provide
>>a rigorous canonicity proof for the system extended with the empty
>>type, but could not complete one in time for submission. The above
>>article now proposes an answer to this question.
>>
>>Many thanks to the participants in this types-list thread, which got
>>me interested in trying to extend the system with the empty type.
>>Without it I also wouldn't have found out about Arbob Ahmad's work,
>>an (unfortunately unpublished) independent use of focusing for
>>program equivalence that I find very interesting.
>>
>>
>># Related work
>>
>>I don't yet have a good understanding of the relation between this
>>approach and existing work on Normalization by Evaluation (NbE). NbE
>>requires a good understanding of how to combine/compose fragments of
>>normal forms while evaluating the original source terms; our normal
>>forms have a very non-local structure that makes this difficult. (In
>>particular, I think that it is possible to present an equivalence
>>algorithm that compares two focused terms, computing a saturated form
>>on the fly; but starting from un-focused terms and doing focusing on
>>the fly looks rather difficult.) Saturated normal forms are related
>>to the notion of "maximally multi-focused proof" (multi-focusing is
>>an extension of focusing that has been studied in the last decade),
>>and this corresponds to the known difficulty of formulating
>>cut-elimination procedures that preserve maximal multi-focusing.
>>Giving a normalization-by-evaluation presentation of saturation would
>>thus be very interesting.
>>
>>It is easier to relate to the rewriting-based approaches of Neil
>>Ghani and Sam Lindley; the normalization of focused terms into
>>saturated terms is very close to the rewriting they propose. In
>>rewriting-based approaches, one tries to extrude matches of the form
>>(match n with ...) (where n is a neutral term; if n started with a
>>constructor, this would be beta-reducible) to move them closer to the
>>root of the term, and then to merge redundant matches. When rewriting
>>a focused term into a saturated term, the syntax has let-bindings for
>>neutrals of positive type (let x = n in ...), and those get extruded
>>up in the term. Then the focusing structure imposes that the
>>corresponding match happens as soon as possible after that (so the
>>"merge" of redundant matches is implied by focusing). (The use of
>>control operators in Balat, Di Cosmo, Fiore also serves the same
>>purpose of finding the earliest possible place to introduce
>>a neutral.) On the other hand, saturation will do forward-based proof
>>search to synthesize all definable neutrals, instead of just
>>permuting fragments of the two terms to compare, and this is the key
>>ingredient that makes it gracefully extend to the empty type: one has
>>to go out in the wild looking for proofs of 0.
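>>
>>As a hedged OCaml sketch of this extrusion-and-merging step (an
>>illustration, not the article's formal syntax): the case on the
>>neutral (n y) is moved to the root, where the neutral is let-bound
>>once, its case analysis happens immediately, and its result is shared:
>>
>>    type ('a, 'b) sum = L of 'a | R of 'b
>>
>>    (* before: the neutral (n y) is matched twice, deep in the term *)
>>    let before (n : int -> (bool, bool) sum) (y : int) =
>>      ( (match n y with L b -> b     | R c -> not c),
>>        (match n y with L b -> not b | R c -> c) )
>>
>>    (* after extrusion and merging: a single "let x = n y in ...",
>>       immediately followed by the case analysis on x, whose result
>>       both pair components share *)
>>    let after (n : int -> (bool, bool) sum) (y : int) =
>>      let x = n y in
>>      match x with
>>      | L b -> (b, not b)
>>      | R c -> (not c, c)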
>>
>>
>>On Sat, Sep 21, 2013 at 10:25 AM, Ryan Wisnesky <ryan at cs.harvard.edu>
>>wrote:
>>> [ The Types Forum,
>>>http://lists.seas.upenn.edu/mailman/listinfo/types-list ]
>>>
>>> Hi Roberto,
>>>
>>> BCCCs seem to come up a lot at the intersection between database theory
>>>and programming language theory.  For example, if you say that a
>>>database schema is a finitely presented category, then the category of
>>>schemas and mappings between them is a BCCC.  For this particular BCCC,
>>>I believe that equality is semi-decidable, so I would like to say that
>>>its semi-decidability comes from its particular nature, and is not
>>>inherited from the free BCCC.  BCCCs also come up when you try to
>>>connect higher-order logic to the nested relational calculus via their
>>>categorical (topos) semantics.  I'd be happy to talk more about it
>>>offline.
>>>
>>> Thanks,
>>> Ryan
>>>
>>> On Sep 21, 2013, at 9:34 AM, Roberto Di Cosmo <roberto at dicosmo.org>
>>>wrote:
>>>
>>>> Dear Ryan,
>>>>     beta-eta equality in the presence of strong sums is a tricky
>>>>subject that has been the object of attention for more than 20
>>>>years, both "per se" in the rewriting community, which looked for an
>>>>elegant confluent rewriting system, and as a "tool" for researchers
>>>>interested in type theory and semantics, who wanted to know, for
>>>>example, whether one can characterise object isomorphism in a BiCCC,
>>>>a problem that one can try to attack by studying invertible terms in
>>>>the corresponding lambda calculus with strong types.
>>>>
>>>> If one stays in the 1,*,-> fragment, all is wonderful: you get
>>>>a nice confluent and normalising rewriting system for the lambda
>>>>calculus based on eta expansion (see a blazingly short proof in
>>>>http://www.dicosmo.org/Articles/POD.ps), and the type isomorphisms
>>>>are finitely axiomatizable and correspond exactly to the equations
>>>>one knows from high school for natural numbers equipped with
>>>>product, unit and exponentiation, related to Tarski's High School
>>>>algebra problem (a nice proof through Finite Sets is due to Sergei
>>>>Soloviev in 1988; I gave one based on invertible terms in my PhD
>>>>thesis back in 1992).
>>>>
>>>> You may add weak sums to the calculus, and still get a nice
>>>>confluent and normalising rewriting system, which can also
>>>>accommodate bounded recursion
>>>>(http://www.pps.univ-paris-diderot.fr/~kesner/papers/icalp93.ps.gz).
>>>>
>>>> As soon as you add strong sums, though, things get very hairy: Neil
>>>>Ghani proposed in his PhD thesis back in 1995 a decision procedure for
>>>>lambda terms with strong 1,+,*,-> (no zero, see
>>>>
>>>>https://personal.cis.strath.ac.uk/neil.ghani/papers/yellowthesis.ps.gz):
>>>> a rewriting system is used to bring terms in this calculus to a normal
>>>>form which is not unique, and then it is shown that all such normal
>>>>forms are equivalent modulo a sort of commuting conversions, so to
>>>>decide equality of two terms, you first normalise them, and then check
>>>>whether they are equivalent w.r.t. conversion (the equivalence classes
>>>>being finite, this is decidable). The NbE approach of Thorsten
>>>>Altenkirch (et al.) in your LICS01 reference seems to provide a more
>>>>straightforward way to decide equality, even if somebody coming from
>>>>the rewriting world might like a more intuitive description of what
>>>>the normal forms actually look like.
>>>>
>>>> If you add the zero, you are in for more surprises: looking at the
>>>>special BiCCC formed by the natural numbers equipped with 0,1,+,* and
>>>>exponentiation, you discover that equality is no longer finitely
>>>>axiomatisable, and yet a decision procedure does exist (see
>>>>http://www.dicosmo.org/Papers/zeroisnfa.pdf),
>>>> which solves in the negative, but with a positive note, Tarski's
>>>>problem for this system.
>>>>
>>>> But then, in BiCCCs, the connection between numerical equalities and
>>>>type isomorphisms breaks down and the nice result above says nothing
>>>>about what happens for the general case. In
>>>>http://www.dicosmo.org/Papers/lics02.pdf we proved that isos are not
>>>>finitely axiomatisable in BiCCCs, but we know nothing about decidability.
>>>>
>>>> As for deciding beta-eta equality, the work of Marcelo Fiore and
>>>>Vincent Balat http://www.cl.cam.ac.uk/~mpf23/papers/Types/tdpe.pdf
>>>>shows how to compute normal forms in full BiCCCs, and I kind of
>>>>remember that we were convinced at some point that these normal forms
>>>>must be unique up to some sort of commuting conversions (you can see
>>>>this in the examples of the paper), but we did not prove it at that
>>>>moment, and it is true that one would like to see this last step done
>>>>(or a proof that it cannot be done).
>>>>
>>>> I wonder whether Marcelo, Neil and Thorsten might shed some light on
>>>>this (Vincent and I started working on quite different areas a while
>>>>ago)
>>>>
>>>> By the way, out of curiosity, would you mind telling us how this
>>>>result would be useful to your research?
>>>>
>>>> --
>>>> Roberto
>>>>
>>>> P.S.: for those interested in type isomorphism,
>>>>http://www.dicosmo.org/Paper/mscs-survey.pdf provides a concise
>>>>overview, even if no longer up to date.
>>>>
>>>>
>>>>
>>>> 2013/9/20 Ryan Wisnesky <ryan at cs.harvard.edu>
>>>> [ The Types Forum,
>>>>http://lists.seas.upenn.edu/mailman/listinfo/types-list ]
>>>>
>>>> Hello,
>>>>
>>>> I am trying to find a reference that states whether or not the free
>>>>bi-cartesian closed category is decidable.  That is, I am wondering if
>>>>beta-eta equality of the simply typed lambda calculus extended with
>>>>strong 0,1,+,* types is decidable.  (In particular, I am interested in
>>>>the full calculus, including the eliminator for 0, introduction for 1,
>>>>and eta for all types).
>>>>
>>>> So far, my search through the literature says that the answer to this
>>>>question is "almost" or "maybe" :
>>>>
>>>> - the STLC with strong 1,+,* types is decidable (omits 0, the empty
>>>>type): http://www.cs.nott.ac.uk/~txa/publ/lics01.pdf
>>>>
>>>> - the STLC with strong 0,1,+,* types has normal forms, but equivalent
>>>>terms may have different such normal forms:
>>>>http://www.cl.cam.ac.uk/~mpf23/papers/Types/tdpe.pdf
>>>>
>>>> - work on mechanized proofs of correctness for deciding co-product
>>>>equality may or may not include the "absurd" eliminator for the empty
>>>>type: http://www.cis.upenn.edu/~sweirich/wmm/wmm07/programme/licata.pdf
>>>>
>>>> I feel like the answer to this question is probably well-known, but I
>>>>can't seem to find it in the literature.  Any help would be appreciated.
>>>>
>>>> Thanks,
>>>> Ryan
>>>>
>>>>
>>>>
>>>> --
>>>> Roberto Di Cosmo
>>>>
>>>> ------------------------------------------------------------------
>>>> Professeur               En delegation a l'INRIA
>>>> PPS                      E-mail: roberto at dicosmo.org
>>>> Universite Paris Diderot WWW  : http://www.dicosmo.org
>>>> Case 7014                Tel  : ++33-(0)1-57 27 92 20
>>>> 5, Rue Thomas Mann
>>>> F-75205 Paris Cedex 13   Identica: http://identi.ca/rdicosmo
>>>> FRANCE.                  Twitter: http://twitter.com/rdicosmo
>>>> ------------------------------------------------------------------
>>>
>
>
>
>
>

