[TYPES] effect vs. coeffect expressiveness

Neel Krishnaswami neelakantan.krishnaswami at gmail.com
Thu Apr 30 17:59:03 EDT 2020


Hello,

I sent most of this message to Jonathan privately, but after Vikraman's
comment I realized that most of it is surely appropriate here, as well.

First, I want to distinguish between two sorts of effect systems,
descriptive effect systems and synthetic effect systems. Descriptive
effect systems start with the idea that the language is describing an
underlying semantics or abstract machine, and the effect annotations
describe which subset of the effects are allowed to happen. On the
other hand, synthetic effect systems are about building custom
effects/monads for a program -- effect handlers are an example of
this kind.

I am going to restrict my remarks to the first kind of effect system,
focusing particularly on capabilities. In this setting, a machine may
have many possible effects -- for example, we can count writing to each
file a program may write to as a distinct effect.

An effect system in the classic sense is basically a family of monads,
one per subset of allowed effects. That is, we can imagine a family
of monads

     T(C, X)

where C is the set of channels that the computation is permitted to
write on, and X is the return type. If we take the indexing by allowed
write set seriously, we get a *graded monad*. That is, the monadic
operations get the refined types:

     return : X ≃ T(∅, X)
     join   : T(C, T(C', X)) → T(C ∪ C', X)

The first says that `return` produces a computation that will not
write to any channels, and the fact that I wrote an isomorphism
means you can escape computations which don't perform any effects.
The type of `join` says that if you have a computation
that could write to `C`, and returns a computation that could write
to `C'`, then when you sequence the computations to get the `X`, you
could have writes on `C ∪ C'`.
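
To make the grading concrete, here is a rough Haskell sketch of such a
family of monads. It approximates the set `C` by a type-level list of
channel names (so union becomes append, an over-approximation), and the
names `T`, `ret` and `joinT` are mine, purely for illustration:

     {-# LANGUAGE DataKinds, TypeFamilies, TypeOperators, KindSignatures #-}

     import Data.Kind (Type)
     import GHC.TypeLits (Symbol)

     -- Type-level append, standing in for the union C ∪ C'
     -- (an over-approximation: duplicates are not collapsed).
     type family (++) (xs :: [Symbol]) (ys :: [Symbol]) :: [Symbol] where
       '[]       ++ ys = ys
       (x ': xs) ++ ys = x ': (xs ++ ys)

     -- T c x: a computation that may write on the channels listed in c,
     -- returning an x; writes are logged as (channel, message) pairs.
     newtype T (c :: [Symbol]) (x :: Type) =
       T { runT :: ([(String, String)], x) }

     -- return : X → T(∅, X); runT gives back the other direction of the iso.
     ret :: x -> T '[] x
     ret x = T ([], x)

     -- join : T(C, T(C', X)) → T(C ∪ C', X)
     joinT :: T c (T c' x) -> T (c ++ c') x
     joinT (T (w, T (w', x))) = T (w ++ w', x)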

To get a precise type for the `write` operation, you need a little
bit of indexed typing (Dominic Orchard & co's Granule language could
certainly express this, for instance):

     write : ∀c. Channel(c) → String → T({c}, unit)

This says that the `write` function takes the channel `c` and a
string, and performs a computation that may write on the singleton
set of channels `{c}`.
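
Continuing the sketch, a channel value can carry its name at both the
type level and the term level -- a poor man's stand-in for the indexed
`Channel(c)` type (again, the concrete representation is just mine):

     -- A channel singleton: the index c names the channel statically,
     -- the String field names it at run time.
     data Channel (c :: Symbol) = Channel String

     -- write : ∀c. Channel(c) → String → T({c}, unit)
     write :: Channel c -> String -> T '[c] ()
     write (Channel name) msg = T ([(name, msg)], ())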

Now that we have a type of channel values, it becomes natural to
think of values as *owning* capabilities. If you have a type
`Channel(c)` and `Channel(c')` then we expect the linear pair
type `Channel(c) ⊗ Channel(c')` to represent the ownership of
two *distinct* channels. This is very much like the `P ∗ Q`
separating conjunction in separation logic. Similarly, there
should be a linear function space `X ⊸ Y` representing a function
receiving an argument with capabilities disjoint from the
function, much like the magic wand of separation logic.

How should tensor products and computations interact? Well, we
know that the monad and tensor should satisfy the property of
having a *strength*:

     strength : T(C, X) ⊗ Y ⊸ T(C, X ⊗ Y)

Surprisingly, having a linear strength corresponds to the *frame
property* of separation logic. Intuitively, if you think of

     f : P ⊸ T(C, Q)

as representing a computation that has a precondition `P` and
postcondition `Q`, then having a strength is what you need to
derive

     g : P ⊗ R ⊸ T(C, Q ⊗ R)

You can see it with the following linear term:

     frame : (P ⊸ T(C, Q)) ⊸ (P ⊗ R) ⊸ T(C, Q ⊗ R)
     frame f (p, r) = strength(f p, r)

I documented this relationship in my POPL 2015 paper, *Integrating
Linear and Dependent Types*, but I think it was folklore before that:
Bob Atkey mentioned to me that he knew it as well.  In addition, I
should note having a strength with the tensor is what is needed to
make the linear version of do-notation work out the way that you
expect.
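
In the same sketch, with ordinary pairs standing in for ⊗ and ordinary
arrows for ⊸ (so linearity is not actually enforced), the strength and
the derived frame come out as:

     -- strength : T(C, X) ⊗ Y ⊸ T(C, X ⊗ Y)
     strength :: (T c x, y) -> T c (x, y)
     strength (T (w, x), y) = T (w, (x, y))

     -- frame : (P ⊸ T(C, Q)) ⊸ (P ⊗ R) ⊸ T(C, Q ⊗ R)
     frame :: (p -> T c q) -> (p, r) -> T c (q, r)
     frame f (p, r) = strength (f p, r)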

Now, once you have a linear type discipline for capabilities, it is
natural to ask what the exponential !A should mean. Brunel, Gaboardi,
Mazza and Zdancewic suggested in their 2014 paper *A Core Quantitative
Coeffect Calculus* that coeffects can be understood as *graded
exponentials* -- just as an effect system can be seen as a graded monad,
graded comonads can model coeffects.

This is a bit abstract, so it is worth asking what kinds of graded
comonads arise in the specific context of capabilities. One interesting
one is the *safety comonad*. That is, we introduce a family of
graded comonads:

     Safe(C, X)

which represents expressions of type `X` which **do not** own any
capabilities in `C`. Notice the duality here -- an effect type
`T(C, X)` says that the computation *may use some* of the capabilities
in `C`, and the safety comonad `Safe(C, X)` says that the expression
*must not own any* capabilities in `C`.

These satisfy the comonad laws for the evident operations:

     extract : Safe(C, X) → X
     duplicate : Safe(C, X) → Safe(C, Safe(C, X))

which mean that if you have an `X` which owns nothing in `C`, it is
still an `X`, and that a value which owns nothing in `C` is itself a
`C`-safe value which owns nothing in `C`.

The `Safe` comonad also satisfies the properties you might expect
given the intuition about denial of ownership:

     iso  : Safe(∅, X) ≃ X
     dist : Safe(C ∪ C', X) ≃ Safe(C, Safe(C', X))
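
A shape-only rendering in the same sketch: `Safe` below records just the
types of these operations, since the plain Haskell wrapper cannot enforce
the no-ownership invariant itself (the suffixed names are mine):

     -- Safe c x: an x together with the (unenforced, in this sketch)
     -- claim that it owns no capability in c.
     newtype Safe (c :: [Symbol]) (x :: Type) = Safe x

     extractS :: Safe c x -> x
     extractS (Safe x) = x

     duplicateS :: Safe c x -> Safe c (Safe c x)
     duplicateS (Safe x) = Safe (Safe x)

     -- One direction of iso and dist; trivial here because the wrapper
     -- carries no run-time evidence of the claim.
     isoTo :: Safe '[] x -> x
     isoTo (Safe x) = x

     distTo :: Safe (c ++ c') x -> Safe c (Safe c' x)
     distTo (Safe x) = Safe (Safe x)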

I should note that this is not exactly the definition of graded
comonad given in the literature, but IMO that's not enormously
surprising -- the exponential in linear logic is not canonical (you
can have multiple non-isomorphic exponentials for the same linear
language), and so you have to let the application guide you.

Now, two surprising things happen.

First, in this setting, there is a natural notion of nonlinear arrow
which **is not** the linear decomposition `!A ⊸ B`. Instead, there is
a free-standing function space `A → B` which represents a function
which can *share* capabilities with its argument, just like the
ordinary implication in separation logic permits sharing resources
between hypothesis and consequent.

Indeed, this notion of function type corresponds to *capability-safe
functions* -- these are functions which can only access capabilities
either through their argument or their free variables. That is, they
are functions which *lack ambient authority*.

So we have two notions of function: those which can transfer ownership
and have a linear type discipline, and those which are capability-safe
and have a nonlinear type discipline. (Together, they form a
categorical model of bunched implications.)

Second, the comonad and the monad have an interaction:

     cancel : Safe(C, T(C', X)) ⊸ Safe(C, T(C' - C, X))

The type of the cancellation operation `cancel` says that
if you have a computation which does not own any capabilities
in `C`, then the computation cannot perform any writes
to channels in `C`.

In particular, if `C` contains all of `C'` (that is, C' ⊆ C), then we get

     cancel : Safe(C, T(C', X)) ⊸ Safe(C, T(C' - C, X))
            = Safe(C, T(C', X)) ⊸ Safe(C, T(∅, X))      // because C' ⊆ C, so C' - C = ∅
            ≃ Safe(C, T(C', X)) ⊸ Safe(C, X)

which says that you can "escape the monad" if you know that you don't
own any of the capabilities the computation may use.  In my view, this
cancellation property is the essence of how effects and coeffects
interact in the setting of capabilities.
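
In the sketch, the general `cancel` would need a type-level set
difference for the grade C' - C, which I won't spell out; but the
derived escape, specialised to C' = C for simplicity, relies on exactly
the invariant above -- the computation can write on no channel, so its
log must be empty and we may simply run it:

     -- escape : Safe(C, T(C, X)) ⊸ Safe(C, X)
     escape :: Safe c (T c x) -> Safe c x
     escape (Safe t) = let (_emptyLog, x) = runT t in Safe x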

Interestingly, this cancellation law does not appear to be an instance
of the kinds of distributive law that Marco Gaboardi, Shin-ya Katsumata,
Dominic Orchard, Flavien Breuvart, and Tarmo Uustalu studied in their
ICFP 2016 paper, *Combining Effects and Coeffects via Grading*. (As
Dominic and Flavien remarked earlier in the thread, there is a lot
of unexplored territory in this space.)

Specifically in the context of capabilities, neither effects nor
coeffects supplant the other, and neither ownership transfer nor
capability-safety supplants the other! Put together, they give you a
very fine-grained ability to control and reason about how capabilities
are transferred and used.

Best,
Neel

On 30/04/2020 19:24, Vikraman Choudhury wrote:
> [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ]
>
>
> Hi Jonathan,
>
> I'd like to point out a recent paper[0] by Neel Krishnaswami and me,
> "Recovering Purity with Comonads and Capabilities" which provides a
> different take on this topic.
>
> The ability to perform an effect can be encoded by the use of
> permission/capability variables. A comonad/coeffect modality can be used
> to control access to capability variables. Using this idea, we can
> encode capability-safe and pure functions in an unsafe/impure calculus
> -- a pure function is a capability-safe function with no capabilities!
>
> We show that it is possible to use this comonad to embed the pure CBV
> lambda calculus into an impure calculus, while preserving the full
> beta-eta equational theory. This is not unlike the embedding of linear
> logics into intuitionistic logic.
>
> [0] https://arxiv.org/abs/1907.07283
>
> Jonathan Aldrich writes:
>
>> [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ]
>>
>> Dear Types,
>>
>> I am curious about the relative expressiveness of effects and
>> coeffects.  Has this been studied?
>>
>> The cleanest distinction I've seen is that effects capture the impact
>> a program has on its environment, i.e. what it produces.  Coeffects
>> capture the requirements that a program puts on its environment: what
>> it consumes.  This is discussed, for example, in Gaboardi et al's ICFP
>> 2016 paper, "Combining Effects and Coeffects via Grading" (and
>> elsewhere).
>>
>> There is some useful intuition in this distinction, and it describes
>> the different structure of checking rules in effect and coeffect
>> systems.  However, I don't find this distinction very helpful in
>> thinking about expressiveness.  It seems like many examples can be
>> expressed in either an effect or a coeffect system.  For example, an
>> exception is a classic example of an effect (e.g. in the paper
>> mentioned above, and many others).  However, it seems to me that
>> exceptions can also be modeled as coeffects: code that might throw an
>> exception requires the caller to pass a handler for that exception to
>> it--or perhaps an abstract "permission" to throw that exception.  So
>> in what sense are exceptions an effect, rather than a coeffect?
>>
>> Is this true of all the kinds of things that are typically expressed
>> with effects and coeffects--that they could just as easily be
>> expressed in the other style?  If so, what are the benefits of one
>> style vs. the other?  Or are there examples that can only be expressed
>> in one style--or for which expression in the other style is much more
>> awkward?
>>
>> Perhaps these questions have been written about, but I haven't been
>> able to find it.  I would love to get some pointers.  I am
>> particularly interested in a practical explanation of the differences
>> in expressiveness, or theoretical results that have a direct and
>> explicit relationship to practice--with practical examples in either
>> case.  (co-)Monads and/or anything categorical are not a very helpful
>> starting point for me, but effect systems and/or linear types are
>> (I've done research on both).  I am also more interested in the
>> descriptive view of effects/coeffects (in the sense of Filinski,
>> ICFP'11) than the prescriptive view.
>>
>> Thanks,
>>
>> Jonathan
>
> --
> Vikraman

