I still honestly haven't given this enough time to digest—my Java-fu is quite rusty so I'm sort of reading between the lines on what I know about delimited continuations generally.
First I want to state that delimited continuations form a monad and (I believe) that in order to get the Java implementation working you need to implement that monad in the "ambient monad" of Java.
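For concreteness, here's a minimal sketch of that monad in Haskell. The names mirror the standard Control.Monad.Trans.Cont, but the simplified reset/shift signatures here are my own didactic version, not the library's exact API:

```haskell
-- A minimal delimited-continuation monad, written from scratch.
-- A computation is a function that takes "the rest of the program"
-- (the continuation, a -> r) and produces the final answer r.
newtype Cont r a = Cont { runCont :: (a -> r) -> r }

instance Functor (Cont r) where
  fmap f (Cont c) = Cont $ \k -> c (k . f)

instance Applicative (Cont r) where
  pure a = Cont ($ a)
  Cont cf <*> Cont ca = Cont $ \k -> cf (\f -> ca (k . f))

instance Monad (Cont r) where
  Cont c >>= f = Cont $ \k -> c (\a -> runCont (f a) k)

-- reset delimits a computation; shift captures the continuation up
-- to the nearest enclosing reset as an ordinary function.
-- (Simplified signatures: the real library keeps both inside Cont.)
reset :: Cont r r -> r
reset (Cont c) = c id

shift :: ((a -> r) -> Cont r r) -> Cont r a
shift f = Cont $ \k -> reset (f k)
```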
Second, I want to dissect the notion of "monad" as used here into two pieces. On one hand, there is the "PFP notion" of monad, which tends to be represented as a data type and some associated operations; I mean this quite specifically as being aimed at the standard-library monads of Haskell. On the other hand, there's the mathematical "monad", which is just a qualifier applied to any structure that satisfies the monad laws.
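For reference, the laws in question, stated in Haskell notation:

```haskell
-- The three monad laws, which any "mathematical monad" must satisfy:
--   left identity:   return a >>= f   ==  f a
--   right identity:  m >>= return     ==  m
--   associativity:   (m >>= f) >>= g  ==  m >>= (\x -> f x >>= g)
```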
Delimited continuations are definitely a mathematical monad. They can be represented as a PFP monad, but that style is generally eschewed there. Why? Because delimited continuations are very "large", and PFP tends to favor the idea of having your data "forbid invalid states", so... large types are disfavored.
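To make "large" concrete: the continuation monad can embed essentially any other effect, so its type rules nothing out. Here's a sketch (using the standard Control.Monad.Trans.Cont from transformers; the names getC/putC/runStateC are mine) of full mutable state recovered inside plain Cont:

```haskell
import Control.Monad.Trans.Cont (Cont, cont, runCont)

-- State recovered inside the plain continuation monad: choose the
-- answer type to be a function of the state, and get/put fall out
-- without any State type appearing at all.
getC :: Cont (s -> r) s
getC = cont (\k s -> k s s)        -- pass the current state to k, twice

putC :: s -> Cont (s -> r) ()
putC s' = cont (\k _ -> k () s')   -- drop the old state, resume with s'

runStateC :: Cont (s -> (a, s)) a -> s -> (a, s)
runStateC m = runCont m (,)

-- runStateC (do n <- getC; putC (n + 1); return n) 41  ==>  (41, 42)
```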
So now, to the third point: I think this is all just "implementing a specific overlarge monad in Java and Clojure". It turns out that this is really convenient, because overlarge things always are. They are, of course, more convenient in PFP, too. But this convenience comes at the price of making code difficult to treat as data and difficult to analyze statically.
All that said, none of this means it isn't a useful technique for implementing monads in Java/Clojure.
Fourth, what about composability? (Specifically, here, I mean functor composition.) It turns out that delcont composes nicely compared to PFP-style monads, again because of the lack of restrictions on its state space. The non-composability of monads is a good thing, because it reflects the non-commutativity of certain effects.
You can get better composability in three ways: throw out all the non-commutative effects; work with subsets of effects within a particular ambient monad (see PureScript's implementation of Eff); or split the specification of effects from the implementation of effects and leave the decision of ordering up to the implementer (see Haskell's mtl).
If you don't want any of the above, just realize that non-composition is telling you something and forcing you to make choices. One choice is to build a new kind of composition (monad transformer composition) which does work the way we want, precisely by being more sensitive to ordering -- see the sketch below.
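As a concrete illustration of that ordering sensitivity (assuming the mtl and transformers libraries; the example itself, including the name bump, is mine): one mtl-style specification run through two different stacks gives two different answers, because state and failure do not commute:

```haskell
import Control.Monad.State        -- mtl
import Control.Monad.Trans.Maybe  -- transformers

-- One effect specification, written against mtl type classes so the
-- choice of stack (and hence of effect ordering) is left open:
bump :: (MonadState Int m, MonadPlus m) => m ()
bump = modify (+ 1) >> mzero  -- change the state, then fail

-- Two orderings for the same specification:
--   runStateT bump 0 :: Maybe ((), Int)  ==>  Nothing       (failure discards state)
--   runState (runMaybeT bump) 0          ==>  (Nothing, 1)  (state survives failure)
```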
Fifth, and finally, all the above is not to say that there's any lack of merit to using delcont for effects. It's a great technique. It may fit the "algorithmic mind" better, too.
I am trying to say, however, that it's not some grand trick that frees the user from thinking about monads. You still are; you just trade one set of implementation concerns for another. But since these things are still monads, the same mathematical techniques will translate. Which is a great thing -- it means there are more tools to think about, understand, and become confident in the application of this technique.
> I am trying to say, however, that it's not some grand trick that frees the user from thinking about monads.
This brings us back to the discussion of what it means to be "thinking about monads". When a cashier hands me change, is he "thinking about groups"? Is a Rubik's Cube whiz thinking about groups? Sometimes people are better at something by thinking about it in a completely different way. That is the exact meaning of duality: two things are the same, but different circumstances may make looking at one of them more useful than the other. A "push" API (of which "PFP monads", or the "monadic style" as I call it, are an example) and a "pull" API are also duals, yet I believe that when writing concurrent code in imperative languages, the pull style is preferable.
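To make that push/pull duality concrete, here's one way to write the two shapes as types (a sketch; the names and types are mine):

```haskell
-- Two dual shapes for a stream of values:
-- push: the producer drives, handing each element to a callback;
-- pull: the consumer drives, asking for the next element when ready.
newtype Push a = Push { foreach :: (a -> IO ()) -> IO () }
newtype Pull a = Pull { next :: IO (Maybe (a, Pull a)) }
```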
As discussions about cognition might make some PFP people uncomfortable -- we all should just think "in math", right? (I'm kidding. Mostly ;)) -- we can take this back to theory and say that switching between different representations requires computation. In fact, it can be argued that all computation is simply about changing representations and encoding things. 4 is 2 x 2, but going from representing that same thing as "2x2" to "4" is what computation does (and the lambda calculus computes just by substitution -- namely, by "doing nothing" from a mathematical point of view). All this is to say that going from looking at something one way to looking at it another way actually requires some computational power and is not free.
I don't think we're disagreeing at all in principle. I'm mostly trying to point out that your change is a group whether you think about it or not (actually, several). Then, let the best representation win.
Monads aren't a representation; they're a functionality.
Oh, I know we're not disagreeing in principle, but I do think monads are a representation of a functionality -- effects -- in an algebra that does not have impure functions. Once effects are introduced as fundamental constructs, there is no need to be "thinking in monads" anymore.
Imagine if we defined our algebra in such a way that there were no functions -- i.e. nothing that can take parameters and return values -- just sequences of instructions that read and assign global variables (and then define -- outside the calculus -- other effects, such as IO, as changing certain variables). We could then define pure functions in terms of this calculus, rather than doing the opposite and defining change to state in terms of pure functions (i.e. monads). We could then say that a pure function is really a subroutine that performs a transformation on variable addresses which we'll call "variable renaming", and writes its output in some well-known address that the "caller" then renames.
In fact, that calculus exists in the Turing machine, or in our actual computers, and the work of the Haskell compiler is to transform the "high level" concept of pure functions to the "fundamental" representation of instructions and effects... We could then say that even when working with functions you still have to think in terms of instructions (this is actually true if you care about complexity or its earthly form -- performance).
Of course, there is no objective way to decide which of these representations is more fundamental. They are just different, requiring a compiler -- or a mind -- to switch from one to the other. It is, however, possible that one of them results in more efficient programs on current hardware than the other, and one of them might be more natural for human cognition than the other. Or the same one could be both ;) but actually I think reality is more interesting than that.
I think both human cognition and compilers might prefer neither on its own, but a combination of the two. Messy from a mathematical perspective, but that's the nature of computation -- in neurons or silicon. When we need to formally reason about the program, a formal verification tool can then transform the messy representation into one or the other "robust" algebra and do its thing there.