> and (c) requires a language feature (sequence points) to disambiguate in which order side effects get executed. With Haskell, we just don’t have to care.
Reading up to this point, I had to chuckle a bit. I have struggled with the Haskell type system more than I care to admit; it's almost never "we just don't have to care" when comparing to most other popular languages.
That being said, this article does a nice job of gently introducing some of the basics that will trip up somebody who is casually looking through code wondering what *> and <*> and <* do. As usual, there is a steep learning curve because of stuff like this all over the codebase. If I walk away for a month, I need to revisit >>= vs >> and other common operators before I can be productive. It probably doesn't help that I never actually speak to a human about these concepts so in my head it's always ">>=" and not "bind."
I try to avoid >>= and >> (or *>) because I know it trips people up; do-notation is more than fine. The exception is when parsing with one of the parsecs where you get a lot of <* and *> usage and all the tutorials use those symbols.
But I like <|> , it feels very clear that it has a sort of "or" meaning.
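For example, a hedged sketch in parsec style (parens and boolP are illustrative names, assuming the parsec package):

    import Text.Parsec
    import Text.Parsec.String (Parser)

    -- *> and <* run both parsers but keep only p's result
    parens :: Parser a -> Parser a
    parens p = char '(' *> p <* char ')'

    -- <|> reads naturally as "or": try the left parser, else the right
    boolP :: Parser Bool
    boolP = (True <$ string "true") <|> (False <$ string "false")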
Elixir doesn’t even need a special syntax — it gets Haskell’s LambdaCase as a natural consequence of case being just another function, and the pipe operator always chaining by the first argument.
Haskell’s >>= is doing something slightly different. ‘getThing >>= \case…’ means roughly ‘perform the action getThing, then match the result and perform the matched branch’
Whereas ‘getThing |> \case…’ means ‘pattern match on the action getThing, and return the matched action (without performing it)’.
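A minimal Haskell sketch of that first reading (requires the LambdaCase extension):

    {-# LANGUAGE LambdaCase #-}

    -- perform getLine, then match the result and run the chosen branch
    main :: IO ()
    main = getLine >>= \case
      "" -> putStrLn "empty"
      s  -> putStrLn ("got: " ++ s)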
The >>= operator can also be used for anything satisfying the Monad laws, e.g. your actions can be non-deterministic.
Pedantry: >>= can be used for any instance of the Monad typeclass. GHC isn't sufficiently advanced to verify that a given Monad instance satisfies the monad laws; it can verify that the instance satisfies the type definition, but for e.g. the identity law "m >>= return === m" the instance can still return something violating the law (not something substitutable with m).
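To illustrate, here is a hypothetical instance (Dup is a made-up type) that GHC accepts happily even though it violates the identity law:

    newtype Dup a = Dup [a] deriving Show

    instance Functor Dup where
      fmap f (Dup xs) = Dup (map f xs)

    instance Applicative Dup where
      pure x = Dup [x, x]   -- duplicating in pure is the lawbreaker
      Dup fs <*> Dup xs = Dup [f x | f <- fs, x <- xs]

    instance Monad Dup where
      Dup xs >>= f = Dup (concat [ys | x <- xs, let Dup ys = f x])

    -- Dup [1] >>= return  gives  Dup [1,1], not Dup [1]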
Being able to pipe into def, defp, defmodule, if, etc. is fantastic! But apparently you're saying that, at least in the case of case, it's not a macro, it's just a function which returns a monad—I guess it's the same effect after all? Is that why people say Haskell can get away with the lack of Lisp-style macros, because of its type system and laziness?
I was wrong about this. Case is a macro (a special-cased macro even, defined at https://github.com/elixir-lang/elixir/blob/4def31f8abea5fba4...), not a function. It works with pipes because in the AST it's a call, unlike in Haskell where case is its own thing.
> I try to avoid >>= and >> (or *>) because I know it trips people up; do-notation is more than fine
Interesting. Probably it's just me, but I first learned monads using >>= (in OCaml), so at the beginning I found the Haskell do notation more confusing (and indentation rules didn't help). >>= is just a function and I understand its signature well. On the other hand, "do" is a syntactic sugar that I sometimes had to "mentally desugar" in some cases.
> It's almost never "we just don't have to care" when comparing to most other popular languages.
Struggling with the Haskell type system is not the experience of somebody who has developed an intuition about it. Granted, it is not a binary thing; you can have good intuition about some parts of it and struggle with others.
I think the way you put it is, while technically true, not fair. Those "most other" languages are very similar to one another. It is not C#'s achievement that you don't struggle with its type system coming from Java.
This is like people struggling with Rust because of the borrow checker; well, they have probably never programmed with a borrow checker before.
Good intuition is a gift from god. For the rest of us, notational confusion abounds. Types are great. Syntax (not semantics) and notation can be a major bummer.
I think use makes master: the more you use it, the more mastery you will have and the better intuition will follow.
I struggled with the borrow checker because I’m smarter than it is, not because I haven’t worked with one before. Mainly I say “I’m smarter” because I’ve worked on big projects without it and never had any issues. Granted, I’ve only gotten in a fight with it once before giving up on the language, mainly because it forced me to refactor half the codebase to get it to shut up and I had forgotten why I was doing it in the first place.
Let's be realistic, rust is great, especially compared to C.
But it replaces the need to memorize undefined behavior with the need to memorize the borrow checker rules.
If you are dealing with common system-level needs like doubly linked lists, Rust adds back the need for that superhuman memory of undefined behavior, because the borrow checker is limited to what static analysis can do.
IMHO, the best thing Rust could do right now is more clearly communicate those core limitations, and help build tools that help mitigate those problems.
Probably just my opinion, and I am not suggesting it is superior, but Zig-style length as part of the type is what would mitigate most of what is problematic with C/C++.
Basically, the main problem is a char myArray[10]; really being *myArray.
Obviously the borrow checker removes that problem, but not once you need deques, treaps, etc.
If I could use Rust as my only or primary language, memorizing the borrow checker rules wouldn't be that bad.
But it becomes problematic for people who need to be polyglots, in a way that even Haskell doesn't.
I really think there's ways for Rust to grow into a larger role.
But at this point it seems that even mentioning the limits is forbidden, and people end up trying to invert interface contracts and leak implementation details when their existing systems are incompatible with the project's dogma.
It is great that they suggest limiting the size of unsafe code blocks etc., but the entire world cannot bend to decisions that ignore the real-world nuances of real systems.
Rust needs to grow into a language that can adjust to very real needs, and many real needs will never fit into what static analysis can do.
Heck C would be safer if that was the real world.
I really do hope that the project grows into the role, but the number of 'unsafe' blocks points to it not being there today, despite the spin.
> But it replaces the need to memorize undefined behavior with the need to memorize the borrow checker rules.
With one massive difference: the borrow checker will tell you when you're wrong at compile time. Undefined behaviour will make your program compile and it will look like your program is fine until it's not.
> But it replaces the need to memorize undefined behavior with the need to memorize the borrow checker rules.
You have a choice: Fight the borrow checker on the front end until your code compiles, or chase down bugs on the back end from undefined behavior and other safety problems. No single answer there, it depends on what you're doing.
Cognitive load is a big issue. In Rust I find this comes in a couple of phases, the first being figuring out what the borrow checker does and how to appease it. The second is figuring out how to structure your data to avoid overusing clone() and unsafe blocks. It's a bit like learning a functional language, where there's a short-term adaptation to the language, then a longer-term adaptation to how you think about solving problems.
You mention doubly-linked lists, which are a good example of a "non-star" data structure that doesn't nicely fit Rust's model. But: for how many problems is this the ideal data structure, really? They're horrible for memory locality. Those of us who learned from K&R cut our teeth on linked lists but for the last 20 years almost every problem I've seen has a better option.
> IMHO, the best thing Rust could do right now is more clearly communicate those core limitations, and help build tools that help mitigate those problems.
rust-analyzer is the tool that does this. Having it flag errors in my editor as I write code keeps me from getting too far down the wrong road. The moment I do something Rust thinks is goofy, I get a red underline and I can pause to consider my options at that moment.
In my particular case, I was absolutely sure it was safe without having to test it. Would it remain so forever? Probably not because software changes over time.
In those cases what you are supposed to do is write your logic using unsafe, and wrap it with lifetime-safe APIs. That way your "smartness" joins efforts with Rust's, and not-so-smart people can use your APIs without having to be as smart.
Funny, but I remember the difference between `>>=` and `>>` even though I haven't written Haskell in a couple of years.
`>>=` passes to the right the value coming from the left, while `>>` drops it.
To give an example, you use `>>=` after `readFile` to do something with the contents.
You use `>>` after `putStrLn` since `putStrLn` doesn't return a meaningful value.
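A tiny example combining both (notes.txt is just an illustrative filename):

    main :: IO ()
    main = putStrLn "reading..."    -- result is (), so drop it with >>
        >> readFile "notes.txt"     -- produces the file contents,
        >>= putStr                  -- which >>= passes along to putStr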
In a nutshell, first-class effects and a built-in set of patterns for composing them get rid of boilerplate code. Combine that with type safety and you can churn out relatively bug-free code very fast.
I always maintain that this is just familiarity, Haskell is in truth quite a simple language. It's just that the way it works isn't similar to the languages most people have started with.
I believe there's a strange boundary around the idea of simple vs easy (to quote Rich Hickey) and I don't know what to call it... (or if somebody named it before)
functional and logical languages are indeed very simple, small core, very general laws.. (logic, recursion, some types) but grokking this requires unplugging from a certain kind of reality.
Most people live in the land of tools, syntax, and features. These look, paradoxically, both simpler than SML/Haskell (so people are seduced by them) and more complex at the same time (class systems are often large and full of exceptions), but that also makes it feel like they're learning something advanced (and familiar, unlike Greek single variables and categ-oids :).
People intuitively expect things to happen imperatively (and eagerly). Imperativeness is deeply ingrained in our daily experience, due to how we interact with the world. While gaining familiarity helps, I’m not convinced that having imperative code as the non-default case that needs to be marked specially in the code and necessitates higher-order types is good ergonomics for a general-purpose programming language.
> People intuitively expect things to happen imperatively (and eagerly).
Eagerly? Yes. Imperatively? Not as much as SW devs tend to think.
When the teacher tells you to sort the papers alphabetically, he's communicating functionally, not imperatively.
When the teacher tells you to separate the list of papers by section, he's communicating functionally, not imperatively.
When he tells you to sum up the scores on all the exams, and partition by thresholds (90% and above is an A, 80% and above is a B, etc.), he's communicating functionally, not imperatively.
No one expects to be told to do it in a "for loop" style:
"Take a paper, add up the scores, and if it is more than 90%, put it in this pile. If it is between 80-90%, put it in this pile, ... Then go and do the same to the next paper."
You’re talking about what vs. how, but imperative vs. pure-functional is both about the how, not the what.
When you’re explaining someone how to sort physical objects, they will think in terms of “okay I’ll do x [a physical mutable state change] and then I’ll have achieved physical state y, and then I’ll do z (etc.)”.
> Understanding the map signature in Haskell is more difficult than any C construct.
This is obviously false. The map type signature is significantly easier to understand than pointers, referencing and dereferencing.
I am an educator in computer science - the former takes about 30-60 seconds to grok (even in Haskell, though it translates to most languages, and even the fully generalised fmap), but it is a rare student that fully understands the latter within a full term of teaching.
Are the students who failed the pointer class the same ones in the fmap class?
I didn’t say “using map” I said understanding the type signature. For example, after introducing map can you write its type signature? That’s abstract reasoning.
Pointers are a problem in Haskell too. They exist in any random access memory system.
Whether pointers exist is irrelevant. What matters is if they're exposed to the programmer. And even then it mostly only matters if they're mutable or if you have to free them manually.
Sure, IORef is a thing, but it's hardly comparable to the prevalence of pointers in C. I use pointers constantly. I don't think I've ever used an IORef.
If you have an array and an index, you have all the complexity of pointers. The only difference is that Haskell will bounds check every array access, which is also a debug option for pointer deref.
That’s an unfair comparison because these are two unrelated concepts. In many languages, pointers are abstracted away anyway. Something more analogous would be map vs a range loop.
And I'd say the average React or Java developer these days understands both pretty well. It's the default way to render a list of things in React. Java streams are also adopted quite well in my experience.
I wouldn't say one is more difficult than the other.
IMO `map` is a really bad example for the point that OP is trying to make, since it's almost everywhere these days.
FlatMap might be a better example, but people call `.then` on Promises all the time.
I think it might just be familiarity at this point. Generally, programming has sort of become more `small f` functional. I'd call purely functional languages like Haskell Capital F Functional, which are still quite obscure.
I suppose an absolute beginner would need someone to explain that Haskell type signatures, here `map :: (a -> b) -> [a] -> [b]`, can be read by slicing at any of the top-level arrows, so that becomes either:
> Given a function from `a` to `b`, return a function from a `list of as` to a `list of bs`.
or:
> Given a function from `a` to `b` and a `list of as`, return a `list of bs`.
I find the first to be the more intuitive one: it turns a normal function into a function that acts on lists.
Anecdotally, I've actually found `map` to be one of the most intuitive concepts in all of programming. It was only weird until I'd played around with it for about 10 minutes, and since then I've yet to be surprised by its behavior in any circumstance. (Although I suppose I haven't tried using it over tricky stuff like `Set`.)
`fmap` is admittedly a bit worse...
fmap :: Functor f => (a -> b) -> f a -> f b
But having learned about `map` above, the two look awfully similar. Sure enough, the same two definitions above still work fine if you replace `list` with this new weird `Functor` thing. Then you look up `Functor` and learn that it's just "a thing that you can map over" and the magic is mostly gone. Then you go to actually use the thing and find that in Haskell pretty much everything is a `Functor` that you can `fmap` over, and it starts feeling magical again.
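A few GHCi one-liners give that "everything is a Functor" feeling:

    ghci> fmap (+1) (Just 2)
    Just 3
    ghci> fmap (+1) [1,2,3]
    [2,3,4]
    ghci> fmap (+1) (Right 2 :: Either String Int)
    Right 3
    ghci> fmap (+1) ("label", 2)
    ("label",3)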
You and I have a math part of our brain that appreciate the elegance from the algebraic structure.
I’m saying that thing you did where you start representing concepts by letters which can be populated by concrete objects is not a skill most people have.
Maybe at its core, but Haskell in the wild is monstrously complex because of all the language extensions. Many different people use different sets of extensions so you have to learn them to understand what’s going on!
Not really, the vast majority of extensions just relax unnecessary restrictions. And these days it's easy to just enable GHC2021 or GHC2024 and be happy.
Accessibility is not an issue. It takes only a little bit of effort to get productive with a Haskell codebase. I think it's more of a mental block because the language is different from what one might be used to. What Haskell needs, and doesn't have, is a compelling reason for people to make that small effort (i.e. the killer usecase).
"Relatively bug free code very fast" sounds like a killer use case to me.
So why hasn't it happened? Some possibilities:
1. People are just ignorant/unenlightened.
2. Haskell is too hard to use for most people. I think that different programmers think in different ways, and therefore find different languages to be "natural". To those whom Haskell fits, it really fits, and they have a hard time understanding why it isn't that way for everyone, so they wind up at 1. But for those who it doesn't fit, it's this brick wall that never makes sense. (Yes, this is about the same as 1, just seen from the other side. It says the problem is the language, not the people - the language really doesn't fit most people very well, and we can change languages easier than we can change people.)
3. Haskell isn't a good fit for many kinds of programming. The kind of programs where it fits, it's like a superpower. The kinds where it doesn't, though, it's like picking your nose with boxing gloves on. (Shout out to Michael Pavlinch, from whom I stole that phrase.)
What kinds of programs fit? "If you can think of your program like a pipe" is the best explanation I've seen - if data flows in, gets transformed, flows out. What kind of program doesn't fit? One with lots of persistent mutable state. Especially, one where the persistent mutable state is due to the problem, not just to the implementation.
The reasons are going to vary depending on who you ask. I personally don't agree with any of your reasons. In my opinion, as a long time user of Haskell, the practical reasons are the following -
1. Tooling has historically been a mess, though it's rapidly getting better.
2. Error messages are opaque. They make sense to someone familiar with Haskell, but others cannot make the leap from an error message to the fix easily.
3. It's a jack of all trades. The resulting binaries are not small. Performance can be very good but can be unpredictable. It doesn't compile nicely to the web. Doesn't embed well. There is basically no compelling reason to get into it.
4. The ecosystem is aging. You can find a library for almost any obscure usecase, but it would be many years old, and possibly require tweaking before it even compiles.
Off the top of my head: memory usage challenges for junior Haskellers (laziness footguns), the State monad being fundamentally flawed (there is an inability to get at and log your application state just before a crash), bloated tooling, and GHC frequently breaking existing code. Laziness and monadic code make debugging painfully difficult.
I acknowledge that those things can be challenging, however I'd like to respond to some of the specific issues:
- Space leaks due to laziness are a solved problem. I explain the technique to solve it at: https://h2.jaguarpaw.co.uk/posts/make-invalid-laziness-unrep... This technique has not completely percolated throughout the community, but I am confident that it does actually resolve the "laziness causes space leaks issue"
- Flawed state monad: well, you point out the analysis of its flaws from the effectful documentation. That's correct. The solution is: just use effectful (or another similar effect system. I recommend my own: Bluefin)
- GHC breakage: I've been keeping an inventory of breakage caused by new GHC versions, since GHC 9.8: https://github.com/tomjaguarpaw/tilapia/ There has been very little! The Haskell Foundation Stability Working Group has had a massive effect in removing breakage from the ecosystem.
- Laziness and monadic code makes debugging painfully difficult: I mean, sort of, but if you're using monadic code in the style of a decent effect system like effectful or Bluefin this is a non-problem. It's hardly different from programming in, say, Python from the point of view of introducing debugging printfs or logging statements.
Thanks, I've followed along with a lot of your posting on the Haskell discourse. One thing regarding this matter:
>well, you point out the analysis of its flaws from the effectful documentation. That's correct.
Thinking more deeply about this: there is essentially no way to fix this issue with StateT, because the type choice, the monad, and the composability requirement all conspire together and cannot be undone. Does that signal something deeper that is wrong with the flexibility of Haskell, that we can progressively paint ourselves into a corner like this? Could it not happen again, but with another late-breaking requirement, for effectful or Bluefin?
> I've followed along with a lot of your posting on the Haskell discourse
Ah, that's great to know, thanks. It's rarely clear to me whether people read or are interested in what I say!
Well, yes, in principle even a design that is perfect according to some spec could be completely wrong if the spec needs to change, and impossible to tweak to match the new spec. This is true of any language or any system. This raises a few important questions:
1. How easy does a language make it to "unpaint" yourself from a corner?
In Haskell it's easier than in any other language I've experienced, due to its legendary refactoring experience. For example, if you "incorrectly" used the State monad and got stuck, you can wrap it up in an abstract type, change all the use sites, check that it still compiles and passes the tests, then change the definition to use the new "uncornered" implementation, again check it compiles and passes the tests, then unwrap the abstract type (if you like; this stage is probably less important), then add the new feature supported by the new implementation. (A concrete sketch of the first step follows after point 3.)
2. How likely is it to paint yourself into a corner in the first place?
In Haskell, again, less likely than any other language I've experienced, because the constructs are so general. There is far more opportunity to tweak a design when you have general constructs to work with. (That said, I've met many Haskell behemoths that couldn't be easily tweaked so, particularly contorted type class hierarchies. I recommend not designing those.)
3. Why won't effectful or Bluefin lead to "corners"?
Because they're just Haskell's IO, wrapped up in a type system that gives fine-grained control over effect tracking. Anything you can do in Bluefin and effectful you can do in IO, and vice versa. So to really paint yourself into a corner with IO-based effect systems it would have to be something that you can't do in IO either, and at that point we're talking about something that can't be done in Haskell at all. So there's no real downside to using IO-based effect systems in that regard.
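To make point 1 concrete, a minimal sketch of the "wrap it in an abstract type" move (App and its State Int internals are hypothetical, and this assumes the mtl package):

    {-# LANGUAGE GeneralizedNewtypeDeriving #-}
    import Control.Monad.State (State)

    -- Use sites only ever see App; the State-based representation can
    -- later be swapped for an "uncornered" one without touching them.
    newtype App a = App (State Int a)
      deriving (Functor, Applicative, Monad)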
Basically StateT uses Either to model either the application state or an error. So if your application throws, you lose the current state forever. They sort of painted themselves into a corner with this choice of type, there's no real way out now.
I agree loosely with what haskman above says about creating relatively bug free applications, the guard rails are so robust. But those same guard rails mean you can paint yourself into a corner that it is harder to get out of without imperative state, case in point above.
Most functional languages give you so much more choice in managing persistent mutable state than the typical imperative languages. In the latter everything can be made into persistent mutable state so you have both those that are due to the problem and those that are due to the implementation.
Haskell gives you a wide range of tools, from simulated state like the State monad, to real ones like the ST monad and IORef inside the IO monad. For synchronization between threads you have atomic IORef, MVar, and TVar.
If your problem requires you to have persistent mutable state, Haskell helps you manage it so that you can truly separate the persistent mutable state that's due to the problem from the state that's due to the implementation.
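A small sketch of both ends of that spectrum (assuming the mtl package for State):

    import Control.Monad.State (State, modify, execState)
    import Data.IORef

    bumpPure :: State Int ()   -- simulated state: pure, no effects
    bumpPure = modify (+1)     -- run it with: execState bumpPure 0

    bumpReal :: IO Int         -- real mutable state, tagged with IO
    bumpReal = do
      ref <- newIORef (0 :: Int)
      modifyIORef ref (+1)
      readIORef ref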
I generally don't like Haskell, but the view of non-Haskellers (myself included) regarding state does remind me a bit of resisting structured programming because gotos are easier to program.
That's a good analogy and I like it! If someone is really used to simply using goto for all kinds of flow control purposes, there could be some resistance if there's a coding style guide that enforces using if/else/while/do and such. It reminds me of similar arguments about recursion is too powerful in Haskell and similar languages and good style means using map/filter/foldr and the like; or the argument that call-with-current-continuation is too powerful. When a single tool is too powerful, it can be used for a large number of purposes so it ends up being the code reader's job to figure out which purpose the code writer has intended.
Good question. I haven't revisited it in over 10 years. I think it was just too much "new" stuff to learn at once, making it feel too difficult to do anything in it at the time, especially considering I wasn't learning it full time.
Maybe now that I'm older and wiser (debatable) a revisit is in order, but lately I prefer dynamically typed languages (like Clojure and Elixir) to statically typed ones. I'll probably add it to my TODO list, but the list is long and time is short.
Well if you like Clojure you probably also appreciate how it doesn't just give you mutable variables everywhere but instead gives you different tools for different purposes. You have transients, atoms, agents, volatiles etc for different use cases.
Oracle influenced / bought academia into teaching Java for a generation. See Dijkstra’s criticisms[1] from the time, from when his department was forced to stop teaching Haskell to undergrads for political reasons. Note that Haskell had not been too hard for Dijkstra’s undergrads.
Later, Python took its place, since people realized the Java ecosystem was way too complicated and was turning off would-be CS students. Python directly targeted the academic use case by having similarities to C, Java, and Bash—it was not a better language, it just made existing imperative and object-oriented assignments easier for classroom environments. Believe it or not, a lot of programmers and even academics sort of give up on exploring significantly unfamiliar directions after graduating.
He's asking for something that is really inappropriate. He lost in the CS department, and he wants the Budget Council to decide on what languages should be taught? Like they know anything about it!
He lost a political battle, and he's appealing it to the only place he can, and he's buttering them up to do it, but the people that actually know something about the topic decided against him already.
And, you're quoting only one side of the battle. One vocal and eloquent side, but only one side. Maybe look into why the UT CS department made that change? (And not why Dijkstra says they did.)
He mentions the expensive promotional campaign that was paid towards Java.
> the people that actually know something about the topic decided against him already
Matters of pedagogy often involve value judgements and trade-offs, with knowledge alone being unable to provide definitive answers.
However, Sun/Oracle did know that more money would flow their way if undergraduates were to learn Java. The letter suggests that one or both of them decided to act accordingly.
It’s questionable to assert that every knowledgeable faculty member thought that pivoting to Java was the best option. (Dijkstra himself is a counter-example?) From the looks of it, just a single department chair——not the full CS department——had the decision-making authority on this curriculum change.
> inappropriate
Would inappropriateness make his arguments any less true?
Why in an academic context would it be inappropriate to request input from additional stakeholders? If the letter was unconventional, remember that a purpose of the tenure system is to protect unconventionality.
The Budget Council is not a stakeholder in the CS curriculum. The Budget Council does not have the knowledge or expertise to say anything relevant about the matter - and Dijkstra should know that. That's why it's inappropriate.
I mean, look, if you had a letter signed by the majority of the department, complaining about the chair's decision, then the Budget Council might consider reversing the chair, on the authority of the expertise of the majority of the department. But overrule the chair on the basis of disagreement by one professor? No way. You can't run a university that way, because there's always at least one professor who disagrees with a decision.
An academic knows well that grants and other forms of financing are their lifeblood. It’s also how the government chooses its priorities within academia.
Admittedly, I don’t know anything about who was on the budget committee to which Dijkstra wrote this letter. But it is just ordinary for academics to write proposals outlining their priorities in hopes that the grant/budget/financing committee will bite.
I don't think that's it. I know plenty of people who were taught Lisp first thing at university, and as soon as someone handed them an imperative language, they never looked at Lisp again. And Lisp is way easier than Haskell IMO, as IO is just a function and not a philosophical concept.
I wouldn’t assume that your colleagues were less capable as undergrads than the students that Dijkstra encountered at UT Austin.
Imperative languages do offer many advantages over Haskell, in that most coursework and industry jobs use them and that, consequently, their ecosystems are much further developed. These advantages are a consequence of university programs' alignment with the imperative and object-oriented programming paradigms, to Oracle's benefit.
Your colleagues having never looked back at lisp is hardly evidence that Haskell would have been too difficult for them or that Oracle didn’t have a hand in this.
I don't think that holds water. We've had functional programming for longer than oracle or java have existed, and for far longer than oracle has owned java. Haskell itself has been around for longer than java or oracle-owned java.
Functional programming just seems harder for people to get into. Perhaps it's bad for everyone that people don't make that effort, but it doesn't seem like a conspiracy
My mistake, at the time, Java was being promoted by Sun Microsystems, which only more recently became a part of Oracle.
The promotional campaign that Dijkstra mentions was perhaps orchestrated by Sun Microsystems, though perhaps not since Oracle was indirectly strategically aligned with Java, as the eventual acquisition shows.
Yes, it is more difficult to get into FP. However, asking the question why it became more difficult, when historically the opposite was true, is certainly worthwhile. Surely there was some cause.
Nah man, Oracle… here is a personal story. I was teaching at a Uni, introductory programming course. Dean hits me up and asks if I can teach introduction to web development as then current professor was going on maternity leave. I was like “heck yea, that sounds like fun.”
Before the first class, I get an email from one student asking if they must purchase the book for the class, since it is $275 (this was years ago), and I was taken aback; what kind of book costs $275? Even for a college textbook that was nuts. I told him not to purchase it until we met for the first class. I go to the office and see my copy of the book: it is programming the web with Oracle Forms, from Oracle Press!!!! I talked to the Dean and he was like "yea, that is what we need to teach!" Needless to say, none of the kids bought the book, I did NOT teach Oracle Forms, and I was never given that class again :)
My first CS class was in Scheme (R6RS, iirc), and the year after they switched to Python. Then came a thousand cries of failure to understand Python metaclasses. They are garbage at the REPL, and you have a distinct set of folks that edit their editors.
Most Python programmers don't really have to understand metaclasses or other advanced concepts like descriptors. The main metaclass they'd use would be to create abstract classes, and these days you can just subclass ABC.
I wish I'd have had either of those. C++ was the fad when I was going through, so freshmen learning how to write linked lists got a fast introduction to (and a hatred of) the STL.
Haskell is superb at handling persistent mutable state. The only problem is that you may have analysis paralysis from choosing between all the choices (many of them excellent).
4. History. In those types of discussions, there are always "rational" arguments presented, but this one is missing.
> One with lots of persistent mutable state.
You mean like a database? I don't see a problem here. In fact, there is a group of programs, large enough that it cannot be 3, that Haskell fits nicely: REST/HTTP APIs. This is pretty much your data goes in, data goes out.
No, I mean like a routing switcher for a TV station. You have a set of inputs and a set of outputs, and you have various sources of control, and you have commands to switch outputs to different inputs. And when one source of control makes a change, you have to update all the other sources of control about the change, so that they have a current view of the world. The state of the current connections is the fundamental thing in the program - more even than controlling the hardware is.
Thanks. This does sound like a state machine, though, but the devil is probably in the details. Yes, here Haskell is probably a bad choice, and something where direct memory manipulation is bread and butter should do better. Which is completely fine; Haskell is a high level language.
But in your example, PHP is also a bad choice, and alas, it dwarfs Haskell in popularity. I can't really think of where PHP is a great fit, but Haskell isn't.
Haskell can be taught as recursive programming, learning about accumulator parameters and higher-order functions. A background in logic (which most programmers have to some degree) is more useful than a math-based approach in that regard.
Okay, might be definitional, but when I think of 'first class', I think of something baked in to the language. So in Haskell's case, in the Prelude I suppose.
That's kind of the opposite of what functional programmers mean by "first class", or at least orthogonal. "Functions are first class" means that they don't have any special treatment, relative to other entities. They're not really a special "built-in" thing. You can just pass them around and operate on them like any other sort of value.
The generalized version of ‘traverse/mapM’ that doesn’t just work for lists, but any ‘Traversable’ type is absolutely amazing and is useful in so many cases.
‘traverse :: (Traversable t, Applicative f) => (a -> f b) -> t a -> f (t b)’
And you can derive it for free for your own datatypes!
The amount of code I’ve manually written in other languages to get a similar effect is painfully large.
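For instance, in GHCi:

    ghci> import Text.Read (readMaybe)
    ghci> traverse readMaybe ["1","2","3"] :: Maybe [Int]
    Just [1,2,3]
    ghci> traverse readMaybe ["1","oops","3"] :: Maybe [Int]
    Nothing
    ghci> traverse print (Just "hello")   -- Maybe is Traversable too
    "hello"
    Just ()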
I had a generic report class that essentially fetched a bunch of database rows, did some stuff for each row, and then combined the results together into a report. (This was in Scala, I know Haskell doesn't have classes, but presumably similar situations can happen)
For one client, we needed to accumulate some extra statistics for each. For another, we needed to call their web API (so async I/O) to get some of the data used in the report. By making the generic superclass use a generic Applicative type, we could keep the report business logic clear and allow the client-specific subclasses to do these client-specific things and have them compose the right way.
Wanting custom applicative types is rarer than using a standard one, but it can be a good way to represent any kind of "secondary effect" or "secondary requirement" that your functions might have. E.g. "requires this kind of authorisation" or "must happen in a database transaction". But a lot of the time you can implement custom things using reader/writer/state, or free, rather than having to write a completely from-scratch applicative.
Note that deriving Traversable for your own datatypes means changing the structure you map the effect over, the `t` variable, not the effect `f`, which stays generic and is the Monad/Applicative in this case.
Besides Maybe/Either `t` could represent anything, like container types Lists/Trees/Hashmaps etc, but also more complicated structures like syntax trees for programming languages or custom DSLs. Or (my favorite use case!) recursive command types for robot control. I'm doing this mostly in Rust, but borrow all the ideas from Haskell.
I like Haskell, but I think it suffers from the same issues preventing most lisps to gain any traction: every codebase is different and reinvents its own DSL or uses different extensions.
Lispers hailing macros and metaprogramming cannot understand that power is also the very thing that makes jumping from project to project difficult, I have no intention of relearning your cleverly designed DSL or new extensions.
There's a reason why Java or PHP have plenty of widely used killer software, while monocle-wielding lispers and haskellers have very little to show after many decades.
It's not FP being the issue, it's just that their power attracts crowds interested in code more than the features/business/product.
I don't blame them, I love Haskell and Racket but I think very few teams can scale and make the compromise worth it.
The upside for haskell is that whenever you come to some new code, or return to your own old DSL-y code, the types are there as spikes in a cliff to help you move onward. With, e.g elisp, I always get a mild headache when I need to add some new feature to a codebase I myself wrote.
Perl was like this, though it did have wide adoption at one point. But people got sick of having 20 ways to do the same thing so Python was born. Is there an equivalent in the FP world?
I've used Haskell several times for implementing isolated 'maths business logic' units in commercial backend applications.
One such system I built had the main (REST-API-exposing) backend implemented in Kotlin, with a separate application in Haskell running a complex set of maths-driven business rules against GIS data to calculate area-specific prices.
The amount of IO on the Haskell side was fairly minimal and abstracted away quite nicely.
Haskell allowed expressing all complexity in a way that was easy to audit and translate from business/data analyst requirements.
Would do again :-) But only with the correct amount of isolation, so you can lean into Haskell's strong sides.
Everyone else is responding with FOSS, so I'll respond with some companies:
Co-Star, the astrology SaaS, is apparently written with a Haskell backend. I'd love to have seen the casting call for that.
I believe the Mercury bank also runs most of their backend stuff on Haskell. Functional languages in general are surprisingly common among financial investment firms.
Some of Target's stuff is written in Haskell. I think there was at least one big Facebook project that was written in Haskell, but they may have moved away from it by now. Awake Security does some Haskell stuff.
One thing which might be surprising is Haskell is apparently quite strong for general backend web dev.
> Haskell is apparently quite strong for general backend web dev
Yep. Mostly because of the https://www.servant.dev/ framework (but see also IHP, Yesod, and other frameworks). Servant lets you declare your HTTP API at the type level, and then it will infer the correct types for your endpoint handlers. You can also extract OpenAPI specs from it, generate clients for Haskell or other languages, etc.
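A minimal sketch of that style, assuming the servant package (User and the route are illustrative, not from a real codebase):

    {-# LANGUAGE DataKinds, TypeOperators #-}
    import Servant

    data User = User { userId :: Int, userName :: String }

    -- The whole HTTP API lives at the type level; a handler that
    -- doesn't match this shape won't compile.
    type API = "users" :> Capture "id" Int :> Get '[JSON] User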
My current employer, Bellroy, uses Haskell for pretty much all new code and Servant for all new HTTP APIs. https://exploring-better-ways.bellroy.com/our-technology-sta... is an older post discussing the shift to Haskell. We've found Haskell code to be much more compact than the equivalent Ruby, and significantly more robust.
Going from Ruby to Haskell is, itself, quite a good signal of quality for me. Start strong and end stronger. Sounds like you've got a good thing going!
I maintain Haskell code for five different customers, some large projects, some smaller, projects of varying ages up to over a decade. All the projects do "server backend" stuff, some web frontend too. I love how secure I feel making changes to things I haven't touched in a while.
It does one of the things I find most annoying about Haskell programmers, which is that they think the language is magic just because it has ADTs.
> HASKELL MAKES ILLEGAL STATES UNREPRESENTABLE.
That's not true!
It's especially not true for numeric programming, Haskell really doesn't provide good support for that. Basically only Ada and dependent-type languages do.
It is most definitely true for numeric programming (and Haskell is _somewhat_ dependently typed). For example, look at this finite bit width concatenation operation:
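(The snippet didn't survive here, so this is a hedged reconstruction of the idea using GHC's type-level naturals; BitVec and concatBV are made-up names:)

    {-# LANGUAGE DataKinds, KindSignatures, TypeOperators #-}
    import GHC.TypeLits (Nat, type (+))

    -- Width tracked in the type; invariant: the list has length n.
    newtype BitVec (n :: Nat) = BitVec [Bool]

    -- Concatenating n bits with m bits must yield n + m bits;
    -- the compiler checks that the widths add up.
    concatBV :: BitVec n -> BitVec m -> BitVec (n + m)
    concatBV (BitVec xs) (BitVec ys) = BitVec (xs ++ ys)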
By "support" I don't mean that you can do it. Rather I meant the opposite - it doesn't stop you from doing it incorrectly. There isn't a lot of thought put into ranged integer types, floating point modes, etc.
I see. You're probably correct about that. I guess I got thrown off by your suggestion that dependently typed languages provide good support for numeric programming, because I doubt any dependently typed language has put a lot of thought into floating point modes (and I'm not aware of any with particularly good support for ranged integer types either, but I could just be uneducated in that regard).
How could I forget Serokell, too! An Estonian software development firm that uses Haskell and Nix as basic building blocks.
I think they were using Agda or something too for a while, but it appears I can't find what I'm thinking of on their site anymore. Really interesting guys if you're located in the Baltic states.
I've said for awhile that Pandoc is one of the very few Haskell programs that isn't exclusively used by Haskell programmers.
There are plenty of interesting Haskell programs (e.g. Xmonad, Darcs), but a lot of people explicitly use them because they're Haskell.
Pandoc, on the other hand, is useful to pretty much anyone who has ever needed to convert documents. It's one of the first things I install on most computers.
There are stronger reasons to avoid Wayland. It's a protocol that manages to have all implementations slightly different, and creating an EGL context the way glxgears does results in a hard crash, with both AMD and NVIDIA cards. I assume I messed up the EGL context, but why does my entire desktop need to crash? Xkill is a much better UX, and that's kinda sad. An Xorg app failing dramatically doesn't murder my desktop session.
If you want to stay in the land of monads there is https://github.com/SimulaVR/Simula?tab=readme-ov-file "a VR window manager for Linux". Should've been called MetaMonad ;) but I guess that was already taken by the phylum metamonada, don't want to get on their bad side.
I think Pandoc too, but yeah it's a fairly accurate meme that Haskell isn't really used to make anything except Haskell compilers and tutorials. Last time I checked there was actually an exhaustive list of software written in Haskell somewhere, which they meant as a "look how successful it is - all these projects!" but is really "it's so unsuccessful we can actually write down every project using it".
Am I the only one who has never tried Haskell but who, when reading discussions about it, ends up thinking real-world Haskell (with GHC extensions) has way too much (sometimes historical) cruft? It really puts me off.
It's a thirty year-old language, it's bound to have cruft. However modern codebases tend to showcase a pretty efficient combination of language features, oftentimes optimised for productivity rather than research in lazy FP. Such codebases are https://github.com/flora-pm/flora-server or https://github.com/change-metrics/monocle
In practice it only detracts a little bit. You can enable GHC extensions project-wide and there are alternate standard libraries (preludes) that are more modern.
If you want a language that is very Haskell-like without the historical baggage or the laziness, PureScript is very good. It’s main compile target is JavaScript, so it’s built for different use cases.
Am I right in thinking that there are efforts to provide a better out-of-the-box experience, with some of that cruft dealt with for people who don't need the backwards compatibility? For myself, I found long prologues of extensions/options/whatever massively off-putting.
Thanks for the report. I don't see any broken links to gitlab. I tried the first three and a random one. Could you point out the ones that are broken for you?
- map is easier on newcomers. Once you understand Functor, you'll remember to use fmap anyway. Also, I still use map for lists after 10+ years.
- I don't fully understand this, but do you mean that every `IO a` function would be better off being `IO (Maybe a)`?
- AFAIK, there are no dependent types in Haskell yet, but say that the type level programming you can do today is what you mean then you are already in quite advanced territory. Yeah, I guess it could be more polished.
I mean that `head` should be `List a -> Maybe a` rather than `List a -> a`. If you really really want an `a`, you should provide `NonEmptyList a` or something.
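Something like the following (safeHead is a conventional name; base already ships it as listToMaybe in Data.Maybe):

    safeHead :: [a] -> Maybe a
    safeHead []      = Nothing
    safeHead (x : _) = Just x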
“atrocious”? I don't think that's any Haskeller's experience. We can point to a few things we'd like to be different for modern Haskell dev, which are slow to improve because of legacy etc., and a certain fragmentation across different packages, but relatively speaking, base and other core libs provide a wealth of high-quality, well-documented, principled utilities.
I once had a hard to track down bug in some code making use of conduit[0], which is introduced using examples like `main = runConduit $ (yield 1 >> yield 2) .| mapM_C print`.
Dutifully replacing every occurrence of (>>) with (*>), because it was more modern, suddenly changed the semantics somewhere, due to the fact that (>>) is defined with fixity `infixl 1 >>` and (*>) as `infixl 4 *>`, i.e. both are left-associative operators, but (*>) binds tighter than (>>) and some of the myriad of other operators you may encounter.
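A runnable illustration of how that gap bites, using (<|>) at infixl 3, which sits between the two fixities:

    ghci> import Control.Applicative ((<|>))
    ghci> Nothing >> Just 1 <|> Just 2   -- Nothing >> (Just 1 <|> Just 2)
    Nothing
    ghci> Nothing *> Just 1 <|> Just 2   -- (Nothing *> Just 1) <|> Just 2
    Just 2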
This is a bit surprising. Why replace `>>` with `*>` "because it's more modern"? Can't you just stick with what's working? The former hasn't been deprecated.
You are absolutely correct and I was an idiot without a proper understanding of premature abstraction or unnecessary complexity.
If I were to start a new side-project using Haskell today (I probably won't), I would just stick to do-notation and even concrete types (instead of mtl-style or what have you) where possible.
"This seems rather … procedural. Even though we get all the nice guarantees of working with side effectful functions in Haskell, the code itself reads like any other procedural language would. With Haskell, we get the best of both worlds."
Working with the IO monad is much more complex, especially if you want to use other monadic types inside that code.
> Working with the IO monad is much more complex, especially if you want to use other monadic types inside that code.
Yes, mixing other monadic types with IO can be a challenge. One possible solution to that is not just not use other monadic types inside the code. Just use a general-purpose effect system instead. I recommend effectful or Bluefin (the latter my own project).
Haskell dilettante here… The “IO a” vs “a” reminded me of async vs sync - where the first one returns a promise/future to be awaited on, rather than a result.
Is there any parallel there, or is it an entirely wrong perception I got ?
Of course. Promise is a monad, .then is more or less equivalent to the >>= operator and await makes it look more imperative-ish just like <- in Haskell.
Note that in JS you'll need to be inside an async function to use await, just like in Haskell you'll need to be inside the do notation to use <-. Otherwise, you'll need to play with .then just like you would need to play with >>= in Haskell.
About other languages, one way to achieve asynchronous behavior without having it implemented in the language is using ReactiveX (https://reactivex.io/). It's hard to understand at first, but if you understand the IO Monad it becomes easy, as it's basically the same.
Yes, except in a lot of languages, a Promise is the representation of the result of a computation that may already be happening, whereas an IO is a computation that will have data injected into it by some interpreter. But it's a very close comparison, in the sense that both represent contexts that future computation can be appended on. Also, in some languages, the Promise type is truly monadic.
There’s a parallel because Promises in a language like JavaScript are “monad-like”, so they’re similar to the IO Monad here. I am not a functional wizard so I’m sure that was not a fair comparison in some way, but it’s how I have thought of it. They’re both a representation of a side effect and require that effect be respected before you can get to the value inside it
Not so much a representation of a side effect (after all, List is a Monad, as is addition on integers; no side effects anywhere in sight) as the reification of a particular category.
JavaScript's `Promise` is particularly interesting because it is a Monad over "all JS values which do not contain a method called `then` in their prototype chain" or as Dominic put it 12 years ago:
> Indeed, I like the way @medikoo phrases it. There's, practically speaking, nothing wrong with being a monad on the category of non-thenables.
The way I think about effectful computation and IO in Haskell is as a kind of enforced metaprogramming. The actual language comprises a strongly typed meta-layer used to assemble an imperative program from opaque primitives. Like all powerful metaprogramming systems, this allows doing things you cannot easily do in a first-order imperative language. On the other hand, this makes all the 99% of simple stuff you don't have to even think about in an imperative language necessarily roundabout, and imposes a steep learning curve. Which contributes to the empirical fact that not many people are rushing to use Haskell, the superior imperative language, in real-world projects.
> The way I think about effectful computation and IO in Haskell is as a kind of enforced metaprogramming
If you think of effectful computation in Haskell to be a sort of metaprogramming, then indeed it will seem very complex, absurdly complex actually.
If you think of it as just the same as programming in any other language, except functions that do effects are tagged with a type that indicates that, then I think things will appear much simpler.
I prefer thinking of it as the latter, and it's been very effective (pun half intended) for me.
When you express it has "Haskell has the IO monad" and "Haskell has the Maybe monad", it's no biggie because other languages have them. All Java Objects might or might not be there (so they all implement Maybe). And all Java methods might or might not do IO, so they all implement IO.
The real sell is: Haskell objects which are not Maybe are definitely there. Functions which are not IO will always give the same output for the same input.
I think as long as the code sticks to the discipline of never actually doing I/O but only manipulating functions that perform them it would basically be doing the same thing as IO monads in Haskell.
So print(s) returns a function that when called prints s. Then there needs to be function that joins those functions, so print(a); print(b) evaluates to a function that once called prints out a and then b.
What makes Haskell special in my opinion is 1) it generalises this way of achieving "stateful" functions, 2) enforces such discipline for you and makes sure calling functions never produces side effects, and 3) some syntactic sugar (do, <-, etc) to make it easier to write this kind of code.
Also note that the above example only does output which would be the easier case. When it comes to code with input, there will suddenly be values which are unavailable until some side effects have been made, so the returned functions will also need to encode how those values are to be obtained and used, which complicates things further.
> Is this really so different from IO everywhere in Haskell?
It's a starting point. You'll have to somehow chain (e.g. you want to print "def" after "abc", how would you do that?). You'll also need to put a value into your IO, e.g. if you are inside that IO, all the computations will need to produce IO. If you solve those things, you'll have a Monad.
But, wait! You're in JS, right? You have IO Monads there, perhaps you just don't know it yet. They are the JS Promises! Look at this:
// assuming that you have a sync input() function, that receives
// a string from stdin and returns it
let input_str = new Promise((resolve, reject) => resolve(input()));
let input_int = input_str.then(s => parseInt(s));
let square = input_int.then(n => n * n);
square.then(sq => console.log(sq));
If you want to proceed with your poor man's IO, you'll need to implement your "then". Going further, the equivalent in Haskell (not the best code, though) to that is:
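(That snippet didn't survive the formatting here; presumably it was a >>= chain along these lines:)

    main :: IO ()
    main =
      getLine >>= \input_str ->
        let input_int = read input_str :: Int
            square    = input_int * input_int
        in  putStrLn (show square)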
Ok, but Haskell has a syntax for cleaning that up, called do notation:
main = do
input_str <- getLine
let input_int = read input_str :: Int
let square = input_int * input_int
putStrLn (show square)
And JS? JS also has its own syntax, using await. Just as <- in Haskell only works inside do notation, await in JS only works inside async functions:
let input_str = await new Promise((resolve, reject) => resolve(input()));
A function from a fixed input is a monad (called the “Reader” monad). If you constrain all IO to happen underneath that function then it will be deferred until you evaluate the result.
In those senses, these are the same. You can emulate the IO monad in this way.
Haskell provides guarantees that make this stronger. There is no way to perform IO-like side effects except via the IO monad. And no way to “evaluate” the IO monad except to declare it as your main entry point, compile, and hit play.
(Except unsafePerformIO, of course, the highly discouraged escape hatch).
These restrictions are actually what makes the use of the IO monad powerful. As you show, it’s easy to “add” it to a language, and also almost valueless. The value is in removing everything that’s not in the IO monad.
That IO is roughly the same. The point of Haskell is not to "have IO". It's to know that when something is _not_ in IO, it's pure. _That_ is what other languages don't have. (Although they could, as far as I know. It's a _restriction_ on exposed primitives, not a feature.)
It's a very good explanation, but I'd welcome more humility. Instead of using the phrase lie-to-kids, I could say that all the dance around effects in Haskell is just implementation detail and/or leaky abstraction.