Hacker News | gtf21's comments

I think this is being taken as me saying “therefore you can write any programme in Haskell” which, while true, was not the point I was trying to make. Instead, I was trying to head off the possible interpretation that I was suggesting Haskell can write more programmes than other languages, which I don’t think is true.

> computability and programming just aren’t that related

I … don’t think I understand


> > computability and programming just aren’t that related

> I … don’t think I understand

That's such a Haskell thing to say!

Ok, I'm teasing a bit now. But there's a kernel of truth to it: a good model of the FP school that forked off Lisp into ML, Miranda, and Haskell is as an exploration of the question "what if programming were more like computability theory?", and a fairly successful one, by its own "avoid success at all costs" criterion.

Computability: https://en.wikipedia.org/wiki/Computability_theory

> Computability theory, also known as recursion theory, is a branch of mathematical logic, computer science, and the theory of computation that originated in the 1930s with the study of computable functions and Turing degrees.

Programming: https://en.wikipedia.org/wiki/Computer_programming

> Computer programming or coding is the composition of sequences of instructions, called programs, that computers can follow to perform tasks.

Related, yes, of course, much as physics and engineering are related. But engineering has many constraints which are not found in the physics of the domain, and many engineering decisions are not grounded in physics as a discipline.

So it is with computability and programming.

> “therefore you can write any programme in Haskell” which, while true

It is not. That's my point. One can write an emulator for any programme in Haskell, in principle, but that's not at all the same thing as saying you can write any programme in fact.

For instance, you cannot write this in Haskell:

http://krue.net/avrforth/

You could write something in Haskell in which you could write this, but those are different complexity classes, different programs, and very, very different practices. They aren't the same, they don't reduce to each other. You can write an AVR emulator and run avrforth in it. But that's not going to get the blinkenlichten to flippen floppen on the dev board.

Haskell, in fact, goes to great lengths to restrict the possible programs one can write! That's one of the fundamental premises of the language, because (the hope is that) most of those programs are wrong. About the first half of your post is about things like accidental null dereferencing which Haskell won't let you do.

In programming, the tools one chooses, and one's abilities with those tools, and the nature of the problem domain, all intersect to, in fact, restrict and shape the nature, quality, completeness, and even getting-startedness, of the program. Turing completeness doesn't change that, and even has limited bearing on it.


> For instance, you cannot write this in Haskell:

> http://krue.net/avrforth/

From what I understand of the link, you likely meant that one cannot write an interpreter for avrforth in Haskell which reads avrforth source code and executes it on bare metal, because such an interpreter would need to access the hardware directly to manipulate individual bits in memory, registers, ports, etc., and none of this is possible in Haskell today. If that's not the case, please feel free to correct me.

However, if my understanding is correct, I don't see how this is a problem of Haskell being a functional or "leaning more towards computability theory" language, rather than a mismatch between the language's model of computation and the hardware. Haskell can perform IO just fine using the IO monad, which uses system calls under the hood to interact with the hardware. If a similar mechanism were made available to Haskell for accessing the hardware directly (e.g. a vector representing the memory, accessible within the IO monad), it should be possible to write an interpreter for avrforth in Haskell. This means the current constraint is a tooling/ecosystem limitation rather than a limitation of the language itself.
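As a sketch of that hypothetical mechanism (not anything Haskell's embedded story actually provides today), one could model device memory as a mutable byte vector inside IO. The addresses and values below are illustrative; the example assumes the `vector` package:

```haskell
import qualified Data.Vector.Unboxed.Mutable as VM
import Data.Word (Word8)

main :: IO ()
main = do
  -- 256 bytes of pretend AVR SRAM, all zeroed
  mem <- VM.replicate 256 (0 :: Word8)
  -- "set a bit in a port register" by poking an address directly
  VM.write mem 0x25 0x20
  b <- VM.read mem 0x25
  print b  -- 32
```

Of course, this only pushes the problem down a level: something still has to map that vector onto real memory-mapped registers, which is exactly the tooling gap being discussed.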


Oh ok I get what you mean now, I thought you were being a bit more obtuse than that.

So my original intent with that paragraph was very different, but you're right that I was not very precise with some of those statements.

Thanks for taking the time to explain, you've definitely helped expand the way I've thought about this.


Nicely said, this in particular;

> In programming, the tools one chooses, and one's abilities with those tools, and the nature of the problem domain, all intersect to, in fact, restrict and shape the nature, quality, completeness, and even getting-startedness, of the program.

Language shapes thought, and hence once the simpler imperative programming models (procedural, OOP) are learnt, it becomes quite hard for a programmer to switch mental models to FP. The FP community has really not done a good job of educating such programmers, who are the mainstay of the industry.


> It is a shame that the article almost completely ignores the issue of the tooling.

Mostly because, while I found the tooling occasionally difficult, I didn’t find Haskell particularly bad compared to other language ecosystems I’ve played with, with the exception of Rust, whose compiler errors are really good.

> The syntax summary in the article is really good

Thanks, I wasn’t so sure how to balance that bit.


If I may, as the author of "such sloppily constructed prose" (which I think might be a little unfair as a summary of all 6.5k words):

In this syntax note, I was not trying to teach someone to write Haskell programmes, but rather to give them just enough to understand the examples in the essay. I did test it on a couple of friends to see if it gave them enough to read the examples with, but I was trying to balance that aim against not making the section a complete explainer (which would have been too long).

Perhaps I got the balance wrong, which is fair enough, but I don't think it's required to define _every single_ term upfront. It's also not crucial to the rest of the essay, so "The article lost me at following sentence" feels a bit churlish.


Yeah I also really don't understand the point that's being made here: it looks like a great way to introduce more errors.


Nope: I think the laziness aspect is very interesting, but it's not something that makes Haskell (for me) a great programming language. Or, at least, it's not in my list of the top reasons (it is in there somewhere).


> I find some code easier to express with procedural/mutable loops than recursion

This is what I was talking about in the section "Unlearning and relearning". While there are _some_ domains (like embedded systems) for which Haskell is a poor fit, a lot of the difficulties people have with it (and with FP in general) is that they have been heavily educated to think in a particular way about computation. That's an accident of history, rather than any fundamental issue with the programming paradigm.


How does one go from years and decades of imperative programming to becoming fluent in Haskell's style of functional programming? It feels like a steep price to pay when most other languages don't require such substantial re-learning.


I don't think it's specifically Haskell's style of functional programming. It's just functional programming.

Any paradigm shift requires re-learning I think. I don't actually think that's particularly hard, nor do I think it means the paradigm isn't a good one, it's just an inevitable consequence of a paradigm shift. Some shifts are easier than others, if the paradigms are closer together, but functional and imperative programming are quite distant in my view.

Nevertheless, I've seen some people find this easy, others find it hard. YMMV I guess.


> That's why, if you like the Haskell philosophy, why would you restrict yourself to Haskell?

In the essay, I didn't say "Haskell is the only thing you should use", what I said was:

> Many languages have bits of these features, but only a few have all of them, and, of those languages (others include Idris, Agda, and Lean), Haskell is the most mature, and therefore has the largest ecosystem.

On this:

> It's not bleeding edge any more.

"Bleeding edge" is certainly not something I've used as a benefit in this essay, so not really sure where this comes from (unless you're not actually responding to the linked essay itself, but rather to ... something else?).


Sorry to hear that. I built it that way because I prefer reading narrower columns of text (maybe because I read a lot of magazines and newspapers, who knows).


Fair enough, if that's what you prefer then you might as well; it's more of a personal preference. My main monitor is an ultra-wide, so the text ends up using less than 20% of the total width. Though I can see it being tough to have a solution that works well everywhere.


The general recommendation is to keep the measure of the page fairly narrow, since studies show that very wide text is harder to read than a narrower column. So, looking at the layout from a best-practices point of view, the author made the right call.


I write about this at some length in the essay, perhaps you can help me by telling me why the section on "Make fewer mistakes" _doesn't_ satisfy?


I think one of the big takeaways from Haskell for me was that errors don't always need to be explicitly handled. Sometimes returning a safe sentinel value is enough.

For example, if the function call returns some data collection, returning an empty collection can be a safe way to allow the program to continue in the case of something unexpected. I don't need to ABORT. I can let the program unwind naturally as all the code that would work on that collection would realise there's nothing to do.

Debugging that can be a pain, but traces and logging tend to fix that.
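For instance (a hypothetical sketch, using `findWithDefault` from the containers package; `findUsers` is an invented name), a lookup can hand back an empty list on a miss and let the rest of the pipeline simply do nothing with it:

```haskell
import qualified Data.Map as M

-- a miss yields [], not an abort: downstream code unwinds naturally
findUsers :: String -> M.Map String [String] -> [String]
findUsers team db = M.findWithDefault [] team db

main :: IO ()
main = do
  let db = M.fromList [("core", ["ana", "ben"])]
  mapM_ putStrLn (findUsers "core" db)     -- prints ana, ben
  mapM_ putStrLn (findUsers "missing" db)  -- nothing to do
```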


You wrote

> Haskell solves the problem of the representation of computations which may fail very differently: explicitly through the type system.

In my experience this is very hit and miss. Some libraries use exceptions for lots of error states that in Go would be a returned error value. I'm therefore left to decipher the docs (which are often incomplete) to understand which exceptions I can expect, and why and when.

Last library I remember is https://hackage.haskell.org/package/modern-uri

From their docs:

> If the argument of mkURI is not a valid URI, then an exception will be thrown. The exception will contain full context and the actual parse error.

The pit of success would be if every function that can fail because of something reasonable (such as a URI parser for user supplied input) makes it a compile time message (warning, error, whatever you prefer) if I fail to consider the error case. But there's nothing that warns me if I fail to catch an exception, so in the end, in spite of all of Haskell's fancy type machinery, in this case, I'm worse off than in Golang.
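For what it's worth, if I'm reading the modern-uri API right, `mkURI` is polymorphic in `MonadThrow`, so you can instantiate it at `Maybe` and recover exactly that compile-time pressure (at the cost of losing the parse-error detail):

```haskell
{-# LANGUAGE OverloadedStrings #-}
import Text.URI (URI, mkURI)

main :: IO ()
main =
  -- choosing Maybe as the MonadThrow instance turns the "thrown"
  -- exception into a value the caller has to pattern-match on
  case mkURI "https://example.com" :: Maybe URI of
    Nothing  -> putStrLn "invalid URI"
    Just uri -> print uri
```

But the library won't stop you from staying in IO and letting the exception propagate, which is the complaint.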


> Some libraries use exceptions for lots of error states that in Go would be a returned error value.

This just seems like bad libraries, I'd agree that this is bad and sort of defeats the point. I haven't actually encountered this with any libraries I've used, and we tend to avoid MonadThrow / Catch except in particular circumstances.

> in this case, I'm worse off than in Golang.

Having (unfortunately) had to write some Golang, I don't think this is true -- I've encountered plenty of Golang code in which it seems idiomatic to return things like empty strings and empty objects instead of error values, which, I think, are still possible to mishandle.

Perhaps this can be summarised as: you can still write bad Haskell, but I don't think it's particularly idiomatic looking at the libraries I've spent most of my time using, and the machinery you are provided allows you to do much, much better.


It is possible to write bad code in any language. Haskell tries really hard to eliminate a whole class of problems but the type system doesn’t encompass/express things like thrown exceptions. This seems like a hole in the type system… and yes there are always better ways of doing things but many libraries are written to standards beneath those best practices or written to much older best practices.

The issue isn’t what the language is capable of in the best case. The issue is what the community has produced for future members of the community to consume. See my other comment about pcap and gnuplot for two specific examples.


> Some libraries use exceptions for lots of error states that in Go would be a returned error value

Yes. This is a very very bad aspect of the design of many Haskell libraries. They just throw away the whole point of doing Haskell.


There are real dragons handling AsyncException vs Exception, with extremely poor ecosystem understanding about how to deal with them properly.

There's also the huge performance divergence between IO exceptions (fast) and an mtl stack built around Either, which will have huge problems inlining successfully and thus be slowwww.

Indeed this is a great example of how Haskell can have serious performance issues in areas that would never occasion a second look in other mature GP langs. Who ever heard of well-modelled error handling having performance problems? Only in Haskell™.


I'm not sure that's entirely true (I wrote the examples): the point I'm trying to make is that you can precisely describe what `doSomething` consumes and produces (because it's pure) and you don't have to worry about what some nested function might throw or some side-effect it might perform.


> I'm not sure that's entirely true (I wrote the examples)

Which part?

> the point I'm trying to make is that you can precisely describe what `doSomething` consumes and produces (because it's pure)

I think you failed to demonstrate it, and more or less demonstrated the opposite of it: the type signature of doSomething does not show its implicit dependence on getResult.

In Haskell you can do

  foo :: Int
  foo = 5

  bar :: Int
  bar = foo + 1
(run it: https://play.haskell.org/saved/hpo3Yaef)

which your example does. In this example bar's type signature doesn't tell you anything about what bar 'consumes', and it doesn't tell you that bar depends on foo, and on foo's type. Also you have to read the body of bar, and also it is bad for code reuse.


> Which part?

This part: "the type information only tells you how you can use doSomething. To know what is doSomething, you actually have to read the code :\"

I think we're disagreeing on something quite fundamental here, based on "it doesn't tell you that bar depends on foo, and on foo's type. Also you have to read the body of bar, and also it is bad for code reuse."

(Although I am certainly open to the idea that "[I] failed to demonstrate it".)

A few things come up here:

1. Firstly, this whole example was to show that, in languages which rely on this goto paradigm of error handling (like raising exceptions in Python), it's impossible to know what result you will get from an expression. The Haskell example is supposed to demonstrate (and I think it _does_ demonstrate) that with the right types, you can precisely and totally capture the result of a computation.

2. I don't think it's true to say that (if I've understood you correctly) having functions call each other is bad for code re-use. At some point you're always going to call something else, and I don't think it makes sense to totally capture this in the type signature. I just don't see how this could work in any reasonable sense without making every single function call have its own effect type, which you would list at the top level of any computation.

3. In Haskell, functions are pure, so actually you do know exactly what doSomething consumes, and it doesn't matter what getResult consumes or doesn't because that is totally circumscribed by the result type of doSomething. This might be a problem in impure languages, but I do not think it is a problem in Haskell.
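To make point 1 concrete, here is a minimal sketch (with invented names, not the essay's actual example) of a `doSomething` whose type enumerates every possible outcome, leaving no hidden exception path:

```haskell
data ValidationError = Empty | TooLong
  deriving Show

-- every outcome, success or failure, is in the return type
doSomething :: String -> Either ValidationError Int
doSomething s
  | null s        = Left Empty
  | length s > 10 = Left TooLong
  | otherwise     = Right (length s)

main :: IO ()
main = do
  print (doSomething "hello")  -- Right 5
  print (doSomething "")       -- Left Empty
```

Whatever helpers `doSomething` calls internally, the caller's knowledge of its behaviour is bounded by that signature, which was the point of the example.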


> In this example bar's type signature doesn't tell you anything about what bar 'consumes'

Yes, it does: `bar` in your example is an `Int`, it has no arguments. That is captured precisely in the type signature, so I'm not sure what you're trying to say.

