I appreciate the need some people feel to make modern-looking web pages, but the format of this really is just a triumph of style over substance:
* I have no idea how far through the content I am
* The lack of verbosity means I'm constantly having to apply guesswork to their statements (i.e. an explained point would tell readers why Haskell gave them better communication; a slide show like this, with just headings, means I have to guess the reason Haskell had those benefits)
* It scales terribly (at least on Opera, some of the content fell off screen, so I had to maximize the browser. That might just have been an Opera bug, but either way it wouldn't have been a problem with a more traditional blog layout).
* The navigation requires guesswork
* And most annoying of all, displaying the content like this doesn't actually save the reader any time over a blog post. The jarring navigation of slides requires the reader to pause before each page to work out how to read the slide, and then pause after each page to apply the aforementioned guesswork. A blog post presents the information in a more natural communicative way, which makes it quicker to digest.
It's a great pity I'm having to post a negative comment about the presentation of the submission rather than comment on the point being raised - particularly as I've been very interested in Haskell lately. But even my interest in the subject wasn't enough for me to put up with this slide show layout, and I ended up closing the browser tab before the end. :(
In terms of how far through the content you are - there seems to be a little marker (in yellow) that runs along the bottom of the screen. Depending on the browser, YMMV.
Good presentations do not put every single thing in the slides. They are there to support the speaker. The better the presentation, the more worthless it is to click through on a web page, and therefore I generally avoid them in favor of something I can actually read.
Doesn't work with my version of chrome (Version 26.0.1410.64 m) either.
Edit: Ooooh, found a button that actually does something. Apparently you have to use the scrollwheel, space or the arrow keys. Just clicking on things doesn't seem to work. Hooray for modern UX design.
It doesn't appear to be working on my Nexus 7 either. I find cases like this frustrating: I was very interested in the content, because I might start learning Haskell, but it is unlikely I will remember to check it later on my desktop.
I am beginning to see what you mean by the easy-negative-up-vote effect.
I hit that presentation and saw the first two pages - Mark Hibberd, and REST API documentation with Haskell. You could slideshow the phone directory at half speed and I would still be doing exactly what I did - go look up more info.
On the other hand, the responsiveness of that deck did feel like a phone book at half speed. So no downvoting from me.
> I am beginning to see what you mean by the easy-negative-up-vote effect.
I'm someone who normally hates such posts as well. However the reason I did leave such a comment this time round was out of disappointment rather than wanting to be negative for the sake of "internet points". The topic was one that I had a particular personal interest in and the content looked like it should have been really juicy. But sadly the item failed to offer any substance and that was entirely down to the choice of layout.
However, it now seems this presentation was designed to accompany a talk and likely wasn't expected to be viewed on its own, without explanation. In which case my post was unfair, and I genuinely wouldn't have made those points had I known. In fact, the only reason I haven't deleted my post is that doing so would now do more harm than good by orphaning the replies.
So I do actually agree with your disappointment about seeing such a post - even in spite of being the author of it.
I think I'm missing something. Slide 4 says the code has 0 exceptions and is robust, but then slide 7 says the system crashes without stack traces? Is that right?
What did the OP mean by robust, when the code crashes unexpectedly?
The cynic in me says that unless Haskell has a magical "do the right thing in every situation, with automatic full knowledge of your application's business requirements" operator, if there's no specific logic in the code that responds to errors, they're just ignoring them (or else they're building a very bare bones app - any nontrivial piece of software will have potential error conditions out of its control that it has to react to in very specific ways that depend on the app itself).
In which case I could produce an equally "robust" Java program by wrapping everything in a try/catch and not logging anything. Except that in Java, or just about any other language, with a single method call I could obtain that elusive stack trace that OP is sorely missing. Which, in real applications (which, to be fair, I've never been masochistic enough to write in Haskell), I always do, and it significantly improves our ability to track down the remaining unanticipated conditions that our error handling logic hasn't already handled in well defined ways. And that sort of work should consume most of anyone's time spent developing, because the happy path is the easiest piece of the puzzle - engineers handle all code paths, hackers handle the most important ones, and hacks only handle the best case scenarios.
I'm not really getting what Haskell has helped here, apart from making the article more upvote-able.
That said, the presentation slides don't give me even a vague sense what the real thrust of the presentation was, so I'm probably missing the substance. I take issue with this being posted without any additional context, but I'll grant the benefit of the doubt and assume that there may actually have been something worth listening to if we didn't just have the slides to look at.
Haskell doesn't need a magical "do the right thing" operator. What it does instead is constantly nag you, saying, "but you haven't considered this case".
This causes you to think about your code in much more depth and generally leads to good results without the need to constantly debug.
I find that things written in Haskell that compile generally work first time more often than the law of averages would seem to allow.
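To make that nagging concrete, here's a minimal sketch (function name invented): compile with `-Wall` and delete the `Nothing` clause, and GHC's incomplete-pattern warning points straight at the case you forgot.

```haskell
-- A made-up example of the "you haven't considered this case" effect.
describe :: Maybe Int -> String
describe (Just n) = "got " ++ show n
describe Nothing  = "got nothing"
-- Remove the Nothing clause and -Wincomplete-patterns flags it,
-- before the missing case can ever crash at runtime.
```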
"Exception" in Haskell is more specific than in other languages. There are many ways to represent "unexpected circumstances", such as Maybe, Either, Error, MonadPlus, Alternative, and the transformer variants thereof. Exceptions refer explicitly to "asynchronous" exceptions: the kind found in most other languages, which imply global changes in control flow that are difficult to reason about.
People tend to avoid this behavior except in rare circumstances. That said, it's easy to replicate locally with the Cont monad. These local reflows are easier to reason about as well. You can even build it atop delimited continuations for more control. These are great for covering early stopping in searches, for instance.
>Exceptions refer explicitly to "asynchronous" exceptions
This is not technically true. You can throw exceptions from pure code, either explicitly or with something like an incomplete function definition (which the compiler should warn you about). However, it is considered very bad practice to do so.
Sure, and if you treat them all as `undefined`/bottom then you get CPO semantics. Usually "Exception" still refers to `Control.Exception.SomeException` and `error` is bottom.
Catching "undefined" from pure code is the danger that leads to massively confusing semantics—it breaks down the "value" concept badly. The `spoon` library is a great example of this and it's scary to see it in places. That said, it's not terrible for wrapping up foreign code that isn't quite unsafe enough to need "IO" treatment.
The best usage of "error" is to mark completely impossible situations. These show up easily when you do dependently typed stuff with GADTs, but can also exist due to various algorithm invariants.
Well written Haskell code pushes as much of the business rules as possible into the type system. It uses the type system as a tool to keep you from forgetting to check corner cases or prevent you from creating states that make no sense in the context of business rules.
The canonical example here is using a distinct type that can only be produced by conversion functions to ensure that user input is always escaped correctly on its way to the database or a web page. Code that fails to perform this escaping won't just send dangerous data somewhere, it will fail to compile. With care, the same can be done for ensuring business rules are also observed. In many ways the distinct-types-with-conversion-functions pattern is the tip of the iceberg here.
It's obviously possible to not do this, but that's a bit like using a database by shoving all of your data into one giant string then complaining that the database doesn't help you with anything.
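A minimal sketch of that distinct-types pattern (the names `EscapedHtml` and `escapeHtml` are invented for illustration): the constructor stays unexported, so the only path to an `EscapedHtml` is through the escaping function, and unescaped input cannot typecheck where escaped output is required.

```haskell
-- Hypothetical names; the point is that the newtype's constructor
-- would not be exported from its module.
newtype EscapedHtml = EscapedHtml String deriving (Eq, Show)

-- The only way to obtain an EscapedHtml, so escaping can't be skipped.
escapeHtml :: String -> EscapedHtml
escapeHtml = EscapedHtml . concatMap escape
  where
    escape '<' = "&lt;"
    escape '>' = "&gt;"
    escape '&' = "&amp;"
    escape '"' = "&quot;"
    escape c   = [c]

-- Sinks (database writes, page rendering) accept only the safe type.
render :: EscapedHtml -> String
render (EscapedHtml s) = s
```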
There is a big difference between static and dynamic checking. In a statically checked language like Haskell the error is guaranteed to be caught at compile time rather than maybe being caught at runtime.
It can be, yes, but it's easier to do so in Haskell. Some of the compiler extensions allow you to push truly crazy invariants through type checking.
Also, for most dynamic languages, checking the type at runtime goes against the grain of the language. Python or Ruby code that is littered with type checks is often unnecessarily brittle.
In Haskell one typically eschews exceptions in favor of, for instance, the Error monad, since you don't get a typical stack trace in a lazily evaluated language.
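A minimal sketch of that style using plain `Either` (function names invented): failure lives in the return type, nothing is thrown, so there is no stack to unwind and no trace to miss.

```haskell
-- Errors as ordinary values rather than thrown exceptions.
safeDiv :: Int -> Int -> Either String Int
safeDiv _ 0 = Left "division by zero"
safeDiv x y = Right (x `div` y)

-- The Monad instance for Either short-circuits on the first Left,
-- which gives exception-like early exit without any hidden control flow.
compute :: Int -> Int -> Int -> Either String Int
compute a b c = do
  q <- safeDiv a b
  safeDiv q c
```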
Probably they have experienced some exceptions during development, but have had no exceptions in production.
Likely that is referring to the fact that when it crashes, it crashes in a way that can be interesting to debug. I took it as a comment about development rather than one about production.
Lazy languages don't have call stacks equivalent to those of strict languages. However, GHC lets you get a stack trace via profiling; e.g. the +RTS -xc flag gives one.
Not only do I have no clue about the slides, but I also went to their site (apiengine) and have no idea what it does.
Interestingly, both the slides and their site have clean and beautiful designs, yet neither is particularly informative.
Does anyone have some insight to what is going on in the Z monad bit? It looks like an Either monad with a bunch of error states instead of just the one.
There are two independent parts. The Z monad is nothing more than a ReaderT monad giving access to some config and environment information. Since it's a transformer they can execute actions in higher monads (like IO) pretty easily from Z, but it can also be used in pure code if you just need the configuration information.
Parallel to that is the ZResultT monad. It's built on the ZResult monad (it should be easy enough to write a Monad instance for it anyway) which looks like an Identity monad with some "Z-specific" error handling layered in. ZResultT just lets you have a transformer stack interleaved with the ZResults.
Then you combine those two stacks in ZZ which gives you both "Z-related" configuration and "Z-related" error states.
It looks like the whole thing is specialized to make it easy to build pure functions which only need a handful of the effects and then weave them together into the ZZ monad, which is likely to be run at the top level, right below Yesod.
Z isn't a monad from what I can see... ZZ is the monad. It's an IO action that returns a ZResult (which is either a value or an error state as you said) and has access to an environment for logging, a database connection and a configuration.
I'm not a Haskell expert though - so don't take that as truth until confirmed by someone else :-)
Not alien, but not like ordering a sausage McMuffin for breakfast. You'll recognize ideas taken from Haskell (list comprehensions, especially) in other languages, but the function definitions (multiple pattern matching clauses/"equations") may be alien, unless you've looked at Erlang, F#, OCaml, Scala or some such.
I have been programming Haskell for the last few months for a sideline project, and I'm really enjoying the experience so far. This is the second or third time I've revisited Haskell as a programming language, and this time I do feel that I'm "getting" it and could be productive ;) I also feel very safe writing Haskell - usually if it compiles, it works.
That Reddit post is great. For me, in learning Haskell, the biggest challenge has been libraries. Many of them just seem badly designed, or unfinished.
For my first project I wrote a simple scraping tool that scrapes a certain web site and organizes the information in a database behind a front end and API. Immediately I encountered several problems that would have been a cakewalk in, say, Ruby:
* The HTTP library (Network.HTTP) is not encoding-aware. It ignores Content-Type and returns a String with some undefined encoding. So if you grab a resource which is, say, ISO-8859-1, anything that works with UTF-8 will potentially blow up.
* (And if you do this yourself, there is no built-in way to parse MIME charsets into Haskell encoding names except by writing it yourself, I think.)
* The string libraries are confusing. There is String, but also Text. Both are supposed to be Unicode-aware, but many of the operations you want are in Text, requiring conversions back and forth.
* I could not, and still haven't, figured out how to convert a ByteString containing ISO-8859-1 to an UTF-8 string. Data.Encoding apparently exists for that, but I could not get it to work like I wanted. I ended up with this lovely mantra (which you wouldn't think had anything to do with ISO-8859-1, but it does) and left it like that just because it works:
* There are multiple libs for HTML/XML parsing, and no clear winner. HaXml was buggy and did not parse my HTML correctly. TagSoup and HandsomeSoup had weird APIs I did not like. I ended up with HXT, which uses Haskell arrow syntax extensively and is therefore incomprehensible, even when I understand what and how it does it. It feels a lot like people who write ambitious libs in Ruby with too much "magic". Haskell is a functional language and should be wonderful for parsing HTML/XML using XPath or CSS selectors, but it's a nightmare. HXT's parser is impure by default (!) and you have to google some tutorials to find out how to use its pure version, which the tutorial writers (in the same breath) dissuade you from using.
* For the frontend I ended up using Happstack Lite because it's small and about as easy to get started with as Sinatra. Unfortunately, the first templating system it recommends, Blaze, is crap. With Blaze you write the template inline, building an XML tree, similar to Ruby's "Builder" gem, but the syntax is ugly and unnecessarily complicated. The other recommended templating systems, HSP and Hamlet, are like ERB or PHP, so a serious step backwards from HAML. In the end, I could not find anything like HAML.
* I haven't delved very deeply into Happstack yet, but like many Haskell libs it seems to make simple things more complicated than they ought to be (eg., the routing system). One annoyance I remember is the fact that HTTP verbs aren't IO, which makes code less intuitive, since obviously in a modern web app, HTTP verbs are going to be doing IO (things like talking to databases), and so you have to use liftIO a lot.
* Perhaps the biggest issue for me was the general lack of high-quality documentation (Happstack itself has almost none). Sure, there are tons of searchable machine-generated docs, which is great, but that documentation lacks the bits that tie everything together. Since Haskell is mainly about applying functions to values of types, it is very decentralized; knowing what function to use is only part of the game. It's just as important to know how its return value plays with the rest of the function space, because so much of Haskell revolves around composing functions in idiomatic ways. This is very different from, say, Ruby, where you know a method returns an object, and an object only has the methods its class provides; this centralizes, clusters and constrains the information into easily digestible pieces.
A veteran haskeller would probably not struggle too much with these things. But as a novice, I expected the path to be slightly easier. The language was rarely a problem, the libraries definitely were.
> * The HTTP library (Network.HTTP) is not encoding-aware. It ignores Content-Type and returns a String with some undefined encoding. So if you grab a resource which is, say, ISO-8859-1, anything that works with UTF-8 will potentially blow up.
Agreed that the HTTP package is rough around the edges. It also doesn't support HTTPS, which seems like a major shortcoming to me. Fortunately, there are several better-designed alternatives (which have the added benefit of being more efficient), e.g. http-streams and http-conduit.
> * The string libraries are confusing. There is String, but also Text. Both are supposed to be Unicode-aware, but many of the operations you want are in Text, requiring conversions back and forth.
This pain can be mitigated somewhat by using `{-# LANGUAGE OverloadedStrings #-}`, at least when you're working with literal Strings, ByteStrings, and/or Texts. The main problem with the proliferation of string types is when using multiple libraries, each of which expect a different type of string (or one expects a strict ByteString and the other a lazy ByteString). Then the conversions can get irritating.
Generally speaking, you should use Text for textual strings, ByteString for binary data, and String in simple cases when the convenience of using Prelude or list functions trumps performance concerns. They all have reasonably well defined uses.
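To make the boundaries concrete, here's a sketch of moving between the three types (this assumes the `text` and `bytestring` packages, which ship with GHC). As it happens, `Data.Text.Encoding.decodeLatin1` is also the answer to the ISO-8859-1 question above: it reinterprets each Latin-1 byte as the corresponding code point.

```haskell
import qualified Data.ByteString as BS
import qualified Data.Text as T
import           Data.Text.Encoding (decodeLatin1, encodeUtf8)

-- ISO-8859-1 bytes -> Text -> UTF-8 bytes, via the text package.
latin1ToUtf8 :: BS.ByteString -> BS.ByteString
latin1ToUtf8 = encodeUtf8 . decodeLatin1

-- Text <-> String conversion, for when Prelude list functions are handier.
textToString :: T.Text -> String
textToString = T.unpack
```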
> * There are multiple libs for HTML/XML parsing, and no clear winner. HaXml was buggy and did not parse my HTML correctly. TagSoup and HandsomeSoup had weird APIs I did not like. I ended up with HXT, which uses Haskell arrow syntax extensively and is therefore incomprehensible, even when I understand what and how it does it. It feels a lot like people who write ambitious libs in Ruby with too much "magic". Haskell is a functional language and should be wonderful for parsing HTML/XML using XPath or CSS selectors, but it's a nightmare. HXT's parser is impure by default (!) and you have to google some tutorials to find out how to use its pure version, which the tutorial writers (in the same breath) dissuade you from using.
Have you tried the `xml` package [1]? It's not as sophisticated as something like HXT, but for a lot of uses, it does the trick and is far easier to work with. HXT is intimidating for sure, but apparently very powerful once you learn it (I haven't bothered).
> * For the frontend I ended up using Happstack Lite because it's small and about as easy to get started with as Sinatra. Unfortunately, the first templating system it recommends, Blaze, is crap. With Blaze you write the template inline, building an XML tree, similar to Ruby's "Builder" gem, but the syntax is ugly and unnecessarily complicated. The other recommended templating systems, HSP and Hamlet, are like ERB or PHP, so a serious step backwards from HAML. In the end, I could not find anything like HAML.
Funny, I personally think blaze-html is terrific. I like having a Haskell EDSL for HTML generation instead of having to drop into a specialized and restricted "templating language" for that purpose.
If you want something like HAML, check out `hamlet` [2]. It's directly inspired by HAML and is the main templating engine of the Yesod web framework.
I found Snap to be somewhat easier to work with than Happstack, after trying them both. It occupies about the same level of abstraction. Yesod is a more full-stack Rails-like framework, which some people prefer. You can pretty easily mix and match components of the different frameworks however you like; they are nice and modular. I often use snap-core, acid-state, digestive-functors, and blaze-html together for simple web apps, even though they are not all part of any one framework.
> * I haven't delved very deeply into Happstack yet, but like many Haskell libs it seems to make simple things more complicated than they ought to be (eg., the routing system). One annoyance I remember is the fact that HTTP verbs aren't IO, which makes code less intuitive, since obviously in a modern web app, HTTP verbs are going to be doing IO (things like talking to databases), and so you have to use liftIO a lot.
Frequently having to use `liftIO` is a sign that the library author overspecialized their functions to IO. If more people would write libraries using `MonadIO m => m a` instead of `IO a` that problem would basically disappear. Polymorphism is wonderful, when you are able to apply it.
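A small sketch of the difference (the logging functions and the `config` type are made up): the polymorphic version runs unchanged in IO or in any transformer stack over IO, so the `liftIO` noise moves from every call site into one definition.

```haskell
import Control.Monad.IO.Class (MonadIO, liftIO)
import Control.Monad.Trans.Reader (ReaderT, runReaderT)

-- Overspecialized: callers inside a transformer stack must wrap
-- every use in liftIO.
logLineIO :: String -> IO ()
logLineIO = putStrLn

-- Polymorphic: usable directly from IO, ReaderT r IO, StateT s IO, ...
logLine :: MonadIO m => String -> m ()
logLine = liftIO . putStrLn

-- A handler in some hypothetical ReaderT-based web monad can now
-- log with no lifting at the call site.
handler :: ReaderT config IO ()
handler = logLine "handling request"
```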
> * Perhaps the biggest issue for me was the general lack of high-quality documentation (Happstack itself has almost none). Sure, there are tons of searchable machine-generated docs, which is great, but that documentation lacks the bits that tie everything together. Since Haskell is mainly about applying functions to values of types, it is very decentralized; knowing what function to use is only part of the game. It's just as important to know how its return value plays with the rest of the function space, because so much of Haskell revolves around composing functions in idiomatic ways. This is very different from, say, Ruby, where you know a method returns an object, and an object only has the methods its class provides; this centralizes, clusters and constrains the information into easily digestible pieces.
I don't disagree that more documentation is a good thing, and some libraries are under-documented. But I can also point to examples of Haskell libraries with superior documentation to nearly anything else I've encountered. Some authors really go the extra mile; it's great.
My experience has been that the type system plus Haddock docs are frequently enough for me to figure out a library even without any prose documentation at all. This, to me, is a huge advantage of Haskell. The types only fit together one way, and that way is the correct one. When using a library with well-designed types, you basically can't get it wrong.
I certainly understand why this would not be so easy for a newbie. It gets much, much easier with experience, though.
> My experience has been that the type system plus Haddock docs are frequently enough for me to figure out a library even without any prose documentation at all. This, to me, is a huge advantage of Haskell. The types only fit together one way, and that way is the correct one. When using a library with well-designed types, you basically can't get it wrong.
This is (a) exactly true and (b) a huge problem. For the expert Haskeller it's usually fairly trivial to figure out the semantics and operations of a library by exploring the types. This means that few experts are incentivized to explain the concepts of their libraries well. This is doubly compounded by the fact that so many Haskell paradigms are uniform across all libraries, so it's easy to say "it's got an Alternative-Bifunctor-Semigroup interface" and assume the user will figure out what that means elsewhere.
Now, genuinely new abstractions like `pipes` get very thorough documentation because it pays to teach new abstractions just once. That stuff really ought to be decorating all of the stable, introductory libraries, though. Without it the entirety of Hackage looks terribly uninviting.
> Fortunately, there are several better-designed alternatives
Do those you mention handle content type and encodings correctly?
> Have you tried the `xml` package [1]?
Yeah, it doesn't do selection based on CSS selectors or XPath expressions. I don't want to have to manually write tree-traversal code when those already exist.
Take a look at my current scraping code [1] if you want to see how I use HXT. (Just promise not to laugh at my novice work.) Basically, the arrow syntax gives you a weird portal into a different world where you select stuff from a tree using a predicate syntax. I say weird because within the proc and the "-<" arrows, it seems you can only deal with XmlTree operations.
> Funny, I personally think blaze-html is terrific.
It's a bit much, but not awful for generating XML programmatically. The main problem is the need to use "toValue" and "toHtml" all the time. I guess I don't understand why the element building functions don't accept strings.
But for templating, I just don't like embedding templates in the actual source code of the HTTP verb it's for. The main controller code should be about preparing data for the UI, and the UI should be separate from the controller code. You could put it in a separate file and function and import it, of course, but it's still Haskell code, which must be compiled along with the entire app, which slows down the development cycle a lot.
I haven't looked closely at the various web frameworks to see if they support Rails-style reloading (ie., recompiling and reloading the app on each page load). Do you know?
> If you want something like HAML, check out `hamlet`
Hamlet is like someone looked at HAML and didn't grasp why it's so good, because they decided they had to make it look like HTML. Here is HAML:
#foo
  %ul.list
    - items.each do |item|
      %li= item.title
Why invent something that looks like HTML but isn't? HAML syntax is basically (almost) CSS selectors, that's the whole point.
And again, it's apparently meant to be placed inline.
The Snap guys (I think) also invented their own versions of SASS, called Cassius and Lucius. Cassius corresponds to the indented SASS syntax, and uses CSS selectors, unlike Hamlet.
> Frequently having to use `liftIO` ...
Oh, interesting. I don't know about MonadIO yet. Will definitely read up on this.
> My experience has been that the type system plus Haddock docs …
Haddock is great, but a lot of it is non-trivial to understand without having to read the docs very closely. The "xml" library is an example of a simple library where, if you browse around, you can piece together how it works, although it's so bare-bones that if you just want to know how to parse some XML, you have to hunt down the right function, which happens to be in Text.XML.Light.Input; nowhere is there a "how to" introduction which illustrates use cases.
With more complex libraries, the machine-generated docs become increasingly obtuse. For example, I would never, ever have figured out how to use HXT with Hackage alone. I mean, look at it! [2] Is the parser in Text.XML.HXT.Arrow.ParserInterface? Nope. In Text.XML.HXT.Arrow.ReadDocument? Curiously, no. The function to run a document through the parser, runX, is apparently not even in the Hackage database. And good luck finding out that this is the function you need to use, never mind the semantics of XML arrows.
> Do those you mention handle content type and encodings correctly?
I honestly can't say for sure, but I've never encountered such a problem in my own usage. Maybe that just means I've been lucky. I just know from firsthand experience that they do a lot of things better than the HTTP package does.
> The main problem is the need to use "toValue" and "toHtml" all the time. I guess I don't understand why the element building functions don't accept strings.
First of all, if you're writing string literals and explicitly converting them with `toHtml`, you're missing out. Instead, enable the OverloadedStrings extension, and then you can do this sort of thing:
html $ do
  head $ title "My Web Page"
  body $ do
    h1 "A Heading"
    h2 "A Subheading"
    hr
    p "A paragraph of text"
    p $ "Another paragraph, with an " <> (a ! href "http://example.com/") "embedded link"
    p $ em "Emphasized text" <> " and " <> strong "strongly emphasized text"
Notice the total lack of `toHtml` there.
The HTML DSL functions don't accept strings because String and Html are two fundamentally different types of data. Enabling the type system to differentiate between the two lets you do some pretty nifty things. For instance, if you were to write
"<script>alert('XSS!');</script>" :: String
you'd be in trouble, but you can't do that since blaze-html doesn't accept plain old strings. Instead, you'd write
"<script>alert('XSS!');</script>" :: Html
Note how only the type has changed, but now the string will be transparently escaped to "&lt;script&gt;alert('XSS!');&lt;/script&gt;" before it's added to the document, and thus you're safe, without having to actually do anything different. The type signatures I gave are superfluous of course, since they'd be inferred automatically. The type system makes this kind of flaw impossible.
Plus, since blaze combinators are just Haskell, and the Html type is a monad, you can use all the usual Monad functions with Html:
ol $ mapM_ (li . toHtml) ['A'..'Z']
That generates a 26-item list of letters from A to Z. Yes, you do have to do the explicit conversion to Html there, but I think that's a small price to pay for all the flexibility afforded by templating HTML with full-fledged Haskell code.
> But for templating, I just don't like embedding templates in the actual source code of the HTTP verb it's for. The main controller code should be about preparing data for the UI, and the UI should be separate from the controller code. You could put it in a separate file and function and import it, of course, but it's still Haskell code, which must be compiled along with the entire app, which slows down the development cycle a lot.
Yeah, I just put "view code" like the above examples into a separate module and import it. I don't find the slowdown to be significant, but maybe that's just me. In any case, true template languages have to be compiled too, either with the rest of the program or at runtime, so one way or another you incur that cost no matter what you're using.
> I haven't looked closely at the various web frameworks to see if they support Rails-style reloading (ie., recompiling and reloading the app on each page load). Do you know?
Yesod does this by default, and Snap can, if you use `snap-loader-dynamic` for your development builds. I believe they both use the awesome `hint` package for runtime eval and compilation of Haskell code. I don't think Happstack has any auto-reloading capability though.
> Why invent something that looks like HTML but isn't? HAML syntax is basically (almost) CSS selectors, that's the whole point. And again, it's apparently meant to be placed inline.
It doesn't seem that different from HAML to me, but it's clearly a subjective judgment. :) And it doesn't have to be inline. It uses the QuasiQuotes language extension, which means you can do inline snippets or write it in a separate file, however you like.
Hamlet, Cassius, Lucius, and Julius (collectively, the "Shakespearean" template languages) are developed by the Yesod framework guys, not Snap, by the way. Snap uses a template language called Heist by default, which I don't particularly care for (I just use blaze-html instead).
> Oh, interesting. I don't know about MonadIO yet. Will definitely read up on this.
If you use liftIO, you're using MonadIO. That's what liftIO does, it's an adapter between plain IO values and polymorphic MonadIO values. But if library authors use MonadIO instead of IO, you can omit the explicit conversion. This isn't something you really have control over as the user of a library though.
> With more complex libraries, the machine-generated docs become increasingly obtuse. For example, I would never, ever have figured out how to use HXT with Hackage alone. I mean, look at it!
Fully agreed. HXT is a huge, complex beast! I don't think it's fair to judge all libraries by that standard, though. HXT is clearly at an extreme end of that continuum.
I have been cursing the string handling and now you tell me OverloadedStrings exists. This makes sense. Thanks for the help, that actually makes Blaze a little closer to HAML for me.
I appreciate the input on the other things. I will check out Snap, I think.
The slides are admittedly fairly obtuse, with very little context. Presumably there's an accompanying talk we are missing.
Haskell is really not alien. The stuff that usually comes up, such as discussions about monads, make the language seem weirder and more esoteric than it really is.
The base language, if you disregard the more advanced, complicated topics (for example, the mathematical basis for monads), is very straightforward, although some of the concepts (typeclasses, lazy evaluation, partial application/currying) can seem very novel and strange and daunting if you are not used to ML-based languages. I was not, and I had no problem grasping the basics, although I am still not fluent in the more complicated monad-based stuff.
The best intro, by far, is Learn You A Haskell For Great Good! [1]. It starts very gently, and is tailored to people who are more familiar with current imperative languages.
It's pretty weird. It's a very strongly statically typed lazily evaluated pure functional language with a type system that is actually Turing-complete.
It's not that different from Scheme, really.
If you are only using lists of one element type, then most scheme code translates fairly naturally to Haskell.
Sure you use a `case _ of ...` instead of a `cond`,
but Scheme is more similar to Haskell than it is to Python for example.
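For instance, a Scheme-style `cond` dispatch translates almost mechanically into Haskell guards (function name invented): roughly what `(cond ((< n 0) "negative") ((= n 0) "zero") (else "positive"))` becomes.

```haskell
-- Guards play the role of cond's clause list; 'otherwise' is 'else'.
classify :: Int -> String
classify n
  | n < 0     = "negative"
  | n == 0    = "zero"
  | otherwise = "positive"
```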
Took me a second until I noticed that the scroll wheel seemed to take me to a different page. As soon as I determined it was a slideshow, I tried the arrow keys, as they work on every HTML slideshow I've seen in the last 5 years.