In a comment below the main post they say that the app they wrote is approximately 45,000 lines of code over 170 Elm files.
That's one of the largest (and apparently most useful) Elm apps in the wild that I've heard of, and the fact that it exists and they had an overall good experience with it inspires me even more to try to learn and use Elm.
> In a comment below the main post they say that the app they wrote is approximately 45,000 lines of code over 170 Elm files.
One thing I've found a bit weird about the Elm community is how common it is to highlight or even brag about the size of the codebases.
One of the reasons I have come to particularly enjoy Clojurescript on the front-end is because of how much code I don't have to write. I once ported an Elm app to Clojurescript and it was only one-third the line count for the same functionality, and generally a smoother experience.
I think some developers just love building up these huge, complex worlds and living inside them, and then telling others how big those worlds are.
I think you should read this as what would otherwise be a 150 kloc project in Java.
Seriously, at this point IBM should fork over a couple hundred thousand dollars just to have someone on their staff recreate this in idiomatic Java (or whatever OO language they want) while you do it in idiomatic Clojure/ClojureScript. I'd be surprised if you managed less than 30k and very surprised if the OO implementation managed less than 100k.
Hard data on this kind of debate being hard to come by, we'd learn something from the experiment.
An apples and oranges comparison. Any difference would surely be made up by all the tests one would have to write to be sure the clojure code was free of basic errors.
That's a completely baseless assumption. I have not found the need to write any more tests in Clojure than I have in statically typed languages. The only tests my team ends up writing tend to be end-to-end specification tests, and you would want to have those in any serious project regardless of type discipline.
Tests prove the presence of bugs not their absence. You should be writing more tests if you have no static type system, in order to find the bugs that you would otherwise prove don't exist.
A static type system like Elm doesn't fix logical bugs like wrong indexing, wrong predicates, etc.; it just finds issues with types. Types do not contain the real logic of your program. Types are somewhat useful for verifying data is passed around in the correct shape in a program, but to say it prevents most obvious errors is naive. Not to mention that in Elm you will waste tons of time doing useless tasks like writing encoders/decoders and code that could be moved to macros in a more powerful language.
> A static type system like Elm doesn't fix logical bugs
Static type systems can absolutely catch logical bugs! Proofs that a list is non-empty or that a reference is never null are simple examples.
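The non-emptiness proof can be sketched in TypeScript (illustrative names; Elm and Haskell express this with their own types): a tuple type guarantees at least one element, so `head` is total and can never fail at runtime.

```typescript
// A list proven non-empty at the type level: one element, then zero or more.
type NonEmpty<T> = [T, ...T[]];

// Total head: the compiler rejects any call site that can't prove
// the argument has at least one element.
function head<T>(xs: NonEmpty<T>): T {
  return xs[0];
}

const xs: NonEmpty<number> = [1, 2, 3];
console.log(head(xs)); // 1
// head([]);  // compile-time error: [] is not assignable to NonEmpty<number>
```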
> Types do not contain the real logic of your program.
Types can completely determine the logic of many parts of your program.
In Haskell, I often don't have to write custom traversal code; it's selected for me based on the types.
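A small TypeScript sketch of the same idea (Haskell's type-directed traversals are much richer; this only shows how a polymorphic signature pins down behavior): a function generic in both element types can't inspect the elements at all, so applying `f` pointwise while preserving structure is essentially the only implementation.

```typescript
// Generic in A and B: the body can't examine elements, only apply f.
// Parametricity leaves essentially one structure-preserving implementation.
function mapList<A, B>(f: (a: A) => B, xs: A[]): B[] {
  const out: B[] = [];
  for (const x of xs) out.push(f(x));
  return out;
}

console.log(mapList((n: number) => n * 2, [1, 2, 3])); // logs [2, 4, 6]
```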
> Types are somewhat useful for verifying data is passed around in the correct shape in a program
This is only the start of what types can do; try learning Haskell.
> but to say it prevents most obvious errors is naive.
No, naive is dismissing static type systems to the point that not even extra testing is done to compensate.
> Elm you will waste tons of time doing useless tasks like writing encoders/decoders and code that could be moved to macros in a more powerful language.
Elm's aim is to be a basic, easy-to-learn language (no operator overloading, etc.). Personally, generic (polytypic) programming feels like a more elegant approach than macros. Haskell offers both.
Haskell was my first FP language; I used it for about a year, and worked with Scala briefly after. My experience is that you're really trading one set of problems for another in practice. Static typing can guarantee that you avoid a certain class of errors, but it often results in code that's longer and more difficult to understand, opening opportunities for different kinds of errors. There is no solid empirical evidence to suggest that the trade-off is strictly superior in practice.
Static typing can catch inconsistencies at the cost of structuring your code in a way that the type checker can understand. This is often at odds with making it readable for the human. A proof only has value as long as the human reader can understand it. Here's a concrete example of what I'm talking about: https://github.com/davidfstr/idris-insertion-sort/blob/maste...
An insertion sort written in Idris has over 250 lines of code that you have to understand to know that it's implemented correctly. A Python version would be about 10 lines or less. I'd have a much easier time guaranteeing that the 10 lines do exactly what was intended than the 250 lines of type specifications. Of course, you could relax the type guarantees in Idris as well, but at that point you accept that working around the static checker has benefits, and it's just a matter of degrees of comfort.
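For scale, this is the kind of unverified version being contrasted (in TypeScript rather than Python, but the line count is comparable): the whole algorithm fits in about ten lines, with none of the ordering or permutation proofs carried in the types.

```typescript
// Plain insertion sort, no correctness proofs: roughly the ten lines
// contrasted with the 250-line verified Idris version.
function insertionSort(xs: number[]): number[] {
  const out: number[] = [];
  for (const x of xs) {
    let i = 0;
    while (i < out.length && out[i] <= x) i++; // find insertion point
    out.splice(i, 0, x);                       // insert, keeping order
  }
  return out;
}
```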
In general, the more constraints you specify via types, the more of your program logic moves into type specifications. What this really means is that you're writing a metaprogram that emits your logic. However, there's no automated checker for the metaprogram you wrote using the types. You still have to understand that metaprogram to know that it's doing what you want. At this point you're basically living in a programmer's version of Plato's Cave.
The real question is not whether you can do something using a static type system or not. The discussion has to center around how that compares to alternative approaches such as testing, gradual typing, and runtime contracts.
Sorting (TimSort) was broken for many years in Python (and Java). It was good old-fashioned logic and theorem proving, not tests or runtime assertions, that got it found and fixed. There is merit in proving properties of any critical implementation, no matter how difficult. However, the code you referenced looks to be someone's learning exercise, so it's hardly a model example.
Your view on the role of types is unfortunate if you think term logic is simply mirrored at the type level (Plato's Cave). The Curry-Howard correspondence tells us to think of types as logical propositions with terms as their proofs.
Sure, you can use formal methods to prove properties that are hard to test. The point you seem to have missed is that it takes a lot of effort to do that.
The reality is that in most cases there's a cost benefit analysis regarding how much time you can spend on a particular feature and the strength of the guarantees.
>The Curry-Howard correspondence tells us to think of types as logical propositions with terms as their proofs.
Hence my point that you end up writing a metaprogram that emits the logic. Ensuring that the metaprogram is correct is a manual process. The more complex the proof, the harder it becomes to understand.
Consider Fermat's conjecture. It's trivial to state it, it's trivial to test it to be correct for a given set of inputs. However, proving it for the general case is quite difficult, and only a handful of people in the world can follow that proof.
> Consider Fermat's conjecture. It's trivial to state it, it's trivial to test it to be correct for a given set of inputs. However, proving it for the general case is quite difficult, and only a handful of people in the world can follow that proof.
This is a strawman; conventional static type systems can't prove all general cases either, and nobody is claiming that they do. Yet, being a superset of dynamic typing, they allow the same expressiveness by providing an escape hatch like 'Object' or 'any'.
I’d say it’s more “look how many lines of code there are vs how many ways this could break in production”.
This will always be very different from your average ClojureScript/Reframe/Reagent app, which may or may not have fewer lines of code (in my experience they’ve been roughly equivalent), but vastly more opportunity for runtime errors.
Meanwhile, my team has been working with ClojureScript for years, and runtime errors are a really rare occurrence in my experience. Especially if you use Schema or Spec around the API between components.
There's no conclusive empirical evidence to support the notion that static typing has a significant impact on error rates: https://danluu.com/empirical-pl/
Of course, if it makes you feel better personally, that's great and you should keep using it. However, if you're going to make sweeping claims, they have to be rooted in more than just your personal experience and rationalizing.
There is also a big difference between spending one hour debugging runtime errors on any given day or week vs. the time it takes to write 3x or 4x the code and handle compiler errors.
Ultimately there are tradeoffs in how time is spent in both languages. Each developer will prefer different tradeoffs. I think in terms of net time spent on the whole dev cycle, it's hard to beat Clojurescript.
The problem with static typing confidence is it suggests no runtime errors, but really all it does is handle a certain class of runtime errors. To say that Elm has no runtime crashes is one thing, but it still suffers from all the runtime problems of any language when logic isn't written properly to account for different and unexpected values. Does your compiler guarantee that you don't have a "cents" value that is greater than 99? Or, for example, consider division by zero, which is another interesting case.
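A TypeScript sketch of both cases (the names are invented for illustration): the annotations rule out wrong *types*, but an out-of-range value and a division by zero both compile without complaint.

```typescript
// The type says "number", not "0..99", so this bug is invisible to the compiler.
interface Price { dollars: number; cents: number; }
const broken: Price = { dollars: 4, cents: 150 }; // type-checks, logically wrong

// Division by zero also type-checks; in JS/TS it yields Infinity (or NaN),
// not a compile-time or even a runtime error.
function unitPrice(totalCents: number, quantity: number): number {
  return totalCents / quantity;
}

console.log(unitPrice(500, 0)); // Infinity
```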
Clojurescript, on the other hand, has a novel system in place for handling runtime issues of any kind (types, values, whatever), because this is where it excels -- runtime dynamism.
There is a cost associated with eliminating this class of errors using static typing. The cost is that you're restricted to a set of statements that the type checker can verify to be correct. Writing code for the benefit of the type checker is often at odds with writing it in a way that conveys the meaning best to the human reader. This is necessarily less expressive than the dynamic approach. Code written in dynamic languages tends to do a better job of expressing its intent because it can be written in a more direct fashion.
Here's a concrete real world example of what I'm talking about:
>When I first wrote the core.async go macro I based it on the state monad. It seemed like a good idea; keep everything purely functional. However, over time I've realized that this actually introduces a lot of incidental complexity. And let me explain that thought.
>What are we concerned about when we use the state monad, we are shunning mutability. Where do the problems surface with mutability? Mostly around backtracking (getting old data or getting back to an old state), and concurrency.
>In the go macro transformation, I never need old state, and the transformer isn't concurrent. So what's the point? Recently I did an experiment that ripped out the state monad and replaced it with mutable lists and lots of atoms. The end result was code that was about 1/3rd the size of the original code, and much more readable.
>So more and more, I'm trying to see mutability through those eyes: I should reach for immutable data first, but if that makes the code less readable and harder to reason about, why am I using it?
So really what you're doing with static typing is trading one set of problems for another. This is perfectly fine if those are the kinds of problems you prefer to deal with, but it's important to recognize that you are making a trade off as opposed to getting something for free here.
In my opinion you get more than just "eliminating a class of bugs", in my (arguably limited) forays into functional programming languages I really liked the type-guided programming.
One aspect is "I refactored the code, fixed all the type errors, and everything works"; another is "I don't know what I should write here; compiler, tell me!" with typed holes, alongside some nice search tools such as Hoogle (or Elm's fancy search [1]). In similar fashion, I remember Elm enforcing a version bump if you break a package's public API.
On the other hand, you definitely are replacing one set of problems for a different set, and it is up to you to decide what kind of problems you like solving better.
For me, access to fast immutable data structures seems to have the best return on investment and is the easiest to introduce (e.g., even JavaScript and Python have somewhat decent libraries for these).
With Clojure the approach is to use REPL driven development. It's tightly integrated with the editor, and any time you write a function you run it to see that it's doing what you intended. Because you're evaluating code as you're writing it, there's generally no confusion regarding what the code is doing. [1]
Meanwhile, immutability as the default makes it natural to structure applications using independent components. This helps with the problem of tracking types in large applications as well. You don't need to track types across your entire application, and you're able to do local reasoning within the scope of each component. You make bigger components by composing smaller ones together, and you only need to know the types at the level of composition which is the public API for the components.
Spec [2] is a contract system that's often used to define API boundaries in Clojure. I find it provides an advantage over static typing because it directly focuses on semantic correctness. For example, consider a sort function. The types can tell me that I passed in a collection of a particular type and I got a collection of the same type back. However, what I really want to know is that the collection contains the same elements, and that they're in order. This is difficult to express using most type systems out there, while trivial to do using Spec.
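A hedged TypeScript sketch of that contract style (Spec itself is Clojure; the function names here are invented): check at runtime that the result is ordered and a permutation of the input, the two semantic properties the types leave unstated.

```typescript
// True when ys contains exactly the same elements as xs (as a multiset).
function isPermutation(xs: number[], ys: number[]): boolean {
  if (xs.length !== ys.length) return false;
  const counts = new Map<number, number>();
  for (const x of xs) counts.set(x, (counts.get(x) ?? 0) + 1);
  return ys.every((y) => {
    const c = counts.get(y) ?? 0;
    counts.set(y, c - 1);
    return c > 0;
  });
}

// Sort wrapped in a Spec-like contract: ordered output, same elements.
function checkedSort(xs: number[]): number[] {
  const out = [...xs].sort((a, b) => a - b);
  const ordered = out.every((v, i) => i === 0 || out[i - 1] <= v);
  if (!ordered || !isPermutation(xs, out)) {
    throw new Error("sort contract violated");
  }
  return out;
}
```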
Nice, if I ever will have the pleasure of working in clojure, I will remember the re-find thing you mentioned here :-)
But to show more of I really liked when developing with types, take a look at this slightly contrived example [1]. I once tried to refactor a medium size haskell code-base, and ability to just ask the compiler "Hey what should I put in here?" really made my life easier :)
Yeah, there are nice aspects of having static typing as well. Haskell was the first FP language I've used actually, and it was a lot of fun. Eventually I ended up working with Clojure professionally, and I don't find that one approach or the other is strictly better. There are pros and cons to each, and they're both productive.
I never had that impression when I was building projects with Elm; in fact, it felt like I was focusing on the app itself instead of the language and boilerplate. Granted, I was comparing it with a JS mess.
Encoding and decoding of JSON usually adds huge amounts of boilerplate (in some small-to-medium-sized apps it might account for more than half of the LoC), and separating an app into modules requires you to add some boilerplate in your update functions.
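For readers who haven't seen it, the decoder boilerplate looks roughly like this when hand-rolled in TypeScript (Elm's `Json.Decode` is analogous; the record type here is invented): every field is validated explicitly, and the pattern repeats for each record type in the app.

```typescript
interface User { name: string; age: number; }

// One hand-written decoder; an app with dozens of record types repeats
// this field-by-field validation for each one.
function decodeUser(json: unknown): User {
  if (typeof json !== "object" || json === null) throw new Error("expected an object");
  const o = json as Record<string, unknown>;
  if (typeof o.name !== "string") throw new Error("'name' must be a string");
  if (typeof o.age !== "number") throw new Error("'age' must be a number");
  return { name: o.name, age: o.age };
}

console.log(decodeUser(JSON.parse('{"name":"Ada","age":36}'))); // logs the decoded record
```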
Elm code is hard to measure because you write your HTML in Elm as well. I'm currently working on a form that takes 1.5K lines of code in just one module. But there's a lot of HTML generated :)