I always have great respect for Scala because the language and its community have tried to adopt and push novel ideas in the industry. Take a look at the changes; they're no joke to design and implement. At the same time, they're also trying to keep the language consistent and simple. (Yes, I mean simple, not easy or single-paradigm.)
They are taking the hard way, and not only are there not enough people who appreciate the effort, they also attract some hatred from time to time. I hope Scala 3 will bring us and itself a bright future, and inspire more languages to move forward.
(For example, a lot of type features get added to TypeScript in most releases because it needs to type an existing dynamically typed language; it may also be a hint that dependent types actually make sense in mainstream languages.)
I'm not referring to any specific changes. It's just that there are many type operators already, and they're still not enough to express JavaScript.
In the earlier versions of TypeScript, I often got stuck on something natural in JavaScript that I couldn't express with type annotations. It's like opening Pandora's box: once some advanced type operators had been introduced, whenever I played with them I would quickly find more type operators to be desired.
And so it did. Things got better over the years: a lot more type operators, mapped types, the infer keyword, etc. I feel it points in a direction: there are still a lot of existing JavaScript functions whose type depends on the input and which cannot be well-typed. For example, most languages have functions similar to printf, or functions that return type T when 0 is passed and type U otherwise. To satisfy those cases, we'll basically reinvent a whole new programming language in the type system; if that new language happens to be JavaScript itself, it would be similar to other dependent type systems.
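For what it's worth, Scala 3 can express that last example ("return type T when 0 is passed, otherwise type U") directly, using a match type over literal types. A minimal sketch; `Pick` and `pick` are my own illustrative names:

```scala
// A type-level function: Pick[0] reduces to String, Pick of any
// other Int literal reduces to Int.
type Pick[N <: Int] = N match
  case 0 => String
  case _ => Int

// The Singleton bound makes the compiler infer the literal type of
// the argument, so the return type depends on the value passed.
def pick[N <: Int & Singleton](n: N): Pick[N] =
  (if n == 0 then "zero" else n).asInstanceOf[Pick[N]]

val s: String = pick(0) // the compiler knows this is a String
val i: Int    = pick(5) // ...and this is an Int
```

The cast inside `pick` is the usual price of bridging the term and type levels by hand; the interesting part is that call sites get precise static types.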
Someone claimed the type system of TypeScript is Turing-complete. This example shows how to derive types from string-based SQL statements: https://github.com/codemix/ts-sql
[edit, not quite what GP was referring to, but] At the minute, via [depending on context] a combination of generics/conditional types, unions/literals, type constructors, and [more recently] string templating you can get close in certain circumstances. It tends to be very fiddly though, and [IME] it's quite easy to cause TS to stop narrowing types properly. Someone else with more experience may be able to weigh in -- despite using it for a few years I'd not really bothered to exercise some of the more expressive parts of the system until string templating experiments started popping up last year.
This issue thread on the TS repo may be of interest:
I'm so psyched about this. I've followed the development of Scala 3 for pretty much the whole voyage, and I think they've done an incredible job bringing the community along through some pretty radical changes to the language. It has definitely not been an easy process herding a community with very big, informed, divergent opinions on the language.
In the end, I think they did an awesome job. Many, many sacred cows of old Scala were confronted. I think the result is a much more approachable language, where simple things are easier.
I totally agree. I love the direction of Scala 3. This quote says it all I think:
"Scala 3 takes a slightly different approach [from Scala 2] and focuses on intent rather than mechanism. Instead of offering one very powerful feature, Scala 3 offers multiple tailored language features, allowing programmers to directly express their intent"
So rather than something like implicits, that can be used N different ways, only 2 of which are practical, Scala 3 adds those 2 features explicitly.
They should never have been a language feature before IDEs were capable of working with them and exposing them. Luckily IDEs can now... but Scala 3 makes it so that understanding them with a simple text editor is much easier. The fact that they have to be imported in a way that acknowledges their usage is a simple but important way to ease that burden, and the distinction between `given` and `using` is extremely helpful for understanding them.
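A tiny sketch of how that reads in Scala 3; the `Show` type class and all names here are my own toy example:

```scala
// `given` declares a canonical instance; `using` marks the parameter
// the compiler fills in. Both keywords keep the mechanism visible
// even in a plain text editor.
trait Show[A]:
  def show(a: A): String

given Show[Int] with
  def show(a: Int): String = s"Int($a)"

def describe[A](a: A)(using s: Show[A]): String = s.show(a)

val d = describe(42) // the given Show[Int] is supplied by the compiler
// In another file you'd write `import mylib.given`, which is the
// import form that acknowledges you are pulling in given instances.
```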
I don't believe that this is true, because one can just write simple Scala, have pretty much everything that Kotlin has (minus its nullability magic), and never deal with these edge cases. Even compile times are decent then.
But with Kotlin, it is actually enforced that the more advanced features of Scala are not available, so it makes sure that compile times etc. stay short. It also simply reduces the burden on the IntelliJ devs, because it's not just "language -> IDE"; IDE capabilities impact how the language is designed too. And having a good IDE is very important for productivity.
The thing is, as an IDE author you can't just "write simple Scala", because you aren't the author of the code, you are the IDE. You have to be able to offer your refactoring tools and other tooling for all of it and cover all the edge cases.
LSPs do remove this problem to a large extent though.
I think I understand your posting now - what you are saying is exactly what I mean, but it's of course only true for IDE developers.
And actually, I'm not sure LSPs help that much. To my knowledge, IntelliJ still uses its own compiler so it can support the user at least partially when the code is incorrect and doesn't compile.
I will never forget the hours I lost debugging an issue where I had not realised that if you declare any variable in scope that coincidentally has the same name as an implicit you're relying on, the implicit will be shadowed and unable to resolve. Of course, it 'makes sense' when you consider that you would be unable to refer to it by name explicitly. But if you're not using the name, the fact that it can have an impact on visibility is pretty confusing.
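A minimal sketch of the pitfall, with my own toy names (Scala 2-style implicits, which still compile in Scala 3):

```scala
object Shadowing:
  implicit val greeting: String = "hello"

  def greet()(implicit g: String): String = g

  def ok: String = greet() // resolves to the implicit `greeting`

  def shadowed: String =
    val greeting: Int = 0 // same name, shadows the implicit...
    // greet()            // ...so this line would fail to compile with
    //                    // "could not find implicit value for parameter g"
    greeting.toString
```

The local `greeting` isn't even implicit, yet it still blocks resolution, because implicit search only considers values accessible by their simple name.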
It was a brilliant concept, but the ergonomics weren't great. It was like working with the lambda calculus. Sure, you can literally model anything, but it's not intuitive to most people.
What's an example of something they made more approachable?
One of my first impressions of Scala (which I have barely seen and do not know) is that there is a lot of syntax. Has anything been done to help with that? I don't think syntax matters much, but it's one of the only impressions I have of the language and wanted to ask about it.
> Scala is one of the smallest typed languages in terms of syntax.
Eh, that slide isn't really worth looking into. None of the languages there have a uniform way to represent the grammar. If you look at the C# and Scala grammars you'll find that the Scala one is highly compacted, whereas the C# grammar is intentionally not compact. They're encoded in different formats.
The fact that you can make any instance callable like a function by adding an `apply` method to it isn't just "syntax sugar", it's a pretty fundamental mechanic of FP-OOP fusion that Scala is built around. Same for `unapply`.
At any rate none of this "sugar" was a problem to anyone I know who learned Scala. There are harder things about it than learning basic syntax.
It's a long time since I used Scala, and none of this was ever a direct problem, but it was all extra complexity. I think there's a split between people who appreciate sugar and people who find it not worth the additional complexity.
That problem does lessen with familiarity, but knowing a lot of complexity makes me wary of unknown complexities. It adds an overhead which takes energy that could be better utilised elsewhere.
If you don't need that "complexity" (I wouldn't call it that), you can use a pure FP or a pure OOP language that will be simpler, smaller, but more restrictive and less expressive. That's a matter of preference of course.
But Scala's syntax and language features are great when you actually make use of them to accomplish your goals, especially so with Scala 3. The syntax isn't excessive or frivolous or nonintuitive. It's a pretty straightforward encoding of the desired feature set of the language.
Why do you consider it fundamental? Are there examples where it fundamentally does more than saving you from typing out the canonical method name? (This is for `apply`, I agree that match with unapply is actually language syntax and not sugar.)
Because it enables an equivalence between functions and objects. Not a technical one that's hidden deep in the implementation, but a practical one that's apparent to library authors and users.
If you want to make any object `foo` callable with first-class application syntax, i.e. `foo(bar)`, you add an `apply` method to that `foo`. You don't need to bother extending Function, you don't need to come up with a method name and call that. When you call `foo()`, you don't know if `foo` is a function or an object, and you don't care. You can find out easily with a half decent editor of course, but the point is, you're free to mix objects and functions as you wish, the language does not get in your way.
And for all that it provides, it's a complete non-issue in terms of learning curve. It's a basic language feature that you learn once and are never impeded by thereafter.
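For concreteness, a sketch; the `Adder` class is my own toy example:

```scala
// Any object with an `apply` method can be called with function syntax.
class Adder(base: Int):
  def apply(x: Int): Int = base + x

val addFive = Adder(5)   // `new` is optional in Scala 3
val eight   = addFive(3) // sugar for addFive.apply(3)
// `addFive` is an ordinary object, but callers use it like a function.
```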
I agree. But Scala still does certain things differently, so to a Java or Python dev it _looks_ as if there is a lot of syntax. Just like Haskell looks strange when you look at it for the first time.
I don't think people object to the language syntax itself, but rather the unfamiliarity and randomness of language conventions. Haskell/Perl/K/Scalaz is full of line noise I can't be bothered to learn, for example
Amazing!!! Great to see Scala improving at such a fast pace!
I'm especially excited about union/intersection types. I think we could all witness how well they work out in typescript and I believe that every language with the concept of subtyping should have this feature. It is one of the things that make working in a statically typed language feel much more dynamic/lightweight without giving up on safety guarantees.
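A quick sketch of both features in Scala 3; all names here are my own:

```scala
// Union type: a value is one of several alternatives, no wrapper needed.
def render(x: Int | String): String = x match
  case i: Int    => s"number $i"
  case s: String => s"text $s"

// Intersection type: a value must provide both capabilities at once.
trait HasId  { def id: Int }
trait HasTag { def tag: String }

def label(x: HasId & HasTag): String = s"${x.id}:${x.tag}"

val v = new HasId with HasTag {
  def id  = 7
  def tag = "draft"
}
```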
Amazing! That's the word that popped up in my mind too.
Scala 3 is now in shared first place among my favorite languages.
Union/intersection types: indeed, I use them all the time in TypeScript.
And opaque types look lovely too. All across my code base, I have type aliases like "UserId" or "PageId" or "PostNr" or "DraftNr" etc., and it'll be nice that these can now be type safe for real, not just type aliases. (Just waiting for opaque types to arrive in TypeScript too.)
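In Scala 3 that looks roughly like this; the wrapping object and all names are my own sketch:

```scala
object Ids:
  // Inside `Ids`, UserId and Int are interchangeable; outside, they
  // are distinct types with zero runtime overhead.
  opaque type UserId = Int
  opaque type PageId = Int

  object UserId:
    def apply(i: Int): UserId = i
  object PageId:
    def apply(i: Int): PageId = i

  extension (u: UserId) def raw: Int = u

import Ids.*

val u: UserId = UserId(1)
// val p: PageId = u // does not compile: a UserId is not a PageId
```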
> After 8 years of work, 28,000 commits, 7,400 pull requests, 4,100 closed issues – Scala 3 is finally out. Since the first commit on December 6th 2012...
I mean, it is a lot of work for sure. But is it such a fast pace?
There's some weird history there. For most of the project's existence, Scala 3 was known as Dotty. It was basically a research project, and it was recognized from the beginning that it was a different language than Scala, largely due to the type system semantics...the goal being a formally defined, fully sound type system (Scala had some edge cases that made it not sound, even if it was more sound than most languages).
However, the language resembled Scala, was created by the same person, had some cool capabilities that a lot of Scala users longed for, and it was a chance to "start over" on some of the more questionable design decisions that had been made previously, so there were a lot of questions and pressure to make it Scala 3.
So it was decided to no longer treat Dotty as a second language, but rather to find a path to migrate the existing Scala ecosystem to this new language. This transitioned the project from a relatively slow-paced research language into a full fledged and funded project with an urge for engineering not just tool and ecosystem compatibility, but also bytecode compatibility with Scala.
So for most of those 8 years, there was no objective but to create a better language than Scala. Then very rapidly (maybe the last two years), it went from "this is a completely new language" to "these language ideas will be incorporated into Scala 3", to "this will actually become Scala 3 and we're working on a transition path".
And from the perspective of someone who was aware of Dotty and a regular user of Scala, that felt like an extremely fast transition. Especially for a language that took years to make much smaller changes in the past (2.10->2.11->2.12 felt like an eternity).
You can dig through the git history of the readme for Dotty, it's actually pretty enlightening how Odersky's perception of it has evolved over time. For example, in 2014 he referred to dotty as an experimental compiler for a dialect of scala.
More of "we make this condensed core, based on Scala - and if we are lucky and it turns out that the current Scala core can be made sound easily, then it will become Scala 3, else it will be a completely separate language".
E.g. if it had turned out that you can't make implicits sound, then I doubt that Dotty would have become Scala 3. With existential types, however, they decided to remove the feature (which is a bit sad imho), sacrificing it to make Dotty become Scala 3.
Scala does not use semver. The last major release was 2.13.0, not 2.0.0. That's a bit confusing I guess.
What that means is, they haven't just added some features; they redesigned/rewrote the language from the ground up to simplify it and make it formally sound. I actually don't know any other language that has done that (though I'm sure there are others; maybe Haskell is one of them).
In addition, a lot of features are very impactful and unprecedented, such as match types. To me, for a mainstream language (even if a smaller one), it certainly feels fast paced.
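For readers unfamiliar with them, match types are type-level pattern matches. A sketch, adapted from the kind of example in the Scala 3 reference documentation:

```scala
// A type-level function: compute the element type of a collection.
type Elem[X] = X match
  case String      => Char
  case Array[t]    => t
  case Iterable[t] => t

// A term-level function whose return type follows the match type:
// the compiler checks each branch against the corresponding case.
def firstElem[X](x: X): Elem[X] = x match
  case s: String      => s.charAt(0)
  case a: Array[t]    => a(0)
  case i: Iterable[t] => i.head
```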
I'm familiar with VS code and have a set up that I'm familiar with (shortcuts & settings) and I've only briefly looked at IntelliJ, so I'm not qualified to answer this question. What is the reason you're using IntelliJ rather than VS code?
Worksheets are a simple concept: you write lines of code in them, and the IDE runs the code automatically as you type, and displays the return value of each line. If you're familiar with the concept of Jupyter notebooks, it's a bit like that.
I tend to not view it as a replacement for an ordinary code file, but as a replacement for a REPL session. It has several advantages:
* Keeping track of your previous commands in a file is much more convenient than keeping track of it in your input history of the REPL.
* You can keep re-editing previous lines easily, and you can see the output of all lines.
* There is no hidden state (unlike a REPL). Everything is run from scratch every time.
* It's easier to define helper functions than in a REPL.
* It's easier to turn a worksheet into a unit test than it is to turn a REPL session into a unit test.
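For those who haven't seen one: a worksheet body is just ordinary Scala, and the IDE shows the evaluated value next to each line, roughly like this (my own toy session; the trailing comments mimic the IDE's display):

```scala
val xs = List(1, 2, 3)      // xs: List[Int] = List(1, 2, 3)
def double(n: Int) = n * 2  // double: (n: Int): Int
val ys = xs.map(double)     // ys: List[Int] = List(2, 4, 6)
```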
That said, it's also possible to start writing a module in a worksheet, and then turn it into an ordinary code file later.
Worksheets aren't perfect yet; I can think of many improvements, but they're already very useful to me. I think the end game will be to merge the concept of worksheet, unit test, debugger, printf debugging, and ordinary code file into one system, like Sean McDirmid's usable live programming research work (which in my opinion remains one of the most underrated research projects).
I grinned a bit looking at that blog post and seeing how the highlighting of contributor names is completely messed up, presumably because of a bad regex.
Feels ironic, since Scala has a very advanced type system that would prevent such bugs, but the page contains the most classic issue of not handling non-ASCII characters properly :)
I was waiting for this comment. Every single thread about Scala has someone pushing this narrative that Kotlin is the real arbiter of change when it really has been Scala the entire time pushing boundaries on the JVM.
Kotlin is great and if you want a cleaner Java experience, by all means use it but Scala is real champion in pushing many, many things into the mainstream.
Java has not picked up a single feature from Kotlin yet (maybe nullability types someday?), and, in fact, opted for very different ones (e.g. contrast records vs. data classes, virtual threads vs. syntactic coroutines), although I guess Scala has provided some inspiration for some features, as Java's are closer to Scala's than to Kotlin's. ML is probably the biggest influence, with some Haskell flavour. But I find it amusing that people think that Java is trying to compete with other languages on the Java platform, no matter how small, rather than with bigger, more relevant competitors.
Java records and sealed types are its interpretation of product and sum types, and, true, this interpretation draws inspiration from Scala's. Deconstructing patterns are coming soon, but Java now does have real syntax for ADTs:
sealed interface Expr {}
record ConstantExpr(int i) implements Expr {}
record PlusExpr(Expr a, Expr b) implements Expr {}
record TimesExpr(Expr a, Expr b) implements Expr {}
record NegExpr(Expr e) implements Expr {}
For non-record classes, deconstructing patterns will be something like a dual to methods, and almost a first-class citizen. Among other things, they will allow patterns to support API designs that predate records and sealed types.
Ok, it's perhaps a no-true-Scotsman argument, but saying that Java has a syntax for ADTs is like saying JS (pre-2015) has classes because they can be simulated with function + prototype, or that Java (pre-8) has lambdas because they can be simulated with anonymous classes.
Java misses the syntactic sugar to define an ADT in one place like in Standard ML, OCaml or F#.
The closest you get is to declare records inside a sealed interface, but once you add generics in the mix, it starts to get messy.
I was not able to define a classical list with cons and nil in Java.
sealed interface List<T> {
record Cons<T>(T car, List<T> cdr) implements List<T> {}
record Nil() implements List {}
}
The compiler gives you warnings; Cons<T> and Nil are not top-level types, and you can create several instances of Nil.
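For contrast, here's roughly what the same list looks like as a Scala 3 enum (a sketch with my own names):

```scala
// The whole ADT in one place; the parameterless `Nil` case is
// automatically typed as MyList[Nothing], so a single instance
// covers every element type.
enum MyList[+T]:
  case Cons(car: T, cdr: MyList[T])
  case Nil

def length[T](l: MyList[T]): Int = l match
  case MyList.Cons(_, cdr) => 1 + length(cdr)
  case MyList.Nil          => 0
```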
Pattern matching will work fine with an additional explicit nil() pattern. I agree that in this particular case it's not quite as succinct as in more functional languages, but virtually all cases in ordinary business applications are not like that, and the additional code, if any, is O(1).
Of course, Java could add rules for the special case of zero-argument records, making them more enum-like, but I really think the value in doing that is negative. Other than being able to directly translate ML examples, it would add complexity that isn't worth it.
In fact, in this particular case I'd do it like so:
record List<T>(T car, List<? extends T> cdr) {
public List { if (car == null) throw new IllegalArgumentException("null element"); }
}
Which would give you:
switch(list) {
case List<String>(var car, var cdr) -> ...
case null -> ...
}
It might not be exactly what people who like Scala might prefer, but it is very much a good representation of ADTs for those who prefer Java.
What are you talking about? Java release notes look like a list of Kotlin features these days.
Sealed classes, data classes, multiline strings, coroutines -> project loom, switch statement improvements to resemble Kotlin when statement, improvements to Streams to match Sequences.
You do realize there are plenty of languages out there, and most novel features appeared first in some minor research language? Like, there is hardly anything original to Kotlin on your list: sealed classes have been in Scala forever, and are basically just algebraic data types. Data classes are... data classes. Multiline strings are cool but very basic syntactic sugar available in plenty of languages. Coroutines have been around, and Kotlin's coroutines are nothing like Project Loom; the latter is more similar to Go's threads. Switch expressions have been in Scala forever, and are a pretty basic thing in FP languages.
That's irrelevant. It's not like the language team didn't know about these features. The change to a shorter release cycle and increased funding are much more behind the speed-up of Java updates.
You keep claiming that the language team's knowledge of a feature drives that feature's adoption.
That's not accurate.
Popularity of a feature and enterprise demand drive its adoption.
It doesn't really matter what the language team knows, it matters what the average developer knows since their collective wants are in large part what drives the language team.
There are hundreds of thousands of Java devs who only ever wrote Java and suddenly had to write Kotlin, and who are now clamoring for these feature improvements in Java.
1. That's not really right. You're correct that "readiness" is important, but Python and JS are much bigger forces than Kotlin, and the main reason Java is getting features faster is because the investment in the platform has grown (and Kotlin was created when it was at an ebb). But so far, almost all linguistic features added after 1.0 -- generics, lambdas, type inference, ADTs, patterns -- are inspired by ML.
2. Java addresses those topics (where relevant) pretty much in the opposite way from Kotlin. Java rejected coroutines and is going down a completely different path with virtual threads; Java rejected data classes and chose to go down algebraic data types. So if you think Java gets inspiration from Kotlin, then it is in the form of what not to do.
I would agree with you, Java routinely uses the "last-mover's advantage" tactic, but I think you overestimate the relative sizes of the two languages. These are all very basic FP "features"/language primitives, and Java is steering strongly towards becoming an ML-inspired FP-enhanced language.
If anything, the acceptance of more parts of the FP paradigm into all major languages (like C#, JS/TS, C++) starting with lambdas likely have pushed Java, but to single out the all-around very niche Kotlin language as a source is imo a non sequitur.
These aren’t ‘Kotlin features’ they’re just normal language features.
I really struggle to see Kotlin as being part of the debate here. These features are being driven by enterprise Java user pain points. And those people aren’t looking at Kotlin.
I agree that this would probably have happened in the Java world regardless of Kotlin. However, I disagree with the statement that enterprise Java users are not looking at Kotlin. From my view (in a large, global consulting business), we see large enterprise Java users adopting Kotlin.
>These features are being driven by enterprise Java user pain points
By Java devs who only ever wrote Java, and suddenly had to write Kotlin for Android. They're now aware of these features as a result and are demanding them in Java.
Scala was probably the major factor here. However, the fact that Kotlin picked up these features from Scala (or was at least inspired by it) also proves that they are mature/stable, and that has always been important for Java - so Kotlin did have an impact as well.
It doesn't matter "who did the trick". It is well known that Java was influenced by ideas in other programming languages from the beginning. And also other languages - including Scala and Kotlin - are similar in this respect... So yes: Kotlin influenced Java and Java influenced Kotlin. Let's compare final products with final products and not "how did we get there"...
Kotlin doesn't sound like a good language to bet on, since most of the (JVM-suitable) features it has will be integrated into Java now that Java has picked up the pace.
I guess it only has decent market share because of Google's Android support (they are throwing stuff at the wall to move away from Oracle IP, I guess) and JetBrains being very popular among Java devs.
That story regarding Kotlin and Oracle doesn't sell, because unless Google rewrites the Android toolchain in Kotlin/Native, there is plenty of JVM infrastructure to deal with, and so far there have been no signs of fully replacing Android's Java.
The sour grapes of the Kotlin/Android marriage is that, going forward, one will need KMM to share code between the JVM and ART, or be happy to just use what ART understands.
I don't know about betting on it, but the type-before-identifier syntax has always been a pet peeve of mine, so I'd never willingly use Java if I have a choice. And null handling. Fuck Java's ridiculously verbose and error-prone null handling.
I'll be sticking with Kotlin at least until Java can do better at killing null pointers than @NotNull. Optional is good, but not widely used enough to handle most cases.
I was recently evaluating Kotlin vs Java for a new Android app. Two factors made me choose to bet on Kotlin.
1) Jetpack Compose is very loudly Kotlin-first
2) I did a survey of Android developer job postings. Literally ALL of them were for Kotlin, with Java mentioned as a nice-to-have roughly 50% of the time.
On a technical note, I don't see Java fixing NPEs, checked exceptions, or general verbosity any time soon. Java gets to live with its legacy of bad choices; Kotlin is able to stay clean while still providing backward support by integrating with old Java code.
One either gets to say where the platform goes, or keeps playing catch-up with its features, requiring additional tooling, wrapper libraries, care about FFI, and IDE plugins that understand all of that in a coherent way.
What they probably meant is that both Scala's and Kotlin's improvements pushed Java to improve. Both languages had many different features that back then weren't available in Java but since then got added to the Java language, and that made Java better, too.
I still don't see how Kotlin is involved. People drafting JEPs are well aware of how modern languages, including the ones running on the JVM, have evolved in the past decades. Kotlin is pretty late to the party and hasn't really brought anything new to the JVM (by choice), except for coroutines maybe.
And they were in Scala, Groovy, C# ... before Kotlin was even a thing. There's zero evidence people on the OpenJDK governing board care about Android at all, rather the opposite.
Except Scala brought along some improvements and a whole menagerie of other features to manage, Groovy with its gradual typing was simultaneously too far from Java and yet too similar, and C# means opting into a whole different platform and perpetually living downstream of whatever MS wants to do.
Kotlin absolutely nailed the "meaningfully better than Java while staying pedantically true to the existing semantics" in a way that none of the alternatives did even if they did come first.
Here's a quote from Reynolds, one of the creators of Spark:
> The primary issue I can think of comes from the lack of pattern matching. Kotlin’s language designer left out pattern matching intentionally because it is a complex feature whose use case is primarily for building compilers. However, modern Spark (post Catalyst / Tungsten) look a lot like compilers and as a result the internals would become more verbose if built using a language that doesn't support pattern matching.
Most new Java language features seem to chase Groovy features more than others, but then again many of the JVM langs add similar features to get ahead of Java's limitations.
I like to refer to it as Haskell with an extra chromosome: slow, hard to comprehend, and incredibly strong.
I'm lucky enough to write it in my day job and I feel like I'm living life on easy mode because of it. It'd take a lot of money to lure me back to garbage like Python/JS at this point.
It has one advantage compared to Haskell though: as a Java/Kotlin (or even Python or JavaScript) developer, you can start with it and be productive immediately. You can then slowly absorb new concepts until you are pretty much ready to write full-fledged Haskell.
That's why some call it Haskellator. :)
This also means that many codebases have different styles though, whereas in Haskell you have one style (FP).
Then again, considering the JVM's disastrous recent trajectory (Loom, JPMS, renewed wasted focus on Java-the-language, and so on) that doesn't seem like such a great upside anymore.
The primary problem with threads isn't their performance, but the way that they encourage bugs and race conditions by decoupling data access from flow control.
They're fine as a low-level API for implementing a more usable API, but not as the primary API that you expect application developers to use.
Java/the JVM provides a low-level API for threads on top of which abstractions can be built. How do you think clojure implements their parallelism?
And Loom will be absolutely killer feature, it’s not about performance, it’s about blocking. Abstraction built on top of them will be easier to reason about, no callback hell, etc.
> Java/the JVM provides a low-level API for threads on top of which abstractions can be built. How do you think clojure implements their parallelism?
As I already wrote? But doing M:N scheduling at the JVM level (which is ultimately what Loom is) won't help your future (/promise) runtime which already implements a thread pool internally.
> And Loom will be absolutely killer feature, it’s not about performance, it’s about blocking.
If you like the blocking model and don't care about performance then you can go and use native OS threads in the same way right now.
M:N scheduling (including Loom) is ultimately about making threads cheaper to create, which means that theoretically you can have the "best" of both worlds (performance of futures, API of threads). My point is that this is a nonsensical goal, because the API of threads is awful.
> Abstraction built on top of them will be easier to reason about, no callback hell, etc.
Until you try to introduce concurrency, at which point you're back to the hellscape that is manual multithreading.
A big moment for Scala. I've used it professionally for 5 years and am still in love with it. The expressiveness, the power, everything about it is a joy. This just takes it to another level by removing a few warts and simplifying it whilst adding some amazingly useful new features. A really tricky balance needed to be struck.
I think they've pulled off an unlikely and amazing feat with this release.
I get paid to write TypeScript; it has come a long way toward being a relatively enjoyable experience, though not as expressive as I feel with Scala. Looking forward to giving Scala 3 a spin to learn what's new.
As someone who has spent a lot of time diving deep into Scala (I worked on compiler-related semantic tooling for scalameta, worked on an experimental parallelizable Scala compiler, and even have a single commit in this release from almost four years ago LOL), and who more recently has been working with TypeScript, I find this really interesting and agree in some ways.
Scala 2.x already had path-dependent types, which combined with implicits and type inference were technically sufficient to implement versions of the dependent function types we have now in 3.x. In my opinion, how you did this previously could get really clunky (see e.g. hlist mapping in https://github.com/milessabin/shapeless/blob/8933ddb7af63b8b...). Lots of things frankly felt like casting magic spells, and there were sometimes fragile and weird bugs, especially with how implicits were resolved; see e.g. https://github.com/scala/scala/pull/6139.
The native type-level meta-programming coming in 3.x along with a lot of streamlining of the underlying type system and implicit resolution rules (e.g. https://github.com/lampepfl/dotty/pull/5925) could help a lot.
This is just one example of a more advanced feature, but there are lots of things like this that changed in Dotty (maybe the most interesting is the DOT calculus that makes a better backbone for the type system). I don't know what tooling for Scala 3.x looks like, but even Intellij's frankly amazing Scala plugin broke down sometimes for 2.x.
After using TypeScript, I just personally enjoyed the experience more. Technically TypeScript's type system is about as powerful as Scala's, but the primitives for expressing more complicated types just seem much easier compared to the path-dependent stuff you would need to do in Scala.
I was amazed to find stuff like this https://github.com/aws-amplify/amplify-js/blob/dedd5641dfcfc... in mainstream, wide usage. I'm not even sure how you could write these types of transformations as well in any version of Scala. I suppose maybe records would work, or in this specific case there are functional patterns that are preferred.
Caveat: I haven't followed Dotty for a couple of years now, so I could be missing a lot. Regardless, I have really enjoyed the tooling around TypeScript as well as the design. The real-world usage of more complicated dependent types (and there are good uses IMO) has been more fluent and easier to understand, and less dependent on tricky behavior like implicit resolution.
VS Code's tools are really magical, and I hope with some of the amazing underlying work done in Dotty the Scala developer experience could look more like that.
I think TS's type system still stays stronger than Scala's in some areas (and vice versa). But I believe the links you posted (like https://github.com/aws-amplify/amplify-js/blob/dedd5641dfcfc...) can now be done in Scala 3, mostly thanks to the introduction of match types.
In general, Scala focuses a bit more on theoretical foundations and sound features (which makes progress slower), whereas TS focuses more on practicality. That's pretty exciting, because it also means that TS leads to more "experimentation", and if something works out well (and can be proven to scale into the future and interop well with other features) then it can be added to Scala in a more general/abstract way - which also makes it easier for other languages (like Java) to pick it up.
Also there has been movement in the user land space with libraries like ts-pattern that make use of new features in typescript 4.x to provide basically the full pattern matching experience:
I'll give some brief context first for why it could be useful, then explain what DeepWritable does. AWS Amplify JS has a library called DataStore that takes a GraphQL schema and generates ActiveRecord-like model classes (https://github.com/dabit3/amplify-datastore-example/blob/mas...). Then you can create, query, update, and delete these models in your web app client. The generated classes are immutable and have read-only fields by default. This means none of the model objects will change under you before they've actually been persisted, which is a good guarantee to have IMO.
To make updates, you create a new model object with the same ID and whatever changes you need, and persist it back. DataStore uses an immer-like mechanism where each model object has a copyOf method, to which you pass a function that takes a mutable version of the model and makes whatever mutations.
Then the copyOf method returns a copy of the model object that has all the new field values, leaving the original object the same.
The DeepWritable type expresses the "mutable version" of the model by making all fields writable, so for example the TypeScript compiler won't yell at you for writing to fields on the mutable version of the model. It also does this recursively for any fields whose type is an object that isn't a primitive.
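A minimal sketch of such a recursive mapped type (an assumption of mine, not the actual Amplify definition; the `Post` model is made up for illustration):

```typescript
// Sketch of a DeepWritable-style recursive mapped type (not the actual
// Amplify definition). `-readonly` strips the readonly modifier; fields
// with object types are mapped recursively.
type DeepWritable<T> = {
  -readonly [K in keyof T]: T[K] extends object ? DeepWritable<T[K]> : T[K];
};

// Hypothetical immutable model:
interface Post {
  readonly id: string;
  readonly meta: { readonly tags: readonly string[] };
}

const draft: DeepWritable<Post> = { id: "1", meta: { tags: ["a"] } };
draft.id = "2";            // OK: readonly removed
draft.meta.tags.push("b"); // OK: removed recursively, including the array
```

The homomorphic mapped type also turns `readonly string[]` into a plain `string[]`, which is why the `push` above type-checks.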
I was looking at dotty/scala 3 earlier in the year and I was quite surprised to see familiar friends from Typescript in the form of literal types and union types.
This creates two types with quotes in their type names, and then creates a union type of the two? Then the compiler needs to determine if the token's context requires a type or a value, and disambiguate accordingly?
In the example above, the created type "Scala" is distinct from a type Scala, correct?
Literal types quite simply mean that values are types. For example, in TypeScript the literal type "Foo" is a special string type where the only valid value is the literal "Foo"; assigning "Bar" to a variable of the type "Foo" is a compile-time type error. Literal types can be combined into union types like any other type.
It can be used as an enum and might be more convenient in places where you don't need to bother converting between string data outside the system and enum data types.
It also plays nicely with union types, so you can very quickly define ad hoc enums directly in your function declarations, which might be useful if there are only one or two usages of a particular enum, or in a section of code where the set of allowable values changes quickly.
You basically get a lot of expressiveness without boilerplate, which can be pretty convenient.
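As a small sketch of that expressiveness (the `align` function here is made up): the union in the signature acts as an ad-hoc enum, with no separate declaration needed.

```typescript
// The argument type is an ad-hoc enum; the switch is exhaustiveness-checked
// by the compiler, so no fallback return is needed.
function align(direction: "left" | "right" | "center"): number {
  switch (direction) {
    case "left": return 0;
    case "right": return 1;
    case "center": return 2;
  }
}

align("center"); // → 2
// align("top"); // compile-time error: "top" is not in the union
```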
Conceptually, it's very much like an enum. Except that you can't enumerate them at runtime because there is no runtime representation. This can be very frustrating.
There's a hack, it looks like:
// `as const` keeps the literal element types instead of widening to string[]:
export const VERBS = ['GET', 'POST', 'PUT', 'DELETE'] as const;
type VerbTuple = typeof VERBS;        // readonly ['GET', 'POST', 'PUT', 'DELETE']
export type Verb = VerbTuple[number]; // 'GET' | 'POST' | 'PUT' | 'DELETE'
A real enum would be better, but would violate the "typescript is just javascript" rule.
This just makes clear and easy to use something that people previously did with type-level magic (see shapeless's Witness).
Something people used through ugly hacks becoming a native part of the language, in an easy-to-use, easy-to-understand (and faster-to-compile) way, sounds very much like pragmatism to me.
I can think of plenty of cases where I would want to limit the input space of a function to a known set of literals.
Just as an example: the `canvas.getContext` [0] API takes a string argument to select the canvas context.
Without literal types, you cannot restrict this to a set of valid values. Literal types give you the power to tell the compiler which values are valid.
Since there are other programming languages that support literal types, it's not just Scala sophistry. :-P
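A hedged sketch of how a string-argument API like `getContext` can be constrained with overloads on literal types (the `Ctx2D`/`CtxWebGL` types here are made-up stand-ins, not the real DOM types):

```typescript
// Made-up stand-ins for the real context types:
interface Ctx2D { kind: "2d" }
interface CtxWebGL { kind: "webgl" }

// Overloads on literal types: the argument's literal type picks the return
// type, and anything outside the union is rejected at compile time.
function getContext(id: "2d"): Ctx2D;
function getContext(id: "webgl"): CtxWebGL;
function getContext(id: "2d" | "webgl"): Ctx2D | CtxWebGL {
  return id === "2d" ? { kind: "2d" } : { kind: "webgl" };
}

const ctx = getContext("2d"); // statically typed as Ctx2D
// getContext("3d");          // compile-time error: not a valid context id
```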
Interesting, and I presume it works for singletons of any type, correct?
val x: 1 = 1
val y: 2.718281828 = 2.718281828
That's kind of cute. I presume its main purpose is for overrides, in order to get something similar to template specialization. C++ templates can be specialized by types and by values, but Scala method overrides are only by type; literal types allow you to present a value as a type, so if the value can be statically proven to match, then the override is called?
In TypeScript, perhaps the most important use case for literal types is the ability to create discriminated unions, or algebraic datatypes (ADTs). Many modern languages have a separate feature for this (such as enums in Rust), but in TypeScript they can be constructed using literal and union types. Here's an example:
interface ProductOffer {
  kind: 'productOffer'
  eans: Ean[]
  discountPercentage: number
}

interface GroupOffer {
  kind: 'groupOffer'
  groups: GroupId[]
  discountPercentage: number
}

type Offer = ProductOffer | GroupOffer
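To show why the `kind` discriminant is useful, here is a sketch of how the compiler narrows the union in a switch (with `Ean`/`GroupId` simplified to `string` so the snippet stands alone):

```typescript
// Ean/GroupId simplified to string for a self-contained example.
interface ProductOffer {
  kind: "productOffer";
  eans: string[];
  discountPercentage: number;
}
interface GroupOffer {
  kind: "groupOffer";
  groups: string[];
  discountPercentage: number;
}
type Offer = ProductOffer | GroupOffer;

// Inside each case the compiler has narrowed `offer` to the matching
// variant, so variant-specific fields are accessible without casts.
function describe(offer: Offer): string {
  switch (offer.kind) {
    case "productOffer":
      return `${offer.eans.length} product(s) at ${offer.discountPercentage}% off`;
    case "groupOffer":
      return `${offer.groups.length} group(s) at ${offer.discountPercentage}% off`;
  }
}

describe({ kind: "groupOffer", groups: ["g1"], discountPercentage: 10 });
// → "1 group(s) at 10% off"
```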
I think it is important to emphasize that there is actually an important difference.
Union types don't allow you to model GADTs, so they are less powerful than what some other languages offer. But on the other hand, they allow for much easier composition. E.g.:
type Citrus = "Orange" | "Lemon"
type Prunus = "Plum" | "Apricot"
Now in typescript you can just do:
type Fruit = Citrus | Prunus
But in many other languages it's not so easy, and you have to jump through hoops, e.g. redefine the types and/or refactor the code (if you can even do that; if the code comes from 3rd-party libraries you are out of luck).
It's not really about specialization (types in Scala tend to be parametric most of the time) so much as just being able to have values in types, so that you can have types like "ListOfLength[1]" or "MapContainingKeys["Foo" :: "Bar" :: Nil]" without having to do some horrendous encoding of those things as types.
Literal types ("Scala" instead of String) existed before typescript to my knowledge. But union types are such a great thing. I'm very happy to see Scala catching up with typescript, as it has proven to work out really well.
I played with it a bit because I really like TS's union, literal types and smart-casting, unfortunately it doesn't seem quite there yet and I found it only usable for T | null types.
Syntax is all there so maybe it'll get better in 3.1+
I encountered them quite soon, e.g. something like this [1]:
val result: Int | "INTERNAL_SERVER_ERROR" = 1
val m: Int = result match {
  case "INTERNAL_SERVER_ERROR" => throw new Exception("Uh oh")
  case t => t
}
It does work with a ClassTag and t: T, but then it thinks the match is not exhaustive. Compare that to Typescript [2].
In this specific language feature TS is kinda the gold standard. For example they can automatically narrow the type based on fields existing or not and their type, which can be used to provide a pretty decent implementation of sum types [3]
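A minimal sketch of both kinds of narrowing, mirroring the Scala example above and the field-based narrowing mentioned here:

```typescript
// Narrowing by literal comparison, mirroring the Scala match above:
function unwrap(result: number | "INTERNAL_SERVER_ERROR"): number {
  if (result === "INTERNAL_SERVER_ERROR") {
    throw new Error("Uh oh");
  }
  return result; // narrowed to number; the check is exhaustive
}

// Narrowing based on a field's value (a simple sum-type encoding):
type Ok = { ok: true; value: number };
type Err = { ok: false; error: string };

function getValue(r: Ok | Err): number {
  // After testing r.ok, the compiler knows exactly which variant r is.
  return r.ok ? r.value : 0;
}

unwrap(1);                           // → 1
getValue({ ok: false, error: "x" }); // → 0
```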
Someone ran some tests on the Scalaz codebase, and Scala 3 is around 2.5x faster. While Scalaz (and Cats) isn't exactly like your typical Scala production codebase, I think we should see a noticeable improvement across all kinds of Scala code.
Scala Native and GraalVM Native Image are projects with different goals, so I wouldn't say that one makes the other unnecessary.
Both projects aim to compile to native code, and have use-cases for projects where the JVM startup time is too high. However, one of the main goals of Native Image is to offer as much partial evaluation (PE) at compile-time as possible. Scala Native does also seem to do some PE, but my understanding is that it's less than what Native Image does. However, Scala Native has the advantage of working on a representation of the source code instead of the byte code, and may therefore be able to do certain Scala-specific optimizations that would be more difficult for Native Image.
I think different projects may find that either one or the other project may be more suitable to their needs, so I think both projects can coexist.
> However, one of the main goals of Native Image is to offer as much partial evaluation (PE) at compile-time as possible.
This is tangential, but I wonder if Native Image's focus on compile-time PE, and the overall design of GraalVM, would make it feasible to AOT-compile a sufficiently static subset of JavaScript to efficient native code. If so, that could influence my choice of language for new projects.
First, AOT binaries are slower than JIT-compiled code; there's nothing efficient about that, at least for high-level languages.
Second, you can already make JavaScript binaries thanks to GraalJS.
The biggest added value by far is the polyglotism/interop with the other language universes.
Not a fan of the optional brackets. It feels like a huge, tone-deaf step back. It's also being aggressively pushed for by the founder. Also, it adds "if then end if" to the mix as well. Sigh.
I am excited about Scala 3. Scala is a powerful tool that can make teams hyperproductive. Here are the tooling shifts that can broaden the Scala 3 userbase:
* Community switching from SBT to Mill
* Community agreeing on automated code formatting and everyone using the same scalafmt settings. After using automated code formatting tools like black for Python and gofmt, programmers really don't want to talk about whitespace formatting on every PR. Scala has a lot of syntax, so this is more important than you might imagine. They need to change from this detailed style guide (https://docs.scala-lang.org/style/) to "use these scalafmt settings and don't talk about code formatting anymore"
* Make it clear what libraries are recommended for certain tasks. If you search for "scala json libraries" you'll arrive at these 16 options: https://stackoverflow.com/a/14442630. Devs don't want to do an extensive research project to figure out what JSON lib to use.
* Fill in "obvious library gaps" with good solutions. Scala needs a graph library that's usable like networkx for example. Li's libs are so popular cause he exposes beautiful Python-like interfaces (with crazy, performant Scala implementations under the hood).
* Embracing devs that want to use Scala as a "better Python" rather than giving them the message "if you use Scala, you need to do it the Scala-way". Scala is a multi-paradigm language and using the language in different ways should be encouraged.
* Getting the community to use the Principle of Least power is perhaps the hardest: https://www.lihaoyi.com/post/StrategicScalaStylePrincipleofL.... I find the doc to be cut-and-dry, but have found it less useful than I originally thought for resolving technical disputes.
The hard work has been done to build Mill, utest/munit, scalafmt, metals, etc. Scala just needs to make some opinionated choices so the ecosystem is appealing to people who aren't as into programming.
It'll be interesting to see if Spark is able to upgrade to Scala 3 or if they get stuck on Scala 2.13.
> Getting the community to use the Principle of Least power is perhaps the hardest
This hurts so much. The nerd in me wants to gush over the improvements in Scala 3, but that pleasure has been ruined by observing in my own coworkers that the people who are the most excited about Scala are terribly self-indulgent in their work. I know when my coworker says he's excited about @inline that our first Scala 3 service (which I'm guessing he will roll out next month) will have more than one use of @inline, for no other reason than his desire to learn it. I am a huge fan of Scala, but I am developing a knee-jerk Grinchiness about any kind of excitement about it.
I hate the way my feelings have changed, but still, I think we have to wrestle with the fact that in many if not most software organizations, the Scala advocates are people who enjoy having fun with technology and leveling up their skills and who cultivate an intentionally and stubbornly unexamined assumption that continually adding new programming powers to their skillset is the best way to serve the needs of the business and the needs of their coworkers.
If I could wish for one impossible thing for Scala, it would be widespread top-down forced adoption, so that lots of people who aren't particularly enthusiastic about Scala are forced to use it. I think people who are less enthusiastic about Scala write better code in Scala. I think they could improve Scala (and Scala could improve their programming as well.) What I'm seeing now in practice is Scala being avoided by those people who have the skepticism and discipline to use it well, and embraced by people whose engineering decisions are determined by their feelings of curiosity and boredom.
> Make it clear what libraries are recommended for certain tasks
I think no community has this problem solved, and I also think it is unsolvable unfortunately. The need to experiment seems to be human nature, and we are still really terrible at designing for code reuse, so small, subtle feature differences often require a totally different interface / library.
I don't disagree with the desire to solve this problem, I just think it's completely unrealistic and not rooted in reality. It reminds me of when people say "we should make this simpler."
Yes, that's a great goal, and one that is easy to say but not easy to achieve.
Speaking of Li's work, he's also done a lot to improve the REPL and scripting experiences for Scala. Combined with Scala 3, this is great for rapid prototypes, simple tools, and glue.
I'm in need of a functional language for a small part of my system and I'm considering Scala and Purescript[0] (mostly because of their ecosystems). Maybe Scala 3 is a good time to be boarding Scala's ship.
I'd appreciate it if anyone has any advice on one vs. the other.
I wouldn't jump on the Scala 3 ship just yet. Maybe if it's mostly a learning project, but not for an important part. While it was not a surprise release, it will be a long time until a majority of the libraries and frameworks also work with it.
The big ones are mostly there, but there are lots of smaller ones that are not and there will be teething problems, even after a long alpha-beta-rc train. Scala 2.13 is still great.
Not sure what Play Framework is doing at the top of the list, will most likely be many months before a Scala 3 supported version is released.
Same for Spark, DB libraries, Akka, etc. Basically everything with large dependency graphs that depend on macros or removed Scala 2 features (e.g. abstract type projections, arghhhh) will take significant time to port over to Scala 3.
What has had Scala 3 support from early on in the release cycle are the FP/Typelevel projects, which are obsessively maintained by the I'll-sleep-when-I'm-dead FP crowd.
It probably is still the most used framework, probably by quite a margin. Whilst I mostly use Http4s these days I would guess the majority of new projects still pick Play. I would say 60% but would not be surprised if the actual number is 95%.
The silent majority just need to chuck out another internal app quickly and Play is great for that.
It also looks like it is still heavily developed as 2.9 is released soon.
Are you considering them together, or against one another? I'd have considered Purescript, which compiles to javascript, to compete with Scala.js, but I hadn't realized (but now do) that purescript has a serverside component as well.
Sure, but purescript has to exist as a node package, which it does! I just didn't know that. Last time I'd looked the landing page was way more browser-focused.
May I suggest OCaml? It's a pragmatic functional programming language that's often used for finance applications, e.g. derivatives analysis at Bloomberg: https://youtu.be/r6mYb2w_6po . OCaml's strong static type system makes it very easy to write correct code, and its fast compile speed and near-perfect type inference make it very fast to iterate on code.
It can also target JavaScript if you need to deploy something on browsers.
If it's a small part of your system take that one that integrates best: JS => Purescript, JVM => Scala, .NET => F# etc.
I wouldn't call Scala3 quite "start-a-project-with-it" ready because the ecosystem will need a couple of months at least to update, but Scala 2.13 should suit your needs more than fine.
I think the best evolution of a programming language is when some features are removed and replaced by a unifying concept which makes the language both smaller and more expressive. Has that happened in Scala 3?
In some areas it is explicitly the opposite that happened, and in a way that is for the best — see implicits.
Scala 2 is a very flexible language and a lot of outsider complaints seem grounded in this flexibility.
Scala 3 is more opinionated in some ways:
"Opinionated: Contextual Abstractions
One underlying core concept of Scala was (and still is to some degree) to provide users with a small set of powerful features that can be combined to great (and sometimes even unforeseen) expressivity. For example, the feature of implicits has been used to model contextual abstraction, to express type-level computation, model type-classes, perform implicit coercions, encode extension methods, and many more. Learning from these use cases, Scala 3 takes a slightly different approach and focuses on intent rather than mechanism. Instead of offering one very powerful feature, Scala 3 offers multiple tailored language features, allowing programmers to directly express their intent
"
I think it is a very practical and grounded focus change.
It’s a shame it’s got less opinionated in other ways. Scala developers must be really looking forward to the endless debates over curly brackets vs white space.
I still can't believe they went for that. I wouldn't really care if the team wanted to switch to significant indentation, but it's absurd to provide two altogether different syntaxes just for the sake of it.
I don't think it will be much of a problem. Braces are already optional in Scala 2 if you only use a single expression. The change just gives you the option to be more consistent and remove braces everywhere.
Scala 3 isn't such a language, most of the stuff people use is still there.
The biggest change touching your comment is replacing one feature (implicits) with a couple more tailored features (extension methods, conversions and context parameters). In practice they were very powerful and versatile, but also complicated, finicky and bug-prone. Feels a bit like pointers in that regard.
I totally agree with that sentiment, not only about evolution but in general. A good programming language tries to use unifying concepts as much as possible.
Fortunately, Scala is one of the best (statically typed) languages in that regard that I ever used.
- If you're at least a little familiar with Java, Scala for the Impatient is pretty good, though a little dated (I think the last edition covers Scala 2.12).
Note: Scala 3 introduces quite a bit of new syntax that is specifically designed to make intimidating features less so. But at the beginner/intermediate level it won't matter too much, for instance if you learn how to implement typeclasses the old way, you'll easily understand the new syntax. Also the Scala Center has committed to providing a lot of training material along with the Scala 3 release.
The thing with Scala is it depends on the pattern you want to use. I am a big advocate of it, and of using the functional pattern fully with something like Cats or ZIO. However, I would not suggest starting there.
I would start by playing about with the syntax and writing standard OOP code, as its probably what you're most familiar with.
When you want to dive into the power of FP, I recommend http://eed3si9n.com/herding-cats/. It's basically a companion to the Haskell book 'Learn You a Haskell' (http://learnyouahaskell.com/) that takes the theory coming up in that book and translates it to Scala. I found it the most direct explanation of the different types and theories (Monads, Functors, Applicatives, etc.).
If you know Java then you can start by writing very Java-like code using the same libraries you would in Java - there are a lot of intimidating functional techniques available but you don't have to use them from day 1 (indeed I'd say you probably shouldn't use them until you understand why you would want to). IMO you're much better off picking a small corner of a JVM project (e.g. maybe a migration tool or a couple of tests) and using a little Scala in practice, rather than trying to learn it all to start with.
One of the greatest things about Scala is that you can just start if you know Java and be productive. Just write your code like you would in Java, with a slightly different syntax. Then, when you use libraries, you will pick up new concepts step by step. And if you want it so, you can write pure functional code in Scala, similar to Haskell code.
If you start with Haskell, you are forced to do pure functional programming. That is much more intimidating and less productive, but also better if you want to force yourself to learn it.
I think the Scala 3 book [1] might be what you're looking for!
There's also an ongoing video series called "Let's talk about Scala 3" by the Scala Center and 47 Degrees [2], which are more in-depth on certain topics, but still quite beginner-friendly. Especially the video on setting up a development environment, and the video on Scala 3's data structures might be good introductions.
I think it's less true nowadays. You have beginner's books (Scala from Scratch, Hands-on Scala) that assume nothing, and more advanced libraries (in the Typelevel or ZIO ecosystems) that provide high-level abstractions for real world projects. You can't entirely avoid being exposed at some point to the JVM inner workings and/or Java libraries that don't necessarily play well, but this doesn't have to happen right away.
I mean "just better" at every use-case? It eliminates the cost of functions where JVM languages cannot due to its limitations. I really don't think I'm treating software like fashion - I'm speaking from years of practical experience here.
The main thing FP on the JVM does better is be permitted by your employer lol. So if we're talking runtime vs runtime, there's no use-case. In general, it's all soft/social reasons.
It is so bad that in Julia, functions are not first class.
Instead, I would love to use Scala 3 for machine learning, since its type system is perfect for such a task, but the linear algebra support is not good enough.
Actually, Julia functions are all subtypes of Function. Every Julia function has a unique type.
We don't parameterize a function by its input and return types because functions are extremely polymorphic in Julia. However, there are lots of tricks one can do to dispatch on whether or not a given function has methods that satisfy some sort of interface, see here: https://github.com/oxinabox/Tricks.jl#we-can-use-static_hasm...
Using this technique, you can do compile time dispatch on things like input and return types.
Functions are polymorphic in many languages that let you annotate their types. The concern here is not really dispatch; it's readability/correctness.
N-dimensional arrays in Scala 3. Think NumPy ndarray, but with compile-time type-checking/inference over shapes, ndarray/axis labels & numeric data types.
Yes, it uses ONNX Runtime / MLAS (their native BLAS lib) under the hood. And yes, there is copy overhead but you can eliminate it internally to a single function/graph by compiling it down to a single ONNX model. The end result is within ~15% run time of PyTorch w/ MKL when training a reasonably-sized MLP. And ORT also provides support for CUDA and a number of other "execution providers".
Nd4j (another project) enables using CUDA/MKL state-of-the-art backends. It does make native calls, but I don't think the overhead is high. Besides, some of that overhead is being optimized away in the JNI successor.
I largely agree. It was (and is) a contentious issue, but I think it helps a lot in removing textual noise, in most cases. And it's always possible to add the brackets in where clarity helps.
So, they removed implicits and introduced a number of other tools to replace common usecases for implicits.
Can we take a step back now, and reflect on what was the original problem implicits were designed to solve and how it is solved now in Scala 3? I've heard the original problem was that they wanted a chain of functional transformations to return the same type of collection (unlike, say, Clojure or Java 9 which return a Stream).
Implicits haven't been removed, they've just shuffled the keywords around to appease the 'implicit bad' crowd. Type classes are an incredibly important language feature, and implicit parameters are a generalization of the required functionality.
They didn't "remove" implicits in the absolute sense but they made it much more ergonomic to work with implicit-like behavior using the "given" keyword now.
"Scala 3 takes a slightly different approach [from Scala 2] and focuses on intent rather than mechanism. Instead of offering one very powerful feature, Scala 3 offers multiple tailored language features, allowing programmers to directly express their intent"
So rather than something like implicits, that can be used N different ways, only 2 of which are practical, Scala 3 adds those 2 features explicitly.
CanBuildFrom got killed in 2.13, but taking a step back is exactly the right thing to do:
The desire to return the same type of collection led to overall slow performance; it turned out to be a design dead-end that Scala is still suffering from.
It should simply have been abolished and replaced, instead of applying some minimal fixes in 2.13 that preserved the wrong design choices.
Scala was a unique language a few years ago. I'm very happy I adopted it a few years ago, and I owe it a lot - as do a lot of other modern languages, and features of mainstream ones.
However, with the alternatives that exist nowadays, Scala doesn't hold the same position. I have not coded in Scala for ~2 years and I'm not sure I'll ever need to.
Anyway, all the best for Scala's awesome team and its ecosystem.
Fascinating to see not just the breadth and depth of new features but also what has been targeted. Stuff like Python-style significant whitespace[0] was definitely not on my radar.
On the one hand, Scala is such a seductive language - there is literally nobody it doesn't cater to. Whether you're into FP, OO, data science, enterprise business logic, strict conservative style explicit semantics vs going wild with implicits and DSLs etc, and now whether you like significant whitespace or not ...
But on the other hand, it's the same story as always - all this crazy optionality of features means it's got such a huge surface area for people to understand. It's a language where I get to what I feel is functional competence, and then I dive into a codebase on GitHub and can barely even recognise the syntax, let alone feel confident making a change.
Cool I guess. I just notice that, unlike with say PHP, I don't know if there's a single thing in the Scala 3.0 release I care about (as somebody whose preferred language is Scala) [Edit: except for fixing implicits, big win]. A lot of times it feels like a language with so much potential that then misses some basic real-world use cases for the 99% who couldn't care less about covariant blah blah, and just want a clear, fast, reliable, reusable programming language as a means to the end of delighting millions of users.
Maybe I'll write a blog post about it, but as I practice LeetCode problems in Scala, I miss relatively basic things like a C-style for loop (so I can use a for loop on linked lists rather than a while), or things like a list that can hold more than Int.MaxValue elements.
It just so happens that "reliable, reusable" and "covariant blah" are interlinked — if you want any form of type inheritance, of course (which I usually don't).
With dependent types, each `e.key` has its own separate type. So, if you have two `e`s, the type of the first e's key will not match the type of the second e's key.
I've been using a lot of Swift lately, and they have a generic type roadmap that talks about "existential types" and being able to "open" an existential. This looks very similar.
It was my main language for 5 years, and there was always a variety of Scala jobs in Berlin: companies of different industries, sizes. Recently, I mainly used Python at work. I love Scala and wonder if it makes sense to invest in keeping up my Scala 3 skills.
Kotlin and Groovy both look pretty stable - both having core uses in their domains that I am guessing support continued traction as "the best tool for what they do". Scala I think peaked when it had the overlap b/w business use and data science.
I feel like Kotlin / Scala / Java are all after the same essential core market - a solid, type safe language to shoulder the huge broad need of enterprises to keep business software pumping away reliably. So they are essentially all part of the ebb and flow of the one giant "JVM" ecosystem.
Groovy sits a little outside of that since it specifically targets a different type of problem (providing an optimally integrated JVM scripting language for DSLs and rapid application development).
There are lots of Scala jobs in finance and its weird sibling cryptocurrencies. These pay very well, although I myself wouldn't touch those (anymore).
There are also lots of agencies using Scala, some machine learning and security stuff. Munich is a good place, Berlin, Zurich and Lausanne, if you want to stay in Europe.
I love the language and I'm very happy to get paid to do stuff in it. I don't think I ever learned that much using just one language, Scala is definitively a class of its own.
This looks a little like inline in F#, where it allows more generic method cases than standard generics do in C#/Java. Looking at the documentation, I'm wondering: does it allow things like "any input type that supports the divide operator"? It's a feature I've appreciated previously when writing high-performance math code on the .NET side.
Not to denigrate or barf on anyone's birthday cake, but IIRC there were some complaints about unfriendly community support interactions. Is this still the case?
This is correct, though the inline feature does also unlock some optimizations. For instance, see this example from the docs where a recursive method is optimized to a few sequential commands when some of its parameters are constants:
Looking at the all-time contributors, it further solidifies my belief that a team of 2 to 3 solid engineers can take on almost any project that comes their way.
I always have great respect for Scala because it and its community tried to adopt and push novel ideas in the industry. Take a look at the changes; they're no joke to design and implement. At the same time, they're also trying to make the language consistent and simple. (Yes, I mean simple, not easy or single-paradigmed.)
They are taking the hard way, and not only are there not enough people to appreciate the effort, they also attract some hatred from time to time. I hope Scala 3 will bring us, and itself, a bright future, and inspire more languages to move forward.
(For example, a lot of type features are added to TypeScript in most releases because it needs to type an existing dynamically typed language; that may also be a hint that dependent types actually make sense in mainstream languages.)