I don't think there's much in Fortran that's still unique or "specifically designed" for numerical computing. General-purpose languages reached parity with Fortran a long time ago. And "general purpose" typically wins anyway, because it has the larger and more diverse ecosystem.
Most of them did not reach parity, at least not in performance. The fast languages are C, C++, Fortran, and Julia. Those are the only ones used for teraflop computing. Fortran is a great tool for scientific computing, but for new projects, where you are not extending an existing code base, Julia will be much more fun in every way.
I don't think Julia belongs in the same list as C++, C, and Fortran. It is true that for some algorithms it is almost the same speed as C++ out of the box, but for many others it is still off by factors of 10s or 100s. It also often requires significant tweaking to reach its best performance (e.g. don't use abstract types), so it is almost like saying Python is a fast language because you can use Cython or Pythran. I really wish Julia fans would stop overstating the language's capabilities; it does a disservice to an otherwise great language.
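To make the abstract-type point concrete, here is a minimal sketch (Slow, Fast, and sumx are illustrative names, and the performance comments are indicative, not measured):

struct Slow
    x::Real        # abstract field type: values are boxed, every access dispatches dynamically
end

struct Fast
    x::Float64     # concrete field type: the compiler can specialize and inline
end

sumx(v) = sum(p -> p.x, v)   # the same generic code works for both

# sumx over a Vector{Fast} typically runs allocation-free, while the
# Vector{Slow} version allocates per access and can be many times slower.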
There the "naive" Julia code, simply implementing the code like I would in Fortran is a factor of 10 or 15 slower than the optimised cython version (which would be the same as a regular C version), the optimised Julia version is still a factor of 5 slower than the cython and pythran version. Can you show me how to optimise it so that Julia performs on par with pythran or cython?
The naive Julia code made a few pretty fundamental mistakes (using the abstract Complex instead of Complex{Float64}, and iterating row- rather than column-major; see the short sketch after the code below). The following is non-optimized Julia code that is roughly 6x faster (and much simpler) than the "optimized" code in the blog post. Some further optimizations would give another 2-4x over this (like using StaticArrays), but I'll leave that as an exercise for the reader.
apply_filter(x, y) = vec(y)' * vec(x)

function cma!(wxy, E, mu, R, os, ntaps)
    L, pols = size(E)
    N = (L ÷ os ÷ ntaps - 1) * ntaps  # ÷ or div is integer division
    err = similar(E)  # allocate an array without initializing its values
    @inbounds for k in axes(E, 2)  # avoid assuming 1-based arrays; a single @inbounds call covers the loop
        @views for i in 1:N  # every slice in this block is a view
            X = E[i*os-1:i*os+ntaps-2, :]
            Xest = apply_filter(X, wxy[:, :, k])
            err[i, k] = (R - abs2(Xest)) * Xest  # abs2 avoids needless extra work
            wxy[:, :, k] .+= (mu * conj(err[i, k])) .* X  # remember the dots!
        end
    end
    return wxy, err  # note order of returns, seems more idiomatic
end
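On the row- vs column-major point: Julia stores arrays column-major, so the innermost loop should run down the rows of a column. A minimal sketch (colsum is a hypothetical name):

function colsum(A)
    s = zero(eltype(A))
    for j in axes(A, 2), i in axes(A, 1)  # row index i innermost: contiguous memory access
        s += A[i, j]
    end
    return s
end

Swapping the loop order strides through memory and can easily cost several times the runtime on large arrays.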
These comments seem out of date. There hasn’t really been an issue with code breaking since 1.0. They also show a lack of understanding in some respects, for example the complaint about hello-world memory usage: running a Julia program in the usual way loads the entire runtime.
For another take, Jack Dongarra was just awarded the 2021 ACM Turing Award. He’s the creative force behind LINPACK and BLAS, and he says that Julia is a good candidate for the “next computing paradigm”: https://www.zdnet.com/article/jack-dongarra-who-made-superco...
Some of the comments (in the original link in your parent comment) are pretty weird:
> I sometimes wish [Julia] had just appeared one day as version 1, and made periodic updates every year or three.
> Its “multi-paradigm” nature. Yeah, I know other languages have this too, but it does tend to bloat a language.
And some are just funny:
> The language is not stable, and by that I mean it seems to have a lot of bugs [links to github issues with label:bug]. You can’t really call something that has bugs stable.
Yeah. Of course there are bugs, but none of them are serious. Julia is still a young language, but it’s rock solid and used in high stakes, high performance computing in industry, government, and academia. Like I said, there are only four high-level languages that can perform at this level. Julia is certainly the most advanced and interesting of these, and, yes, it’s just a lot of fun to use.
Seems highly subjective and not in line with the experiences many of the rest of us have. Old code just suddenly breaking with a new compiler?!
No, that is not a common thing post Julia v1.0.
A lot of newer libraries are not well documented in Julia, but the core stuff is much better documented than in Python, IMHO. So his view on Julia documentation lacks nuance: it is both good and bad depending on what you look at.
The post talks about the complexity of importing stuff with using and include. Yes, if you start from scratch it is more complex, but the benefit is that it scales better. I have helped complete beginners with Julia who had totally given up on Python due to the complexity of package management, modules, and environments. Julia has a beautiful solution to this which is far easier to get right and manage.
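A hedged illustration of that workflow (MyProject is a placeholder name; Example is the standard demo package): Julia's built-in Pkg manager ties dependencies to a per-project Project.toml.

julia> ]                          # enter the built-in Pkg mode from the REPL
(@v1.x) pkg> activate MyProject   # create or switch to a project environment
(MyProject) pkg> add Example      # records the dependency in Project.toml
(MyProject) pkg> instantiate      # on another machine: install exactly what the manifest lists

After that, using resolves packages against the active environment, so projects don't step on each other.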
A novice will find it easier to import the packages they want to use in Julia than in Python. But very often the people evaluating this stuff are experienced Python developers, and they no longer notice the Python hassle. If you take somebody who is a beginner in both languages, their experiences will be quite different.
The “multi paradigm” comment makes zero sense. Sure, Julia is multi-paradigm, but it is far more streamlined than the common OOP-functional mess you see in other mainstream languages today, such as Python, Java, C#, and Swift.
In Julia it is all functions. You don’t distinguish between free functions and methods. For a beginner this is much easier to deal with.
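A minimal sketch of what that looks like (Shape, Circle, Rect, and area are illustrative names): there are no member functions, just one generic function whose methods are selected by dispatch.

abstract type Shape end

struct Circle <: Shape
    r::Float64
end

struct Rect <: Shape
    w::Float64
    h::Float64
end

area(c::Circle) = pi * c.r^2   # not a member of Circle, just a method of area
area(r::Rect) = r.w * r.h      # another method of the same generic function

total_area(shapes) = sum(area, shapes)   # works for any collection of Shapes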