Hacker News
The Chapel Parallel Programming Language (chapel-lang.org)
137 points by wuschel on March 28, 2020 | hide | past | favorite | 60 comments



I’ve worked with Chapel a lot. It’s an insanely powerful language that will challenge your ideas (at least it did mine) about many language constructs in the spirit of HPC. I came to it around 2014 for distributed ARM projects and implemented several projects in it; a simple search engine is linked below. My GitHub has a number of Chapel projects.

Once you get the hang of Chapel you will find it changes how you think about parallel computing. For more advanced concepts the learning curve is steep and not something you can get in a hello world example, but totally worth the effort.
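To give a flavor of what "changes how you think" means here (a minimal, untested sketch of my own, not taken from the projects above): a single forall loop over a block-distributed domain runs its iterations across every core of every node, with no explicit message passing.

```chapel
// Minimal sketch of Chapel's global-view data parallelism.
use BlockDist;                         // standard Block distribution

config const n = 1000000;              // override with --n=<val> at launch

// A domain whose indices are block-distributed across all locales (nodes)
const D = {1..n} dmapped Block(boundingBox={1..n});
var A: [D] real;                       // the array inherits the distribution

forall i in D do                       // parallel across cores and nodes
  A[i] = i:real;

writeln("sum = ", + reduce A);         // global reduction over the array
```

The point is that the loop body is ordinary serial-looking code; the distribution object decides where each iteration and element lives.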

https://github.com/briangu/chearch


For us non-HPC developers, does it make any sense to dig deeper? I get the feeling there could be some interesting thread/core usage maximisation ideas, or some such, but I've not delved before so I have no frame of reference.


Depends on where your main interests in computing are. If you are into programming languages, it is always worth having a look.


I think so. I primarily work in web companies and it’s still interesting.


Thank you.

I've been experimenting lately with multi-architecture distributed computing (ARM <---> x86_64) with frameworks such as Dask and Ray, and although there were quirks in getting them running on ARM, I was able to make it work [1].

I was thinking of checking out Julia's distributed computing capabilities next, but their insistence on same OS, path etc. (last time I checked) didn't inspire confidence in me for my purpose.

Would Chapel be a good alternative for multi-architecture distributed computing? My goals are simple, to explore what it takes to distribute & compute something across different architectures in parallel.

[1]https://gist.github.com/heavyinfo/aa0bf2feb02aedb3b38eef203b...


Historically, Chapel had an effort to support mixed instruction sets within a single logical program, but this effort fell by the wayside due to the amount of effort required to maintain it, combined with the fact that it was virtually unused (most HPC programmers are using homogeneous nodes, or at least ones with compatible ISAs).

Chapel can still be used in multi-architecture distributed computing today, but by running a Chapel program per ISA and having them communicate through more traditional means (e.g., ZMQ).
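A rough sketch of that pattern, based on Chapel's ZMQ package module (the exact API surface may differ across releases, and the peer hostname is hypothetical): build one binary per ISA and pass strings over a socket.

```chapel
// Hedged sketch: one Chapel program per ISA, exchanging data over ZeroMQ.
use ZMQ;

config const role = "sender";          // run one binary as sender, one as receiver

var context: Context;

if role == "sender" {
  var socket = context.socket(ZMQ.PUSH);
  socket.bind("tcp://*:5555");
  socket.send("hello from this ISA");
} else {
  var socket = context.socket(ZMQ.PULL);
  socket.connect("tcp://x86-host:5555");  // hypothetical peer hostname
  writeln(socket.recv(string));
}
```

Each side is its own single- or multi-locale Chapel program; only the socket traffic crosses the ISA boundary.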


Thank you for the heads-up.


Maybe, but it wasn’t the last time I tried it. The main issue at the time was that the naive deployment model uses SSH to deploy the artifacts and set up the cluster. Maybe it’s possible to create a hybrid build and do it that way, but I’ve not tried.


Very interesting; I thought it was a bit academic, and finding somebody who has actually used it makes me more motivated to look into it.


Chapel was one of the three languages chosen by DARPA in the early 2010s (late 2000s?) to be the potential future of HPC (the other two being Fortress and X10).

I think the idea was to fund these languages, allow the developers to throw them at the supercomputers, have a multi-year shoot-out, and pick the one that wins out.

I'm not sure what happened to the findings/results (feel free to chime in).

What I do know is that I've been in the HPC application side for almost 15 years and not one of the big projects I'm involved with use any of these, or any other "new" HPC languages.

The go-to languages/frameworks are the same: C/C++/Fortran, combined with OpenMP/MPI (MPI includes networking, but use of separate networking frameworks is also an option), and CUDA for GPUs (maybe very occasional OpenCL), with Python being the massive success as the glue between all the system-level languages/frameworks. And now with the advent of tensor computing, the paradigm appears to be shifting once more (maybe not a shift, but one more item in the laundry list of heterogeneous hardware).

Go read the overview, new features, whatever on the linked Chapel website: not a single mention of GPUs. That's because Chapel/Fortress/X10 were conceived and developed around the time GPGPU was just taking off. Nobody saw GPGPU coming, and it has taken the HPC world by storm. And yet, after more than a decade, the "new HPC language" website has zero mention of GPUs. What a shame.

I'm not saying the new HPC languages are overrated. But I do believe the promise of an HPC language hasn't panned out, and at the same time new computer-architecture/hardware paradigms keep emerging which the "new" HPC languages were never designed for and never foresaw.


Like with C in systems programming, the targeted users must be willing to try new ways, and most aren't.

Here is your GPU mention,

"Targeting GPUs and Other Hierarchical Architectures in Chapel"

https://chapel-lang.org/presentations/SC11/05-sidelnik-gpu.p...

"GPUIterator: bridging the gap between Chapel and GPU platforms"

https://dl.acm.org/doi/10.1145/3329722.3330142


As interesting as Chapel is on the surface, I think the basic problem with it, and Fortress/X10 too, is that they are so niche that they're never going to get critical mass.


Same opinion here. A parallel-first language, and no GPU support??


Many HPC workloads are MPI first, and then something else later.

Still, there is ongoing GPU support work; see the sibling comment.


Right, GPU support is very important to us and we expect to make something satisfying. We just haven't been able to prioritize it yet.



Chapel has great syntax and ergonomics as others have mentioned. What it doesn't currently have is great performance, e.g. [1]. That said, it appears to be heading in the right direction [2]. I last checked about 5 years ago and the story was much worse.

[1]: https://drops.dagstuhl.de/opus/volltexte/2019/10879/pdf/OASI...

[2]: https://chapel-lang.org/perf/chapcs/


Wow, never thought I'd see my paper[1] on HN :)

Even though getting the right performance was tricky, I was more frustrated with the lack of tooling, sometimes cryptic compiler messages and long compilation times for even mid-sized programs. Most of the issues were mitigated by the great community around Chapel. They are usually very responsive and helpful in their Gitter channel.

In regards to actual parallel programming, Chapel was a breeze. Documentation is solid and most of the parallel concepts were easy to use.


Chapel performance has improved by leaps and bounds over the past 5+ years, though link [2] above focuses on single-node performance, whereas most of the team's recent optimization efforts have focused on improving distributed-memory performance and scalability, some indications of which can be seen at [3].

Of course, it's very possible for programmers without Chapel expertise to write and publish bad performance results in Chapel. The Chapel team is very interested in working with such users to help them understand where performance is being lost and to improve things to benefit future users. Most of our recent optimizations have been motivated by user feedback, and in practice we're typically able to help users get their codes to match or beat the performance of conventional approaches with reduced effort. A favorite recent example of mine can be seen in slides 6-16 of Nikhil Padmanabhan's talk at PAW-ATM 2019 [4].

A good way to keep up with Chapel performance improvements beyond whatever gets peppered into our talks is through the release notes that we publish every six months, e.g. [5], [6], [7].

[3]: https://chapel-lang.org/perf/16-node-xc/?configs=gnuugniqthr...

[4]: https://github.com/sourceryinstitute/PAW/raw/gh-pages/PAW-AT...

[5]: https://chapel-lang.org/releaseNotes/1.20/06-perf-opt.pdf

[6]: https://chapel-lang.org/releaseNotes/1.19/05-benchmark-opts....

[7]: https://chapel-lang.org/releaseNotes.html


These HPC programming languages (there are others too) would be good candidates for GPU programming; hopefully the GPU world will at some point emerge from the C++ dark ages despite all the hindrances (proprietary, fragmented SW stacks).


There was GPU support in a branch a long time ago. I think they missed the boat on that; it would have been a great way to get distributed GPU systems even 5 years ago.


GPU support is still really important. We don't see major issues with providing native Chapel -> GPU code generation and hope to work on it soon.


really love the syntax of this language and how they are repurposing existing programming language primitives for parallelism.

but i also just spent quite some time looking for a substantial example of usage and had to give up. the only thing close to that are the unit tests. and the language has been in development since late 2011.

that's somehow dispiriting.

that, in a nutshell, is the unfortunate situation with some of these interesting modern programming languages.


I never understand why people go to all this work of developing and even marketing major libraries or languages if you aren't going to at least give some doc examples. Jeez, just copy and paste some unit tests and put some prose explanation in. It doesn't take long.


You missed the "HPC" acronym.

If you're not in the market to run supercomputing projects, you'll likely never use this language.

If you are, there's a very high chance you know about it already.

Sharp divide between those sides.


And yet, in the presentation slides linked to from their homepage, they brag that it aims to be "programmable as Python, fast as Fortran, scalable as MPI, SHMEM, or UPC, portable as C, flexible as C++, fun as [your favorite programming language]". It's not even supercomputer-specific: they say it runs on "a Mac laptop".

If even half of those are true, why wouldn't I want to use it for every program I write?

(Easy answer: it looks like most of those are still simply "aims", and not actually that close to being true.)


I think it's plausible you would want to use Chapel for every program you write. I definitely want to use it for every program I write, but I'm also biased.

The main disincentive to doing so today is that Chapel is not nearly as broadly adopted or well-supported as the languages you probably do use in practice today. The Chapel team is trying to get it to that point, but it's a modest-sized team taking on large challenges (both technical and social, as this thread indicates). To date, we haven't made a significant effort to draw in a massive/mainstream audience because we know we're not ready for it yet, either in terms of the language's maturity or our ability to support a large group of users. But we hope and intend to get there.

That said, I think we're already achieving the aims in the slide you quote—far more than your easy answer suggests—though there's obviously room for differences of opinion (e.g., what does it really mean to be "as programmable as Python?"). If you're interested in pointers to supporting details, let me know.


I forget that many of the people and projects that get mentioned here are here as well. Thanks for a deeper viewpoint!

I'll definitely put Chapel on my curious list.


Why does Cray need you to use their programming language? The people who need it will seek it out, and Cray probably doesn't mind that people unwilling to spend a minute clicking on 'browse sample programs' or similar won't be trying out their language.


Arkouda [1], [2] is a recent and substantial example use of Chapel. It's a Python package that supports a subset of NumPy operations on distributed Terabyte-scale arrays implemented using a server written in Chapel. It currently involves ~10k lines of Chapel code.

Other recent notable examples include using Chapel for Dark Matter simulations [3], tree-based search algorithms [4], and CFD simulations (to appear at CHIUW 2020 [5]).

Your point is taken that the Chapel webpage could be better at pointing to such examples via a "powered by Chapel" section or the like.

[1]: http://www.clsac.org/uploads/5/0/6/3/50633811/2019-reus-arku...

[2]: https://chapel-lang.org/presentations/ArkoudaForPuPPy-presen...

[2]: https://github.com/mhmerrill/arkouda

[3]: https://github.com/sourceryinstitute/PAW/raw/gh-pages/PAW-AT...

[4]: https://link.springer.com/chapter/10.1007%2F978-3-030-22734-...

[5]: https://chapel-lang.org/CHIUW2020.html


Why do so few of these small programming language projects have a "why x?" section? How do they expect to gain marketshare if they aren't frontloading their value? This is a fairly old project and yet they've never taken the time to put a sales pitch on their website. In the past 12 years they've added 15 links to their press page [0]. This is a fairly mature project and I have no idea why I should investigate it. I guess it's fine if they never want users, but as someone who's worked in the startup world this sort of thing really gets under my skin.

[0] https://chapel-lang.org/press.html


"Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something."

It's fine to ask what the language is suited for. It's not fine to generically dump on it (or the people working on it).

https://news.ycombinator.com/newsguidelines.html


> Why do so few of these small programming language projects have a "why x?" section?

They do... you missed the Cray name in the logo, and the 'runs on HPC systems' line.

> How do they expect to gain marketshare if they aren't frontloading their value?

I'm guessing their marketshare is already well defined, there aren't a ton of supercomputing clusters in the world.

> This is a fairly old project and yet they've never taken the time to put a sales pitch on their website.

See above. They only have to sell to a small fraction of people compared to your startup 'eat the world' mentality.

> In the past 12 years they've added 15 links to their press page [0].

And?

> This is a fairly mature project and I have no idea why I should investigate it.

Never needed to run code on big iron, gotcha.

> I guess it's fine if they never want users, but as someone who's worked in the startup world this sort of thing really gets under my skin.

As this is not a language for startups, I don't think they care if you're annoyed. They are likely dealing more with multi-million dollar single project runs that could take months to complete.

At least look up the meaning of things like HPC before you just shove the concept through your narrow filter of expectations.

I don't expect to ever run high performance code like this, and I'm curious to see what the design could tell me just in passing.


I send jobs to HPC systems on a daily basis at work. This website fails miserably to capture me as a potential customer. The Cray name has plenty of baggage, but that doesn't give this language value. I do not currently work in the startup space, but having done it, I get annoyed when people put so little effort into selling their projects. I'm sure a lot of time was spent building this (it's been around for over a decade), but they've put essentially nothing into marketing it.

I run high performance multi node code at work, typically 1-2k cores continuously. I have several contracts with HPE. Am I not part of the target audience for this language?


None of that detail came across in the post I replied to. Yes, given that extra information, I would rescind about 95 percent of my snark.


I'm an HPC developer, my workstation is a 122 PetaFLOP machine.

I go to the Rust language website, and I am able to understand in 1 minute what's the language for, and what value it adds with respect to the alternatives.

I look at the Chapel website, and I get nothing. So I have to guess, and my guess is that it's a framework developed by one research group to publish research about parallel computing, but that almost nobody uses for anything real in practice, except for the one or two simulation tools that they managed to lock in 10 years ago. It's not clear to me what value it adds w.r.t. the alternatives, like pure MPI+OpenMP, HPX, Fortran co-arrays, and the other dozens or hundreds of half-dead research projects that were supposed to "revolutionize" HPC over the last 4 decades. The sad part is that I'm probably right.

The 3 most used languages in "classical" HPC are C, C++ and Fortran. Python was barely used, but now the recent trend of building "AI" HPC national centers has put it in 4th place. The three most used parallel programming paradigms are MPI (with a dozen implementations available and in active development), OpenMP, and CUDA. The number of people using OpenCL in classical HPC is insignificant. There are more widely-used codes using OpenACC than Intel Threading Building Blocks. The users of modern frameworks like HPX are almost non-existent, and the users of things like Chapel are a signed zero. I have access to a handful of >10 Petaflop systems, and none of them have Chapel even available through their module system.

This doesn't say anything at all about whether Chapel is actually good or bad. But it is actually the job of those getting public funding paid for with taxpayer dollars to advertise why we should keep funding them and not fund something else. The fact that their website doesn't even try to pitch it suggests to me that the only purpose of the project is for the group that develops it to explore interesting ideas and publish papers, as I mentioned above. That's definitely something worth doing, and often decades later these ideas percolate into solutions that people actually use. But simulation frameworks that run on the Top 10 supercomputers usually take hundreds of people and multiple decades to write, and aren't something that follows trendy research frameworks.


You are generally right.

I’ve been to two chapel conferences. Brad is great, but he’s an engineer.

IMO, they have always lacked good design and marketing support to get the messaging right.

Julia blew past them simply due to being more end-user focused.


Programming languages aren't literal businesses who the hell cares about ""frontloading their value"" mein gott. There's no reason to force your startup world ""wisdom"" on programming language websites.

Chapel is targeted at a pretty specific niche in the first place (and is pretty good at what it does). I don't think they're really after the general purpose programming language market and if you're in HPC you've probably already heard of it anyway.


That isn't startup-world wisdom; it's just "how to sell x". You don't get users, or maintainers, without a sales pitch.

But I actually do work with HPC on occasion, and I don't work with Cray technologies. I'm very much aware of their existence, but they have never convinced me to go out and use their products. I've heard several pitches by HPE folks, and they often push towards Slurm or Bright systems over the Cray ecosystem. Maybe I'm just not the target audience, but I routinely run 100+ node jobs using a mishmash of Slurm, bash, Python, etc.


At this level, sales pitches happen in in-person meetings over lunch, not with marketing sentences on websites.


No, not really. If(?) they want to increase adoption, they need to convince scientists to write their simulation programs in it. Realistically, that's not going to happen via HPE/Cray people having lunch meetings with individual scientists.


Yes, really, speaking from experience at CERN.

I was in a meeting organised by Apple trying to sell us how great it would be to adopt OS X and X server back in 2004.


> Yes, really, speaking from experience at CERN.

I can believe that, but I think CERN is not particularly representative of HPC. Mostly it's individual researchers in universities writing code (with the researcher or immediate supervisor making the decision which programming language to use) and then going to the university cluster or national supercomputing center when they need more oomph. A Cray marketroid having lunch with the director of the national supercomputing center is going to do squat all for convincing a researcher in a university somewhere to pick a particular programming language.

> I was in a meeting organised by Apple trying to sell us how great it would be to adopt OS X and X server back in 2004.

Yeah, and look what a smashing success Apple is in HPC.. ;)


> "why x?"

To be fair, the language is developed by Cray. So my instant assumption to the question "why Chapel?" is "because programming supercomputers is hard".

> I guess it's fine if they never want users

I think this not far from the truth. It's not that they don't need users, but that they don't need pedestrian users. Supercomputing is a niche market, so I'm sure the marketing is more about networking at conferences.

Put another way, you're going to pick up Chapel at work, not at home.


Their presentation here explicitly talks about who they're targeting, and their very first target group is "recent graduates" who want "something similar to what I used in school: Python, Matlab, Java".

If this front page makes it look like their position is "supercomputers are hard, this is for pros", that's completely misrepresenting the rest of the website.


In their defense, if you don't already have a pretty good idea of the "why?" from what is on their homepage, you're not going to be a Cray customer anytime soon and you're unlikely to find yourself working for one.

In your defense, maybe if they cared about mindshare and community beyond the people who give them money, enough people would have given them money that they wouldn't now be a subsidiary of HPE.


I never could figure out (2000's-era) Cray. Their office was in my neighborhood and a friend-of-a-friend worked there and it sounded like interesting work, so I checked out their jobs page. Almost all of the jobs were sales, and their most junior engineering position required something like 10-15 years HPC experience.

I've seen successful niche technical industries (Fluke comes to mind), and Cray didn't even look like a niche industry. It looked like a dying one.


There's a graver mortal sin: no code samples on the front page. Python, Ruby, D, Go: they all have this in common. I think Rust used to, and Racket used to have an awesome set of examples before they redid their site; now I think it's different examples. Syntax tells me everything about the language.


There...literally is, though?

    use CyclicDist;           // use the Cyclic distribution library
    config const n = 100;     // use --n=<val> when executing to override this default
    
    forall i in {1..n} dmapped Cyclic(startIdx=1) do
      writeln("Hello from iteration ", i, " of ", n, " running on node ", here.id);


Ah, that layout just didn't make it obvious; there is no syntax highlighting on any of that code.


There is, it's just done subtly.


Watch Brad Chamberlain's video. His definition of productivity by seasoned HPC people:

...I was born to suffer...

How true.


> Why do so few of these small programming language projects have a "why x?" section?

Because their target audience gets the “Why X?” for solutions from other sources, and then goes to the website of a tool for the meat, not the sales pitch.


what are you talking about? it says it right there in literally bold letters on the home page in response to "what" rather than "why":

>Chapel is a modern programming language that is...

>parallel: contains first-class concepts for concurrent and parallel computation

>productive: designed with programmability and performance in mind

>portable: runs on laptops, clusters, the cloud, and HPC systems

>scalable: supports locality-oriented features for distributed memory systems

are you such a literalist you need literally "why? because ..."

i have a question for you: why do so many hners jump on the same silly meme over and over again even when it's not accurate?


I sympathize with the grandparent comment. The first three aren't exactly unique. Many languages are parallel and portable and productive.

Listing adjectives isn't a "why". A "why" would be: "Chapel is a language built for distributed memory systems. Unlike X or Y, Chapel has first class language concepts for distributed memory management, sharded storage…" etc. etc. (I just made that up.)

Then you know right off the bat whether this language is the one you need for the problem you have.

What you refer to as a silly meme might also be a legitimate critique that surfaces again and again when hackers/engineers build things that solve a problem they have, think it is cool or interesting, but aren't able to communicate the value to others.

It's not unreasonable to ask, at the very least, for a simple "why does this exist?" (An entirely reasonable answer to which might be, "because I thought it was cool!")


> A "why" would be:

"Chapel has the goal of supporting any parallel algorithm you can conceive of on any parallel hardware you want to target. In particular, you should never hit a point where you think 'Well, that was fun while it lasted, but now that I want to do x, I'd better go back to MPI.'" https://www.cray.com/blog/chapel-productive-parallel-program...


[flagged]


It would be such a low hanging fruit to put a concise rationale there. That’s not spoon feeding, but giving a minimal amount of opinionated context so a visitor isn’t completely lost when unfamiliar with the technology.


Surely anyone who works on this project could tell me why it exists, why it's better than rolling slurm/bash scripts. That feels like a bare minimum sales pitch. Instead the homepage is essentially buzzwords with links to watch the talk or read the docs. I'm sure anyone who uses this could say, "well, it makes x task much easier/more concise! This is much more difficult to accomplish using y" or something similar.


I guess a better question would be: why is it better at those things? A lot of languages claim to be all about performance, scalability, productivity and portability. Those are pretty much what every mainstream language aims for. So why would this be better at those things? Why was it created, and what does it do that others can't?

I don't think things necessarily need a reason to exist, but I think asking those questions is still relevant when it comes to a programming language. They are mostly tools used to solve problems, after all.


@mardifoufs: I realize that your point is that the Chapel webpage didn't answer this question clearly / concisely for you, and I agree that we could and should improve that. The observations in this thread have definitely helped give me insights about how we could position ourselves better for the non-HPC audience (and I already have a longstanding intention to make the code sample on the page more compelling using Asciinema, which I need to find the time for).

But to answer your specific question here: I'd say that most mainstream languages (e.g., C, C++, C#, Java, Python, Rust, Swift, ...) don't aspire to scalability in the same sense that Chapel and High Performance Computing want it, in the sense of being able to run efficiently on tens of thousands of processors with distributed memory where inter-processor communication and coordination is required. And even when they do aspire to it, it's rarely through the language itself, but through communication libraries, pragmas, and extensions. The result (in my opinion) is rarely as productive, general-purpose, and performant as what Chapel achieves.
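As a small, untested illustration of scalability living in the language itself rather than in a library: Chapel's built-in Locales array makes node-level placement a first-class language construct.

```chapel
// Sketch: explicit locality in core Chapel, no communication library needed.
coforall loc in Locales do     // spawn one task per node in the job
  on loc do                    // migrate that task's execution to its node
    writeln("Hello from locale ", here.id, " of ", numLocales);
```

Compare this with the mainstream approach, where the same effect requires an MPI launcher plus explicit rank management outside the language proper.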



