Connection Machine Lisp never made it into production, but this paper had a profound impact on my scientific career. In particular, it was the following remark in the paper that prompted me to develop the Petalisp programming language (https://github.com/marcoheisig/Petalisp):
> Nevertheless, we have implemented (on a single-processor system, the Symbolics 3600) an experimental version of Connection Machine Lisp with lazy xappings and have found it tremendously complicated to implement but useful in practice.
I think there is a moral here: Don't hesitate to experiment with crazy ideas (lazy xappings), and don't be afraid to openly talk about those experiments.
After eight years of development, I can definitely confirm that lazy arrays/xappings are tremendously complicated to implement but useful in practice :)
For me, it was papers like this that made me realise we could have a world where programming abstracts away the underlying architecture while still taking advantage of its heterogeneity.
Something that is only now starting to take shape in the compute landscape.
Not sure what you consider "production", but I used *Lisp extensively on the CM-2 at Xerox PARC in the late 1980s. In fact, I published several papers based on this research.
I'm assuming you mean 'lazy' in the context of the paper and not a more general system of lazy evaluation. What in particular did you find difficult about binding pure functions under the notion of a xapping? Thanks so much for posting your work.
Implementing lazy arrays or xappings naively is easy - the Petalisp reference backend has just 94 lines of code [1]. The challenge is to implement them efficiently. With eager evaluation, the programmer describes more or less precisely what the computer should do. With lazy evaluation, you only get a description of what should be done, with almost no restriction on how to do it. To make lazy evaluation of arrays as fast as eager evaluation, you have to automate many of the high-level reasoning steps of an expert programmer. Doing so is extremely tedious, but once you have it, you can write really clean and simple programs and still get the performance of hand-crafted wizard codes.
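To make the "naive is easy" half of that concrete, here is a minimal sketch in Common Lisp. This is my own toy encoding, not Petalisp's actual code (the name `compute` just echoes Petalisp's API): a lazy vector is nothing but a length plus a function from index to element, and nothing runs until it is forced.

```lisp
;; A minimal sketch of naive lazy vectors (1-D for brevity).
;; Not Petalisp's real implementation - just the idea that a lazy
;; array is a deferred description, forced all at once at the end.
(defstruct (lazy-vec (:constructor %lazy-vec (length getter)))
  length   ; number of elements
  getter)  ; function from an index to one element

(defun lazy-wrap (vector)
  "Lift an ordinary vector into the lazy world."
  (%lazy-vec (length vector) (lambda (i) (aref vector i))))

(defun lazy-map (function &rest lazy-vecs)
  "Record an element-wise application without evaluating anything."
  (%lazy-vec (lazy-vec-length (first lazy-vecs))
             (lambda (i)
               (apply function
                      (mapcar (lambda (lv)
                                (funcall (lazy-vec-getter lv) i))
                              lazy-vecs)))))

(defun compute (lazy-vec)
  "Force the whole description, element by element."
  (let ((result (make-array (lazy-vec-length lazy-vec))))
    (dotimes (i (length result) result)
      (setf (aref result i)
            (funcall (lazy-vec-getter lazy-vec) i)))))

;; Nothing runs until COMPUTE is called:
;; (compute (lazy-map #'+ (lazy-wrap #(1 2 3)) (lazy-wrap #(10 20 30))))
;; => #(11 22 33)
```

The hard part alluded to above is everything this sketch omits: fusing such descriptions, choosing loop orders and memory layouts, and parallelizing - that is, automating the expert programmer.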
In Danny Hillis's lecture on the architecture of the CM-5 [1], he talks about some of the historical motivation behind the CM-2, which is described in his thesis-turned-book "The Connection Machine" [2]. The book goes into some detail on the alpha, beta, and dot operators and on the xapping data type (the book calls it a "xector") that this paper describes in detail.
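For readers who haven't seen the paper: roughly, a xapping is an unordered collection of index-value pairs, α applies a function at every index at once, and β combines the values into one. Here is a rough single-processor emulation in Common Lisp, modeling a xapping as a hash table; the names and encoding are my own for illustration, not the paper's actual syntax or semantics.

```lisp
;; Toy single-processor emulation of CM Lisp's alpha and beta,
;; modeling a xapping as a hash table from indices to values.
;; Illustrative encoding only - not the paper's syntax.
(defun make-xapping (&rest plist)
  (let ((x (make-hash-table :test #'equal)))
    (loop for (index value) on plist by #'cddr
          do (setf (gethash index x) value))
    x))

(defun xalpha (function &rest xappings)
  "Apply FUNCTION pointwise at every index present in all XAPPINGS."
  (let ((result (make-hash-table :test #'equal)))
    (maphash (lambda (index value)
               (let ((args (list value))
                     (present t))
                 (dolist (x (rest xappings))
                   (multiple-value-bind (v found) (gethash index x)
                     (if found
                         (push v args)
                         (setf present nil))))
                 (when present
                   (setf (gethash index result)
                         (apply function (nreverse args))))))
             (first xappings))
    result))

(defun xbeta (function xapping)
  "Combine all values of XAPPING with the binary FUNCTION.
Combination order is unspecified, so FUNCTION should be commutative."
  (let ((values '()))
    (maphash (lambda (index value)
               (declare (ignore index))
               (push value values))
             xapping)
    (reduce function values)))

;; A dot-product-style alpha/beta combination:
;; (xbeta #'+ (xalpha #'* (make-xapping 'a 1 'b 2)
;;                        (make-xapping 'a 10 'b 20)))
;; => 50
```

(If I remember the paper correctly, there is also •, the "dot" that escapes a subexpression out of an α context, which this toy version doesn't attempt to model.)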
Years ago I took a stab at implementing this language on CUDA, which I called hillisp [3]. At the time Dynamic Parallelism didn't exist yet and my available hardware was pretty weak, but it was fun to learn more about CUDA, which was relatively new technology back then.
DARPA provided my company with a Connection Machine 1 - the early SIMD architecture, not the later MIMD architecture. I had a *Lisp simulator running on Common Lisp that I could use for software development, and then I had to travel to the physical location of the CM-1 to actually use it. I regret that I have misplaced Danny Hillis's Connection Machine book.
I have always been just a programmer, but in the 1980s I worked with hardware people who designed hardware that was fast for its time so that I could run backprop neural networks on it. Fast forward to modern times, and I am fascinated by projects like Groq that greatly speed up LLM inference.
Similarly, Hillis has created or touched many fascinating projects in his career.
It is my understanding that what this paper describes was never actually shipped; instead, StarLisp was a significantly lower-level affair, on par with the other Star-something languages for the CM - or rather for its frontend (the CM is essentially the equivalent of a modern GPU, not a general-purpose computer).
"significantly lower level affair, on par of the other Star-something languages for the CM"
I'm not sure what this means. C* and CM Fortran weren't particularly low-level, unless C or Fortran 90 are "significantly low-level". For low-level fun, you'd drop to Paris (or to CMIS for "breathtakingly heart-stoppingly low-level").
> It is my understanding that what this paper describes was never actually shipped
This reminds me of an interesting talk by David Beazley where he discusses the early days of Python, when IIRC he was a student at a national lab, and they managed to run the Python REPL on a Connection Machine 5 that people had a hard time programming in lower-level languages. It seems the work they did back then, writing modules that allowed the Python interpreter to interoperate with scientific code, was one of the foundations for it to snowball many years later.
So I wonder: if what this paper describes had shipped in time, would that have led us, by butterfly effect, into an alternative timeline where Lisp dominates scientific computing instead of Python?
My thought was whether we could use architecture or programming ideas from the Connection Machine for LLM training or inference today - or, for that matter, from any of the massively parallel, distributed, shared-memory systems of old. The patents should be expired on many of them.
That many use cases today are both highly parallel and pipelined makes me think the old designs might work. Also, the hundreds of thousands of execution units in Cerebras' hardware are a similar concept.