How to make Lisp go faster than C (2006) [pdf] (iaeng.org)
84 points by marinesebastian on Jan 24, 2022 | 36 comments



Also perhaps of interest: making C faster than C by borrowing techniques from Smalltalk implementations.

Piumarta, Ian, and Alessandro Warth. “Open, Extensible Object Models,” 2007. https://piumarta.com/software/cola/objmodel2.pdf

> This shows that an extensible, object-based implementation can perform at [not even twice as slow as] a typical C implementation for a simple language primitive. With a global method cache (constant overhead, no matter how many method invocation sites exist) the performance is within 10% of optimised C. When the inline cache was enabled the performance was [twice as fast as] optimised C.
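
For a sense of how that global method cache works, here's a minimal sketch (not the paper's exact code: vtable_lookup, the cache size, and the hash are assumptions, following the paper's convention of storing an object's vtable at index [-1]):

  #include <stddef.h>
  #include <stdint.h>

  struct object;   /* opaque payload */
  struct vtable;   /* the method dictionary */

  typedef struct object *(*method_t)(struct object *receiver /* , args... */);

  /* Assumed to exist elsewhere: the full, slow search up the vtable chain. */
  extern method_t vtable_lookup(struct vtable *vt, struct object *selector);

  #define CACHE_SIZE 1024   /* power of two, so we can mask instead of mod */

  static struct {
    struct vtable *vt;
    struct object *selector;
    method_t       method;
  } cache[CACHE_SIZE];

  /* Resolve selector for receiver, consulting the cache first. */
  method_t bind(struct object *receiver, struct object *selector) {
    struct vtable *vt = ((struct vtable **)receiver)[-1];  /* hidden header */
    size_t i = (((uintptr_t)vt >> 3) ^ ((uintptr_t)selector >> 3))
               & (CACHE_SIZE - 1);
    if (cache[i].vt != vt || cache[i].selector != selector) {
      cache[i].vt       = vt;        /* miss: do the full lookup once... */
      cache[i].selector = selector;
      cache[i].method   = vtable_lookup(vt, selector);
    }
    return cache[i].method;          /* ...hits are constant time after that */
  }
A hit costs the same no matter how many invocation sites exist, which is the "constant overhead" the quote refers to.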


Ian Piumarta's C code is often poetry. Look at Maru (a Lisp metacompiler, IIRC).


I love this paper! One thing I've never understood (because my C is poor) is what it means to access something at array[-1]. What does that even mean? Wouldn't it be a garbage value, if anything?


C array access is, to a first approximation, equivalent to pointer arithmetic plus a dereference. So array[-1] is the same as *(array - 1). So long as your program controls that bit of memory, it is a safe operation (you won't be accessing memory outside of whatever the OS/runtime allocated to your process). For instance, if you have this struct definition:

  typedef struct {
    int a;
    int b[10];
  } foo;
Then the following will (likely) work:

  foo f;
  f.a = -1;
  printf("%d\n", f.b[-1]); // => -1
In fact, the following also works:

  f.b[0] = 10;
  printf("%d\n",(&(f.a))[1]); // => 10
C is very flexible in allowing you to do what you want with addresses (once obtained). But yes, if you don't control it then it is likely going to be garbage data.
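
Put together as a complete program (with the extra caveat that the b[-1] trick assumes the compiler inserts no padding between a and b, which is likely but not guaranteed):

  #include <stdio.h>

  typedef struct {
    int a;
    int b[10];
  } foo;

  int main(void) {
    foo f;
    f.a = -1;
    printf("%d\n", f.b[-1]);       /* => -1: reads f.a via b's address */
    f.b[0] = 10;
    printf("%d\n", (&(f.a))[1]);   /* => 10: reads f.b[0] via a's address */
    return 0;
  }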


> C array access is, to a first approximation, equivalent to pointer arithmetic plus a dereference. So array[-1] is the same as *(array - 1).

Thanks, that part is clear. I guess in the Piumarta paper, he's assuming all of these objects are contiguous in memory? Because the vtable pointer is always to [-1] but it's not specified in any given struct...


I also missed the specific solution the first time, but see figure 10 on page 6 (in section 3.1; the pages aren't numbered). It's doing a variant of what I described; in this case they're doing this (adapted from the paper into pseudocode):

  function vtable_allocate(self, size) =
    let object = allocateMemory(PointerSize + size)
    object := object + PointerSize
    object[-1] := self
    return object
This will guarantee that the memory is contiguous and properly allocated to the process so it's not using garbage data.
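
In C that allocation looks something like this sketch (in the spirit of the paper's figure, not its exact code):

  #include <stdlib.h>

  struct vtable;

  /* Allocate size payload bytes plus one hidden header word; hand the
     caller a pointer just past the header. */
  void *vtable_allocate(struct vtable *self, size_t size) {
    struct vtable **ptr = malloc(sizeof(struct vtable *) + size);
    if (!ptr) return NULL;
    ptr += 1;          /* step past the header word */
    ptr[-1] = self;    /* so object[-1] is the object's vtable */
    return ptr;
  }
Fetching the vtable back out is then just ((struct vtable **)object)[-1], which is what all the [-1] accesses in the paper are doing.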


That quote is a bit misleading even though it's entirely true. Faster than naive switch-based polymorphic code, yes. Effectively it shows that slow lookup + function calls via a function pointer can be faster than large switch statements, and that's useful to convince C programmers of...
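
To make the comparison concrete, the two shapes look roughly like this (the node layout and tags are made up for illustration):

  struct node;
  typedef int (*eval_fn)(struct node *);

  struct vtable { eval_fn eval; };

  struct node {
    int tag;                /* used by the switch version */
    struct vtable *vt;      /* used by the dispatch version */
    int value;
    struct node *lhs, *rhs;
  };

  enum { TAG_NUM, TAG_ADD /* , ... */ };

  /* Style 1: a tag and a big switch (the "typical C implementation"). */
  int eval_switch(struct node *n) {
    switch (n->tag) {
      case TAG_NUM: return n->value;
      case TAG_ADD: return eval_switch(n->lhs) + eval_switch(n->rhs);
      /* ...dozens more cases in a real interpreter... */
      default:      return 0;
    }
  }

  /* Style 2: one load plus one indirect call per node. */
  int eval_dispatch(struct node *n) {
    return n->vt->eval(n);
  }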

It's a fun paper that does a good job of presenting the first few steps worth taking towards implementing well-performing dynamic object models, and of showing that those methods may be worth considering when writing C that requires different behaviour for different data structures, rather than writing static code for those specific cases. But people have been writing vtable-based object models for C "forever", and those solve enough of these problems that it's rarely worth going the extra mile to get the dynamic model presented in the paper.

When you actually need it, it's very useful to know these tricks, though, and for those unfamiliar with dynamic object models it's a great introduction.

That said, it does miss one other simple trick that's worth considering. For inheritance it searches parents, and this is a large part of what makes the "bind" expensive and so makes the inline caching worthwhile.

Protocol extension [1] provides a way to do this that can be implemented either as cheaply as vtables for calls (costs more memory) or one indirect load more expensive (more space-efficient). Basically, the fastest approach is to make all vtables big enough to accommodate an individual slot for each method name; the "slow" approach groups methods into interfaces and gives you a "vtable of vtables". Couple that with a simple method-propagation algorithm that lets you override methods as in this paper while still respecting subclasses' own overrides (by checking whether the pointer to the function implementing the method matches the one in the parent vtable), and that's it.
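
A minimal sketch of the fast variant, with all names hypothetical: every method name gets a global slot index, every class's vtable is sized to hold a slot for all of them, and any call site is one load plus one indirect call:

  typedef void (*method_t)(void *self /* , args... */);

  /* One global index per method name, assigned when classes are set up. */
  enum { SEL_area, SEL_draw, SEL_to_string, NUM_SELECTORS };

  struct vtable {
    method_t slots[NUM_SELECTORS];   /* every class carries every slot */
  };

  struct object {
    struct vtable *vt;
    /* payload... */
  };

  /* Any call site, monomorphic or megamorphic, compiles to the same thing: */
  #define SEND(obj, sel) ((obj)->vt->slots[SEL_##sel])((obj))

  /* Building a subclass: copy the parent's slots, then overwrite overrides.
     A later redefinition in the parent propagates to a child slot only if
     the child's pointer still matches the parent's old one, i.e. the child
     never overrode that method. */
  void vtable_inherit(struct vtable *child, const struct vtable *parent) {
    for (int i = 0; i < NUM_SELECTORS; i++)
      child->slots[i] = parent->slots[i];
  }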

You might think this blows up really badly. But in practice, experimenting with my in-progress-and-may-never-be-finished Ruby compiler, the memory overhead tends to be surprisingly small: few projects have a large enough number of classes and a large enough number of unique method names for it to be a major issue (and the two approaches can also be combined, with a fallback to more indirection). Even with a singly-rooted object hierarchy like Ruby's it doesn't seem likely to be a big problem, and for a language implementation you can "mix and match": put the most common method names in a shared vtable and fall back to more expensive lookup methods for less common methods if necessary to save memory.

The main benefit comes with megamorphic call-sites (places where the vtable used for the call frequently changes, and where you'd otherwise fall back to the expensive method lookup very often): with protocol extension these are no more expensive than monomorphic call-sites.

[1] Protocol Extension: A Technique for Structuring Large Extensible Software-Systems, 1995, Michael Franz https://www.semanticscholar.org/paper/Protocol-Extension%3A-...


It's great to be able to use an expressive language and then go low-level and/or verbose to extract every bit of performance in hot code. I have written some amount of numerical code in C, and it's not unusual to prototype first in Matlab, Python or whatever for the productivity gains and rewrite later in C (nowadays I'm using Julia, and so far it's great). That being said, the only way for language X to be faster than C is to not try very hard at optimizing the C implementation.


Not really, FORTRAN is still faster than C for numerical purposes due to array aliasing concerns.


AFAIK, not since 1999:

https://en.m.wikipedia.org/wiki/Restrict

But yes, Fortran can be as fast, although I'm not sure its compilers get the same love as C's.
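
For reference, restrict is roughly the no-aliasing promise Fortran gets for free. A minimal C99 example:

  #include <stddef.h>

  /* Without restrict, the compiler must assume out may alias a or b and
     reload on every iteration; with it, the loop vectorizes like the
     equivalent Fortran. */
  void vadd(size_t n, const double *restrict a, const double *restrict b,
            double *restrict out) {
    for (size_t i = 0; i < n; i++)
      out[i] = a[i] + b[i];
  }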


Depends on which compiler you're talking about. I think there are very specialized Fortran compilers that produce excellent code for scientific applications.


People hardly ever use restrict. Anyway, that's not the only advantage FORTRAN has: it also has native array support.


Past related threads:

How to make Lisp go faster than C [pdf] - https://news.ycombinator.com/item?id=1390944 - May 2010 (15 comments)

How to make Lisp go faster than C - https://news.ycombinator.com/item?id=491057 - Feb 2009 (2 comments)

How to Make Common Lisp go Faster Than C - https://news.ycombinator.com/item?id=195067 - May 2008 (42 comments)

How to make Lisp go faster than C - https://news.ycombinator.com/item?id=172432 - April 2008 (16 comments)


Unfortunately, the majority of the 2008 comments are meta-complaints and meta-meta-complaints about HN.


Lisp can't go faster than C unless it has negative energy or mass.


Someone lacks a sense of humor, apparently. Given that Lisp's mass is very large, I find it doubtful that it could break the C barrier any time soon.


Slightly unrelated, but I got a "sketchy download" warning in Firefox.

The PDF seems fine though. I really hate the trend of "not downloaded much" = "probably bad"


Perhaps because it's served over plain HTTP?


This should have a (2006) tag (not that the basics have changed really, but compilers keep getting better at least).



Why does everything about Lisp immediately make the front page?

I've seen many interesting submissions about Java or C#, arguably much more popular languages; unlike with Lisp, most of those will never see the front page.

Why is HN so biased towards this one language? Can we please give other topics the same treatment?


This very website is authored in a Lisp variant designed by its founder.

So yeah, the Lisp love here goes deep.


Of course it is, but this doesn't make it ok for the moderators to skew the front page towards their own personal interests.


I think it’s the user base that pg drew in since even before HN. There have been 22 Lisp stories with 10+ upvotes in a month, but only five Java stories and two C# stories.

https://hn.algolia.com/?dateRange=pastMonth&page=0&prefix=fa...

Java’s slowly become an acceptable language for mixed-skill teams but I don’t think there’s a lot of exciting news about it at least until Project Loom finally lands. I don’t know what would put C# onto more teams’ radar now that they announced feature completeness on Linux.


Here you go, you can learn how the algorithm works yourself.

https://medium.com/hacking-and-gonzo/how-hacker-news-ranking...

Fair warning, it's written in Lisp.


Do you have evidence that the moderators are skewing things?


I don't agree that HN skews towards Lisp (more like towards Rust or whatever is fashionable today, but I digress), and love Lisp in general.

But how would one be able to provide any sort of evidence of this happening without being a moderator yourself?


That's the point though, isn't it? Why claim things that you can't provide evidence for?


I think moderators can do some sort of repost/reset of the algo ranking to give a post a new chance.

If the moderators are more or less like the readers, I guess there's no extra skew.


There was the day dedicated to Erlang: https://news.ycombinator.com/front?day=2009-03-11


I found the root cause:

>"You can help the spike subside by making HN look extra boring. For the next couple days it would be better to have posts about the innards of Erlang"

https://news.ycombinator.com/item?id=512145 ("Why HN is slow lately", by pg)


Is "innards of Erlang" supposed to be extra boring? Programming language implementation is totally interesting and Erlang in particular just does a lot of cool things and it'd be awesome to learn how it does it. I suppose I'm not normal...


I think pg just meant something the HN audience cares about but mainstream media won't.

(Reading the erlang stuff with gusto now)


It's biased towards what we find interesting. I don't even write Lisp, but implementation tricks for dynamic languages interest me, and so do less common languages; it's the same for a lot of the more active users here.


Why are comments (however unpopular they may be) made unreadable by rendering them in a very faint grey? Clearly some people thought this was worth responding to, but it's hard to understand the responses if I can't read the original question.

Personally I'm in favor of removing "downvote to oblivion" because it degrades my experience of using HN.


smalltalk



