Hacker News

I think the consensus is that when you get to more than about 4 cores the performance benefits turn negative for general purpose computing.



I was surprised to hear this (though not because I know otherwise). What does general purpose computing mean in this context?


Basically the workload that an average person is going to have: a bunch of processes running simultaneously with varying degrees of CPU and memory intensity.

In special cases you can get better performance from many smaller, less powerful cores (graphics is a prime example), but in general a few powerful cores perform better. The main reason is that communication among cores is hard and inefficient, so only embarrassingly parallel programs divide well across many cores. Plus, the speedup you get from parallelizing a program is minor in most cases, because the serial portion of the work caps the overall gain. See Amdahl's law[1] for more on this topic.

Also, I'm not an expert in this area, but I have some familiarity with it. So hopefully someone with a bit more experience can come and confirm (or refute) what I've written.

[1]: https://en.wikipedia.org/wiki/Amdahl's_law
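To make the Amdahl's law point concrete, here's a small sketch of the formula from [1] (the function name is just for illustration):

```python
def amdahl_speedup(parallel_fraction, cores):
    # Amdahl's law: speedup = 1 / ((1 - p) + p / n),
    # where p is the fraction of work that can run in parallel.
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Even if 95% of a program parallelizes perfectly, 8 cores give
# nowhere near an 8x speedup:
print(round(amdahl_speedup(0.95, 8), 2))   # 5.93

# And no number of cores can beat 1 / (1 - p) = 20x here:
print(round(amdahl_speedup(0.95, 10**6), 2))
```

The takeaway is that the serial 5% dominates quickly, which is why piling on cores stops helping for typical mixed workloads.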



