
Because the original Redis is already the fastest?



Redis is single-threaded, in a world where 16-core/32-thread machines are affordable.


If your goal is to fully utilise the hardware you have then doing some deterministic sharding on top will likely be good enough.
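A minimal sketch of that client-side deterministic sharding in Python (the function name and shard count are illustrative; Redis Cluster itself does something similar, using CRC16 over 16384 hash slots):

```python
import zlib

def shard_for_key(key: str, num_shards: int) -> int:
    """Deterministically map a key to one of num_shards Redis instances.

    CRC32 is stable across processes and runs, unlike Python's built-in
    hash(), so every client routes the same key to the same instance.
    """
    return zlib.crc32(key.encode("utf-8")) % num_shards
```

Each client then sends commands for a given key to the instance `shard_for_key` selects, with no coordination between instances needed.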

Honestly, Redis makes pretty sane tradeoffs. It's not worth the added complexity of multi-threading, as the locking would almost certainly slow it down, and Redis isn't typically CPU-bound (except potentially the Lua stuff, but that's up to the user), so being multi-threaded doesn't really help much.

If you need multi-threading, there are other solutions, but of course they are slower w.r.t. latency and throughput, since that's the trade-off.


Have you checked out DragonFly? It's a multi-threaded, modern take on Redis, and it blows Redis away in performance by an order of magnitude.

Some performance numbers here - https://www.dragonflydb.io/blog/scaling-performance-redis-vs...


As always, "performance" is not just one metric you can measure and it'll "blow" away the competition in all use cases.

For example, if you care most about latency, Redis is still the way to go, while DragonFly seems better at throughput. But, tradeoffs vs tradeoffs and all that yadda yadda.


> For example, if you care most about latency, Redis is still the way to go, while DragonFly seems better at throughput. But, tradeoffs vs tradeoffs and all that yadda yadda.

DragonFly is better at latency too. The latency numbers they show are measured at high throughput. If you were to reduce the throughput, the latency numbers would be even better. From the same post:

> This graph shows that the P99 latency of Dragonfly is only slightly higher than that of Redis, despite Dragonfly’s massive throughput increase – it's worth noting that if we were to reduce Dragonfly's throughput to match that of Redis, Dragonfly would have much lower P99 latency than Redis. This means that Dragonfly will give you significant improvements to your application performance.


There's an ongoing thread about Dragonfly here - https://news.ycombinator.com/item?id=36018221


Sure. But how much would Redis benefit from the extra cores? Adding thread support would mean adding locks, and that might make it slower. Besides, Redis isn't usually the slow part of what you are doing.

It might be interesting to have, say, a built-in read-only replica as a second thread that might return outdated information, but I doubt how much use you would get out of it.

I am kinda struggling to come up with a scenario where a significant part of the computational need of your app is in Redis.


It's a key-value store; locks can be done with bucketing fairly trivially, like in any modern concurrent hashmap, and locking would only slow you down if you were frequently writing to the same bucket.
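A toy Python sketch of that lock-striping idea (the class and names are made up for illustration; a real implementation would also handle deletion, resizing, and so on):

```python
import threading

class StripedMap:
    """Toy concurrent map with lock striping: each bucket group ('stripe')
    has its own lock, so writes to different stripes never contend."""

    def __init__(self, num_stripes: int = 16):
        self._num_stripes = num_stripes
        self._locks = [threading.Lock() for _ in range(num_stripes)]
        self._buckets = [{} for _ in range(num_stripes)]

    def _index(self, key) -> int:
        # Route each key to a fixed stripe.
        return hash(key) % self._num_stripes

    def set(self, key, value):
        i = self._index(key)
        with self._locks[i]:          # only this stripe is locked
            self._buckets[i][key] = value

    def get(self, key, default=None):
        i = self._index(key)
        with self._locks[i]:
            return self._buckets[i].get(key, default)
```

Two threads writing to keys that land in different stripes take different locks and never block each other; contention only appears when they hammer the same stripe.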


Redis has MULTI transactions and many applications depend on that functionality. You can't add multiple reader threads without a hugely complex locking system to prevent things like uncommitted reads. This is the tradeoff Redis made when deciding to stay single threaded at its core.
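A toy Python sketch of why a single lock makes MULTI-style atomicity easy: queued writes are applied under one lock, so a concurrent reader can never observe a half-applied transaction. (This illustrates the concept only; it is not how Redis is actually implemented.)

```python
import threading

class TinyStore:
    """Toy store with MULTI/EXEC-like semantics: commands are queued,
    then applied back-to-back under one lock, so no reader ever sees
    some of a transaction's writes without the rest."""

    def __init__(self):
        self._data = {}
        self._lock = threading.Lock()

    def get(self, key):
        with self._lock:
            return self._data.get(key)

    def multi_exec(self, commands):
        # commands: list of (key, value) SET operations queued "MULTI-style"
        with self._lock:  # EXEC: apply every queued write with no interleaving
            for key, value in commands:
                self._data[key] = value
```

With multiple reader threads and per-bucket locks instead, `get` could run between two writes of the same transaction, which is exactly the uncommitted-read problem described above.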


Not clear how a Rust "clone" fits into this. I wonder if people have tried pinning instances to cores and treating the ensemble as a host-bounded distributed variant. (Just musing, haven't looked at Redis in ages.)


We do this as a poor man's cluster... a Python script starts a redis-server process on each core, and the 'master' process has a key that lets clients know about the other processes running on the machine.

It only really works well if the client can shard the redis command to the right process itself.
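A rough sketch of that client-side routing, assuming the per-core servers listen on known ports (the ports and names here are hypothetical; in the setup described above, the client would first read the port list from the master process's key):

```python
import zlib

# Hypothetical example: one redis-server per core on consecutive ports.
# In practice this list would be fetched from the master process's key.
PORTS = [6379, 6380, 6381, 6382]

def instance_for_key(key: str) -> tuple:
    """Pick the (host, port) of the per-core instance that owns this key."""
    shard = zlib.crc32(key.encode("utf-8")) % len(PORTS)
    return ("127.0.0.1", PORTS[shard])
```

Because the hash is deterministic, every client independently routes a given key to the same process, which is what makes this scheme work without any cross-process coordination.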



