You’re using the default hash function, SipHash-1-3, which is known to be fairly slow. Not horrendously so, but it was picked for attack resistance rather than speed.
If you need high hashmap performance and control the input (or have a reasonably safe key type), it’s pretty common knowledge that you should replace the hash function with something like fnv or fx.
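Swapping the hasher in Rust is just a type parameter on `HashMap`. Here’s a minimal hand-rolled FNV-1a to show the shape of the swap; in practice you’d pull in the `fnv` or `rustc-hash` crate rather than write it yourself:

```rust
use std::collections::HashMap;
use std::hash::{BuildHasherDefault, Hasher};

// Minimal 64-bit FNV-1a, just to illustrate plugging in a custom hasher.
struct Fnv1a(u64);

impl Default for Fnv1a {
    fn default() -> Self {
        Fnv1a(0xcbf29ce484222325) // FNV offset basis
    }
}

impl Hasher for Fnv1a {
    fn finish(&self) -> u64 {
        self.0
    }
    fn write(&mut self, bytes: &[u8]) {
        for &b in bytes {
            self.0 ^= b as u64;
            self.0 = self.0.wrapping_mul(0x100000001b3); // FNV prime
        }
    }
}

// The whole "swap" is this type alias.
type FnvMap<K, V> = HashMap<K, V, BuildHasherDefault<Fnv1a>>;

fn main() {
    let mut m: FnvMap<String, u32> = FnvMap::default();
    m.insert("hello".to_string(), 1);
    assert_eq!(m.get("hello"), Some(&1));
}
```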
You’re also allocating a ton in a hot loop, another known anti-pattern. I’d expect C++ to be heavily affected by that as well.
My experience with fx has been very mixed and I don't think it's a safe default recommendation. It can often outperform fnv and wyhash because it does so little, but it can also greatly underperform once it's in a hash table, because it causes a lot of collisions for some kinds of keys, especially after the modulo.
Those collisions can cost far more than a slightly slower hash function would, because the equality function gets called against many more candidates, each potentially requiring one or more uncached fetches from main memory.
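To make the failure mode concrete: for a single u64 key, an fx-style hash boils down to one multiply by an odd constant, and multiplication only carries information upward, so keys that differ only in their high bits collide completely in the low bits that a power-of-two table masks on. A sketch (the constant matches the fx style, but the key pattern is an illustrative worst case):

```rust
// fx-style hash of a lone u64 is effectively key * K for an odd constant K.
const K: u64 = 0x517cc1b727220a95;

fn fx_like(key: u64) -> u64 {
    key.wrapping_mul(K)
}

fn main() {
    // Keys whose entropy lives entirely in the high bits: i << 48.
    // Bit i of a product depends only on bits 0..=i of the inputs, so the
    // low 48 bits of every hash are zero, and with 1024 buckets every single
    // key lands in bucket 0.
    let buckets: Vec<u64> = (1u64..100).map(|i| fx_like(i << 48) % 1024).collect();
    assert!(buckets.iter().all(|&b| b == 0));
    println!("all {} keys collided into bucket 0", buckets.len());
}
```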
I now recommend wyhash more often than not, though sometimes fnv can still beat it. wyhash has the substantial advantage that it can also be made attack-resistant by giving the builder a seed, with much less overhead than SipHash.
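The seeded-builder pattern looks roughly like this in Rust. The mixer below is a toy placeholder, not wyhash; a real implementation would delegate to a wyhash crate's seeded constructor inside `build_hasher`:

```rust
use std::collections::HashMap;
use std::hash::{BuildHasher, Hasher};

// A BuildHasher that threads a per-table seed into every hasher it creates.
#[derive(Clone)]
struct SeededBuilder {
    seed: u64,
}

struct SeededHasher(u64);

impl Hasher for SeededHasher {
    fn finish(&self) -> u64 {
        self.0
    }
    fn write(&mut self, bytes: &[u8]) {
        for &b in bytes {
            // Placeholder xor-multiply mix; the seed enters via the initial state.
            self.0 = (self.0 ^ b as u64).wrapping_mul(0x9e3779b97f4a7c15);
        }
    }
}

impl BuildHasher for SeededBuilder {
    type Hasher = SeededHasher;
    fn build_hasher(&self) -> SeededHasher {
        SeededHasher(self.seed)
    }
}

fn main() {
    // Draw the seed from a random source at startup to resist hash flooding.
    let mut m = HashMap::with_hasher(SeededBuilder { seed: 0xdead_beef });
    m.insert("k", 42);
    assert_eq!(m.get("k"), Some(&42));
}
```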
My most recent problematic use cases for fx all involved hashing just one or two scalar integers per key. Its few instructions per word weren't nearly enough to get a good distribution when there were only one or two words to mix (on a usize=u64 target). Hacking in fmix64 from Murmur3 helped a ton, but it ended up no faster than wyhash and harder to reason about, so I used wyhash for some keys and fnv for others.
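For reference, Murmur3's fmix64 finalizer is just three xor-shifts and two multiplies; bolted onto an fx-style multiply, it repairs low-bit clustering on sparse integer keys. The fmix64 constants are Murmur3's; the fx-style constant and the key pattern here are illustrative:

```rust
use std::collections::HashSet;

// Murmur3's 64-bit finalizer: xor-shift / multiply avalanche.
fn fmix64(mut k: u64) -> u64 {
    k ^= k >> 33;
    k = k.wrapping_mul(0xff51afd7ed558ccd);
    k ^= k >> 33;
    k = k.wrapping_mul(0xc4ceb9fe1a85ec53);
    k ^= k >> 33;
    k
}

fn main() {
    // fx-style single multiply, fed keys whose entropy is all in the high bits.
    const K: u64 = 0x517cc1b727220a95;
    let keys: Vec<u64> = (1u64..100).map(|i| i << 48).collect();

    let raw: HashSet<u64> = keys.iter().map(|&k| k.wrapping_mul(K) % 1024).collect();
    let mixed: HashSet<u64> = keys.iter().map(|&k| fmix64(k.wrapping_mul(K)) % 1024).collect();

    assert_eq!(raw.len(), 1); // multiply alone: every key hits the same bucket
    assert!(mixed.len() > 50); // with fmix64: buckets spread out
}
```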
In any case, nobody should switch hash functions without evaluating performance on full-sized realistic production data. You can easily end up with bad overall hash table performance because of poor distribution with your real keys.
Microbenchmarks tend to miss this, especially if they don't create real-sized and real-shaped key sets. All you see then is raw hash function throughput, which is misleading when it comes with poor overall table performance, which is exactly what happened to me with fx. I think a lot of people choose fx based on such microbenchmarks and assume its distribution is good enough to scale out, but that should never be assumed, only tested.
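The fix is to time whole-table workloads (insert plus probe) with your real keys rather than the hash function alone. A sketch of such a harness, with placeholder sequential keys standing in for production data:

```rust
use std::collections::HashMap;
use std::hash::BuildHasher;
use std::time::{Duration, Instant};

// Time an insert-then-lookup workload over a key set with a given hasher
// builder. Run this with real production keys, not synthetic sequential ones.
fn table_workload<S: BuildHasher>(keys: &[u64], builder: S) -> Duration {
    let start = Instant::now();
    let mut m = HashMap::with_capacity_and_hasher(keys.len(), builder);
    for &k in keys {
        m.insert(k, k);
    }
    let mut hits = 0usize;
    for &k in keys {
        if m.contains_key(&k) {
            hits += 1;
        }
    }
    assert_eq!(hits, keys.len());
    start.elapsed()
}

fn main() {
    let keys: Vec<u64> = (0..100_000).collect();
    // Default SipHash builder; swap in fnv/fx/wyhash builders to compare.
    let t = table_workload(&keys, std::collections::hash_map::RandomState::new());
    println!("default hasher: {:?}", t);
}
```

This measures hashing, probing, and equality checks together, so a fast hash with a bad distribution shows up as a slow table instead of a flattering number.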
You might also see surprising effects from cache locality when testing tables big enough that they don't fit in cache, and if this matters for your workload then you definitely want to account for that in your choice of hash table data structure.