One of the biggest issues is that deletes aren't durable in the Community Edition. The index entry is removed, but the key and data remain and may or may not be reclaimed by a background compaction process. If the server restarts before the data is compacted, the deleted data is revived because the index is recreated.
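For what it's worth, here's a rough sketch of what the Enterprise-side fix looks like with the Aerospike Python client. The host, namespace, and set names are just placeholders, and the durable_delete write-policy flag is, as far as I know, only honored by the Enterprise Edition:

```python
import aerospike

# Hypothetical local node and test namespace/set; adjust for your cluster.
client = aerospike.client({'hosts': [('127.0.0.1', 3000)]}).connect()
key = ('test', 'demo', 'user-1')
client.put(key, {'name': 'example'})

# A plain client.remove(key) on CE only drops the primary-index entry, so the
# record can be revived when the index is rebuilt after a restart.
# On Enterprise you can request a durable delete instead, which persists a
# tombstone so the record stays gone across restarts:
client.remove(key, policy={'durable_delete': True})

client.close()
```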
Amadeus uses the Community Edition in production; they recently published a blog post about it. I believe Rubicon Project does too. Expiration works great for a lot of data sets compared to explicit deletes.
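To make the expiration point concrete, here's a minimal sketch with the Aerospike Python client; the host, set name, and 24-hour TTL are placeholders. The idea is simply that records written with a TTL expire server-side and never need an explicit delete:

```python
import aerospike

# Hypothetical local node, set name, and TTL; adjust for your data set.
client = aerospike.client({'hosts': [('127.0.0.1', 3000)]}).connect()
key = ('test', 'sessions', 'session-42')

# Write the record with a 24-hour TTL and let the server expire it, rather
# than issuing an explicit delete later (and hitting the CE delete caveats).
client.put(key, {'user': 'alice', 'state': 'active'}, meta={'ttl': 24 * 3600})

client.close()
```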
That's nice. If there were a competition for who could grasp the main point of this article the quickest without even reading it, you might be in the running for first.
I don't think it's helpful to consider this ban a non-issue because it's easy to bypass. It's easy to bypass this time; next time it may be a bit harder, depending on what the government orders ISPs to do. And there probably will be a "next time", depending on the effects of all this.
Also, someone else here asked about the legality of bypassing the ban. That's a good question I'd like to see answered, even though I think we all know the answer to that one.
This slippery slope is exactly what happened in China.
- First, they block a few websites: no big deal, I'll just use a VPN.
- Then they block OpenVPN's default port: no big deal, I'll just use another port or IPsec.
- Then international connections slow to a crawl: no big deal, maybe they're not throttling but just having capacity issues; let's wait a bit and see if it gets better.
Then one day you realize that what was at first a minor inconvenience is now wasting hours of your life and killing your productivity.
So all of a sudden work being outsourced to India needs to go through a VPN located somewhere else in the world. Whoever is doing the work will need to jump through at least a couple of hoops to set that up and keep working through it. Additional costs will be introduced. It's not a show-stopper, but it seems as though it could easily have a material impact.
It's less easy to put workarounds in place for, say, a configuration-managed CI system pulling from github than for your own dev workstation. In theory it's simple, but in practice there's a lot of boilerplate and testing.
> Ok, what does it take to enable the opcode cache? It is not only about software, it is about many no-brainers.
From having worked on timelines like this, it was probably less about the time to enable it and more about "If we turn on the opcode cache and that breaks part of our code, we don't have time to fix it."
The solution when oversight is nebulous and deadlines are looming? Punt it to the ops team.
Agreed, and there might not be an ops team at all; it's a DevOps culture. No one on the team was brave enough to take the risk. I am sure there are no unit tests either to ensure nothing breaks.
I am not a huge fan of NIH, but more often than not it is simply easier in the long run to roll something in-house that does the job. Although the debt that piles up tends to dwarf the original choice, it is often still a better idea given employee churn: it's much easier to keep in-house code within the standards of your normal coding conventions.
I don't work at FB, but near the bottom of the comments you can see that they build on existing C++ libraries that have been tried and tested. We do the [same thing][1] with smaller services, simply because it's easier in the SDLC process to move libraries that are already in-house.
Use Aerospike only if you are ready to buy the Enterprise Edition; the Jepsen test results apply to the Enterprise Edition only.