I'm using bzip2 to compress a specific type of backup. In my case I cannot afford to steal CPU time from other running processes, so the backup process runs with a severely limited CPU percentage. In crude testing I found that bzip2 used about 10x less memory and finished several times faster than xz, while coming very close on compression ratio.
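For what it's worth, the kind of crude test I ran is easy to reproduce with GNU time; the file name, compression levels, and CPU cap below are just placeholders for my setup.

```sh
# Rough comparison sketch (file name and levels are placeholders).
# GNU time's -v report includes "Maximum resident set size" and
# elapsed wall time, which is all this comparison needs.
f=backup.tar
/usr/bin/time -v bzip2 -9 -c "$f" > "$f.bz2"
/usr/bin/time -v xz -6 -c "$f" > "$f.xz"
ls -l "$f" "$f.bz2" "$f.xz"   # compare resulting sizes

# To mimic the CPU-starved environment, wrap each compressor in
# something like `systemd-run --scope -p CPUQuota=10% ...`
# (assuming systemd is available) or at least `nice -n 19 ...`.
```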
Other algorithms like zstd and gzip resulted in much lower compression ratios.
I'm sure there is a more efficient solution, but changing three letters in a script was pretty much the maximum amount of effort I was going to put in.
On an unrelated note, has anyone already made a meta-compression algorithm that simply picks the best-performing compression algorithm for each input?
I've not seen one that picks the best compression algorithm, but I've seen ones that perform a test to determine whether the data is worth compressing at all. For example, the borg backup software can be configured to try a light and fast compression algorithm on each chunk of data; if the chunk turns out to be compressible, it then uses a more computationally expensive algorithm to really squash it down.
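In borg terms that heuristic is the `auto` compression spec; a hedged example (repo path, data path, and archive name are placeholders, and the flag shape is from borg 1.1+ as I recall):

```sh
# "auto,C[,L]" probes each chunk with lz4 first and only runs the
# heavier algorithm (here zstd at level 10) on chunks that actually
# compress; incompressible chunks are stored as-is.
# Repo path, data path, and archive name below are placeholders.
borg create --compression auto,zstd,10 /path/to/repo::backup-{now} /data
```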