I'm torn both ways on the double issue. On the one hand, doubles are much more widely supported these days, and will save you trouble in some common scenarios. Timestamps are a particular one, where a float will often degrade on a time scale you care about, while a double will not. A double will also hold any int value without loss (on mainstream platforms, where int is 32 bits), and has enough precision to let you stay in world coordinates for 3D geometry without introducing depth-buffer problems.
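To put numbers on the timestamp point, here's a toy C++ sketch (my own illustration, nothing from a real codebase) comparing the representable step of a float vs. a double near one day's worth of seconds, and checking the integer round-trip limit of a double:

    #include <cmath>
    #include <cstdint>
    #include <cstdio>

    int main() {
        // Representable step ("ulp") of a float near one day in seconds: ~7.8 ms.
        float day_f = 86400.0f;
        std::printf("float step near 1 day:  %g s\n",
                    std::nextafter(day_f, 2.0f * day_f) - day_f);

        // The same step for a double: ~1.5e-11 s.
        double day_d = 86400.0;
        std::printf("double step near 1 day: %g s\n",
                    std::nextafter(day_d, 2.0 * day_d) - day_d);

        // Every integer with magnitude <= 2^53 round-trips through a double;
        // 2^53 + 1 does not.
        std::int64_t ok  = std::int64_t{1} << 53;
        std::int64_t bad = ok + 1;
        std::printf("2^53   round-trips: %d\n",
                    static_cast<std::int64_t>(static_cast<double>(ok)) == ok);
        std::printf("2^53+1 round-trips: %d\n",
                    static_cast<std::int64_t>(static_cast<double>(bad)) == bad);
        return 0;
    }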
OTOH, double precision is often just a panacea. If you don't know the precision requirements of your algorithm, how do you know that double precision will work either? Some types of error compound exponentially without anti-drift protection, and against exponential growth the extra mantissa bits of a double only buy you a constant factor of additional time.
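A toy illustration of that compounding (my own sketch, using the chaotic logistic map as a stand-in for an algorithm with exponential error growth): float and double both diverge from a wider-precision reference, the double just takes a roughly fixed number of extra iterations to get there.

    #include <cstdio>

    int main() {
        // x <- 4x(1-x): per-step rounding error roughly doubles each iteration.
        float       xf = 0.2f;
        double      xd = 0.2;
        long double xl = 0.2L;  // reference; assumes long double is wider than
                                // double (true on e.g. x86-64 Linux, not on MSVC)
        for (int i = 1; i <= 60; ++i) {
            xf = 4.0f * xf * (1.0f - xf);
            xd = 4.0  * xd * (1.0  - xd);
            xl = 4.0L * xl * (1.0L - xl);
            if (i % 10 == 0)
                std::printf("iter %2d  float err %9.1e  double err %9.1e\n", i,
                            static_cast<double>(static_cast<long double>(xf) - xl),
                            static_cast<double>(static_cast<long double>(xd) - xl));
        }
        return 0;
    }

The float track is garbage after roughly 25 iterations, the double track after roughly 55; the extra bits delay the divergence, they don't prevent it.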
There are also current platforms where double will land you in very significant performance problems, not just a minor hit. GPUs are a particularly fun one -- there are currently popular GPUs where double precision math runs at 1/32 the rate of single precision.
A "panacea" is something that cures.every illness. 64-bit floats could do just that, in the cases listed. The cost of it may be higher than one cares to pay though.
And when the cure fails to be adequate, well, it becomes a band-aid, a temporary measure in search of a real solution.
Are there C++ libs that use floating point for timestamps? I was under the impression that most stacks have accepted int64 epoch microseconds as the most reasonable format.
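For reference, std::chrono already expresses that convention with integer ticks; a minimal sketch (the Unix-epoch guarantee for system_clock is only formal since C++20, de facto earlier):

    #include <chrono>
    #include <cstdint>
    #include <cstdio>

    // int64 microseconds since the Unix epoch, built on std::chrono's integer ticks.
    std::int64_t now_epoch_micros() {
        using namespace std::chrono;
        return duration_cast<microseconds>(system_clock::now().time_since_epoch()).count();
    }

    int main() {
        std::printf("epoch microseconds: %lld\n",
                    static_cast<long long>(now_epoch_micros()));
        return 0;
    }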
A couple of decades ago Microsoft did that too: VT_DATE in old OLE Automation keeps an FP64 value inside. Luckily, their newer APIs and frameworks use a uint64 of 100-nanosecond ticks.
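Roughly what the two representations look like side by side, as a hedged sketch: the helper names are made up for this example, the epoch constants are the documented ones (days since 1899-12-30 for an OLE DATE, 100-ns ticks since 1601-01-01 in FILETIME style; which 100-ns epoch a given newer framework uses varies).

    #include <cstdint>
    #include <cstdio>

    // 100-ns intervals between 1601-01-01 (FILETIME epoch) and 1970-01-01 (Unix epoch).
    constexpr std::uint64_t kFiletimeUnixDelta = 11644473600ULL * 10'000'000ULL;

    // Unix seconds -> FILETIME-style 100-ns ticks (only for times at or after 1970 here).
    std::uint64_t unix_to_filetime_ticks(std::int64_t unix_seconds) {
        return static_cast<std::uint64_t>(unix_seconds) * 10'000'000ULL + kFiletimeUnixDelta;
    }

    // Unix seconds -> OLE Automation DATE: fractional days since 1899-12-30
    // (which falls 25569 days before the Unix epoch), stored as an FP64.
    double unix_to_ole_date(std::int64_t unix_seconds) {
        return 25569.0 + static_cast<double>(unix_seconds) / 86400.0;
    }

    int main() {
        std::int64_t t = 1'000'000'000;  // 2001-09-09 01:46:40 UTC
        std::printf("FILETIME ticks: %llu\n",
                    static_cast<unsigned long long>(unix_to_filetime_ticks(t)));
        std::printf("OLE DATE:       %.6f days\n", unix_to_ole_date(t));
        return 0;
    }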
Don't have a publicly visible reference to give at the moment, but it's still sometimes seen where relative timestamps are being tracked, such as in an audio library tracking time elapsed since start. It's probably less used for absolute time where the precision problems are more obvious.
INT64_MAX nanoseconds past the Unix epoch is just Friday, April 11, 2262 11:47:16.854 PM, not exactly a future-proof approach. I guess having Tuesday, September 21, 1677 12:12:43.145 AM as the earliest expressible timestamp neatly sidesteps the problem of proleptic Gregorian vs Julian calendars.
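A quick sanity check of those dates (a throwaway sketch): signed 64-bit nanoseconds spans about plus or minus 292 years around the Unix epoch.

    #include <cstdint>
    #include <cstdio>

    int main() {
        // INT64_MAX nanoseconds expressed in years (average Gregorian year length).
        constexpr double seconds = static_cast<double>(INT64_MAX) / 1e9;
        constexpr double years   = seconds / 86400.0 / 365.2425;
        std::printf("INT64_MAX ns ~= %.2f years\n", years);  // ~292.28
        std::printf("so int64 ns covers roughly 1970 +/- 292 years: late 1677 .. early 2262\n");
        return 0;
    }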