> A floating-point number system F(β, p, L, U) is characterized by a base β ∈ ℕ, precision p ∈ ℕ, and lower and upper exponential range L, U ∈ ℤ.
IIUC, this paper describes an approach to formally modeling schemes such as IEEE-754 floats.
I don't work with formal systems / proof assistants, but this sounds like a step in the right direction for modeling floating-point computations.
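For a concrete instance of that F(β, p, L, U) signature, here's a quick Python check (my own illustration, not from the paper) of how binary64 instantiates it; note that exponent-range conventions vary between texts, and I'm using Python's here:

```python
# Sketch: the F(beta, p, L, U) parameters for IEEE-754 binary64,
# read off Python's float (exponent-range conventions follow
# sys.float_info, which may differ from the paper's).
import sys

beta = sys.float_info.radix     # base: 2
p    = sys.float_info.mant_dig  # precision: 53 significand bits
L    = sys.float_info.min_exp   # lower exponent bound: -1021
U    = sys.float_info.max_exp   # upper exponent bound: 1024

# Largest finite value: (beta**p - 1) * beta**(U - p), computed exactly in ints.
print((beta**p - 1) * beta**(U - p) == int(sys.float_info.max))  # True

# Smallest positive normal value: beta**(L - 1).
print(2.0 ** (L - 1) == sys.float_info.min)  # True
```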
But I have to wonder about their treatment of NaN:
> We exclude NaN values which represent undefined arithmetic operations.
AFAIK, IEEE-754 has very well-defined rules for how NaN values behave, including signaling / quiet variants. I think their model will have limited applicability until they extend it to fully model this.
> AFAIK, IEEE-754 has very well-defined rules for how NaN values behave.
If you model NaNs with no more specificity than "there is a NaN", then IEEE-754 is very well-defined. If you consider NaN payloads, then IEEE-754 is underdefined (and hardware is notably divergent here).
Notably divergent, and notably hard to get correct. (I have some experience validating the correctness of 754 arithmetic. I suspect nobody gets it perfect.)
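To make the payload point concrete, here's a small Python sketch (mine, not from the standard) that builds two distinct binary64 NaNs. Both compare as plain NaN, but their payload bits differ, and which payload survives an operation like `a + b` is exactly the part IEEE-754 leaves to the implementation:

```python
# Sketch: two binary64 NaNs that are indistinguishable to comparisons
# but carry different payloads in the low significand bits.
import struct

def to_float(u: int) -> float:
    return struct.unpack('<d', struct.pack('<Q', u))[0]

def to_bits(x: float) -> int:
    return struct.unpack('<Q', struct.pack('<d', x))[0]

a = to_float(0x7FF8000000000001)  # quiet NaN, payload 1
b = to_float(0x7FF8000000000002)  # quiet NaN, payload 2

print(a != a, b != b)  # True True: both are "just NaN" to comparisons

# Which payload a + b carries is implementation-defined; x86 and ARM
# differ in their NaN-propagation rules, for example.
print(hex(to_bits(a)), hex(to_bits(b)), hex(to_bits(a + b)))
```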
I've recently been exploring how to approach collaborative compilation in a compiler backend. Thorough control of even a spike limited to numeric literals and binops turned out to be rich in challenges.
It sounds like my understanding might be incomplete; could you expand on your comment?
IIRC, IEEE-754 includes several options for what should happen when various operations consume or could create NaN values (as well as underflow, overflow, etc.), but all of those options are covered by the spec.
So, e.g., a modern x86-64 processor supports various FP modes (NaN signaling, etc.), and all of those modes result in behavior that's well-defined by IEEE-754.
Is that not the case?
EDIT: TIL about "NaN payloads". Now the parent comment makes more sense to me.
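For anyone else catching up: in binary64 the top significand bit (bit 51) is the quiet flag under IEEE-754 2008, and consuming a signaling NaN in arithmetic raises the invalid exception and produces a quiet NaN. A rough Python sketch (this assumes typical x86-64 SSE semantics; Python doesn't expose the status flags, and some platforms quiet sNaNs on mere loads):

```python
# Sketch: quiet vs. signaling NaN in binary64. Bit 51 of the
# significand is the "quiet" flag per IEEE-754 2008.
import struct

def to_float(u: int) -> float:
    return struct.unpack('<d', struct.pack('<Q', u))[0]

def to_bits(x: float) -> int:
    return struct.unpack('<Q', struct.pack('<d', x))[0]

SNAN = 0x7FF0000000000001  # exponent all ones, quiet bit clear, payload 1

snan = to_float(SNAN)
# Arithmetic on an sNaN signals "invalid" and yields a qNaN; on typical
# hardware the quiet bit (bit 51) of the result is now set.
quieted = snan + 0.0
print(hex(to_bits(snan)), hex(to_bits(quieted)))
```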
Does this have any practical uses, or is this just an attempt to push the limits of relational programming? For example, can you use this system to derive new compile-time optimizations? Or can you use it to formalize compile-time floating-point arithmetic in languages with refinement types or dependent types? The latter is a well-known pain point in Idris, for example.
I think you may be missing the context of what the paper is presenting here. It looks like the paper is solving relations between floating-point numbers: for example, "3.4 + x = 3.4; solve for x" is one of the examples from the paper.
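And that relation generally has many solutions, which is what makes it interesting to solve relationally. As a sanity check in ordinary binary64 (my illustration, not the paper's method), any x smaller in magnitude than half an ulp of 3.4 is absorbed under round-to-nearest:

```python
# Sanity check (plain binary64, not the paper's relational machinery):
# "3.4 + x == 3.4" holds for any x absorbed by rounding.
import math

ulp = math.ulp(3.4)          # spacing of floats around 3.4, about 4.44e-16
print(3.4 + ulp / 4 == 3.4)  # True: a quarter-ulp is absorbed
print(3.4 + ulp == 3.4)      # False: a full ulp lands on the next float
```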