> List them. I am not aware of any well defined parts of the C standard where GCC and Clang disagree in implementation.
Perhaps it's not "well defined" enough for you, but one example I've been stamping out recently is whether compilers will combine subexpressions across expression boundaries. For example, if you have z = x + y; a = b * z; will the compiler optimize across the semicolon to produce an fma? GCC does it aggressively, while Clang broadly will not (though it can happen in the LLVM backend).
This behavior is mostly just unspecified, at least for C++ (not sure about C).
I'm aware of some efforts to bring deterministic floating point operations into the C++ standard, but AFAIK there are no publicly available papers yet.
P3375R0 is public now [0], with a couple implementations available [1], [2].
Subexpression combining has more general implications, which are usually worked around with gratuitous volatile abuse or magical incantations that construct compiler optimization barriers. Floating point is simply the most straightforward example where it leads to an observable change in behavior.
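For the curious, the typical volatile incantation looks something like this (folklore, not a guarantee the standard gives you; barrier is my own name for it):

    /* Launder a value through a volatile object so the compiler must
       materialize the rounded intermediate instead of folding it into
       a neighbouring expression. */
    static inline double barrier(double v) {
        volatile double tmp = v;  /* volatile access is observable */
        return tmp;
    }

    /* Usage: keep x + y from being contracted into the multiply. */
    double f(double x, double y, double b) {
        double z = barrier(x + y);
        return b * z;
    }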
You're right that this goes above and beyond anything the C standard specifies, aside from requiring that the end result be the same as if the expressions were evaluated separately (unless you have -ffast-math enabled, which makes GCC non-conformant in this regard).
If the end results of the calculation differ (and remember that implementations may not always use IEEE floats), then you can call it a bug in whichever compiler produces the difference.
I have no idea how C++ defines this part of its standard, but from experience it's likely different in some more or less subtle way, which might explain why this is okay there. In the realm of C, though, without -ffast-math, arithmetic operations on floats can be implemented in any way you can imagine (including outputting them to a display in a room full of people with abaci and then interpreting the results from a hand-written sheet returned by said room of people), as long as the observable behaviour matches the expected semantics.
If the transformation you describe changes the observable behaviour relative to not applying it, then that's just a compiler bug.
This usually means that an operation such as:
    double a = x / n;
    double b = y / n;
    double c = z / n;
    printf("%f, %f, %f\n", a, b, c);
Cannot be implemented by a compiler as:
    double tmp = 1.0 / n;
    double a = x * tmp;
    double b = y * tmp;
    double c = z * tmp;
    printf("%f, %f, %f\n", a, b, c);
Unless in both cases the exact same values are guaranteed to be printed for all x, y, z, and n.
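And indeed they aren't always the same. One concrete case (my own quick check, assuming IEEE doubles) is x = 5.0, n = 3.0, where the two forms differ by one ulp:

    #include <stdio.h>

    int main(void) {
        double x = 5.0, n = 3.0;
        double direct = x / n;           /* one correctly rounded division */
        double recip  = x * (1.0 / n);   /* two roundings: 1/n, then the multiply */
        printf("direct: %.17g\nrecip:  %.17g\nequal:  %d\n",
               direct, recip, direct == recip);
        /* prints 1.6666666666666667 vs 1.6666666666666665 -> equal: 0 */
        return 0;
    }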
No, it's not a compiler bug, or even necessarily an unwelcome optimization. It's a more precise answer than the original two expressions would have produced, and precision is ultimately implementation-defined. The only thing you can really say is that it's not strictly conforming in the standard's sense, which is true of all FP.
I read up a bit more on floating point handling in C99 onwards (I don't know about C89; I misplaced my copy of the standard), and expressions are allowed to be contracted unless that is disabled with the FP_CONTRACT pragma (sketched below). So again, this is entirely within the bounds of what the C standard explicitly allows. If you need stronger guarantees about the results of floating point operations, you should disable expression contraction with the pragma, in which case (from further reading), assuming __STDC_IEC_559__ is defined, the compiler should strictly conform to the relevant annex.
Anyone who regularly works with floating point in C and expects precision guarantees should therefore read that relevant portion of the standard.
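For reference, the pragma usage looks like this (a sketch; note that support varies in practice — Clang honours it, while GCC has historically ignored the STDC pragmas):

    /* C99 7.12.2: forbid contracting of floating expressions in this scope. */
    #pragma STDC FP_CONTRACT OFF

    double dot2(double a, double b, double c, double d) {
        /* With contraction off, each multiply and the add must be
           individually rounded; a single fma is not allowed here. */
        return a * b + c * d;
    }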
"Strictly conforming" has a specific meaning in the standard, including that all observable outputs of a program should not depend on implementation defined behavior like the precision of floating point computations.
It can be controlled through compiler options like -ffp-contract.
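For example (flag spellings as accepted by current GCC and Clang; the exact semantics of =on and the defaults vary by compiler and version):

    /*   cc -O2 -ffp-contract=off  f.c   -- every * and + rounded separately
     *   cc -O2 -ffp-contract=on   f.c   -- contract within one expression only
     *   cc -O2 -ffp-contract=fast f.c   -- contract across statements as well
     */
    double muladd(double a, double b, double c) {
        return a * b + c;   /* fma candidate when contraction is allowed */
    }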
In my opinion, every team discovers their compiler's floating point options the hard way, through painful bug fixing :) And I am still in shock that many game projects ship with fast math enabled.
Perhaps it's not "well defined" enough for you, but one example I've been stamping out recently is whether compilers will combine subexpressions across expression boundaries. For example, if you have z = x + y; a = b * z; will the compiler optimize across the semicolon to produce an fma? GCC does it aggressively, while Clang broadly will not (though it can happen in the LLVM backend).