Off-topic, but I think languages should have special float types that trigger the use of fast math. That way, a programmer can control more precisely which parts of a program use approximate floating-point operations.
IMHO that's too fine-grained. Usually you determine which variables don't need strict accuracy rather than which code paths, so that's better controlled through types. Also, it's easy to limit approximations to a code block by casting to/from approximate types around it, so you still get fine-grained control.