
Ratios are good to have in any language, or as a library, but they're surely slower and waste more memory (it's also hard to reason about how much, exactly), and they still have to settle for approximations of irrational numbers like pi, which come up a lot wherever floating-point numbers do (at least in my experience).


All correct, for exact rationals. Racket's exact rational arithmetic tends to take about 1000x the amount of time floating-point arithmetic takes to compute similar functions, and creates a lot of garbage on the heap. It gets worse, though: with long-running computations, exact rationals' numerators and denominators tend to grow without bound, unless you explicitly round them to a fixed precision. If you take the latter path, you might as well use bigfloats, which wrap MPFR's multi-precision floats, and are faster.
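A quick way to see the blow-up (a Python sketch using `fractions.Fraction` rather than Racket, purely for illustration): iterating even a simple rational map exactly roughly doubles the number of digits in the denominator at every step.

```python
from fractions import Fraction

# Iterate the logistic map x -> r*x*(1-x) with exact rationals.
# Numerator and denominator sizes roughly double on every step, so
# per-step time and memory grow without bound unless you round.
x = Fraction(1, 3)
r = Fraction(7, 2)
for step in range(1, 11):
    x = r * x * (1 - x)
    print(step, len(str(x.denominator)), "digits in the denominator")
```

After only ten steps the denominator already has hundreds of digits; a floating-point version of the same loop does constant work per step.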

Just looking at the Wu-Decimal page, though, I can't tell whether they're internally exact rationals or base-10 floats. If they're base-10 floats, they have all the same issues base-2 floats have. If they're internally exact rationals, I wonder what they do for division, which the set D isn't closed under.


Wu-Decimal uses exact rationals (they aren't base 10 floats). Division works according to normal Lisp semantics, since the CL ratio type is used for arithmetic. Let's say you divide something by 3 and now you have infinitely repeating digits: then it is no longer in set D, and Wu-Decimal no longer considers it to be of decimal type. Instead, it is treated as a fraction, again per standard CL semantics. The "Printing" example tries to clarify this (notice that 1/2 prints as '0.5' but 1/3 remains '1/3').
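That rule — a rational is "decimal" exactly when its reduced denominator has no prime factors other than 2 and 5 — is easy to sketch outside CL. Here's a hypothetical Python version (`is_decimal` and `show` are my names for illustration, not Wu-Decimal's API):

```python
from fractions import Fraction

def is_decimal(q: Fraction) -> bool:
    # q is in D (has a terminating decimal expansion) iff its reduced
    # denominator contains no prime factors other than 2 and 5.
    d = q.denominator
    for p in (2, 5):
        while d % p == 0:
            d //= p
    return d == 1

def show(q: Fraction) -> str:
    # Print members of D in decimal notation, everything else as n/d.
    if not is_decimal(q):
        return f"{q.numerator}/{q.denominator}"
    # Count factors of 2 and 5; scaling by 10**max(twos, fives) makes
    # the denominator a power of ten, then we place the decimal point.
    twos = fives = 0
    d = q.denominator
    while d % 2 == 0:
        d //= 2
        twos += 1
    while d % 5 == 0:
        d //= 5
        fives += 1
    k = max(twos, fives)
    n = q.numerator * 10**k // q.denominator
    s = str(abs(n)).rjust(k + 1, "0")
    sign = "-" if n < 0 else ""
    return sign + (s if k == 0 else s[:-k] + "." + s[-k:])

print(show(Fraction(1, 2)))  # 0.5
print(show(Fraction(1, 3)))  # 1/3
```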


I've monkeyed with a lot of numeric representations, and every one of them involves tough theoretical and practical trade-offs. Returning an exact rational as the result of division sounds reasonable.

My favorite representation so far pairs a float whose magnitude lies in [2^-512, 2^512) with a signed integer that extends the exponent range. The idea is to avoid overflow and underflow, particularly when multiplying thousands of probabilities.
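A rough sketch of that idea in Python (the class name and the 1024-bit block size are my own illustrative assumptions, not the parent's actual code): the value is m * 2^(1024 * shift), with m renormalized after each operation so no intermediate can overflow or underflow.

```python
import math

# Sketch: value = m * 2**(1024 * shift), with |m| kept near [2^-512, 2^512)
# so products of thousands of tiny probabilities never hit 0.0 or inf.
class XFloat:
    __slots__ = ("m", "shift")

    def __init__(self, m: float, shift: int = 0):
        if m == 0.0:
            self.m, self.shift = 0.0, 0
            return
        f, e = math.frexp(m)          # m == f * 2**e with 0.5 <= |f| < 1
        blocks = (e + 512) // 1024    # residual exponent lands in [-512, 512)
        self.m = math.ldexp(f, e - 1024 * blocks)
        self.shift = shift + blocks

    def __mul__(self, other: "XFloat") -> "XFloat":
        # Multiply the frexp mantissas (safe: |f1 * f2| < 1) and fold the
        # raw exponents into whole 1024-bit blocks before rescaling.
        f1, e1 = math.frexp(self.m)
        f2, e2 = math.frexp(other.m)
        e = e1 + e2
        blocks = (e + 512) // 1024
        return XFloat(math.ldexp(f1 * f2, e - 1024 * blocks),
                      self.shift + other.shift + blocks)

    def log(self) -> float:
        return math.log(abs(self.m)) + self.shift * 1024 * math.log(2.0)

# A plain double product of 10,000 factors of 1e-3 underflows to 0.0:
direct = 1.0
for _ in range(10_000):
    direct *= 1e-3
print(direct)  # 0.0

# The paired representation keeps near-full precision instead:
p = XFloat(1.0)
for _ in range(10_000):
    p = p * XFloat(1e-3)
print(p.log())  # about 10_000 * log(1e-3)
```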

(On average, adding a few thousand log probabilities or densities and then exponentiating the sum yields a number with about 9 digits of precision. The worst case for adding log probabilities and exponentiating is about 8 digits of precision, and for adding log densities it's 0 digits. Multiplying the same number of probabilities or densities directly retains about 13 digits in the worst case.)
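A back-of-envelope for where those digit counts come from (my own arithmetic, not the parent's): rounding the summed log to a double perturbs it by up to about |sum| * 2^-53 in absolute terms, and exp() turns an absolute error in the log into a relative error in the result.

```python
import math

# The sum of 10,000 logs of ~1e-3 probabilities is about -69078.
s = 10_000 * math.log(1e-3)
# Representing s as a double can be off by up to about |s| * 2**-53 ...
abs_err = abs(s) * 2.0 ** -53
# ... and exp(s + d) ~= exp(s) * (1 + d), so that absolute error in the
# log becomes the relative error of the exponentiated result:
print(abs_err)  # ~7.7e-12, i.e. only 11-12 significant digits survive
```

For log densities the summed magnitude can be far larger, which is how the worst case reaches 0 digits.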


If they're base-10 floats, they have all the same issues base-2 floats have

Nitpick: although base-10 floats have similar issues stemming from limited precision, they are superior to base-2 floats at representing decimal numbers without a binary approximation: e.g. 1/10 is the terminating 1.0 x 10^-1 in base 10 but repeats infinitely in binary, so in IEEE 754 binary32 you get 1.10011001100110011001101 (i.e. 1.60000002384185791015625) x 2^-4.
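The same contrast is easy to show with Python's `decimal` module (binary64 rather than binary32 here, but the effect is identical):

```python
from decimal import Decimal

print(Decimal("0.1"))  # 0.1 -- exact in base 10
print(Decimal(0.1))    # the exact value of the nearest binary64 double:
                       # 0.1000000000000000055511151231257827021181583404541015625
```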


Yes, but they're about as bad for representing base 12 numbers, so it evens out.



