
I think you either haven't thought about this or you did your math wrong.

You need (2^e) + m + 1 bits (for e exponent bits, m fraction bits, plus a sign). That is more bits than would fit in the cheap machine integer type you have lying around, but it's not that many in real terms.

Let's try a tiny one first: the "half-precision" f16 type, with 5 bits of exponent, 10 bits of fraction, and 1 sign bit. We need 2^5 + 10 + 1 = 43 bits. That actually fits in the 64-bit signed integer type on a modern CPU.

Now let's try f64, the big daddy: 11 exponent bits, 52 fraction bits, 1 sign bit, so 2048 + 52 + 1 = 2101 bits in total. As I said, that doesn't fit in our machine integer types, but it's much smaller than a kilobyte of RAM.
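A minimal sketch of the idea in Python, whose built-in integers are arbitrary precision and so can stand in for the 2101-bit accumulator (the helper names `to_fixed` and `exact_sum` are my own, not from any library):

```python
from fractions import Fraction

# Every finite f64 is an integer multiple of 2^-1074 (the smallest subnormal),
# so scaling by 2^1074 turns each value into an exact integer.
SCALE = 1 << 1074

def to_fixed(x: float) -> int:
    f = Fraction(x) * SCALE      # Fraction(float) converts exactly
    assert f.denominator == 1    # holds for every finite f64
    return f.numerator

def exact_sum(xs) -> float:
    acc = sum(to_fixed(x) for x in xs)   # pure integer adds, no rounding
    return float(Fraction(acc, SCALE))   # one correctly rounded conversion

# Naive left-to-right summation loses the 1.0; the fixed-point sum does not.
print(sum([1e308, 1.0, -1e308]))        # 0.0
print(exact_sum([1e308, 1.0, -1e308]))  # 1.0
```

The only rounding happens once, at the very end, when the exact integer total is converted back to a float.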

Edited: I can't count, though it doesn't make a huge difference.




You also need some extra bits at the top so that it doesn't overflow (e.g., on an adversarial input filled with copies of the max finite value, followed by just as many copies of its negation, so that it sums to 0). The exact number depends on the maximum input length, but for arrays stored in addressable memory it adds up to no more than 64 or so.

Thanks, you're right about the accumulation overflow. I don't think you can actually need a full 64 extra bits, but sure, let's say 64 extra bits for a general-purpose in-memory algorithm. It's not cheap, but we shouldn't be surprised that it can be done.
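To make the guard-bit point concrete, here's a small Python check (big ints again standing in for the accumulator, same 2^1074 scaling as before): summing 2^10 copies of the largest finite f64 grows the accumulator by only 10 bits beyond what one value needs, and the matching negations cancel back to exactly zero.

```python
import sys
from fractions import Fraction

SCALE = 1 << 1074                         # scale finite f64 values to exact integers
big = int(Fraction(sys.float_info.max) * SCALE)

n = 1 << 10                               # 2^10 copies -> 10 guard bits
acc = big * n                             # accumulate n copies of +max
print(acc.bit_length())                   # 2108: 2098 bits for max itself, +10 guard
acc += -big * n                           # then n copies of -max
print(acc)                                # 0: the cancellation is exact
```

In general, summing n values needs about log2(n) guard bits, so 64 extra bits covers any array that fits in a 64-bit address space with room to spare.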



