If you know that Arrays are Functions, or equivalently that Functions are Arrays, in some sense, then Memoization is obvious. "Oh, yeah, of course": we should just store the answers, not recompute them.
This goes both ways. As modern CPUs get faster at arithmetic while storage speed doesn't keep up, sometimes, rather than use a pre-computed table and eat precious clock cycles waiting for the memory fetch, we should just recompute the answer each time we need it.
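As a concrete sketch of the tabulate-vs-recompute idea (the function and table names here are my own, purely illustrative): tabulate a function over a small finite domain once, then answer queries by indexing instead of recomputing.

```python
# Illustrative sketch: an array as a tabulated function.
# popcount and TABLE are hypothetical names, not from the thread.

def popcount(n: int) -> int:
    """Count set bits the slow way, one bit at a time."""
    count = 0
    while n:
        count += n & 1
        n >>= 1
    return count

# "Store the answers": the array *is* the function, restricted to 0..255.
TABLE = [popcount(i) for i in range(256)]

def popcount_fast(n: int) -> int:
    """Answer by lookup, 8 bits at a time, instead of recomputing."""
    total = 0
    while n:
        total += TABLE[n & 0xFF]
        n >>= 8
    return total
```

Whether the table or the recomputation wins depends on exactly the trade-off described above: on a modern CPU the bit loop may well beat a cache-missing memory fetch.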
I think this is an after-the-fact connection, rather than an intuitive discovery. I wouldn't explain memoization this way. Memoization doesn't need to specifically use an array, and depending on the argument types, indexing into the array could be very unusual.
>Well for example this insight explains Memoization
I don't think it does.
In fact I don't see (edit: the logical progression from one idea to the other) at all. Memoization is the natural conclusion of the thought process that begins with the disk/CPU trade-off and the idea that "some things are expensive to compute but cheap to store", aka caching.
Memoization works the other way around, though. You're turning part of a function into an array, and a traditional function is still in charge. It doesn't depend on arrays being functions.
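That "part of a function into an array, with the function still in charge" shape can be sketched in a few lines (a minimal illustration, assuming a dict as the lookup structure since the arguments needn't be array indices):

```python
# Minimal memoization sketch: the cache stands in for the "array" part,
# and the ordinary function remains in charge of anything not yet stored.

def memoize(f):
    cache = {}  # maps argument -> stored answer
    def wrapper(x):
        if x not in cache:
            cache[x] = f(x)  # the function fills in missing entries on demand
        return cache[x]
    return wrapper

@memoize
def fib(n: int) -> int:
    return n if n < 2 else fib(n - 1) + fib(n - 2)
```

Note the cache here is a dict keyed by the argument, not an array indexed by it, which is exactly why memoization doesn't hinge on arrays being functions.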
I would also reject the idea that "Arrays are Functions" is equivalent to "Functions are Arrays". They're both true in a sense, but they're not the same statement.
We're talking past each other because we're using different definitions.
If you actually read the article you'll see that the arrays and functions they're talking about are not necessarily the types you'll find in your typical programming language (with some exceptions, as others noted) but more in the area of pure math.
The insights one can gain from this pure math definition are still very much useful for real world programming though (e.g. memoization), you just have to be careful about the slightly different definitions/implementations.
> The insights one can gain from this pure math definition are still very much useful for real world programming though (e.g. memoization), you just have to be careful about the slightly different definitions/implementations.
I agree completely here. But I think that undermines some of the earlier claims. The math definition only serves as inspiration, we're not using the math definition when we memoize. And the important part you need for that inspiration is a lot narrower than full equivalence.
> And the important part you need for that inspiration is a lot narrower than full equivalence.
The blogpost is discussing exactly what you gain (and lose) when arrays and functions fit this strict definition, allowing a unification of the syntax and possible compiler optimizations.
I think the point they're making is exactly that having only a loose equivalence between arrays and functions might be a programming status quo that could be holding us back from a higher level abstraction.
> I think the point they're making is exactly that having only a loose equivalence between arrays and functions might be a programming status quo that could be holding us back from a higher level abstraction.
Maybe. I'll sit here ready for those insightful uses of stricter connections.
But the memoization example is very loose, and memoization is what I was replying to.
https://en.wikipedia.org/wiki/Memoization