The JVM Language Summit 2010 (olabini.com)
36 points by fogus on July 29, 2010 | hide | past | favorite | 17 comments


Can someone explain the Java/Clojure code in the article? I'm assuming the JVM is able to make some sort of optimization because o was set to null that it wouldn't have been able to make otherwise?


The goal of this trick is to reduce the memory required when the argument to the count function is a lazy sequence. When iterating through a lazy sequence, the items in the sequence are realized and retained by the sequence.

The code clears the stack slot for the count method argument. If the object referenced by the argument is not referenced elsewhere, then the object is eligible for gc during the execution of the count method. In the case where the argument is a lazy sequence, this changes the memory requirement from the entire realized sequence to a single item in the sequence.

This trick is employed everywhere in the Clojure implementation where a lazy sequence might be retained. It's not specific to the count function.
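To make the idiom concrete, here is a standalone Java sketch of the pattern (this is illustrative, not the actual Clojure source: the `ret1` helper and the two-element cons-cell representation are assumptions for the demo; Clojure's runtime uses a similar helper internally). Java evaluates method arguments left to right, so `o` is read first and the stack slot is cleared before the call runs:

```java
public class ClearingDemo {
    // ret1 simply returns its first argument. Its purpose is to give the
    // caller a place to evaluate "o = null" as a side effect before the
    // callee starts executing.
    static Object ret1(Object ret, Object ignored) {
        return ret;
    }

    // Counts a chain of two-element [value, next] cells. Each cell becomes
    // unreachable (and eligible for GC) as soon as we step past it.
    static int countFrom(Object o) {
        int n = 0;
        Object[] cell = (Object[]) o;
        while (cell != null) {
            n++;
            cell = (Object[]) cell[1]; // previous cell no longer referenced here
        }
        return n;
    }

    static int count(Object o) {
        // "o = null" clears this frame's slot, so this frame no longer pins
        // the head of the sequence while countFrom walks it.
        return countFrom(ret1(o, o = null));
    }

    public static void main(String[] args) {
        Object list = null;
        for (int i = 4; i >= 0; i--) {
            list = new Object[]{i, list};
        }
        System.out.println(count(list)); // prints 5
    }
}
```

Without the clearing, the `o` slot in `count`'s frame would keep the whole chain reachable for the duration of the call.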


> Another funny anecdote was when Doug Lea pointed out that if you use fibonacci to test performance against yourself or others,

... then it means you're completely disconnected from reality?

Edit: Can a down-modder make a convincing argument that a performance benchmark using Fibonacci is merely useless, rather than worse than useless by being an appealing lure toward optimizing the wrong things?


What do you propose as an alternative? A two-line function that tests arithmetic and recursion is useful.
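For reference, the benchmark under discussion is presumably the naive doubly recursive version, something like:

```java
public class Fib {
    // Naive doubly recursive Fibonacci: per call it does nothing but a
    // comparison, two recursive calls, and one addition, which is exactly
    // what the disagreement in this thread is about.
    static long fib(int n) {
        return n < 2 ? n : fib(n - 1) + fib(n - 2);
    }

    public static void main(String[] args) {
        System.out.println(fib(30)); // prints 832040
    }
}
```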


> What do you propose as an alternative?

Something useful that uses recursion, perhaps. A DFS comes to mind, or countless other common real-world algorithms.

When do you ever care about recursion performance in the context of a function that does nothing but one addition? You might as well compare languages on how well a billion iterations of an empty loop performs.


If you don't care about the performance of recursion in that context, why do you care about it in any other context? Recursion doesn't magically become different when there are file system calls involved. Surely adding other things to the function would only confuse the benchmark.

Your objection is like looking at a floating-point benchmark and thinking, "No real program just crunches doubles. It should include networking code and exception handling."


It should crunch doubles in a way typical to what a real-world program would. I'm fine with a ray tracer or something like that.

The point is I don't care about the performance of recursion for its own sake in a context for which there is no serious use. You might, but then you are disconnected from reality, as I said. That isn't an insult, it's just true by definition.


The point is that the performance of recursion is not very context-dependent, and more complex contexts are less capable of accurately measuring the performance of any single operation — you don't know what elements of the function are slow or fast unless you measure them in relative isolation. Like, OK, so it runs that function fast — that tells me nothing about how any other function will perform, because you haven't determined what's fast and slow.

Doing "benchmarks" with huge functions that do a lot of unrelated things is like "unit testing" a program by running it and seeing if it crashes. It tells you something, but that something is pretty vague.


Why would the performance of recursion not be very context-dependent? I would expect it to be extremely context-dependent: how much memory is being put on the stack, cache hits and misses, and so forth. I can easily imagine optimizing for an unrealistic micro-benchmark in a way that actually hurts overall recursion performance. If there are any famous last words in optimization, it's that something isn't very context-dependent, no?


Of course it is not perfect, but it is one of the best ways to get a quick idea of the speed of a language. For example, if we do it in OCaml we get native machine integers and cheap function calls; if we do it in Ruby we get expensive integers and heavyweight procedure calls.

> I can easily imagine optimizing performance for an unrealistic micro-benchmark that would actually hurt overall recursion performance.

I can't. Can you explain this and give an example?


It's useful as a benchmark, you just have to be careful about which conclusions can actually be drawn from it.


Will these talks be available online?


You can find some of the notes from the Summit here - http://wiki.jvmlangsummit.com/Main_Page


Interesting that the post is from the future as of now... (21:04, July 29th, CEST).


Is it the 30th in another time zone yet?


yes

(This post was made at: 7:05 AM 30th July 2010 Sydney)


It is already the 30th in many countries, but the site is registered in Sweden (and probably hosted there too), and in Sweden it isn't the 30th yet.




