
Ironic, you just illustrated misconceptions #3 and #4, while trying to educate me about why I was wrong to comment on misconception #2.

All of these notations are just notations for functions, and the functions can have any relation to reality that you want. Here are precise and accurate statements about quicksort using Θ notation. The average and best case performance of quicksort is Θ(n log(n)). The worst case performance of quicksort is Θ(n²).
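To make those quicksort statements concrete, here is a minimal comparison-counting sketch (the helper shape is my own; the first element is used as the pivot, which is what produces the classic worst case on already-sorted input):

```python
import random

def quicksort(xs, count=None):
    """Sort xs with first-element pivot, tallying comparisons in count[0]."""
    if count is None:
        count = [0]
    if len(xs) <= 1:
        return xs, count[0]
    pivot, rest = xs[0], xs[1:]
    count[0] += len(rest)  # one comparison per element against the pivot
    left = [x for x in rest if x < pivot]
    right = [x for x in rest if x >= pivot]
    ls, _ = quicksort(left, count)
    rs, _ = quicksort(right, count)
    return ls + [pivot] + rs, count[0]

n = 500
_, avg = quicksort(random.sample(range(n), n))  # random input: ~n log n comparisons
_, worst = quicksort(list(range(n)))            # sorted input: n(n-1)/2 = 124750 comparisons
print(avg, worst)
```

On random input the count lands near 1.39 n log₂ n; on sorted input it is exactly n(n-1)/2, matching the Θ(n log n) average case and Θ(n²) worst case above.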

I maintain that most decent hackers would make those same statements using O notation (which is true), but many would jump to correct someone who said that the expected performance of quicksort is O(n²). According to what is taught in CS, that "correction" is wrong. But given that language is established by usage, they aren't really wrong. And given that arbitrary upper bounds aren't very useful, the common usage makes sense.

Before disagreeing, tell me what value there is in the true statement that the expected run time of all major sorting algorithms is O(nⁿ).



I can tell you that all of the theorists I talk to say and write O, not Θ, when discussing algorithms. I think this is because the upper bound is generally more interesting, better understood, and more useful when reasoning. In fact, if one of them used Θ, my next question would be "Wait, do we even care about the lower bound?"

I'm not disagreeing with you, just pointing out it's not just hackers. And I think this behavior is because the lower bound generally isn't relevant.


This is why you care about the lower bound:

If I say something is O(n^n), you may come back and say "I wonder if it is O(n^2)."

If you say Omega(n^2), I know not to bother.
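For concreteness, the standard definitions (as given in most algorithms texts) that the whole disagreement rests on are:

```latex
f(n) = O(g(n)) \iff \exists\, c > 0,\ n_0 : f(n) \le c\, g(n) \ \text{for all } n \ge n_0
f(n) = \Omega(g(n)) \iff \exists\, c > 0,\ n_0 : f(n) \ge c\, g(n) \ \text{for all } n \ge n_0
f(n) = \Theta(g(n)) \iff f(n) = O(g(n)) \ \text{and}\ f(n) = \Omega(g(n))
```

So an Ω(n²) claim rules out hoping for anything faster, which is exactly the information a bare O bound cannot carry.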


In discussions with a theorist, they would never tell me that something is O(n^n) if it is not also Θ(n^n). My point is not on what people may say, but on what they do say. btilly made a point about programmers, not theorists.

I added the observation that even theorists - the people who are experts in this area - talk like that, and gave a possible reason why. My reason, which perhaps I did not communicate well, is that the lower bound is generally not of practical interest.


About: "In discussions with a theorist, they would never tell me that something is O(n^n) if it is not also Θ(n^n)". So you've never heard someone say, for example, that Strassen's algorithm shows that matrix multiplication for n x n matrices can be done with O(n^2.808) multiplications?
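The Strassen exponent quoted there comes straight from a recurrence, which is easy to check numerically (this is just the arithmetic behind the quoted bound, not anyone's implementation):

```python
import math

# Strassen replaces the 8 half-size block multiplications of the naive
# divide-and-conquer scheme with 7, giving T(n) = 7*T(n/2) + O(n^2),
# which the master theorem solves to T(n) = O(n^{log2 7}).
exponent = math.log2(7)
print(round(exponent, 3))  # ~2.807, the exponent in O(n^2.808)
```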

For some problems, the best known upper bounds don't match the best known lower bounds. Also, in cases where there are further improvements in the upper bounds, it doesn't automatically mean people stop talking about previous upper bounds.

(Ignore this whole comment if you specifically meant the function n^n, I have no examples relevant to that function.)


I literally meant n^n, which is also what I assumed sadga meant. That is, an upper bound which allows for absurdly fast growth.

I have discussed matrix multiplication's unknown lower bound with theorists, but more as a trivia item because there is a relatively recent result showing that it is O(n^2.3727) (http://www.cs.berkeley.edu/~virgi/matrixmult.pdf). That is, the unknown lower bound was the point of the conversation, and it was interesting because the lower and upper bound are not the same; it's still open research. The other point the theorist made during that conversation is similar to what I've said: getting a tighter bound on matrix multiplication is theoretically interesting, but there's not much practical application.

I was rather talking about when we discuss algorithms while trying to solve problems.



