Hacker News | jwp729's comments

I'm sorry to hear about your loss. I'm glad you're able to get help through counseling.


Shouldn't this article have appeared in Nurture magazine?


You, sir, deserve the Milli Vanilli box set.


Facial recognition is possible using the IBM Watson Visual Recognition service.
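As a sketch of what a call to that service might look like, the snippet below builds the request URL for the face-detection endpoint. The base URL, path, and parameter names are assumptions based on the 2016-era v3 REST API and may differ in later releases; no network call is made.

```python
# Sketch of a Watson Visual Recognition face-detection request.
# Endpoint path and parameter names are assumptions from the 2016-era
# v3 API and may have changed; this only constructs the URL.
from urllib.parse import urlencode

BASE = "https://gateway-a.watsonplatform.net/visual-recognition/api/v3"

def detect_faces_url(api_key, image_url, version="2016-05-20"):
    """Build the GET request URL for face detection on a public image."""
    params = urlencode({"api_key": api_key, "url": image_url, "version": version})
    return f"{BASE}/detect_faces?{params}"

print(detect_faces_url("YOUR_API_KEY", "https://example.com/photo.jpg"))
```

Sending that URL with any HTTP client would return JSON describing detected faces, under the stated assumptions about the API.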


If you tried the Watson vision offering when it was "AlchemyVision" then you may have tried a now out-of-date version of the service. The AlchemyVision and Visual Recognition tiles on Bluemix have recently been combined in a way that utilizes their complementary strengths. Consider retrying the updated service if you'd like!

Disclosure: I work at IBM Watson.


Ok, my bad. I just saw that AlchemyVision was merged into Visual Recognition starting May 20th. We will definitely check it out to see what extra features have been added.

qq - Is the API stabilized, by which I mean will there be further changes/merges?


The API has stabilized. There will still be future changes, but I would expect to see those as the service gets new features added (like retraining).


It could also be indicative of future market moves in other markets too, even if those moves are not crises. The large moves in commodities have been anomalously _uncorrelated_ with equity markets, though historically, over long enough windows, the two move together. This raises the question of whether that relationship has broken or whether one market (commodities or equities) will budge in the direction of the other soon. My crystal ball is too smudgy to know this myself...
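One way to eyeball the claim is a rolling correlation between the two return series. The sketch below uses synthetic data (a shared factor plus independent noise), so the numbers are illustrative only; with real commodity and equity returns you would substitute the actual series.

```python
# Toy rolling-correlation check between two return series.
# Synthetic data: "commodities" share a common factor with "equities"
# plus independent noise, so the long-run correlation is positive.
import numpy as np

rng = np.random.default_rng(0)
n = 500
equities = rng.normal(0, 1, n)
commodities = 0.6 * equities + 0.8 * rng.normal(0, 1, n)

def rolling_corr(x, y, window):
    """Pearson correlation over each sliding window of the two series."""
    return np.array([np.corrcoef(x[i:i + window], y[i:i + window])[0, 1]
                     for i in range(len(x) - window + 1)])

corr = rolling_corr(equities, commodities, window=60)
print(f"mean rolling correlation: {corr.mean():.2f}")
```

On real data, a sustained drop of this rolling correlation toward zero would be the "uncorrelated" anomaly described above.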


Surprised there's no mention of the Curry-Howard correspondence between proofs and programs.


Mention? The title led me to believe that was the topic.


I think the information theoretic approach to modeling concerns actually implies such "simpler is better" approaches as Occam's Razor. At least that's my take on [http://arxiv.org/abs/cond-mat/9601030], which derives a quantitative form of it.


I haven't read that paper, and the abstract makes my head spin! I'll have a look later, and try and figure out the argument. I agree with you that things like the I-measure are based on the idea that simpler is good, and it works well in practice - both in Machine Learning and in the real world - which is why humans tend to prefer it. But (the paper you cite aside) I don't know of a deep reason why simple is preferred by nature.

Also, there may be a deep cognitive bias here: perhaps we lack the machinery to understand the world as it really is!
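The "simpler is better" intuition can be made concrete with a toy two-part description-length comparison: the total cost of a model plus its residuals, versus memorizing the data verbatim. The bit counts below are a rough illustration of the MDL idea, not a rigorous derivation from the cited paper.

```python
# Toy minimum-description-length comparison on a perfectly linear sequence.
# Cost is measured in (very rough) bits; the point is only that the simple
# model's total description is far shorter than memorizing every value.
def bits(n):
    """Bits to write a non-negative integer (at least 1)."""
    return max(n.bit_length(), 1)

data = [7 + 3 * i for i in range(50)]  # 7, 10, 13, ...

# Model 1: memorize every value verbatim.
memorize_cost = sum(bits(x) for x in data)

# Model 2: store (start, step); residuals are all zero, ~1 bit each.
simple_cost = bits(7) + bits(3) + len(data) * 1

print(memorize_cost, simple_cost)
```

The simple model wins by a wide margin here, which is the quantitative sense in which an information measure "implies" Occam's Razor.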


Maybe not self-reference but reference to the class to which it belongs, cf. Quine's paradox:

"Yields falsehood when preceded by its quotation" yields falsehood when preceded by its quotation.


Can you explain a bit more why the recurrent network structure becomes necessary at some point? Is that because reversing a CNN naturally means rendering by (de)convolution?


In order to approximately learn a "real" graphics engine with support for basic physics, just feed-forward computation might not be sufficient. A more natural way to learn graphics/physics might be to learn the temporal structure more explicitly. On the other hand, it might also be interesting to just add temporal convolution-deconvolution structure in the existing model. This is work in progress though.
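For intuition about the "rendering by (de)convolution" idea, here is a minimal 1-D transposed convolution, the upsampling building block a decoder network would stack to map a small feature vector back toward a larger (pixel-like) signal. This is a generic sketch, not the architecture from the work discussed above.

```python
# Minimal 1-D transposed ("de") convolution: each input value spreads a
# scaled copy of the kernel into the (larger) output, upsampling by `stride`.
import numpy as np

def deconv1d(x, kernel, stride=2):
    """Transposed convolution of 1-D input x with the given kernel."""
    out = np.zeros(stride * (len(x) - 1) + len(kernel))
    for i, v in enumerate(x):
        out[i * stride : i * stride + len(kernel)] += v * kernel
    return out

x = np.array([1.0, 2.0])
k = np.array([1.0, 1.0, 1.0])
print(deconv1d(x, k, stride=2))  # -> [1. 1. 3. 2. 2.]
```

Stacking such layers (with learned kernels) is the feed-forward "renderer"; the comment above is about whether that alone suffices, or whether recurrence is needed to capture temporal structure.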

