
I worked on a similar pipeline for renal cell carcinoma a few years ago [1], although we only published a small subset of results since parts of the pipeline (e.g., finding representative tiles, survival prediction) had better results being produced elsewhere in the lab.

Regarding the hook in the headline -- computers surpassing pathologists -- it's a bit like automated driving in that even if true the immediate problem is the social and economic system. That is, we're not going to be removing the pathologist from the diagnostic and prognostic process anytime soon for many reasons, so how instead do we leverage machine learning in concert with the human observer to improve the diagnostic system? For that reason, decision introspection may be as valuable a topic of research as improving classification accuracy: justifying a particular automated classification to the pathologist, directing them to representative regions, and describing regions of feature space in biological terms.

[1] http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6945104



Very good point. Even if this particular study isn't a "breakthrough," it's likely only a matter of time before AI / deep learning (whatever term we're supposed to use) can interpret visual data better than a human. The real barriers to adopting such a technology are significant, and you can see them playing out in the much-hyped self-driving-car race. Many people seem to believe the technology problem has already been solved, or will be shortly, but regulations, public perception, and adjacent industries (e.g., insurance) must adapt before we actually see self-driving cars for the common man. These tend to be slow-moving beasts.

Given how many applications we're seeing for AI, there may be a battle for the limited attention of lawmakers and regulators over which AI application goes into production first; it's not like Congress has been especially productive over the last many years.


This! As far as I can remember, the experience in chess is that a human+computer team beats either of them working alone (sorry, can't find a reference just now).


I would love to believe this is true; unfortunately, I do not think it is.

Computer chess algorithms completely blow away human competitors. The strongest human, Magnus Carlsen, has an Elo rating of 2857 [0].

Stockfish, the strongest chess engine (open source, btw), has an Elo rating of 3445 [1].

Computer chess algorithms are so much stronger than humans that if the human second-guesses the algorithm, the human is probably wrong.

You may have been thinking of this BBC article [2], in which amateur cyborg players beat grandmaster cyborg players -- the amateurs were crunching additional metadata about which situations were best for their play. However, they didn't beat Stockfish; they beat other cyborg players.

[0] https://ratings.fide.com/card.phtml?event=1503014

[1] http://www.computerchess.org.uk/ccrl/404/

[2] http://www.bbc.com/future/story/20151201-the-cyborg-chess-pl...


> I would love to believe this is true, unfortunately I do not think it is.

This is the blog post that introduced me to the idea that human + computer might be better than a computer alone: https://rjlipton.wordpress.com/2015/07/28/playing-chess-with...

It's called Freestyle Chess or Advanced Chess. Humans have beaten the best computers this way, but I'm not sure it is clear that human + computer outperforms a computer alone consistently.

Incidentally, that's a great blog to browse if you like chess and math/CS theory.


You must be right about the cyborgs. I read the same thing in "Race Against the Machine" by Erik Brynjolfsson and Andrew McAfee [1], on page 54, where the authors cite Kasparov, who wrote (in 2010, regarding the cyborg thing) [2]:

The surprise came at the conclusion of the event. The winner was revealed to be not a grandmaster with a state-of-the-art PC but a pair of amateur American chess players using three computers at the same time. Their skill at manipulating and “coaching” their computers to look very deeply into positions effectively counteracted the superior chess understanding of their grandmaster opponents and the greater computational power of other participants. Weak human + machine + better process was superior to a strong computer alone and, more remarkably, superior to a strong human + machine + inferior process.

It sounds like computers have since improved enough that humans no longer help.

[1] https://www.amazon.com/Race-Against-Machine-Accelerating-Pro...

[2] http://www.nybooks.com/articles/2010/02/11/the-chess-master-...


The top humans and top computers never play each other anymore, so the rating pools are independent and not comparable. I suppose Stockfish would grind Magnus down in a serious match, but only because a human player is susceptible to fatigue. The teams that tune engines for engine-vs-engine matches would never dream of letting them make up their own moves in the opening phase; instead they use human opening books, an acknowledgement that humans understand the game better.


>an acknowledgement that humans understand the game better. //

How is that true? I'd imagine it's to reduce the search space against human players and to make use of records of games played - a large corpus of which have involved traditional openings. It doesn't seem to imply the acknowledgement claimed?


Sorry, I "replied" in a sibling node since no reply link was present at the time (maybe reply links only appear after a timeout?).


pbhjpbhj's comment doesn't have a reply link for some reason, so I will reply here. Left to their own devices, computers play crude, simple developing moves in the opening. That's because there are no tactics until the pieces come into contact; it's all strategy, where humans still have an advantage. Decades of experience have produced a corpus of classic, subtle opening strategies that computers don't rediscover. For example, in some King's Indian positions Black has precisely one good plan; a good human player will initiate this plan with the move ...f5 automatically, while an engine will juggle ...f5 with a bunch of other slow moves (all of which are irrelevant) and might not play it. If you look through the pages of "New In Chess" magazine you will see the top masters analysing their games and saying things like: "The computer does not understand this type of position -- it rejects $some-good-human-move, but eventually, when you lead it down the right path, it changes its mind."

All of this is not to say that computers have not overtaken humans in chess. They have -- but primarily because they have superb qualifications as practical players: they never make crude errors, and all humans on occasion do. That trumps all other considerations. Still, the vast gap you see when comparing the human lists and the computer lists is exaggerated -- 500 Elo points means a 95% expectation. I am 100% convinced that if Magnus could be properly motivated (think big $$$ and somehow convincing him that scoring a decent proportion of draws as White would be a "win") he could deliver for humanity :-)
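The "95% expectation" figure follows from the standard logistic Elo formula, where a rating difference of D points gives the stronger player an expected score of 1 / (1 + 10^(-D/400)). A minimal sketch (the function name is mine):

```python
def expected_score(rating_a: float, rating_b: float) -> float:
    """Expected score for player A vs. B (win = 1, draw = 0.5, loss = 0),
    per the standard logistic Elo formula."""
    return 1.0 / (1.0 + 10.0 ** ((rating_b - rating_a) / 400.0))

# A 500-point gap gives the stronger side roughly a 95% expected score:
print(round(expected_score(500, 0), 3))      # → 0.947
# The Stockfish (3445) vs. Carlsen (2857) gap of 588 points:
print(round(expected_score(3445, 2857), 3))  # → 0.967
```

Note that "expectation" here is an expected score over many games, not a win probability: a 95% expectation could equally come from winning 95% of games or from winning 90% and drawing the rest.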


> I am 100% convinced that if Magnus could be properly motivated (think big $$$ and somehow convincing him that scoring a decent proportion of draws as White would be a "win") he could deliver for humanity :-)

Interesting theory. Personally, I'm not sure it's clear that good strategy can overcome ruthless tactical precision. I'm also not sure a human could ever be motivated to achieve a 0.00% tactical blunder rate. (Much as I would love to see human strategy defeat computer tactics.)


> I am 100% convinced that if Magnus could be properly motivated (think big $$$ and somehow convincing him that scoring a decent proportion of draws as White would be a "win") he could deliver for humanity :-)

You are 100% wrong. Computers overtook humans at chess in 1995. Unless there is some insight Magnus Carlsen or any other GM has which they are not sharing, it will remain that way.


Even at equal capability a person who never tired, seldom erred, never forgot a sequence, never got distracted or emotional would surely be superior at chess?


If you think there is no way Magnus Carlsen could steer a few games to draws, you really don't know much about chess.


I am a 2200 Elo rated player. There will be more to the game than Magnus steering "a few games to draws." It would also depend on how many games are agreed for the match: over a suitably large number of games, the computer could simply force many dull draws and then lash out with a strong tactical game. Humans have the extra dimensions of emotion and fatigue to deal with. Switching from positional play to long tactical sequences does not play to human strengths; in addition, getting a draw from a "mission-programmed" computer may not be trivial, since it does not need to choose the most direct route to the draw.

Another factor is that it is trivial to change a computer's repertoire of openings, and there is a wide choice of these. Humans, including Magnus, require weeks to months of preparation before they are ready to play new openings or deviate from prepared lines.

Finally (and I freely admit that this is my own personal opinion), Magnus Carlsen may not be our best choice of human to play against a computer. There is an unmistakable emotional fragility to him which manifests when he is losing (cf. his games with Anand); a good deal of his strength lies in the early middlegame and the ending, but computers are superior in the latter; and he often wins games out of sheer stamina -- a strategy that won't work against the silicon beast.


The original point I was making was simply that the >500-point Elo delta is an exaggeration, so I only need Magnus to steer a few games to draws to be correct.


Agreed. I published a paper on exactly that (augmenting scientists searching for novel materials, rather than replacing scientists entirely). One of the benefits is that you can often model simpler problems, which are more tractable.

http://www.nature.com/nature/journal/v533/n7601/full/nature1...



