Hacker News | sawwit's comments

There has been a huge memory leak in Preview since El Capitan, and the display of PDFs sometimes fails when you zoom in. I have no idea how many users are affected by this, but it seems strange that Apple hasn't fixed it yet. Perhaps it is just a tricky bug.


I somehow doubt that this is just a photo stylized by a computer. Aren't the eyes way out of proportion? http://turing.deepart.io/f/0.png


Yes, exactly. While the background in the picture of the baseball player doesn't look photographic, at least the proportions are realistic. But the proportions of the female face are completely unrealistic. How could it be based on a photograph of a real human?

Are these computer-generated images based on photographs of real subjects, or on photographs of stylized paintings?

I don't think this test is very interesting until we can see the images the computer-generated ones were based on.


Wait, so Steve Jobs wasn't a visionary. He was just a salesman exploring unexploited markets as any salesman does? Albert Einstein wasn't a visionary. He was just a physicist that explored unconsidered theories as any physicist does?

It seems to me that in philosophy there is just as much "groupthink" as in almost any human endeavor; there is maybe a little less in the hard sciences, where the systems give you feedback about whether an idea is correct or not.


You're just biased towards the so-called hard sciences. There is nothing about them specifically that prevents groupthink as you call it. Science deals with what is, not with what ought to be or shouldn't be and so on. Sure, for a scientist the universe kicks back, but that has nothing to do with how a scientist chooses what to work on in the first place, and what preconceived notions and frameworks that scientist is operating under. I could give countless examples of scientists comfortably working within ideological frameworks or reasoning using incorrect theories.

The whole point of philosophy is that it is meant to encourage freedom of thought and free-thinking individuals. That's its job spec. I disagree that “there is just as much "groupthink" as in almost any human endeavor” -- if that really is the case then philosophy is failing at what philosophy _ought_ to be succeeding at.

I'm not saying Bostrom isn't an extraordinarily good philosopher, I'm saying that seeing the big picture and going against conventional wisdom and intuition goes with the territory. Don't imagine that I'm running Bostrom down, I very much enjoy reading the guy and listening to his thought processes, I find him to be a very rigorous thinker.

Maybe it's a small quibble, of course he can be both.


A recent estimate of the bits per synapse found it to be an order of magnitude higher than previous estimates: http://www.eurekalert.org/pub_releases/2016-01/si-mco012016....


Skim.app is very suitable for LaTeX PDF preview (it even supports syncing, given you have an editor that supports it too, e.g. Emacs AUCTeX or TeXShop).


> The difference isn't easy to describe, but one such difference would be that a single extra stone can change a Go position value much more than a single pixel changes an image classification.

A CNN can still distinguish extremely subtle differences between various animal breeds, exceeding human performance in such tasks. Why was that advance not a warning sign? The rotational-translational invariance prior of the convolutional neural network probably helps because, by default, local changes of the patterns can massively change the output value without the need to train that subtle change for all translations. Also, AlphaGo does a tree search all the way to the game's end, which can probably easily detect such dramatic changes from single extra stones. Reality is likely much too unconstrained to be able to efficiently simulate such things.


Great achievement.

To summarize, I believe what they do is roughly this: First, they take a large collection of Go moves from expert players and learn a mapping from position to moves (a policy) using a convolutional neural network that simply takes the 19 x 19 board as input. Then they refine a copy of this mapping using reinforcement learning by letting the program play against other instances of the same program: For that they additionally train a mapping from a position to the probability that it will result in winning the game (the value of that state). With these two networks they navigate through state-space: First they produce a couple of learned expert moves given the current state of the board with the first neural network. Then they check the values of these moves and branch out over the best ones (among other heuristics). When some termination criterion is met, they pick the first move of the best branch and then it's the other player's turn.
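Very roughly, that search loop could be sketched like this. This is a toy on a 3x3 "board": `policy_net` and `value_net` are random stand-ins for the trained networks (all names are mine, not the paper's), and the real system uses Monte Carlo tree search rather than this plain best-first lookahead.

```python
import random

random.seed(0)

BOARD_CELLS = [(r, c) for r in range(3) for c in range(3)]  # toy 3x3 "board"

def legal_moves(board):
    return [m for m in BOARD_CELLS if m not in board]

def policy_net(board):
    # Stand-in for the learned policy: propose a few candidate moves.
    # The real network outputs move probabilities from the raw board.
    moves = legal_moves(board)
    random.shuffle(moves)
    return moves[:3]

def value_net(board):
    # Stand-in for the learned value: estimated win probability of a position.
    return random.random()

def lookahead(board, depth):
    # Expand the top policy moves and back up the best value-net estimate.
    if depth == 0 or not legal_moves(board):
        return value_net(board)
    return max(lookahead(board | {m}, depth - 1) for m in policy_net(board))

def choose_move(board, depth=2):
    # Pick the first move of the best-valued branch.
    return max(policy_net(board), key=lambda m: lookahead(board | {m}, depth - 1))

move = choose_move(frozenset())
print(move)
```

The key division of labor: the policy net keeps the branching factor small, and the value net lets you stop a branch without playing it out.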


they also train a mapping from the board state to the probability that a particular move will result in winning the game (the value of a particular move).

How is this calculated?

When some termination criterion is met

Were these criteria learned automatically, or coded/tweaked manually?


1. The value network is trained with gradient descent to minimize the difference between predicted outcome of a certain board position and the final outcome of the game. Actually they use the refined policy network for this training; but the original policy turns out to perform better during simulation (they conjecture it is because it contains more creative moves which are kind of averaged out in the refined one). I'm wondering why the value network can be better trained with the refined policy network.

2. They just run a certain number of simulations, i.e. they compute n different branches all the way to the end of the game with various heuristics.
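For (1), the objective is ordinary regression: minimize the squared difference between the predicted and final game outcome. A minimal sketch with a linear stand-in for the deep value network (the data and model here are purely illustrative, not the paper's architecture):

```python
# Toy "positions" are feature vectors; "outcome" is +1 for a win, -1 for a loss.
# The real value network is a deep CNN over the 19x19 board; this linear model
# just shows the same regression objective trained by gradient descent.

def predict(weights, features):
    return sum(w * f for w, f in zip(weights, features))

def train_value_net(samples, lr=0.05, epochs=200):
    weights = [0.0] * len(samples[0][0])
    for _ in range(epochs):
        for features, outcome in samples:
            error = predict(weights, features) - outcome  # gradient of squared loss
            weights = [w - lr * error * f for w, f in zip(weights, features)]
    return weights

# Made-up positions where the first feature decides the game.
samples = [([1.0, 0.3], 1.0), ([-1.0, 0.4], -1.0),
           ([1.0, -0.2], 1.0), ([-1.0, -0.5], -1.0)]
w = train_value_net(samples)
print(predict(w, [1.0, 0.0]))  # should be close to +1
```

In the paper the training targets come from games the (refined) policy plays against itself, which is what makes the self-generated data step interesting.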


If their learning material is based on expert human games, how can it ever get better than that?


This was the question which originally led me to lose faith in deep learning for solving go.

Existing research throws a bunch of professional games at a DCNN and trains it to predict the next move.

It generally does quite well but fails hilariously when you give it a situation which never comes up in pro games. Go involves lots of implicit threats which are rarely carried out. These networks learn to make the threats but, lacking training data, are incapable of following up.

The first step of creating AlphaGo worked the same way (and actually was worse at predicting the next move than the current state of the art), but DeepMind then took that base network and retrained it. Instead of playing the move a pro would play, it now plays the move most likely to result in a win.

For pros, this is the same move. But for AlphaGo, in this completely different MCTS environment, they are quite different. DeepMind then played the engine against older versions of itself and used reinforcement learning to make the network as accurate as possible.

They effectively used the human data to bootstrap a better player. The paper used a lot of other cool techniques and optimizations, but I think this one might be the coolest.
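The retraining step can be illustrated with a toy REINFORCE loop: a one-parameter policy choosing between two actions, where one action wins more often. Everything here is a made-up stand-in, not AlphaGo's actual update, but the shape is the same: nudge the policy toward the choices that led to wins.

```python
import math
import random

random.seed(0)

theta = 0.0  # logit for choosing action A over action B

def prob_a(t):
    return 1.0 / (1.0 + math.exp(-t))

for episode in range(2000):
    p = prob_a(theta)
    chose_a = random.random() < p
    # Pretend "game": action A wins 70% of the time, B only 30%.
    win = random.random() < (0.7 if chose_a else 0.3)
    reward = 1.0 if win else -1.0
    # Policy-gradient update: reward times d(log pi)/d(theta).
    grad = (1.0 - p) if chose_a else -p
    theta += 0.1 * reward * grad

print(prob_a(theta))  # should drift well above 0.5
```

Played against a frozen older copy of itself, a policy updated this way can keep improving past its supervised starting point, since wins rather than imitation drive the gradient.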


Fantastic explanation, thank you!


How can a human ever get better than their teacher?

In this case though they play and optimize against themselves


> How can a human ever get better than their teacher?

By learning from other teachers, and by applying original thought. Also, due to innately superior intelligence. If your IQ is 140, and that of the teacher is 105, you will eventually outstrip the teacher.


The question was rhetorical. And what is needed is aptitude for the specific task, not "IQ" ... the two are often very different.


I concluded that the all time no. 1 master Go Seigen's secret is 1. learn from all masters; 2. keep inventing/innovating. Most experts do 1 well, and are pretty much stuck there. Few are good at 2. I doubt if computers can invent/innovate.


I would have thought (he says casually) that some kind of genetic algorithm of introducing random moves and evaluating outcomes for success would be entirely possible, no?


There's a large space of random moves. How many are likely to be useful?
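For scale, a quick back-of-the-envelope calculation with commonly cited rough averages for Go (a branching factor around 250 and games around 150 moves long):

```python
import math

branching, depth = 250, 150  # commonly cited rough averages for Go
log10_sequences = depth * math.log10(branching)
print(round(log10_sequences))  # on the order of 10^360 move sequences
```

So undirected random mutation has essentially no chance of stumbling onto useful lines without a strong evaluation function to guide it.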


Do you ask that of natural evolution, too?


"I doubt if computers can invent/innovate."

Sheer ignorance.


It's because they have a much larger stack size than a human brain (which does not have a stack at all, but just various kinds of short-term memories). An expert Go player can realistically maybe consider 2-3 moves into the future and can have a rough idea about what will happen in the coming 10 moves, while this method does tree search all the way to the end of the game on multiple alternative paths for each move.


Not true. Professional go players read out 20+ moves consistently. Go Seigen's nemesis Kitani Minoru regularly read out 30-40 moves.

As an AGA amateur 4 dan, I read 10 moves pretty regularly, and that's including variations. And if the sequence includes joseki (known optimal sequences of 15-20+ moves), then pros will read even deeper...


Yes, the latter number was perhaps too conservative; no doubt about deeper predictions being easily possible, but I doubt even expert players consider many alternative paths in the search tree. They might recognize overall strategies which reach many moves into the future, but extensive consideration of what will happen in the upcoming moves is probably constrained to only a few steps; at least relative to the number and depth of paths that AlphaGo considers.


"while this method does tree search all the way to the end of the game"

No it doesn't. You seem quite happy to just make stuff up that you know nothing about, like "2-3 moves into the future".


If you took one expert and faced him against a room full of experts who all together decided on the next move, who would win?


The one expert, because the others would not be able to reach a decision on which move to play.


In fact, no. A big group of average experts appears to be better than a single super expert. This is the principal justification for the success of AI in oil prospecting (https://books.google.fr/books?id=6DNgIzFNSZsC&pg=SA30-PA5&lp...)
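A toy illustration of the statistical effect, assuming the experts' errors are independent (a big assumption for humans, and usually where this argument breaks down): the average of 50 noisy estimates beats one much more precise estimator.

```python
import random

random.seed(1)

TRUTH = 100.0
TRIALS = 2000

def estimate(noise_sd):
    # One expert's guess: the truth plus independent Gaussian noise.
    return random.gauss(TRUTH, noise_sd)

# A single "super expert" (sd=5) vs. the average of 50 average experts (sd=15).
super_err = sum(abs(estimate(5.0) - TRUTH) for _ in range(TRIALS)) / TRIALS
crowd_err = sum(
    abs(sum(estimate(15.0) for _ in range(50)) / 50 - TRUTH)
    for _ in range(TRIALS)
) / TRIALS

print(super_err, crowd_err)  # the crowd's average error should be smaller
```

Averaging 50 independent estimates shrinks the noise by a factor of sqrt(50), which more than makes up for each individual being three times noisier.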


Counterpoint: https://en.wikipedia.org/wiki/Kasparov_versus_the_World

I think a key missing component to crowd success on real expert knowledge (as opposed to trivia) is captured by the concept of prediction markets. (https://en.wikipedia.org/wiki/Prediction_market) The experts who are correct will make more money than the incorrect ones and eventually drive them out of the market for some particular area.


That's no counterpoint because the World team (of which I was a member) was made up of boobs on the internet, not players of Kasparov's strength, which was the premise of the question you responded to.


The easy thing about combining AI systems is that they don't argue. They don't try to change the opinion of the other experts. They don't try to argue with the entity that combines all opinions; every AI expert gets to state its opinion once.

With humans on the other hand, there will always be some discussion. And some human experts may be better at persuading other human experts or the combining entity.

I think it would be an interesting thing to try after they beat the number 1 player. Gather the top 10 (human) Go players and let them play as a team against AlphaGo.


This is nonsense. To combine AI systems requires a mechanism to combine their evaluations. The most effective way would be a feedback system, where each system uses evaluations from other systems as input to possibly modify its own evaluation, with the goal being consensus. This is simply a formalization of argumentation -- which can be rational; it doesn't have to be based on personal benefit. And generalized AI systems may well some day have personal motivations, as has been discussed at length.


This reminds me of the story of the Game of the Century, with Go Seigen's shinfuseki. https://en.wikipedia.org/wiki/List_of_go_games#.22The_Game_o...

https://en.wikipedia.org/wiki/Shinfuseki


the expert human games are used just to predict future moves


the key part is that they basically just play out possible move sequences, then further moves from those, and so on, get a probability of winning out of each path, and take the best. It is indeed a very artificial way to be intelligent.


I can't recommend the Tree Style Tabs FF add-on enough for these kinds of "large sessions that diverge into many sub-sessions". If you ctrl-click a link, this add-on creates a new child-tab of the current tab and loads the link there. It kind of builds little spanning trees of the WWW graph, which seems to be a very natural way of browsing the web, but you can also organize these tab trees completely freely with drag & drop.


Exactly. The divide between address bar and search bar is a much under-appreciated privacy feature.


How is FF ugly? I don't find the UI very different from Chrome and Safari.


I don't know what it is, but I agree. Something subtle and subconscious nudges me away from FF too. I think it's the loose-feeling UI, the default smiley face icon of the chat that I don't know anyone uses, the paper airplane as the share icon. It's that there is still no unified search/address bar unless you run a poorly executed extension, even if one would prefer it. It's that the tabs feel so childish with the far too rounded corners and loose padding, and it's even the few extra pixels of padding between the address bar and the tabs boundary. It's the small font of the address bar, it's the disproportionately large back button, it's the apparent lack of design and style requirements for extension icons that makes them look blurry, generally shitty, and bolted on in the UI, etc.

I know that all that maybe sounds rather petty and maybe there's something wrong with me and it's my biases for some reason, but it just all adds up to a fuzzy notion of childishness and less than "down to business" feel. I say that as someone that used to exclusively use FF and shunned Chrome for all the various reasons mentioned by others.


Care to list those reasons? Genuinely interested, and not sure which reasons you are referring to.


I don't get how he thinks it's ugly or different.

Not only is it very similar but, unlike Chrome or Safari, every aspect of it is customizable so you can make it look however you want. Out of the box you can rearrange the UI elements to match Chrome or Safari.


I absolutely hate the FF UI. I WANT to use FF; I actually switched a few months ago and did it for a couple of months. I THINK my problem (I say think because I feel like someone will measure the pixels and say I'm wrong) is that it feels like there is a ton of wasted space between the tabs, above the tabs, etc. I can't control the size of the top bars and they're huge compared to Chrome. I'm a fan of tiny icons and very, very compact UIs so that I can get more on the screen at once. Here's my comparison on a 28" monitor: https://www.dropbox.com/s/k32gln7s3ev5a7t/Screenshot%202016-...


You might try the "Custom Tab Width" add-on to shrink your Firefox tabs. I also like to set my toolbar bookmarks' names to an empty string, so my toolbar is just a bunch of bookmark favicons.

https://addons.mozilla.org/en-US/firefox/addon/custom-tab-wi...


Hang on, the Chrome UI at the top takes up a couple of mm more vertically than the FF one. What other differences are there?


For me it's the other way around: I use FF for everything but dev sessions, where I use Chrome (I prefer its dev tools).

I tried breaking the habit for a while but I ended up opening FF all the time by muscle memory.


Plugins can get you a very compact ui: http://paste.click/XfPZOR

I'm also a fan of keeping things as minimal as possible.


Perhaps give Tree Style Tabs on FF a try.


It's quite different, and because of XUL it also falls in the uncanny valley of looking like native widgets but not behaving exactly like them.


I'd say FF Developer Edition has the best UI out of all of them.

Nothing fits better next to a terminal window or a code editor.

