If neuroscience is ever going to truly succeed, that link with mathematics had better get more firmly established (even if it's surely underestimated based on the journals sampled). With the number of neurons and possible synapses involved, advanced computational work is needed to tame the complexity. The NIPS conference, for instance, is already decently big. But we need many more folks from that physics and engineering cluster.
It seems to me (though I am very biased!) that most of the landmark papers on the aggregate behavior of neural networks come from physics (e.g., Hopfield networks) and from computer science (e.g., Minsky and Papert's Perceptrons).
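For concreteness, the Hopfield idea fits in a few lines of code. This is a toy sketch, not the original paper's setup; the network size, number of patterns, and corruption level are all arbitrary choices for illustration. Patterns are stored with a Hebbian outer-product rule, then recovered from a corrupted cue by iterating the update rule.

```python
# Toy Hopfield network: Hebbian storage plus sign-threshold recall.
# All sizes and parameters here are illustrative, not from any paper.
import numpy as np

def train(patterns):
    """Hebbian outer-product rule; patterns are rows of +/-1 values."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)  # no self-connections
    return W / n

def recall(W, state, steps=20):
    """Iterate synchronous sign updates until the state settles."""
    for _ in range(steps):
        new = np.sign(W @ state)
        new[new == 0] = 1
        if np.array_equal(new, state):
            break
        state = new
    return state

rng = np.random.default_rng(0)
patterns = rng.choice([-1.0, 1.0], size=(3, 64))  # 3 patterns, 64 units
W = train(patterns)
cue = patterns[0].copy()
cue[:8] *= -1  # corrupt 8 of the 64 bits
recalled = recall(W, cue)
print((recalled == patterns[0]).mean())  # fraction of bits recovered
```

The point of the exercise is that content-addressable memory falls out of a very simple energy-minimizing dynamics; 3 patterns in 64 units is comfortably under the classic ~0.14N capacity estimate, so recall from a mildly corrupted cue is reliable.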
Some are. Some aren't. It depends on the instantiation details. Folks do seem much more ready to implement at a biologically plausible level these days, though, than they were even a few years ago.
My point is that we learn much more about neurobiology from a biological viewpoint than by studying artificial networks from a computer science perspective. If anything, the biology informs the computer science as you suggested.
It's more bidirectional than that. Higher-order cognition (language, memory, even perception and attention) isn't so easily reducible. Take the hippocampus: sure, we can simulate its circuitry with precision, but that doesn't explain memory formation and retrieval. More "artificial" approaches can help explain systems from the top down, even as biological constraints are more rigid from the bottom up. The two modeling approaches are likely to meet somewhere in the middle. The computational shortcuts in the more abstract models (e.g., backprop) are really just shorthand that lets investigators set aside the less biologically driven details, or the ones not yet understood in biological terms.
For instance, I know of one group using analytic techniques from social networks to correlate brain regions in fMRI data. Is the brain a massive social network? I don't think anyone would say that literally. But right now, that approach is as good as any other for examining n-dimensional relationships in highly complex data.
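To make that concrete, here's a toy version of what I imagine such a pipeline looks like. The data, the two-cluster structure, and the 0.5 threshold are all invented for illustration, not that group's actual method: correlate per-region time series, threshold the correlation matrix into an adjacency graph, and read off a basic network statistic like degree.

```python
# Toy "functional connectivity" network: synthetic region time series,
# correlation matrix, thresholded adjacency, and node degrees.
# All data and parameters are made up for illustration.
import numpy as np

rng = np.random.default_rng(1)
n_timepoints = 200

# Fake "fMRI" signals: two clusters of 3 regions, each cluster sharing
# a common driving signal plus independent noise.
shared_a = rng.standard_normal(n_timepoints)
shared_b = rng.standard_normal(n_timepoints)
signals = np.vstack(
    [shared_a + 0.5 * rng.standard_normal(n_timepoints) for _ in range(3)]
    + [shared_b + 0.5 * rng.standard_normal(n_timepoints) for _ in range(3)]
)

corr = np.corrcoef(signals)        # region-by-region correlations
adjacency = np.abs(corr) > 0.5     # arbitrary edge threshold
np.fill_diagonal(adjacency, False)
degree = adjacency.sum(axis=1)     # a basic graph statistic per region
print(degree)
```

Nobody is claiming the brain literally is a social graph; the payoff is that once you have an adjacency matrix, the whole network-analysis toolbox (degree, clustering, community detection) applies unchanged.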
This discussion reminds me of Paul Krugman's argument for cartoon models. I personally think that we can isolate, and therefore explain, simple parts of aggregate neural behavior by artificial construction more easily than we can by doing careful biology.
Incidentally, you might be interested to know that restricted Boltzmann machines are much more biologically plausible than backprop, and they seem to train faster and work better.
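For anyone curious what that looks like, here is a bare-bones RBM trained with one-step contrastive divergence (CD-1). This is a generic textbook sketch, not any particular group's code; the layer sizes, learning rate, iteration count, and toy data are all arbitrary. The biological-plausibility point is visible in the update rule: it is local and Hebbian-flavored (products of pre- and post-synaptic activity), with no backpropagated error signal.

```python
# Minimal RBM with CD-1 on toy binary data. Sizes and hyperparameters
# are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_visible, n_hidden = 6, 4
W = 0.1 * rng.standard_normal((n_visible, n_hidden))
b_v = np.zeros(n_visible)  # visible biases
b_h = np.zeros(n_hidden)   # hidden biases
lr = 0.1

# Toy data: the RBM should learn to reconstruct these two patterns.
data = np.array([[1, 1, 1, 0, 0, 0],
                 [0, 0, 0, 1, 1, 1]], dtype=float)

for _ in range(2000):
    v0 = data
    # Positive phase: sample hidden units given the data.
    p_h0 = sigmoid(v0 @ W + b_h)
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
    # Negative phase: one reconstruction step (hence "CD-1").
    p_v1 = sigmoid(h0 @ W.T + b_v)
    v1 = (rng.random(p_v1.shape) < p_v1).astype(float)
    p_h1 = sigmoid(v1 @ W + b_h)
    # Local, Hebbian-style updates: correlations under the data minus
    # correlations under the model's reconstruction.
    W += lr * (v0.T @ p_h0 - v1.T @ p_h1) / len(data)
    b_v += lr * (v0 - v1).mean(axis=0)
    b_h += lr * (p_h0 - p_h1).mean(axis=0)

# After training, reconstruction error on the data should be small.
p_h = sigmoid(data @ W + b_h)
recon = sigmoid(p_h @ W.T + b_v)
print(np.abs(data - recon).mean())
```

Contrast this with backprop: there is no global error signal propagated backward through the net, just two phases of locally computed activity statistics, which is a big part of the plausibility argument.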
I think you're misunderstanding the role of "neural networks" in academia. NIPS has a lot of value to the machine learning and AI communities but, beyond vague inspiration, it has almost no connection to neuroscience. There is a substantial body of work in computational modeling of neuronal behavior, but that work is much messier (PDEs with biologically determined constants) and more limited in scope than the papers that appear at NIPS.
edit: Relevant conferences in computational neuroscience -
I don't really know much about neuroscience proper. I tend to look at the simplified artificial models and synthesize what might be possible, rather than look at the biology at all.