Doesn't seem to be the same thing. OpenWorm is a computer simulation, while the OP plans to build a model based on observing an actual living individual [1].
That's about right. My end product is also a computer simulation, as far as that goes, but I'm taking an obsessively data-driven approach, motivated by the advent of new technologies for single-neuron measurement and perturbation in living, behaving animals. Meanwhile, OpenWorm is taking a bottom-up approach, driven by physics and other first principles. We'll meet in the middle eventually.
I want to point out that people should be wary of this recent wave of press surrounding "neural" computing. A lot of this involves trying to emulate biophysical properties of neurons we observe in the brain, but it is not clear how these properties affect function or if they are even necessary for the types of computations we are interested in emulating in silicon.
That being said, memristors are a fascinating piece of technology, and we need to make progress on all fronts if neurally-inspired computing is to become a reality. This latest development (capturing the nonlinear properties of sodium and potassium channels in a microelectronic device) is quite interesting, as these properties are crucial for reproducing the spiking behavior of real neurons. All I'm saying is take these results with a grain of salt; there is still a lot of work to do!
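For anyone curious what those channel nonlinearities actually buy you, here's a minimal sketch of the classic Hodgkin-Huxley squid-axon model in Python (textbook parameters, forward Euler integration; nothing here is specific to the memristor device itself, it just shows how the voltage-dependent sodium/potassium gating produces spiking):

```python
import numpy as np

# Classic Hodgkin-Huxley squid-axon model (V in mV relative to rest).
C, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3   # uF/cm^2 and mS/cm^2
E_Na, E_K, E_L = 115.0, -12.0, 10.6          # reversal potentials, mV

# Voltage-dependent opening/closing rates for the gating variables m, h, n.
def a_m(V): return 0.1 * (25 - V) / (np.exp((25 - V) / 10) - 1)
def b_m(V): return 4.0 * np.exp(-V / 18)
def a_h(V): return 0.07 * np.exp(-V / 20)
def b_h(V): return 1.0 / (np.exp((30 - V) / 10) + 1)
def a_n(V): return 0.01 * (10 - V) / (np.exp((10 - V) / 10) - 1)
def b_n(V): return 0.125 * np.exp(-V / 80)

dt, T = 0.01, 100.0              # timestep and duration, ms
V = 0.0                          # start at rest
m = a_m(V) / (a_m(V) + b_m(V))   # gates start at their steady states
h = a_h(V) / (a_h(V) + b_h(V))
n = a_n(V) / (a_n(V) + b_n(V))
I_inj = 10.0                     # injected current, uA/cm^2 (enough to spike)

spikes, above = 0, False
for _ in range(int(T / dt)):
    # The m^3*h and n^4 terms are the sodium/potassium nonlinearities.
    I_Na = g_Na * m**3 * h * (V - E_Na)
    I_K = g_K * n**4 * (V - E_K)
    I_L = g_L * (V - E_L)
    V += dt * (I_inj - I_Na - I_K - I_L) / C   # forward Euler step
    m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
    h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
    n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
    if V > 50 and not above:     # crude spike detection at +50 mV
        spikes, above = spikes + 1, True
    elif V < 50:
        above = False

print(f"{spikes} spikes in {T:.0f} ms")  # roughly 7, i.e. tonic firing
```

A plain perceptron has nothing like those gating dynamics, which is exactly why a device that captures them natively is interesting.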
At least this time they're talking about something that can be made to fire sort of like a neuron, instead of yet another boring old perceptron that can't even change its own behaviour.
Even if perfect electronic replication of a neuron were possible, there are 86 billion neurons in an average human brain, which is well over ten times the number of transistors present on modern chips.
Of course, functional electronic neuron models, and scaling them up well short of that point, are likely to be immensely useful anyway.
Even if it took a hundred thousand transistors (/transistor-size devices) to emulate a neuron, that wouldn't actually be a problem. Compared to electricity-on-copper, the connections between neurons are slow. When operating on electricity, there are no problems whatsoever in making your brain consist of a million chips that fill a warehouse.
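Back-of-the-envelope, under those assumptions (the transistors-per-neuron and transistors-per-chip figures are round guesses, not measurements):

```python
# Rough scaling arithmetic; per-chip transistor count is a round guess.
neurons = 86e9        # neurons in an average human brain
per_neuron = 1e5      # assumed transistors to emulate one neuron
per_chip = 5e9        # transistors on a large modern chip, roughly

print(f"brain/chip ratio: {neurons / per_chip:.0f}x")           # ~17x
print(f"chips needed: {neurons * per_neuron / per_chip:,.0f}")  # ~1,720,000
```

So "a million chips" is the right order of magnitude, and the warehouse is doing the work that skull volume does for us.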
Indeed. So it might actually be possible to emulate a human brain in hardware rather sooner than we might have thought. Memristors are also memory devices, with access times on the order of modern RAM, and a small multiple of 86 gigabytes is not a huge amount of storage these days. So imagine a 1 billion "neuristor" processor backed by a few hundred GB of memristor storage and operating in a sort of time-shared fashion (load a "brain component" into the neuristor configuration, interface with the memory for a while, then move on to the next component and so forth).
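Very roughly, the time-sharing idea looks like this toy loop (all names and numbers here are made up for illustration, and scaled way down: 1,000 units instead of 1 billion):

```python
import numpy as np

# Toy version of the time-shared scheme: a fixed pool of hardware
# "neuristors" is reconfigured to stand in for one brain region at a
# time, while every other region's state sits in bulk (memristor-like)
# storage.
POOL, REGIONS = 1_000, 8
rng = np.random.default_rng(0)

# Bulk storage: per-region state, only loaded while that region runs.
storage = {r: rng.standard_normal(POOL) for r in range(REGIONS)}

def run_region(state, steps=100, leak=0.95):
    """Stand-in for the neuristor pool evolving one region's dynamics."""
    for _ in range(steps):
        state = leak * state + 0.1 * rng.standard_normal(POOL)
    return state

for sweep in range(3):               # a few full time-sharing sweeps
    for r in range(REGIONS):
        state = storage[r]           # load a "brain component" from memory
        state = run_region(state)    # the pool emulates it for a slice
        storage[r] = state           # write the updated state back
    print(f"sweep {sweep}: mean |state| = {np.mean(np.abs(storage[0])):.3f}")
```

The obvious cost is that every region runs slower than real time by roughly the number of components you cycle through, but for a first emulation that seems like a fine trade.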
I think it is fair to say that we just don't know yet. Neuroscience has learned a great deal about the underlying machinery (neurons, synapses, and so forth), but we have no idea how you put that machinery together to generate something like consciousness. This is, for me, what makes the field interesting, because I think these questions are answerable.
As for the original article, I think anyone who tries to argue that you are not your brain is ignoring over a hundred years of scientific evidence that suggests otherwise.
I disagree with a lot of points made here. First, the article is focused exclusively on the biological sciences and is largely not applicable to other fields (hence the title is misleading). Second, there are a lot of personal anecdotes which don't move the central ideas forward. Finally, there was little in the article that discussed how to think about science; most of it was about how to pursue science.
That being said, I agree with the sentiment. Most of what we teach undergraduates is about the knowledge science has produced, rather than about the process of doing science itself.
I didn't read the whole thing - it's not concise, that's for sure - but the general advice seems to be: Go find a lab job and see if you like it. Though his examples are specific to biology, I can't figure out what part of that advice is specific to biology. I did it in experimental physics and it worked out just fine.
Sure, the article is about pursuing science rather than thinking about it. But that's the author's whole point. Enjoying a career is all about enjoying the day-to-day work: If you love thinking about DNA but don't love pipets, you're going to be unhappy a lot of the time, because life in the lab is about 10% deep thought and 90% pipets. (Or, in the semiconductor laser lab: 10% deep thought, 50% misaligned optics, and 40% mysterious process problems that you will never entirely understand, but which you will eventually solve by spending months on end turning knobs in a strategic manner.)
Same general advice applies in engineering: do your best to attach yourself to a lab, and see if it catches your fancy. Best way to test-drive a career choice.
He's kind of down on textbook-and-problem-set coursework and large lecture classes. This is not universal. Some large lectures are large for a reason -- the professor is a star. And some textbooks are really good, and some problem sets are worth sweating over.
Most of what we teach undergraduates is about the knowledge science has produced, rather than about the process of doing science itself.
Which is not an entirely useless endeavor; if you don't know what has already been discovered, how can you build on it and go further? How will you know what has already been tried?