A few weeks ago, my friend and colleague Omar Rahim sent me a link to an interesting article by John Markoff that he saw in The New York Times. The article is about a new chip that’s slated for release this year.
The new computing approach, already in use by some large technology companies, is based on the biological nervous system, specifically on how neurons react to stimuli and connect with other neurons to interpret information. It allows computers to absorb new information while carrying out a task, and adjust what they do based on the changing signals. (Source: NY Times)
This, of course, has tremendous implications for the kinds of applications we build embedded systems for (and for all of computing, for that matter).
The article talks about a shift from processors that use the von Neumann architecture to new processors that:
… consist of electronic components that can be connected by wires that mimic biological synapses… They are not “programmed.” Rather the connections between the circuits are “weighted” according to correlations in data that the processor has already “learned.” Those weights are then altered as data flows in to the chip, causing them to change their values and to “spike.” That generates a signal that travels to other components and, in reaction, changes the neural network, in essence programming the next actions much the same way that information alters human thoughts and actions.
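To make that description a little more concrete, here’s a toy sketch in Python of the basic idea: weighted connections feed a neuron, charge accumulates, and when it crosses a threshold the neuron “spikes” and the signal travels downstream. The chip itself is specialized hardware, not code, and every number below is invented purely for illustration.

```python
# Toy leaky integrate-and-fire neuron: weighted inputs accumulate as a
# "membrane potential"; when the potential crosses a threshold, the
# neuron "spikes" and that signal would travel on to other neurons.
# All constants here are made up for illustration.

class ToyNeuron:
    def __init__(self, weights, threshold=1.0, leak=0.9):
        self.weights = weights      # synaptic weights ("coefficients")
        self.threshold = threshold  # potential needed to fire a spike
        self.leak = leak            # fraction of potential kept each step
        self.potential = 0.0

    def step(self, inputs):
        # Leak a little charge, then add the weighted inputs.
        self.potential *= self.leak
        self.potential += sum(w * x for w, x in zip(self.weights, inputs))
        if self.potential >= self.threshold:
            self.potential = 0.0    # reset after firing
            return 1                # spike!
        return 0                    # stay quiet

neuron = ToyNeuron(weights=[0.4, 0.7])
for t, inputs in enumerate([(1, 0), (1, 1), (0, 1), (1, 1)]):
    print(f"step {t}: inputs={inputs} spike={neuron.step(inputs)}")
```

Feed it the same inputs in a different order and you get a different spike pattern, because the neuron carries a little history around in its potential. That stateful, analog-ish behavior is exactly what makes this so different from a conventional CPU instruction stream.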
It’s way cool that this technology is being put into hardware; the speed gains there will be important. Typical computer programming is precise: the machine does exactly what it is told. With neural networks and this kind of learning, you really don’t know what “decision” the computer is going to make. It’s still (at a very low level) performing precise operations, but it’s storing data in a distributed fashion that shapes the decisions it makes in the future. Those distributed values are the coefficients the article talks about. As the coefficients take shape over multiple experiences, the system learns “right” from “wrong” in a fuzzy world.
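If you’ve never played with this stuff, the simplest illustration of coefficients taking shape over multiple experiences is a single perceptron. Here’s a little Python sketch of one learning a logical AND from repeated examples. This is my own toy example, not anything from the article or the chip: the learning rate and training data are arbitrary.

```python
# A single artificial neuron learning from experience.
# Each "experience" nudges the weights (the coefficients) a little,
# so the decision rule emerges from data instead of explicit code.

weights = [0.0, 0.0]
bias = 0.0
rate = 0.1  # how big a nudge each experience gives

def predict(x):
    s = bias + sum(w * xi for w, xi in zip(weights, x))
    return 1 if s > 0 else 0

# Toy task: output 1 only when both inputs are "on" (logical AND).
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

for epoch in range(20):             # repeated experiences
    for x, target in examples:
        error = target - predict(x) # was the decision "right" or "wrong"?
        bias += rate * error
        weights = [w + rate * error * xi for w, xi in zip(weights, x)]

print("learned coefficients:", weights, "bias:", bias)
for x, target in examples:
    print(x, "->", predict(x), "(expected", target, ")")
```

Nobody ever tells the neuron the rule for AND; the weights just drift toward values that make its decisions come out right. Scale that idea up to millions of connections, bake it into silicon, and you have the gist of what Markoff is describing.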
Anyway, this is a really interesting article on the future of computing, and it’s especially interesting to me because I studied neural networks when I was in grad school at UMass. While I was there, I worked in the computer vision lab, where we were working on an autonomous robot that could navigate the real world completely on its own. The robot’s name was “Harvey.”
I was able to dig up a video taken from Harvey’s perspective. I have no idea when it was shot, but that’s Harvey moving toward the Graduate Research Center, the computer science building at UMass.
Brings back a lot of memories …