Neuromorphic engineering, the attempt to build computers that are more brain-like, has become mainstream. Companies like Qualcomm are hiring neuromorphic engineers, and major companies like IBM and HP have DARPA funding to build more brain-like computers.
But the question is: is there really any benefit to using neural-style computing versus good old-fashioned CPUs, which have gotten us oh so very far?
For neuromorphic engineering to advance further, this question has to be answered crisply and definitively.
Before we can answer that question, we have to ask: what do we want to use the computer for? I can buy a calculator for less than $10.00 at my local drug store that can add, subtract, multiply, and divide far faster than I can in my head, and with much greater precision! I feel thick-witted compared to the simplest pocket calculator.
And there are a lot of places where being able to "crunch numbers" quickly is very important: financial calculations, rendering graphics, and designing complex machines, for example. So let's not dismiss the obvious benefits of these machines.
However, that is not all we want computers to do. As we begin to attach computers to the real world by adding sensors and actuators, essentially making them the computational core of robot-like machines, our computational needs are changing.
Here is where our world of computation turns upside down. Humans excel at interpreting large volumes of data, ignoring what is not important, and emphasizing what is important. We attend to that which is relevant. We have also come to realize from neuroscience that the world we live in is mostly in our heads, in the form of models which we can use to reason with, but that are grounded in the real world.
Now, the interesting thing is that as our algorithms begin to resemble more brain-like computation, we find that these algorithms cannot run efficiently on CPUs designed for computing balances on checking accounts. A new kind of fine-grained parallelism is needed to handle the fire hose of data flowing into these systems. As we drill down, we find that with this fine-grained parallelism, in order to create efficient machines, we need to colocate computation with memory. If memory and processing are kept separate, we need lots of long connections, which use power and generate heat. Adding the local ability to adapt helps as well: a global adaptation scheme just can't send enough signals to enough processors to keep up. This leads to extreme decentralization of computing. In the end, we have a collection of highly parallel computational elements with local learning and local data storage. We end up brain-like.
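The architecture described above can be sketched in a few lines of code. This is a minimal, illustrative sketch (the class and update rule are my own assumptions, not any particular neuromorphic system): each element keeps its weights in local memory and adapts them with a Hebbian-style rule that uses only signals available at that element, so no global memory or global learning signal is needed and every element could, in principle, run in parallel.

```python
import random

class Element:
    """One computational element with colocated memory and local learning."""

    def __init__(self, n_inputs, lr=0.01, seed=None):
        rng = random.Random(seed)
        # Local memory: the weights live with the unit that uses them.
        self.w = [rng.uniform(-0.1, 0.1) for _ in range(n_inputs)]
        self.lr = lr

    def step(self, x):
        # Compute locally: a weighted sum of the inputs.
        y = sum(wi * xi for wi, xi in zip(self.w, x))
        # Adapt locally: a Hebbian-style update that uses only signals
        # already present at this element (its input and its output).
        self.w = [wi + self.lr * y * xi for wi, xi in zip(self.w, x)]
        return y

# A "layer" is just a collection of independent elements. Since no element
# ever touches another's memory, the steps could all run in parallel.
layer = [Element(4, seed=i) for i in range(8)]
x = [1.0, 0.5, -0.5, 0.25]
outputs = [e.step(x) for e in layer]
```

The point of the sketch is structural, not algorithmic: there is no shared weight store and no central coordinator, which is exactly the property that lets the hardware avoid long, power-hungry connections between memory and processing.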
And here is the real kicker: the reason we end up designing computers like this is to save power. This suggests that when we look at real brains, we should also consider how limited power and cooling capabilities shaped their creation, and how efficiency may have given rise to the partitioning of the brain into distinct functional regions.
So, the case for neuromorphic engineering comes down not to computation per se, but to the practical issue of how to host that computation most efficiently on a physical substrate. Since both brains and silicon inhabit a world with real consequences for their organization, the strongest case for neuromorphics is going to be made on the basis of power.