Interesting story here about what parallel resources the brain musters to perform simple tasks. It suggests that trying to build a functional brain-analog by simulating individual neurons is unnecessary. Instead, a much more practical silicon implementation would come from understanding and simulating the aggregate behavior of groups of neurons. Not a new idea, but it's interesting to see an attempt to work out how this might actually be done.
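To make the idea a bit more concrete, here is a minimal sketch of what "simulating groups of neurons" can look like in practice. It uses Wilson-Cowan-style population rate equations, one well-known way of modeling the average activity of coupled excitatory and inhibitory populations rather than individual spiking cells. The specific equations, parameter values, and forward-Euler integration below are illustrative assumptions of mine, not anything taken from the story itself.

```python
import numpy as np

def sigmoid(x):
    """Saturating response of a whole population to its net input."""
    return 1.0 / (1.0 + np.exp(-x))

def simulate_population_pair(steps=2000, dt=0.1,
                             w_ee=12.0, w_ei=10.0,   # excitatory couplings (illustrative values)
                             w_ie=9.0,  w_ii=3.0,    # inhibitory couplings (illustrative values)
                             p_e=1.5,  p_i=0.0,      # external drive to each population
                             tau_e=1.0, tau_i=2.0):  # population time constants
    """
    Wilson-Cowan-style rate model: two state variables stand in for the
    mean firing rates of an excitatory and an inhibitory population,
    instead of thousands of individual neuron models.
    """
    E, I = 0.1, 0.05                      # initial mean firing rates
    trace = np.empty((steps, 2))
    for t in range(steps):
        dE = (-E + sigmoid(w_ee * E - w_ei * I + p_e)) / tau_e
        dI = (-I + sigmoid(w_ie * E - w_ii * I + p_i)) / tau_i
        E += dt * dE                      # simple forward-Euler step
        I += dt * dI
        trace[t] = (E, I)
    return trace

if __name__ == "__main__":
    activity = simulate_population_pair()
    print("final excitatory rate: %.3f, inhibitory rate: %.3f" % tuple(activity[-1]))
```

The point is the scale of the problem: a couple of coupled differential equations per group of neurons, rather than one or more per neuron, which is what makes a silicon implementation start to look tractable.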
This is somewhat similar to weather forecasting – a truly accurate model of the atmosphere would mean going down to the molecular level and modeling the behavior of each individual molecule, which is clearly not practical. Instead, weather forecasters model the aggregate behavior of air in much larger volumes, which makes the calculations (barely) possible. Or consider Newtonian mechanics – a model of the physical interactions of aggregates of atoms, governed by a few simple laws. Provided speeds aren't too high and gravitational fields aren't too strong, the model works extremely well and there's no need to go into more detail.
Of course, any aggregate model is going to diverge from reality, as weather forecasts are prone to do. In the context of brain simulation, this may not matter much as long as the underlying model is good enough. And that, of course, is the hard part. There may not be that many parallel processes in play at any one time, but what exactly are they and how do they interconnect? Still a difficult problem, but perhaps a more practical one than trying to build a brain from impossibly large numbers of simulated individual neurons.