The serious side of yesterday’s blog is the implied question – Is it enough simply to put together enough processing units as a human (or other animal) brain?
Short answer … no.
A few months ago I was at a conference where a speaker was proud of the fact that current supercomputers contain 50-100 million “gates” or transistors, the supposed equivalent of 50-100 million neurons – the size of a cat brain. By 2015, they anticipate supercomputers of up to 500 million “neurons.”
“That’s the size of a primate brain!” was the claim.
Sure. It is. But it’s not enough.
Remember the description of the input and output connections for neurons? Tens, hundreds, even thousands of connections per neuron! Thus it’s not enough to simulate the half-billion neurons; it is also necessary to simulate the connections between them – a number three orders of magnitude greater.
See, it is the *connections* that do the *real* processing in the brain. A neuron has only a limited range of activity available to it – either it “fires” (an action potential, the electrical discharge that travels from one end of the neuron to the other) or it doesn’t. One or zero. Sounds very binary, very computer-like, right? Well, not really. If the brain operated strictly as on-off switches, it would be *easy* to mimic a brain with a digital computer [but that’s a blog for another day]. However, much greater variability is available in the connections between neurons. Connection strength allows for weak, strong, and everything in between. Instead of a billion-transistor digital computer, we’d need a trillion-connection *analog* computer to provide the same simulation.
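To make the distinction concrete, here’s a toy sketch (deliberately oversimplified, not a faithful neuron model): the neuron’s *output* is binary, but the *connections* carry continuous, analog strengths – and it’s those strengths that determine what the neuron does.

```python
# Toy sketch: the OUTPUT is binary (fire / don't fire), but each
# connection WEIGHT is a continuous (analog) value. The threshold
# value of 1.0 is an arbitrary choice for illustration.
def neuron_fires(inputs, weights, threshold=1.0):
    """Sum the weighted inputs; fire (1) if the total crosses threshold, else 0."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Identical binary inputs, different connection strengths -> different outcomes.
inputs = [1, 1, 1]
print(neuron_fires(inputs, [0.2, 0.3, 0.4]))  # weak connections: prints 0
print(neuron_fires(inputs, [0.5, 0.4, 0.3]))  # strong connections: prints 1
```

The same spikes arriving over weak versus strong connections produce different results – the information lives in the weights, not in the ones and zeros.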
However, even that is not enough. Neurons are far from “dumb.” Each neuron is a processing unit capable of modifying both its own inputs and outputs, and that processing depends on the neuron’s own activity levels. So we now need not just a trillion *connections* – we need a trillion CPUs.
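A rough sketch of what “modifying its own inputs” might look like – this is a made-up, loosely Hebbian toy rule, not a claim about actual neural plasticity:

```python
def update_weights(weights, inputs, fired, rate=0.1):
    """Toy activity-dependent rule: if the neuron fired, strengthen the
    connections from inputs that were active; otherwise let weights decay
    slightly. Both the rule and the rate are illustrative assumptions."""
    new_weights = []
    for w, x in zip(weights, inputs):
        if fired and x:
            new_weights.append(w + rate)           # active together: strengthen
        else:
            new_weights.append(w * (1 - rate / 10))  # mild decay otherwise
    return new_weights
```

The point is that every one of those trillion connections is itself a moving target, adjusted on the fly by the very activity it carries – which is why simple switch-counting undersells the problem.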
Now, let’s add some more complexity – neurons come in different sizes and shapes, use different neurotransmitters, and can be inhibitory or excitatory. Connections can’t just be random; they need to be (A) specific to a brain area, (B) specific to a function, and (C) specific to a neurotransmitter/receptor combination – oh, and a given neuron can have more than one type of neurotransmitter and receptor – in fact, it’s pretty much guaranteed.
Does this mean that modeling the mammalian brain is impossible? No. Just complex. There will come a day when there really is a computer with enough processors and connections to model a brain. In the meantime, there are a few tricks that can reduce the complexity of a model. One of those is nonlinear systems analysis. The beauty of a nonlinear model is that it derives mathematical equations that transform input signals to outputs. The inner complexity is captured in the math; only the inputs and outputs need to be known.
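Here’s the flavor of that black-box idea in miniature. The “hidden system” below stands in for a circuit whose internals we don’t model; we just fit a simple nonlinear equation to its input-output pairs. (Real nonlinear systems analysis – Volterra/Wiener kernel methods and the like – is far more sophisticated; this crude grid-search fit only illustrates the principle.)

```python
# Treat the system as a black box: observe inputs and outputs only.
def hidden_system(x):
    # Stand-in for brain circuitry whose inner workings we never open up.
    return 3.0 * x * x + 1.0

# Collect input-output pairs, the only data the modeler gets to see.
samples = [(x / 10.0, hidden_system(x / 10.0)) for x in range(-10, 11)]

# Crude grid search for the equation y = a*x^2 + b that best matches the data.
best = None
for a10 in range(0, 61):
    for b10 in range(-20, 21):
        a, b = a10 / 10.0, b10 / 10.0
        err = sum((a * x * x + b - y) ** 2 for x, y in samples)
        if best is None or err < best[0]:
            best = (err, a, b)

print(best)  # recovers (0.0, 3.0, 1.0): a perfect input-to-output equation
```

Once the equation is in hand, the box’s inner complexity no longer matters – the math does the transforming, which is exactly the trick.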
Other techniques include biological modeling with slices and cultures of tissue that form neuron-like networks. As processors get smaller, with ever-increasing numbers of processing cores, our technology *is* approaching the ability to model small brains. Whether such a model “wakes up” and starts demanding cheese remains to be seen.