
Thursday, February 10, 2011

*Can* we build it?


The serious side of yesterday’s blog is the implied question – Is it enough simply to put together as many processing units as there are in a human (or other animal) brain?

Short answer … no.

A few months ago I was at a conference where a speaker was proud of the fact that current supercomputers contain 50-100 million “gates” or transistors, the supposed equivalent of 50-100 million neurons – the size of a cat brain.  By 2015, they anticipate supercomputers of up to 500 million “neurons.”

“That’s the size of a primate brain!” was the claim.

Sure.  It is.  But it’s not enough.

Remember the description of the input and output connections for neurons?  Tens, hundreds, even thousands of connections per neuron!  Thus it’s not enough to simulate the half-billion neurons; it is also necessary to simulate the connections between them, which outnumber the neurons by three orders of magnitude.
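
For the arithmetic behind that claim, here is a quick back-of-envelope sketch in Python (the neuron count is the round figure used above; the 1,000-connections-per-neuron midpoint is my own illustrative pick):

```python
# Back-of-envelope: connections outnumber neurons by ~3 orders of magnitude.
# These are illustrative round numbers, not measurements.
neurons = 500_000_000            # the "half-billion" primate-scale figure
connections_per_neuron = 1_000   # tens to thousands; take 1,000 as a midpoint

total_connections = neurons * connections_per_neuron
print(f"{total_connections:.0e} connections")   # 5e+11, i.e. half a trillion
```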

See, it is the *connections* that do the *real* processing in the brain.  There is only a limited range of activity available to a neuron – either it “fires” (an action potential, the electrical discharge that travels from one end of the neuron to the other) or it doesn’t.  One or zero.  Sounds very binary, very computer-like, right?  Well, not really.  If the brain operated strictly as on-off switches, it would be *easy* to mimic a brain with a digital computer [but that’s a blog for another day].  However, much greater variability is available in the connections between neurons.  Connection strength can be weak, strong, or anything in between.  Instead of a billion-transistor digital computer, we’d need a trillion-connection *analog* computer to provide the same simulation.
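
To make the one-or-zero point concrete, here is a minimal sketch of a threshold neuron (the function name, threshold value, and weight range are my own illustrative choices): the output spike is strictly binary, but all of the interesting variability lives in the continuous connection weights.

```python
import random

def step_neuron(input_spikes, weights, threshold=1.0):
    """Output is all-or-none (fire / don't fire), but the real processing
    lives in the continuous-valued weights that scale each input spike."""
    drive = sum(s * w for s, w in zip(input_spikes, weights))
    return 1 if drive >= threshold else 0

# 1,000 incoming connections: spikes are binary (0 or 1)...
spikes = [random.randint(0, 1) for _ in range(1000)]
# ...but each connection's strength is analog, anywhere in a continuous range.
weights = [random.uniform(0.0, 0.005) for _ in range(1000)]

print(step_neuron(spikes, weights))   # prints 1 or 0, nothing in between
```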

However, even that is not enough.  Neurons are far from “dumb.”  Each neuron is a processing unit capable of modifying both its own inputs and outputs.  That processing depends on the neuron’s own activity levels, so now we need not just a trillion *connections*; we need a trillion CPUs.
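
One way to picture each neuron acting as its own little CPU is a toy sketch like this one – the activity-dependent rescaling rule below is invented purely for illustration, a stand-in for the real biology, not a model of any specific mechanism:

```python
import random

class PlasticNeuron:
    """Toy neuron that retunes its own input weights based on its recent
    activity level. The rule here is invented for illustration only."""

    def __init__(self, weights, threshold=1.0, target_rate=0.1):
        self.weights = list(weights)
        self.threshold = threshold
        self.target_rate = target_rate
        self.activity = 0.0   # leaky running estimate of firing rate

    def step(self, input_spikes):
        drive = sum(s * w for s, w in zip(input_spikes, self.weights))
        fired = 1 if drive >= self.threshold else 0
        self.activity = 0.95 * self.activity + 0.05 * fired
        # Homeostatic tweak: too active -> weaken inputs; too quiet -> strengthen.
        scale = 1.0 + 0.01 * (self.target_rate - self.activity)
        self.weights = [w * scale for w in self.weights]
        return fired

# Drive the neuron with random input; it retunes itself toward its target rate.
n = PlasticNeuron([random.uniform(0.0, 0.01) for _ in range(1000)])
for _ in range(2000):
    n.step([random.randint(0, 1) for _ in range(1000)])
print(round(n.activity, 2))   # hovers near target_rate (0.1)
```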

Now, let’s add some more complexity – Neurons come in different sizes and shapes, use different neurotransmitters, and can be inhibitory or excitatory.  Connections can’t just be random; they need to be (A) specific to a brain area, (B) specific to a function, and (C) specific to a neurotransmitter/receptor combination – oh, and there can be more than one type of neurotransmitter and receptor in a given neuron – in fact, it’s pretty much guaranteed.

Does this mean that modeling the mammalian brain is impossible?  No.  Just complex.  There will come a day when there really is a computer with enough processors and connections to model a brain.  In the meantime, there are a few tricks that can reduce complexity in a model.  One of those is the use of nonlinear systems analysis.  The beauty of a nonlinear model is that it derives mathematical equations that transform input signals to outputs.  The inner complexity is captured in the math; only the input and output need to be known.
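
As a cartoon of that input-output idea, here is a sketch that fits a simple polynomial to a “black box” – far simpler than the kernel methods used in real nonlinear systems analysis, and the black_box function is invented purely for illustration:

```python
import numpy as np

# Pretend "black box" neural tissue: we only observe input and output.
def black_box(x):
    return np.tanh(2.0 * x)          # hidden internal complexity

x = np.linspace(-1.5, 1.5, 200)      # stimulus we control
y = black_box(x)                     # response we record

# Derive a descriptive input->output equation without opening the box.
coeffs = np.polyfit(x, y, deg=5)
model = np.poly1d(coeffs)

print(np.max(np.abs(model(x) - y)))  # small residual over the fitted range
```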

Other techniques include biological modeling with slices and cultures of tissue that form neuron-like networks.  As processors get smaller, with ever-increasing numbers of processing cores, our technology *is* approaching the ability to model small brains.  Whether such a model “wakes up” and starts demanding cheese remains to be seen.

3 comments:

  1. I wonder about how the processor architecture and programming language used would affect the ability to model brain functions.

  2. Something got lost in translation here. High-end CPU chips crossed the billion-transistor mark last year, and even if you're counting CPU cores instead of chips, the biggest supercomputers are still only in the six-figure range.

    Last summer IBM claimed to have a cat-brain simulation running on one of their supercomputers (if at only 25% real-time speed). Unfortunately for the people who were expecting magic insights into how the brain worked, it was reportedly as much of a black box as the organic version.

  3. You're right, Dan. "Transistors" was too simplistic. The measure was "gates" that could perform the basic logic of a neuron, and that count was still in the hundreds of millions.

    The figure of a billion transistors *was* mentioned, as was the millions-of-cores figure. The presentation was by a rep from a computer manufacturer, and the chief criticism from neuroscientists was that a transistor is not a neuron, and neither is a "core" - the former being too simplistic, the latter too complicated.

    The net result is that, yes, computers are coming close to matching the *neuron* count of small primate brains, but they are still at least six orders of magnitude away from being able to model the *connections* of that same brain.

