NOTICE: Posting schedule is irregular. I hope to get back to a regular schedule as the day-job allows.

Monday, May 2, 2011

What's the Code?

In past blogs we have discussed the electrical and chemical means by which neurons produce electrical activity that can signal information.  Within a neuron, this signaling is electrical; between neurons, it is chemical.  Using these characteristics, plus the organization of neurons into different brain areas, neurotransmitters, circuits, networks, etc., the brain can potentially encode a *lot* of information.

Still, a major question remains... "What is the information code?"

In fact, Drs. S. Deadwyler and R. Hampson asked that very question back in 1995 in the scientific journal "Science" (vol. 270, pg. 1316) regarding neural representation in the hippocampus of rats - a structure previously known for representing information about an animal's location in space.  Their results described a hippocampal code for information in time, particularly in relation to encoding a memory and later retrieving it within a behavioral task.

It has long been known that neurons use different types of codes to represent information.  In the auditory and visual cortex, there is a *topographic* organization to the code.  Neurons in the part of the retina or cochlea that respond to a specific stimulus (light angle or position, or sound pitch) are "wired" to specific locations in the sensory cortex.  Thus the "code" is in the neuron connections, and a neuron need only be active, or not, to represent its portion of the total information.
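As a rough sketch of this idea (the neuron count and frequency range below are made up for illustration, not taken from real anatomy), a tonotopic map can be modeled as a simple function from stimulus value to the index of the neuron that is "wired" to represent it:

```python
import math

def tonotopic_position(freq_hz, n_neurons=100, fmin=20.0, fmax=20000.0):
    """Map a sound frequency to the index of the cortical neuron that
    represents it.  The cochlea lays out frequency roughly
    logarithmically, so the 'code' is simply *which* neuron fires."""
    frac = (math.log(freq_hz) - math.log(fmin)) / (math.log(fmax) - math.log(fmin))
    return min(n_neurons - 1, max(0, int(frac * n_neurons)))

def topographic_code(freq_hz, n_neurons=100):
    """The population response: each neuron is simply active (1) or not (0)."""
    code = [0] * n_neurons
    code[tonotopic_position(freq_hz, n_neurons)] = 1
    return code
```

The information is carried entirely by *position* in the array - the active neuron does not need to modulate its firing to say anything more.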

Within many neural structures, the neurons adopt a frequency code - the firing rate of the neuron represents a particular "value" of information.  Stretch receptors in muscle and proprioceptors in joints respond to the amount of stretch or degree of angle by increasing firing rate. Muscle activation, in the form of neural outputs from the motor cortex, most often uses this type of code. The example in the upper right of the figure above shows a joint receptor that increases firing rate as the angle of flex increases.

By the way, these are examples of "rastergram/histogram" plots.  The dots represent individual action potentials fired by a single neuron over the time or position axis at the bottom.  Each row of dots (raster) represents a single trial, test or repetition of the stimulus.  The bargraph (histogram) beneath is the sum or average of all of the repetitions above, and allows neuroscientists to examine the average ("mean") firing of a neuron over time and repetition. This averaging is necessary because single neuron firing is subject to many variables - it is in fact a chaotic, or more appropriately *nonlinear*, system in which each firing depends on many more variables than we typically observe or measure.  Thus we first need to know the *mean* firing; then we can look at the individual variability.
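To make the averaging concrete, here is a minimal Python sketch (the spike times are invented for illustration, not recorded data) that sums raster rows into a histogram of mean firing per time bin - exactly what the bargraph beneath a rastergram shows:

```python
# Hypothetical spike-time data: each inner list is one trial (one raster
# row); values are spike times in ms after stimulus onset.
trials = [
    [12, 15, 40, 41, 43, 80],
    [11, 39, 42, 44, 81, 85],
    [13, 14, 40, 45, 79],
]

def psth(trials, t_max=100, bin_ms=20):
    """Sum spikes across trials into time bins, then divide by the trial
    count to get the mean spikes per bin -- the histogram under the raster."""
    n_bins = t_max // bin_ms
    counts = [0] * n_bins
    for trial in trials:
        for t in trial:
            if 0 <= t < t_max:
                counts[t // bin_ms] += 1
    return [c / len(trials) for c in counts]
```

No single trial looks like the average, but the mean makes the neuron's reliable response (here, the burst around 40 ms) stand out from the trial-to-trial noise.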

This is quite apparent in the "On-Off" pattern at the upper left.  Typical of a retinal ganglion neuron, you can see that the firing of the neuron is seldom fully *on* or *off*, but that there *is* a considerable difference between the two states.  In the eye, a retinal ganglion cell will respond ("on") with action potential firing when a spot of light touches the photoreceptor neurons that it is "wired" to.  When the light touches adjacent photoreceptor neurons, the activity of that retinal ganglion cell is suppressed, and is in the "off" state.  However, stray photons *do* touch the primary receptor neurons in the "off" state, resulting in random firing.  Likewise, individual neurons may have a different threshold for triggering action potentials due to fatigue (too much light) or other factors - thus even the "on" state shows variability.
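A toy simulation of this (the firing rates and noise level are arbitrary illustrative numbers, not measurements) shows why the two states are clearly different on average, yet neither is ever perfectly clean:

```python
import random

def ganglion_firing_rate(light_on_center, rng, on_rate=50.0, off_rate=2.0, noise=5.0):
    """Noisy 'on-off' code: strong firing when light hits the cell's center
    photoreceptors, suppressed firing otherwise.  Gaussian noise stands in
    for stray photons and threshold drift, so neither state is ever pure."""
    base = on_rate if light_on_center else off_rate
    return max(0.0, base + rng.gauss(0.0, noise))

rng = random.Random(42)
on_mean = sum(ganglion_firing_rate(True, rng) for _ in range(500)) / 500
off_mean = sum(ganglion_firing_rate(False, rng) for _ in range(500)) / 500
# Individual trials overlap, but the *mean* rates are well separated.
```

This is the same lesson as the rastergram: the code lives in the statistical difference between states, not in any single spike.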

Note that both of these types of code fit very well within the "topographical" code described earlier.   In fact, auditory neurons tend to utilize frequency codes, and visual neurons utilize On-Off codes, within the topographical "wiring" of auditory and visual cortex, respectively.

More complex coding is typified by the "place cells" of hippocampus.  Originally described by J. O'Keefe and J. Dostrovsky of University College London in 1971 (Brain Research, Volume 34, Issue 1, Pages 171-175), a hippocampal "place cell" is a neuron that becomes most active only when the animal is in a particular place in its environment.  Within that "place field" these neurons appear to utilize a frequency code to represent distance from the center of the field. As the animal moves through the field, the neuron fires action potentials that are also entrained to one of the background oscillations of the brain [subject of the next blog].  When the animal reaches another location, a different place cell begins to fire, and the first returns to a background firing rate (lower left in figure above).  Thus the Place Cell incorporates distance, speed and directionality in its firing.  An animal's entire traversal through an environment can be tracked if enough Place Cells are recorded, and neuroscientists have since discovered neurons in connected brain areas that represent direction, body and head angle, visual mapping features, and even a "coordinate system" that underlies this "cognitive map."
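The frequency-code-within-a-place-field idea is often sketched as a bell-shaped tuning curve.  Here is a minimal version (field width, peak rate and background rate are invented numbers for illustration):

```python
import math

def place_cell_rate(position, field_center, field_width=0.2,
                    peak_rate=20.0, background=0.5):
    """Gaussian place field: firing rate peaks when the animal is at the
    field center and falls off toward a low background rate elsewhere --
    a frequency code for distance from the center of the place field."""
    d = position - field_center
    return background + (peak_rate - background) * math.exp(-(d * d) / (2 * field_width ** 2))
```

With one such curve per cell, each centered somewhere different, a population of place cells tiles the environment - which is why recording enough of them lets you reconstruct the animal's path.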

The final type of code is the one utilized by most of the rest of the Cortex - a sparse, distributed code (Figure, lower right).  This code is so named because the neurons that form the code are often *not* located close to each other, adjacent neurons do not fire with the same correlate to the stimulus, and only a few neurons out of any given brain area appear to be active in the code at a given time.  The sparse, distributed code is in fact a combination of the other three types of code - it includes frequency elements, on-off elements, and "mapping" elements.  It requires a defined topography of connections, but these are frequently self-organizing in a hierarchical manner.  Thus new information can easily be added to the network by forming (or reinforcing) new connections between neurons.

The sparse, distributed code is the least well understood, yet it is the easiest to model using neural network and advanced mathematical and statistical models.  The reason is that such models rely on extracting arbitrary correlations and "mapping" those relationships across multiple dimensions, in a manner that appears random when projected back onto the three-dimensional patterns with which we are familiar.  Yet the neural connections appear to do the same thing.  The example above shows neurons from rat brain active during the encoding and recall of information in a memory task.  We can clearly see which neurons are active in each phase, even though the brain structure is not apparent.  In fact, mathematical analysis and modeling can suggest how the neurons "ought" to be connected, and we frequently find that those connections exist, although they defy a purely "topographical" approach.
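A toy model captures the "sparse" and "distributed" properties (the neuron counts and the item labels are arbitrary, chosen only for illustration): each item activates a small, scattered subset of a large population, and different items barely overlap.

```python
import random

def sparse_code(item_id, n_neurons=1000, n_active=20):
    """Assign each item a small, scattered set of active neurons.
    Seeding the generator by item_id makes the assignment reproducible;
    the active neurons need not be neighbors, and only a few fire per item."""
    rng = random.Random(item_id)
    return set(rng.sample(range(n_neurons), n_active))

a = sparse_code("encode_phase")   # hypothetical item labels
b = sparse_code("recall_phase")
# Distinct items share few neurons, so many patterns coexist in one population.
overlap = len(a & b)
```

Because each pattern uses only 2% of the neurons here, enormous numbers of items can be stored with little interference - one reason this kind of code is attractive for modeling cortical memory.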

One option is to assume that instead of a "topography", such neurons represent a "topology" - a transformed space that is connected, but "stretched" or "deformed" from what we think is normal.  Another option is to assume that the neurons self-organize - in other words, through their connections, each neuron "knows" its inputs and outputs, even though it would take painstaking measurement to find them all.

This is one of the challenges of building a computer that can serve as a model of the brain.  We can pattern all of the connections, but may not have the appropriate code.  Or we can see the code, but the connections are too complicated to map.  Nonlinear modeling is very useful for understanding these systems, but it still does not provide the final product of "wiring" plus "code".  Sometimes we just have to ask "What's the code?" and proceed to *use* the code without necessarily understanding it!

Until next time - take care of your brain, and don't worry *too* much that it is hiding all of its secrets in the code!
