NOTICE: Posting schedule is irregular. I hope to get back to a regular schedule as the day-job allows.

Monday, February 28, 2011

Science and Learning

I would be ashamed if I had to admit that everything I know about the brain I learned in med school and grad school. The truth is, I didn't. Back in the '70s my parents had these great "Time-Life" science books. They joined the club and got a new volume each month. Light and Vision, Sound and Hearing, the Brain, the Cell, and other disciplines – the Solar System, the Earth... I remember them well and read them cover-to-cover ... as a pre-teen. In high school I was on a science academic competition team, so I learned even more science than was taught in my classes. Thus when I got to college, I studied all of the science I could handle – biology, chemistry, physics, oceanography, ecology and even bits of geology. This took me through a Bachelor's and even a Master's degree.

*Then* I got to a doctoral program and discovered that for this particular discipline, I would essentially take the first two years of medical school and supplement with additional courses in Neuroscience as well as advanced Physiology and Pharmacology. It was here that I finally learned the difference between what we teach children and what we teach adults. James Hogan, in his book "Kicking the Sacred Cow," refers to these as "lies told to children." We don't teach the full complexity of science to children because it is too ... complex. Likewise, the argument goes, you don't need to know how to build a watch to tell the time. However, I disagree. We can tell children – and adults! – that we are removing complexity – simplifying without being simplistic. The thing I remember about my own learning process is that I never approached those childhood science books as a *child* - I approached them as an adult mind in a child's body. What I remember best was that they did their best to explain science in simplified terms without the simplistic or "simple-minded" approach. Teach and explain, without talking down. Draw nice pictures so that the students can *see* the parts clearly, but use the *real* names and terms. Use analogies with the surrounding world to improve understanding.

The one thing I have discovered is that I like talking about my work. I teach several graduate students, and give professional seminars and public talks on science. The hardest part, for me, is dealing with someone misusing or misinterpreting science. I can deal with a lack of understanding; I bristle at deliberate misstatements. The rewarding part is hearing from someone that my comments are interesting and informative. As a young professor, just getting started in lectures and seminars, I tried the old Toastmasters trick of leading off with a joke. I quickly discovered that my jokes tended to fall flat, but if I just injected a *tiny* bit of on-topic humor, the talks were much more enjoyable for me as well as my audience.

Which brings us back to The Lab Rats' Guide to the Brain and my semi-daily blog. The Guide is my attempt to write science in a way that I enjoy and others can read. I think the need is there for a readable guide to brain science that doesn't talk down to the audience, but also puts the correct words, the correct terms and the *real science* into Science Fiction and other forms of communication. My last post was a bit of a rant, but I feel that it highlights the goals of this blog. I will continue to write The Guide and blog it here. I will also take some time out to handle topical matters and answer questions. Sometimes we'll have a bit of fun, and let the LabRats out to play. And when I find myself having to wrestle with a concept and get the science right *myself* before writing it into the Guide, you'll get filler blogs like this one.

Most of all, thanks to my readers. I appreciate your comments and your questions. Where possible I will include them in future blogs, or maybe just pass them off to the LabRats to chew on. I have about two weeks until my trip, so I'll try to get major parts of brain finished during that time. I have at least one guest blog and one blog from on-the-road planned during my time away. Tomorrow I promise to start in on hearing and the auditory system! Until then, take care of your brain... it's the only brain you've got!

Saturday, February 26, 2011

There they go again...

Warning. This is a rant.

OK, more of a complaint, not a rant, but the subject material is one that I have touched on before, and I will come back to it again and again.

So. They're doing it again. Hollywood. Misusing science in the name of movie plot. The whole point of this blog and The Lab Rats' Guide to the Brain is to convince writers (of all varieties) that there is plenty of story potential in getting the brain science *right*; there is no need to deliberately or ignorantly get it wrong.

New movie, due out in a few weeks, called "Limitless." The trailer contains the line "You know how you only can access about 20% of your brain? What if you could access *all* of it?"



It's a common myth, but inexcusable all the same. When people know better, it just makes the person who perpetuates it look ignorant.

IMDb says that the movie is about some guys who take a miracle drug that makes them smarter, then have to deal with the consequences in personal and business matters, particularly when they find themselves cut off from the supply of the drug.

OK. True. There are drugs that act as "cognitive enhancers" but the writers and producers didn't have to lie to set up the plot.

"It's just fiction!" is the reply.

No. It's not. TV and movies are an incredible conduit of information in our society. In a culture that deplores the quality of education in our schools; in a world in which the impression of America is colored by the products of Hollywood; in a generation in which our kids are increasingly being left behind in science and technology... getting science wrong is *not* an option (to paraphrase Gene Kranz).

The truth:

Your *whole* brain is active ALL THE TIME. The *myth* that a person only ever uses about 10-20% is just that... a myth.

What *is* true is that any given activity such as reading, listening to music or watching a movie requires the coordinated activity of neurons in about 10-20% of the brain. More complicated activities - such as playing a musical instrument, carrying on a conversation, or playing a game - require even more.

Yes, some people are more efficient in using their brain. They are "smarter." Yes, there are drugs which increase efficiency - *temporarily* - and they are being intensively studied for mechanisms to combat neurodegenerative diseases such as Alzheimer's disease.

A few blogs back I mentioned the incredible number of *connections* that each neuron receives, and in turn makes onto other neurons. Each connection has varying strength, and it is in the connections that the brain does most of its work. If one were to question what percentage of the *connections* were active at high strength at any given time, the answer would be well under 1%. However, it is the surplus of connections that allows for learning and memory. The *brain* is active at 100% all the time (even during sleep), just not all at the same level of activity.
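For a sense of scale, here is a back-of-the-envelope sketch in Python. The figures (roughly 86 billion neurons and about 7,000 synapses per neuron) are commonly cited ballpark values, my assumptions for illustration, not numbers from this post:

```python
# Ballpark figures (assumptions, not measurements): ~86 billion neurons,
# ~7,000 synaptic connections per neuron.
neurons = 86e9
synapses_per_neuron = 7e3

total_synapses = neurons * synapses_per_neuron
print(f"total connections: {total_synapses:.2e}")   # 6.02e+14

# Even if well under 1% are strongly active at any given instant,
# that is still trillions of simultaneously active connections.
strongly_active = 0.01 * total_synapses
print(f"1% of connections: {strongly_active:.2e}")  # 6.02e+12
```

The point of the arithmetic: "under 1% of connections active" is nothing like "10% of the brain in use" - it still describes trillions of working connections at any moment.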

I haven't seen the movie. I may, eventually. I may even find it entertaining, but I can't help but let this anti-science color my perceptions. It's one of the reasons that I will be on panels at several SF cons this spring talking about the *science* of Science Fiction.

Science is important. Of course I know that books, TV and movies revolve around good story, but it is possible to do both! A good story does *not* require perpetuating bad science - and sometimes it's pretty enjoyable even when it does. But in this case, the willing or unwilling "suspension of disbelief" is not buried in the story, but prominently displayed in the 30-second trailer that is played several times each evening for the weeks leading up to the movie's release.

If anyone is out there listening... get it right, writers. And if you have any questions, just keep reading this blog, because getting the science right is what this is all about.

Friday, February 25, 2011

"'s clouds' illusions I recall..."

OK, so on to the next system.


Oh, Ratley, hi. What's that? Timmy's fallen in the well and can't get out?


Oh. Mail? Well, I wasn't planning on doing a mailbag post this week.


Well, if you insist. OK.

Let's see, Chris K. asks about optical illusions. Yeah. OK, I can see where this might be a good time to talk about those, even though it is a function of the association cortices which we'll get to in about a week.

Well, Chris – optical illusions are usually caused by one of two processes. The first is to simply *confuse* the eyes by playing tricks with what we have come to learn is "normal." For example, in typical 3-D vision, left is closer to the left eye, right is closer to the right eye (and hence the cross-over shown yesterday will have two slightly different sizes), close is big, and far away is small. A number of the Escher optical illusions take advantage of violating visual rules and conventions. We follow the line of the Penrose staircase, but the artist violates the rules of perspective by using the *same* perspective for up/down and near/far. Likewise the Penrose triangle on the same page violates logic, because instead of consistently shading one surface, Lionel and Roger introduce discontinuities that cannot co-exist, thus creating the illusion.

The second method is to *tease* the eyes by taking advantage of how the retinal ganglion neurons, lateral geniculate nucleus and V1 visual cortex process vision. The text to the right shows the distinction between the real world and the V1 representation. Because the RGN and LGN are tuned to detect edges, the "fill" in the middle of the text is not represented in V1. That information is not lost, however; color and fill information is transmitted to the V2 and V3 second visual areas, which detect shadings and colorations and start to interpret perspective and parallax.

When viewed simply as independent lines, the elements of the Vase optical illusion look distinctly like two faces, or a vase (see the figure at left). V1 has no problem distinctly identifying either feature when presented independently. However, once the lines are put together and shaded, there is conflict between what V2 & V3 (the vase) and V1 (the faces) detect, due to the interference by the edges of the dark shading.  Thus this second type of optical illusion relies on the brain being presented with two different interpretations – simply because the visual system processes lines and shading separately.

By the way, look up "Necker Cubes" online to see more examples.  There are *way* more examples than I can show in this space. 


Yes, I was getting to that, Ratley.

One of the more interesting "psychometric" aspects of neuroscience is that it is possible to detect *when* a person's perception of an optical illusion shifts. Most of the motor control area of the brain is in the frontal cortex, just forward of the border with the parietal cortex, and control of the eye muscles is no exception. It may seem that oculomotor control (Cranial Nerve III, the third "O" in yesterday's mnemonic) is a simple matter of pointing the eyes in the right direction. However, the process is *much* more complicated, requiring actual target acquisition and identification – in other words, the full suite of visual cortical processing. Distance and horizontal tracking require that the eyes move at slightly different angles; focus and lighting changes require pupil diameter control. The "Frontal Eye Fields" (along with the Edinger-Westphal nucleus of the midbrain and the superior colliculi and locus coeruleus of the brainstem) are involved in the complex process of integrating actual *vision* with the process of adapting the eye to light and motion. When visual information *changes*, it can be revealed as changes in scanning the environment or reacting to light.

In fact, it has been demonstrated that if a person is shown a Necker Cube-style optical illusion, and told to press a button whenever their perception of the cubes changes from the "top" to the "bottom" view, the pupils dilate briefly. This is just one small way in which the operation of the brain (or – dare I say it – The Mind) can be monitored by a physiological reaction.

And now for Ratfink's favorite: one last type of optical illusion that depends on even more complex association of vision and language. The "Stroop Interference" effect shown at right not only violates the consistency of line vs. shading, but also introduces understanding of the word meaning. This process depends heavily on the multi-sensory association cortices at the intersection of the Occipital, Temporal and Parietal lobes – with the added involvement of decision making by the Frontal Lobe. This is one of those phenomena that belies the idea that we only tap a tenth of our brain.

But more on that rant later, for now I need to shoo these LabRats back into the lab, get YDR out of the peanut butter, and explain to Ratface that we weren't talking about Alaskan Island eyeglass makers.

Thursday, February 24, 2011

The Eyes Have It

No discussion of vision would be complete without discussing the miracle that is the mammalian eye and the pathways that lead from eye to brain.

“On Old Olympus’ Towering Tops, A Finn And German Vend Some Hops.”  This was the mnemonic taught to pre-med and medical students to help memorize the cranial nerves.  O.O.O.T.T.A.F.A.G.V.S.H.  (i) Olfactory.  (ii) Optic.  (iii) Oculomotor.  (iv) Trochlear.  (v) Trigeminal.  (vi) Abducens.  (vii) Facial.  (viii) Auditory (now called “vestibulocochlear”).  (ix) Glossopharyngeal.  (x) Vagus.  (xi) Spinal Accessory (now called “cranial accessory” or just “accessory”).  (xii) Hypoglossal.
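For readers who like their lists machine-checkable, here is the same mnemonic as a small Python lookup table. The nerve names come straight from the list above; the code itself is just a convenience, not anything standard:

```python
# The twelve cranial nerves, keyed by number (names from the list above).
cranial_nerves = {
    1: "Olfactory", 2: "Optic", 3: "Oculomotor", 4: "Trochlear",
    5: "Trigeminal", 6: "Abducens", 7: "Facial",
    8: "Vestibulocochlear (Auditory)", 9: "Glossopharyngeal",
    10: "Vagus", 11: "Accessory (Spinal Accessory)", 12: "Hypoglossal",
}

# The initials of the mnemonic words line up one-to-one with the nerves:
mnemonic = "On Old Olympus' Towering Tops, A Finn And German Vend Some Hops"
initials = [word[0] for word in mnemonic.replace(",", "").split()]
print("".join(initials))  # OOOTTAFAGVSH
```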

And there it is.  Cranial Nerve Number II.  Optic Nerve.  Note that this is the first time I’ve used the word “nerve”, because this is the first time we are dealing with a bundle of *axons* which project from a sensory organ to the brain. 

The eye consists of several structural details of cornea, lens, iris, retina, etc. which can be found in many on-line sources and Biology textbooks.  However, the *neural* details important to our discussion of the Brain are in the retina.  The photoreceptors of the retina come in two varieties – black and white (rods) and color (cones).  However, the most fascinating processing of the visual system takes place in the “retinal ganglion” neurons.  These neurons receive connections from dozens of photoreceptor neurons and organize their inputs into a “field” of vision that responds to light in its center, and dark in its surrounding area (see Figure 1, right).  These neurons operate by receiving an excitatory connection from photoreceptor neurons in a particular location of the retina, and inhibitory connections from the photoreceptor neurons surrounding it.
Figure 1: Receptive Fields

What this means is that the neurons in the retina are best tuned to detecting spots of light, very similar to pixels in a computer image or on an LCD screen.  The optic nerve is actually made of the axons from the retinal ganglion neurons, and not the photoreceptor neurons.  They enter the base of the brain and travel to one of the specialty regions of the thalamus.  If you recall the earlier description, the thalamus is a relay gateway for sensory information entering the brain, and vision is no exception.  At the “Lateral Geniculate Nucleus” of the thalamus, the axons of the retinal neurons are joined to form receptive fields that resemble bars of light – or light-dark edges of objects in the visual field.  These neurons then project to the V1 primary visual cortex, and form the orientation-specific and ocular dominance columns discussed yesterday.
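The center-surround arrangement is simple enough to sketch in a few lines of Python. This is a toy illustration with made-up weights (one excitatory center pixel balanced against eight inhibitory surround pixels), not a model of real retinal circuitry:

```python
import numpy as np

# Toy on-center/off-surround receptive field: excitatory center pixel,
# inhibitory ring of 8 surround pixels whose total inhibition balances it.
center_weight = 1.0
surround_weight = -1.0 / 8

def ganglion_response(patch):
    """patch: 3x3 array of light intensities; returns net drive to the cell."""
    center = patch[1, 1] * center_weight
    surround = (patch.sum() - patch[1, 1]) * surround_weight
    return center + surround

spot    = np.zeros((3, 3)); spot[1, 1] = 1.0   # small spot of light on the center
uniform = np.ones((3, 3))                      # diffuse light across the whole field

print(ganglion_response(spot))      # 1.0 - strong response to a centered spot
print(ganglion_response(uniform))   # 0.0 - center and surround cancel out
```

This is why diffuse, even illumination barely excites these cells, while a small, well-placed spot (a "pixel") drives them strongly.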

In parallel pathways, the color sensitive photoreceptors form similar “center-surround” structures, but this time they are paired by color.  Red and green sensitive neurons are paired to produce the red-green fields shown in Figure 1.  Blue and yellow are likewise paired.  Interestingly, there are no true “yellow” sensitive photoreceptors, and the yellow fields shown in the figure result from the fact that green-sensitive photoreceptors differentially respond to blue-green vs. yellow-green light. 
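The pairing described above can be sketched as simple arithmetic on cone responses. The weights here are illustrative choices of mine, not measured values; L, M and S stand for the long-, medium- and short-wavelength cones (roughly red-, green- and blue-sensitive):

```python
# Sketch of color-opponent channels built from cone responses.
# Weights are illustrative, not measured; the point is that "yellow"
# is reconstructed from L and M together, with no yellow receptor.
def opponent_channels(L, M, S):
    red_green = L - M              # positive for reddish light, negative for greenish
    blue_yellow = S - (L + M) / 2  # positive for bluish light, negative for yellowish
    return red_green, blue_yellow

# Yellow light drives L and M about equally, and S hardly at all:
print(opponent_channels(1.0, 1.0, 0.0))  # (0.0, -1.0)
```

Equal L and M activation with no S yields no red-green signal and a strong "yellow" reading, which is the trick the text describes.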

In case you were wondering, red-green color-blindness results from deficiencies of the retinal cones, and not the ganglion or LGN neurons.  The “after-image” effect you “see” when you stare at a color picture, then look at a white piece of paper, is a result of “rebound” when the inhibition is released on the surrounding color fields of the retina and LGN neurons.
Figure 2:  Visual Field and Optic Chiasm

Most, but not all, of the sensory and motor neurons from the body actually connect to the *opposite* side of the brain.  That crossover usually takes place in the spinal cord about midway between the brain and the location where the nerve enters the spinal cord.  For the cranial nerves, not all of the connections cross over.  The optic nerve is very strange in that half of the connections cross, and the other half do not.  The distinction is in what part of the visual field is represented.  Figure 2, left shows how this works.  The lens of the eye reverses and inverts the image, so that objects that appear in our vision on the right end up on the left half of the retina.  Each eye receives input from left and right visual areas, but they are combined so that the images from the right visual field go to the left LGN and visual cortex, and images from the left visual field go to the right LGN and visual cortex.  Once in the visual cortex, projections from the two eyes, but for the same visual field, form the ocular dominance columns.
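The routing rule is easy to state in code. A minimal sketch (the function name is my own, not standard terminology): it is each half of the *visual field*, not each eye, that determines the destination hemisphere.

```python
# Chiasm routing rule from the text: each half of the VISUAL FIELD projects
# to the opposite hemisphere's LGN and visual cortex, regardless of eye.
def target_hemisphere(visual_field_half):
    """'left' or 'right' half of the visual field -> hemisphere that processes it."""
    return "right" if visual_field_half == "left" else "left"

# Both eyes see both halves of the field; the chiasm re-sorts by field, not eye:
for eye in ("left eye", "right eye"):
    for half in ("left", "right"):
        print(f"{eye}, {half} visual field -> {target_hemisphere(half)} LGN/V1")
```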

The “X” crossing is actually called the “decussation” or “optic chiasm,” and certain brainstem or pituitary tumors or strokes can be diagnosed by their effects on the visual field caused by pressure on the optic nerve and chiasm.  Visual information also goes to additional locations to aid in visual tracking, localization and pupillary reflexes.  A novel form of stroke, involving the thalamus and the optic nerve projections, results in an agnosia called visual neglect, in which the person is unable to acknowledge any vision in one half of the visual field.  Both eyes function normally, and the person will *track* a moving object anywhere in sight, but if shown a picture in the neglected area, they cannot identify it, and in fact will deny that there is anything to see (however, if the picture is a sexy, horrific or embarrassing one, they will show emotional reactions, still without recognizing that the picture is there).

So, the eyes really do “have it.”  They detect light, color, motion – and even do a large part of the visual processing before the signals even get to the brain.  Is it true that the retina preserves the last image seen before death?  No.  The retina is not like film or a video camera.  Vision consists of electrical signals from neurons that detect light by chemical means; however, those chemicals break down pretty rapidly when neurons are deprived of oxygen, and the electrical signal is actually a variation in the frequency of action potentials.  Once those processes stop, there is no image, no picture, no information.  But even without the old myth, the visual system is a fascinating place to hang out.

Next up, we will move on to the intersection of the parietal and temporal lobes for the auditory system.  After that we will return to the association cortices to discuss how the brain puts multiple signals together to recognize the outside world. 

Wednesday, February 23, 2011

The Vision to See

Occipital Lobe.  Visual Cortex.

As promised, starting at the "back" of the brain takes us to the Occipital Lobe.  Unlike many of the "gross" or major divisions of the brain, this lobe is almost entirely devoted to processing one sense: vision.  It makes sense, though (no pun intended), since the visual system is *the* major sense in primates, including humans.

It is also one of the "newer" structures of the brain.

What do I mean by that?  Well, in developmental terms, comparative neurophysiologists "date" parts of the brain in terms of when they appear - both in evolution and during fetal development. No matter your *personal* opinion of evolution, as a fetus develops, complex organs appear and mature ("ontogeny") in a sequence very similar to the "phylogeny" of evolutionary development from lizard to higher-order mammal.  Thus the "ancient" lizard brain performs the most basic functions, and corresponds to the brainstem and subcortical structures.  As more senses are added - smell, hearing, touch, vision - the appropriate cortical regions develop and expand.  The most advanced functions are complex cognitive decisions centered in the frontal lobe, hence that is the "newest" cortical region.  But aside from cognition, vision is the next most complex, and evolutionarily most recent, development - hence it makes a certain sense that the visual cortex occupies a disproportionately large percentage of total brain area.

From: Psychology Wiki -
At the most "caudal" (posterior, or rear-most) extent of the occipital lobe, the part referred to as the occipital pole, and in the fold between the left and right hemispheres of the brain, is the primary visual cortex.  This is the region that directly represents the shapes, shades and colors that we see.  In stained sections, this cortex looks striped, thus one of the names is "striate" cortex.

To better understand the structure of this brain area, it is necessary to understand the *signals* that come to this area of the brain from the eyes.  This really deserves its own blog, and I'll talk more about it in tomorrow's blog, but for now, the important point is that by the time visual information reaches the visual cortex, it consists of short lines - bars or edges of light (or dark).  These bars are then organized by visual orientation (angle), as shown at left.  In the visual cortex, these representations of visual angle are alternated with the identical image from the left and right eye. In scientific terms, these are referred to as "ocular dominance columns," and research has shown that these columns develop within days of birth (or eye opening in animals), when the brain first begins to receive input from the eyes.  Rows of these representations are stacked to form the primary visual area, with each row representing different areas of the entire field of vision.
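The idea of an orientation-tuned patch of cortex can be sketched with a toy filter. Real V1 receptive fields are Gabor-like; this 3x3 example with hand-picked weights just shows why a vertical bar excites a "vertical" column while a horizontal bar does not:

```python
import numpy as np

# Toy "simple cell": a filter tuned to vertical bars (weights are
# illustrative, not measured; real V1 tuning is Gabor-like).
vertical_filter = np.array([[-1, 2, -1],
                            [-1, 2, -1],
                            [-1, 2, -1]])

def simple_cell_response(patch):
    """Sum of pixel-by-pixel products of a 3x3 image patch and the filter."""
    return float((patch * vertical_filter).sum())

vertical_bar   = np.array([[0, 1, 0]] * 3)   # bar aligned with the filter
horizontal_bar = np.array([[0, 0, 0],
                           [1, 1, 1],
                           [0, 0, 0]])       # same bar, rotated 90 degrees

print(simple_cell_response(vertical_bar))    # 6.0 - preferred orientation
print(simple_cell_response(horizontal_bar))  # 0.0 - orthogonal bar, no net drive
```

A bank of such filters at different angles, tiled across the visual field, is a rough cartoon of the orientation columns described above.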

Thus for each line, curve, pixel, light/dark spot, left/right - top/bottom portion of the visual field, there is a patch of visual cortex that represents it.  Building a "picture" of what we see is just a matter of recombining these visual elements into pictures.  This is the role of the other regions in the occipital lobe.  V2 - secondary visual cortex - builds these elements into more complex images.  V3 and V4 visual association cortex develop even more complexity - including templates for common features such as faces and familiar shapes.

One of the common *stories* told to neuroscience students is of the "Grandmother Cell" which is a neuron in the V4 (or other association) cortex that receives enough convergent connections from various shapes, colors, images that it only responds to your grandmother's face.  While that is a bit of an exaggeration, there really are visual association cortex cells that respond to faces - and specific features of faces in fact.  A personal example that I experienced was seeing a recording from V4 that responded to faces - when an experimenter peeked around the partition and was seen by the animal, the recording showed evidence of increased neural activity - but only if the person had a mustache!  Clean shaven faces did not activate the neuron.

Real "Grandmother Cells" are rare, but can exist in the association cortices that combine visual (Grandma's face) with auditory (Grandma's voice) with olfactory (her fresh-baked apple pie) - and memory.  Yes, memory is a key component, and rather than just one neuron, there are many neurons that represent one or more portions of the total stimulus.  But again, I get ahead of myself, and will return to this concept when we talk about memory later in the blog.

Tomorrow's blog will go into more detail on eyes, retinas and visual representation - before we finish the occipital lobe with another look at the association cortices.  Tune in tomorrow and we will discuss how spots of red, green, green, blue and white light turn into a mental image of our surrounding environment.

Monday, February 21, 2011

How do we know that?

Just a quick post today to clarify something from yesterday...

Just how do we *know* what all of these brain areas do?  It's not like they can tell us, right?

Well, actually, they do.  Until the past 50 years, mostly we found out what role a brain area fulfilled by losing it.  Losing the brain area and the function, that is.  The most famous case in Neuroscience I have mentioned before - that of H.M. who had surgery to remove part of his temporal lobe, and lost the ability to form new memories.

Likewise, most of what we have learned about brain function has been and still is learned by studying loss.  In humans we look at the consequences of epilepsy, stroke and brain injury.  In rats we can perform experiments to temporarily or permanently inactivate small regions of the brain.  By doing this, we learn about function, regrowth and adaptation following injury.

The most profound work was done by Canadian Neurosurgeons Wilder Penfield and Herbert Jasper.  Penfield treated epilepsy by destroying the brain cells that triggered seizures.  To do this, he first had to determine which brain areas - and cells - were damaged.  In the course of surgery, he would apply a probe delivering a small electrical stimulus to various parts of the brain surface (while the patient was awake!) and ask what the patient experienced.  Penfield and Jasper published "Epilepsy and the Functional Anatomy of the Human Brain" in 1951, which mapped the connections between touch sensations and muscle control for the whole body onto the motor and sensory cortex of the brain.  We'll speak more of the "motor homunculus" and "sensory homunculus" in later blogs.

In the past 60 years medical science has developed astonishing new techniques for imaging the functional activity of the human brain.  Functional magnetic resonance imaging (fMRI) detects areas of the brain with increased blood oxygen flow and consumption.  Positron emission tomography (PET) tracks the flow of radioisotope-labeled glucose into neurons. Magnetoencephalography (MEG) detects the minute magnetic fields produced by neural currents to depths of 50-70 mm into the brain (note the human brain stem is about 100-150 mm from the surface), providing thousands of times the resolution of EEG.  On the noninvasive stimulation side, we have transcranial direct current stimulation (tDCS) and transcranial magnetic stimulation (TMS), which can stimulate many neurons without surgery - albeit these techniques are not precise (i.e. minimum volumes of 5-10 cubic *centimeters* and typically only to depths of 10-20 mm).  While still not perfectly precise - the imaging resolution is measured in cubic millimeters, which can *still* contain over one million neurons - it is now possible to place a subject in an MRI scanner, ask them to perform a task such as reading a book or imagining playing a musical instrument, and watch which parts of the brain "light up" in real time.

So, yeah, we do have a pretty good understanding of the functions of various brain areas.  The interesting part is not just what we know, but how we learned it. 

Until next time...

Sunday, February 20, 2011

Back to Basics

The LabRats are back in the lab and we can return to the discussion of parts of the brain.

For the next few weeks, this blog will work through the essential sections of The Lab Rats' Guide to the Brain which deal with the functions of the brain, and which of the various regions, cortices (plural of cortex, if the term is unfamiliar), ganglia and/or nuclei are responsible for those functions.  Before going in depth into the sections, I prefer to specify the *types* of functions first.

I will describe brain function in terms of:

(A) Input
(B) Output
(C) Processing
(D) Control

Clearly "Output" and "Control" could be thought of as the same thing, but I will clarify that in my classification scheme, "Output" results in an action of the body, while "Control" results in a change to the body's internal workings.

Now, what are the "Inputs"?  Vision, Hearing, Smell, Taste and Touch.  Those are the five basic senses.  In addition, there are special cases:  "Proprioception," the sense of body and limb position, is a special case of "Touch," although the receptors are in the joints and muscles and not the skin.  Balance is tightly associated with the sense of hearing, but is controlled mostly by proprioceptors, pressure sensors on the feet, and the proprioceptor-like neurons of the semicircular canals.  Smell and Taste are essentially the same sense - what we sometimes call "Chemoreception" - and there are special instances of chemoreception through skin receptors.  This brings us to pain.  The sense of Pain is closely intermingled with all of the senses - each can signal a painful stimulus - but for the most part it is organized with, and associated with, the sense of touch.

Outputs:  The commonly considered outputs of the brain are speech and muscle movement.  In truth there are many more outputs, but most of them fall into regulating the various physiological systems of the body, and are more appropriately considered "Control" functions.  In addition, muscle movement is not just moving the limbs, but also includes eye blinks, pupil dilation and constriction, "scanning" movements, adjustments of the ear drum, swallowing, breathing, and piloerection (goosebumps).

Processing is the function that involves the largest percentage of the brain.  Once a sensory neuron reports to the appropriate part of the brain, that information is *represented*, then *associated*.  Smell gets associated with taste, and we decide which foods we like.  Sound and vision are associated, and we can track a moving car, bird, airplane, or that baseball flying toward us at 75-80 mph.  Vision, touch (vibration) and proprioception are associated, and we are certain we've *hit* that baseball out of the park.  Vision, hearing, and proprioception are associated to give us the power of speech and reading.

Memory is processing; as is "executive function" or decision-making. 

Control functions take two forms - what we call the "autonomic" - and you can substitute the word "automatic" - functions, and coordination.  Taking the latter type first, coordination usually involves association between the Input and Output functions.  Vision plus eye movement (and pupil dilation) provides tracking.  There is a brain area that performs precisely those functions.  Proprioception plus muscle movement plus vision plus the sense of touch is necessary to coordinate the smooth, precise muscle movements necessary to reach out, *find* the object we are reaching for, stop the hand, grasp the object, and move it to another position.  Autonomic functions include resting heart rate, breathing rate, body temperature, blood pressure, hunger, fear, excitement, and even mating.
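Since I'll be referring back to this four-way scheme, here it is as a small Python lookup table. The categories and examples come from the text above; the scheme is this blog's own classification, not a standard taxonomy:

```python
# The blog's four-way classification of brain functions, with examples
# drawn from the surrounding discussion (not an exhaustive or standard list).
brain_functions = {
    "Input":      ["vision", "hearing", "smell", "taste", "touch",
                   "proprioception", "balance", "pain"],
    "Output":     ["speech", "muscle movement", "eye blinks",
                   "pupil dilation", "swallowing", "breathing"],
    "Processing": ["sensory association", "memory", "executive function"],
    "Control":    ["coordination", "heart rate", "breathing rate",
                   "body temperature", "blood pressure"],
}

def classify(function):
    """Return the category a function falls under, or None if unlisted."""
    for category, examples in brain_functions.items():
        if function in examples:
            return category

print(classify("proprioception"))  # Input
print(classify("memory"))          # Processing
```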

To reiterate from a previous blog, the diagram of the various "lobes" of the brain at right also serves to divide up functions as well.  If one were to draw a line directly downward from the point marked "Central Sulcus," most Input functions would be to the right and Output to the left.  Red (Frontal), Orange (Cerebellum), and light Blue (Brainstem) are Control areas.  Blue (Parietal) and Yellow (Temporal) are Processing areas, although there is also some processing in the Frontal lobe.

Over the next several blogs, we will work from back (Occipital) to front (Frontal) regions and describe a bit of the organization, roles, and specialization of each brain area.  We will then work "downward" into subcortical areas and the brain stem, with a brief discussion of the spinal cord.  Along the way I'll introduce some of the mnemonics we learned in Medical School for keeping all of this straight long enough to pass exams, the LabRats will probably make an appearance, and I'll collect questions for the mailbag.

To paraphrase Joe Bastardi's weather wisdom: "Protect your brain, it's the only brain you've got!"

Friday, February 18, 2011

How to read a scientific paper…

My good friend Sarah Hoyt asked me to do a special crossover between her blog “According to Hoyt” and “The Lab Rats’ Guide to the Brain” here at Teddy’s RatLab.  For those of you joining us from the world outside the lab, I am Tedd Roberts, a professional researcher in the field of Neuroscience, and an apprentice SF writer.  As a Ph.D., I talk to a number of writers and give advice – some requested, some gratuitous (grin!) – about getting the science right in Science Fiction.  Here in Teddy’s Rat Lab I am working on “The Lab Rats’ Guide” as a way to describe the basics of brain science in an informal way, without losing the accuracy of the science.

After all, *some* brain science in TV and Movies is just laughable.  What?  You’re not laughing?  Well, trust me, the doctors, scientists and students who watch and read are laughing; that is, when they aren’t hanging their collective heads in shame.

I’m sure you’ve seen it – the brain probe that is long enough to stick out the other side of the skull, yet somehow it never seems to do any damage when inserted into the back of the brain.  The outer space doctor emoting over “The engram has wrapped itself around the neocortex and we’ll never get it out!” 

Right.  Sure.  And the engines, they canna take ennimore, Cap’n. Yup, a whole university’s worth of professors is shaking their heads over that one.

So – as a writer, or as a reader, what are you supposed to do?  Read a scientific paper?

In a word?  No.

My advice: don’t do it.  “That way lies danger, young apprentice.”  Scientific papers *really* aren't written for nonscientists.  They are full of phrases like "Under conditions of altered physiological constituents of the interstitial fluid, we determined a significant 5% increase in intracellular osmolality."  Now, if you're a scientist you can read that and figure out that when the liquid outside a cell is salty, the liquid inside a cell gets a little bit salty, too.  Scientific writing is *stilted*.  It uses a very rigorous style that is meant to convey certain facts in a manner such that other scientists will know to look at the information in a certain way.

I have a colleague who says "Scientists really only know how to write about 20 sentences.  They just have to learn how to use those same sentences over and over until they run out of results to include in a paper."  He should know; he's written over 150 articles for scientific journals, and they all use the same basic construction.  Only another scientist, schooled in the same art of manuscript preparation, can truly wring all of the essential facts out of a scientific paper.

"Psst, hey Boss?"

"Yes, Ratley, what is it?"

"What about scientific magazines?  The Grad Students keep leaving them lying around in the lab.  Surely they're not so bad!"

"That's true, Ratley, but sometimes I think those magazines are edited by Ratfink.  Somehow, people seem to get impressions about science that the scientists themselves never intended, particularly in the magazines that have "Popular" in the name.  Unfortunately, the better public science magazines are still scientific journals, and the articles can still be hard to understand."

[Oh, sorry folks, Ratley is my assistant.  He showed up in the lab one day, and asked for a job.  Who better to handle lab rats, than ... a Lab Rat?  I introduced him and the other LabRats a couple of days ago over at the Guide.  Oh, and yes, I’m translating.  When Ratley speaks, most people just hear “squeak.”]

Back on track.  I'm sure you've seen them on the newsstand: magazines with "Science" or "Scientific" in the title.  There are two high-quality "public" journals ("Science" and "Nature") that publish new or important results with broad appeal.  Manuscripts are typically submitted to a board of editors, who then send them to be read and reviewed by other scientists in the appropriate field before the editors will consider publishing.  If an article passes this "peer review" and is also considered to be of interest to persons other than just those who study that exact phenomenon, then the magazine will consider publishing it.  These magazines are considered "public" because scientists and knowledgeable people from many different scientific fields read them.  There are other magazines that take science seriously, such as "Scientific American," either by inviting scientists to write articles for the general public, or by having their own writers interview scientists before writing an article.  Then there are the ones that are not so serious - those are usually the ones with "Popular" in the name.  Getting an article to be understandable by the public requires someone who can *write* first, with the science coming second – sometimes without even talking to scientists.  It is very rare that a scientist is such a great communicator that they can write Sunday Supplement articles on science that anybody can understand – the late Carl Sagan was one, and the Science Fiction (and Fact) author Isaac Asimov was another.  The hazard in writing an article so that anyone can understand it is that you might lose the science along the way.

So, "hard science" is ... hard, and "easy science" may not be science at all. In fact, I have quite often found that some of the "Popular" and "Today" magazines can sometimes take a decidedly *antiscience* stance. Is there a middle ground?  Sure.  If you are serious about including science, and in particular brain science, in your writing, consider taking a couple of courses at your local community college.  Often there are classes in physiology, neuroscience or psychology for non-majors – you may even find one taught by the same professor that teaches a university course to PhDs. Take the survey courses, learn the language.  It may not help you understand The New England Journal of Medicine, but it can certainly help with Scientific American.  The other thing it will help with is ...

Ask a scientist. 

"Ya want I should call Ratley and get some help in here, Teddy?"

"No, Ratso.  I think I can handle this one."

"Are ya sure?  Da emails have been pilin' up ever since ya posted dat blog on da Internet."

"No."  (pant) "I can handle it." (heave) "Man, that's heavy. How many more sacks of mail? Oh, heck no.  Yeah, Ratso, call the guys in here, we've got to sort through all of this stuff."

"Hey, Boss.  You've got more fan mail."

"No Ratley, not fan mail.  More questions, but I can't figure out how they got my address.  Do you know?  Ratfink ?"

(Ratfink leans on a mail sack, whistling)

"Ratfink?  You *do* know!  You did this, didn't you!"

"Sure.  You know folks, Teddy here will be *glad* to answer your questions, just email him at..."

"No, Ratfink.  Don't you dare, or no cheese."


You've probably figured out by now that if you *really* want to get a better understanding of the brain, who better to ask than a real brain scientist?  There are a bunch of us out there who are fans of *whatever* fiction genre you might choose.  Science Fiction is a favorite, and there are quite a few scientists who write as well.  Many years ago at a very large scientific conference, one professor had a booth selling (and signing!) his mystery books.  They were quite good, appealing to scientists and nonscientists alike.  We scientists are not always the best at *writing* fiction, but we sure can tell when the science is wrong.  There are over 30,000 people attending the Society for Neuroscience meeting each year, and if you mention “Spock’s Brain” or “The Matrix” they will laugh, but at the same time they will speak well of “Memento.”

Need help finding a scientist?  Just ask on whichever bulletin board you frequent.  Ask the local medical school or university, find out who teaches the night classes in Biology, Chemistry  or Physics at the local community college.  Ask someone you know.

Getting the science right is *worth* it.  You owe it to the readers.  You’ll find that many if not most scientists will appreciate it – but be sure to explain to them that The Story comes first.  You aren’t writing a Ph.D. dissertation.  They’ll understand.

And who knows?  You might find out that you’ve gained a whole bunch of new fans!

Thursday, February 17, 2011

From the Mailbag, part 2

Since our last appearance, and thanks in no small part to invited guest appearances on talk shows and other discussion groups, the LabRats and I have received a few emails with questions folks might like answered about science, writing, and the brain. Without further ado – a few samples from our mailbag.

"Dear Lab Monkey: Why don't you let the Rats talk more? -A. Nony Mouse"

My reply: "Dear Nony: I think you have me mistaken for Dr. Freer. Still, just for you, I will sometimes let the LabRats have their say. Do read on, and I'll translate... - Tedd"

... another ...

"Hey Ratley, Do you LabRats really talk? Signed Thomas."

Ratley squeaks. [Translation: "Doubting Thomas: You wrote to me. Are you really expecting an answer? –Ratley]"

... Okay, back to the Mail ...

"Hey Tedd: If humans were to develop telepathic or psychokinetic abilities, what parts of the brain would be responsible? - J.R."

Whoo-boy. Okay, that's a long one, let's table that and look at another. It'll take another whole blog post to answer that one.


"Dear LabRats: How is it possible that we can hear our own name spoken in the midst of a crowded, noisy room? - S.H."

Ah, another. Do be patient. Ratley and I will make a list and get back to these one by one.

... ah, here's a good one!

"Dear Dr. Roberts: We sometimes hear that humans use only 10% of their brain. Is this true? - Dave F."


Hey, Ratfink, don't startle me like that, I didn't know you were there. Okay, it's just the right size for this blog. You go ahead and answer it.

[translated from rat-squeak]

"Dear Dave:

"We hear that little bit of trivia all the time. It is, as you may have guessed, a misconception. Now, we LabRats always use 100% of our brains – well, except for maybe Ratface. Oh, and Ratso, he uses 100% of his stomach to do his thinking.

"Anyway, not to get off target, it is true that one could say that at any given time, only 10% of a human's brain is active – 5% if they are watching daytime talk shows. What it really refers to is the incredible redundancy of the human brain, plus the fact that there are specialized brain areas that are only used for particular tasks.

"Let's start with vision. There are millions of cells responsible for detecting light and sending the information to the brain. If you think of the light-sensitive retina cells (photoreceptors) like pixels in a camera or TV, you can understand why you need so many – to get the maximum fine detail in anything you look at. In fact, though, there are multiple types of photoreceptors for each "patch" of our visual field. There are 'rods' which detect black vs. white, and there are 'cones' which detect color. Further, unlike us LabRats, you humans have different color sensitivities of cones: red vs. green and blue vs. yellow. Thus for each 'pixel' of image we see, there are at least three types of retinal cells.

"Redundancy – and different functions.

"Now, take that same visual information into the brain. Before it gets there, there are 'ganglion cells' in the retina that combine inputs from photoreceptors. Each ganglion cell has a donut-shaped field that represents the part of the visual scene covered by that cell. For some cells, light shining in the 'donut hole' causes more electrical and chemical activity in the cell, while light shining on the 'donut ring' causes less activity. Again there is redundancy and different functions. There are millions of ganglion cells covering all possible combinations of photoreceptors across our entire visual field, and there are ganglion cells with 'on-center, off-surround' fields as described above, as well as cells with 'off-center, on-surround' functions.
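[Tedd's note: for the programmers in the audience, that 'donut' arrangement can be sketched in a few lines of code. This is my own toy illustration – a one-dimensional strip of 'photoreceptor' values and a made-up response function, not a model of any real retina.]

```python
# Toy "on-center, off-surround" ganglion cell: the response is the
# average light in the center (the donut hole) minus the average light
# in the surround (the donut ring).

def ganglion_response(image, center, radius=1, surround=3):
    """Center-minus-surround response on a 1-D strip of photoreceptors."""
    center_vals = [image[i] for i in range(center - radius, center + radius + 1)]
    surround_vals = [image[i]
                     for i in range(center - surround, center + surround + 1)
                     if abs(i - center) > radius]
    return (sum(center_vals) / len(center_vals)
            - sum(surround_vals) / len(surround_vals))

dark = [0.0] * 11                 # no light anywhere
spot = [0.0] * 11
spot[5] = 1.0                     # light only in the donut hole
flood = [1.0] * 11                # light everywhere

print(ganglion_response(dark, 5))   # 0.0: nothing to report
print(ganglion_response(spot, 5))   # positive: the cell is excited
print(ganglion_response(flood, 5))  # ~0: uniform light cancels out
```

Notice the last line: uniform illumination excites the center and surround equally, so the cell stays quiet – these cells report *contrast*, not raw brightness.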

"But what is it all for, and why the redundancy? Well, moving to the 'visual cortex' of the brain, combinations of inputs from retinal cells result in cells that are activated by short lines of light, and by edges. Multiple redundant copies means that there are cells for every conceivable angle of line or edge, every possible location in the visual field that our eyes can detect, as well as combining two eyes, and all those colors. Working forward into the parts of the brain called 'visual association' areas, we start to find brain cells that respond to circles, shapes, even faces!

"Then we move into combining other senses, and other parts of the brain: sound, smell, touch, movement, memory, and decision making. We LabRats get to continually hear the Neuroscience student's favorite story about Grandmother Cells. That's the brain cell that is active only when it sees Grandma's face, hears her voice, and smells her fresh-baked apple pie (with a slice of Cheddar for well-behaving LabRats). Believe it. While there may not be specific 'Grandmother Cells' in any given human's brain, there *are* brain cells tuned to respond to highly specific combinations of inputs!

"So, if the Grandmother Cell is active, does that mean that we have used only one *trillionth* of our possible brain capacity?


"In the first place, the signals that activate Grandma have traveled through five widely separated brain areas for vision alone, and an equivalent number of sections for sound, smell, touch, movement (don't forget, Grandma wants a hug and a kiss before you go!). Then we had to search through memory. Is that Grandma H. or Grandma D.?

"That's another contributor to the 10% myth. The human brain has an incredible capacity for memory. With the possible exception of Ratface, the typical LabRat has millions upon millions of brain cells available for storing memory, and that's sufficient to remember where to find the cheese, the peanut butter, fruity cereal, water, electrical cords that are fun to chew… In fact, Dr. Rob has experiments that show that no matter *how* complex an environment, the rat brain can map it and remember it.

"Now in comparison, the human brain has *billions* of brain cells – and trillions of connections – spread across its many brain areas. How much more information than a mere LabRat (except for me, of course) can it store and process? As far as we can detect, no human has ever run out of storage space in the brain. Hence, humans must use only a small portion of brain resources at any moment. However, the rest of the brain is there, active, and ready to contribute at a moment's notice.

"To finish up, some of the most fascinating examples come from the field of brain imaging. MRI scanners can be set to track the flow of water or oxygen in the brain, resulting in a map of the most active brain regions. If your everyday average human lies in a scanner and listens to music, the primary and associative auditory regions light up. Sounds with an emotional or memory content may activate brain regions involved in memory. Sort of like Ratface and Heavy Metal.

"Ask them to read written music, and the visual and reading centers light up, but very little activation occurs in auditory areas. However, if you ask a professional symphony conductor to read a musical score, the brain scan lights up with brain areas involved in reading, listening, singing, memory, even the areas responsible for moving the hands and arms in conducting motions!

"Now *that* is using your brain!

"So, final answer. At any given time, sure humans only use a portion of what the brain is capable of. The rest of it gets used at other times and for other purposes. We *still* don't know the total information capacity of the LabRat brain, let alone the human one, but it is certain that it does *not* go unused!"

[end translation]

Thanks, 'Fink. Not a bad explanation. Now I'll add one more piece of information before we let these good folks go back to their regularly scheduled blog reading…

How can we use all of our brain and *still* have room for more information? The most astounding thing is that information is both spread out among a lot of different brain cells (that redundancy again) yet still specific enough that activating only a few brain cells will get the whole bit of information back. SF writers like to call this "holographic." That's not a bad term, but not really accurate. The scientific terms are "sparse, distributed" and "associative." That last one is the key: the reason the brain can keep holding more information is that it keeps making *associations* between new and old memories. Apart from disease and injury, the reason we forget is not a loss of the actual information, but a failure to come up with the appropriate associations.
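For readers who like to tinker, here is a toy sketch of what "sparse, distributed, associative" storage means – a Hopfield-style network of my own construction (the six-unit "memory" and every name in it are made up for illustration, not anyone's model of a real brain). The stored pattern lives in the *connections* between units, and a partial cue pulls back the whole thing:

```python
# A tiny associative memory: patterns of +1/-1 are stored by a Hebbian
# rule (units active together get a stronger connection), and recall
# works by repeatedly letting each unit "vote" based on its neighbors.

def train(patterns, n):
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j]   # fire together, wire together
    return w

def recall(w, cue, steps=5):
    s = list(cue)
    for _ in range(steps):
        # Each unit takes the sign of the weighted input from all others.
        s = [1 if sum(w[i][j] * s[j] for j in range(len(s))) >= 0 else -1
             for i in range(len(s))]
    return s

memory = [1, 1, -1, -1, 1, -1]      # the stored "fact"
w = train([memory], 6)
cue = [1, 1, -1, 1, 1, -1]          # one unit remembered wrong
print(recall(w, cue) == memory)     # True: the associations fill in the gap
```

The point of the toy: nothing stored the corrected bit anywhere in particular – the *associations* between units reconstructed it, which is the sense in which forgetting can be a failure of association rather than a loss of data.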

So folks, keep those questions coming. Nestor is currently making a mess out of the discarded envelopes and YDR is now covered in paper lint and ink. We'll try to answer some more of your questions in future guest blogs.

Wednesday, February 16, 2011

From the Mailbag 1

Whew.  Taking a bit of a breather tonight. 

One of the challenges of academic research is the paperwork, and this week is no exception.  Today alone saw me correcting proofs on a research manuscript, completing a research grant application, handling budgets and writing a progress report.

With all of that office work, you may wonder who actually runs the lab?

Well, many labs jokingly refer to the technicians as lab rats; in Teddy's Rat Lab, we have some literal LabRats (tm).  No, that's not Speaker to Lab Animals over there to the right, no matter what Sarah Hoyt says.

Actually, he's just a cousin.  You can tell because he's a white rat.  The LabRats are "Rattus norvegicus", or Hooded Norway rats just like Ratley over there to the left. Ratley is the boss, and he's in charge of the mailbag.  Right, Ratley?


Oh, yeah, right, they don't understand you.


Anyway, Ratley has also been keeping the lab running in my absence, assisted by Ratso, Ratface, Ratfink, YouDirtyRat and Nestor.  Well, mostly helped.  Ratface keeps getting his tail stuck in the doors. 

The hardest part is keeping up with the mail.  Last week Teddy's Rat Lab got two big questions that have been saved *especially* for this blog:

William asks:  How can a sensory neuron respond to very low signals or even wavelengths that are larger than the cell itself?

Neil asks:  Is there really such a thing as an automatic reaction or "muscle memory"?


Right, Ratley. He reminded me that there are two particular adaptations of visual and auditory neurons that handle exactly the problems William asks about.  The outer segment of rod and cone cells in the retina has a membrane that folds over itself to pack lots of rhodopsin into a small space.  In this manner, the light-sensitive cells of the retina *amplify* a weak signal despite their limited size.  For auditory neurons, the neuron itself doesn't respond to the sound wavelength; instead, the long, tapering Basilar Membrane vibrates in response to sound.  The membrane is long enough to respond to frequencies from 20-20,000 Hz.  In fact, different frequencies reach a maximum vibration at different points along the membrane, thus auditory neurons simply have to be organized according to *where* they connect along the length of the basilar membrane.  The neuron itself doesn't have to respond to the wavelength of the sound, but merely to any vibration.  All of the sorting of pitch is done by the connections between neurons.  We'll talk more about that in later blogs as we work our way through various systems of the brain.
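[Tedd's note: if you want to play with that "place coding" idea, a widely used curve fit for the human cochlea is the Greenwood function. The constants below are the commonly quoted human values – my choice of model for illustration, not something from Ratley's mailbag.]

```python
import math

# Greenwood-style frequency-to-place map for the human basilar membrane.
# Position x is the fraction of membrane length from the apex (0.0,
# low frequencies) to the base (1.0, high frequencies).

def best_frequency(x):
    """Frequency (Hz) that maximally vibrates the membrane at fraction x."""
    return 165.4 * (10 ** (2.1 * x) - 1)

def place_for(freq_hz):
    """Inverse map: where along the membrane a pure tone peaks."""
    return math.log10(freq_hz / 165.4 + 1) / 2.1

for f in (100, 1000, 10000):
    print(f, "Hz peaks at", round(place_for(f), 2), "of the way to the base")
# 100 Hz lands near the apex (~0.1), 10,000 Hz near the base (~0.85):
# a neuron only needs to know *where* it is wired in, not the waveform.
```

This is exactly the trick described above: the neuron never has to follow the sound wave itself; its position along the membrane *is* the frequency label.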


Ratley also mentioned that I should not forget the guy with the big feet.  I guess he means you, Neil, and the answer is yes, there is a special subset of memory commonly called "skill memory."  While it might seem like a type of "reference memory" (that is, memory of rules, skills, techniques, etc.), in fact it is not stored or processed through the same parts of the brain as other types of memory.

To further explain, I present the case of H.M.  Back in the 50's, the patient known only by his initials, H.M., had epilepsy.  It could not be controlled by the drugs of the time, but the brain area in which it originated could be located by electroencephalogram (EEG), the recording of brain electrical activity from the surface of the head.  So the surgeon removed the critical area, the inside surface of the temporal lobe on both halves of the brain.  In that region lies the hippocampus - essential to making and storing *new* memory.  Like the man in the movie "Memento," H.M. was no longer able to store new memories, although he could easily recall memories from before his surgery.  For instance, he would read a newspaper, put it down, pick it up 15 minutes later and read it again, never remembering that he had already read it.  Yet, when given a game of skill (the Tower of Hanoi - look it up, it's a fascinating game), he continually got better, until he could solve the puzzle perfectly every time.  When asked, he would tell the doctors that he had never seen it before, and would be surprised at his own skill.
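Since I just told you to look it up: the Tower of Hanoi puzzle that H.M. practiced has a famously short recursive solution – move the top n-1 disks out of the way, move the big disk, then move the n-1 disks back on top. (This little solver is mine, included just for fun.)

```python
# Tower of Hanoi: returns the list of (from_peg, to_peg) moves that
# transfers n disks from `source` to `target` using `spare`.

def hanoi(n, source, target, spare, moves=None):
    if moves is None:
        moves = []
    if n > 0:
        hanoi(n - 1, source, spare, target, moves)  # clear the way
        moves.append((source, target))              # move the big disk
        hanoi(n - 1, spare, target, source, moves)  # pile back on top
    return moves

print(len(hanoi(3, "A", "C", "B")))   # 7 moves: the minimum is 2**n - 1
```

H.M.'s hands, so to speak, learned this procedure perfectly while the man himself could never remember having seen the puzzle.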

Neuroscientists now know that physical skills are processed through the cerebellum, caudate and putamen (and other subcortical structures) and do not require the *conscious* memory regions of the brain.  Pretty neat stuff!  So yes, there is "muscle memory" and it is every bit as automatic and mysterious as it seems.

So for now, keep those questions com....


... No, Ratface, watch where you put your... ... tail.

...too late.  I guess Ratley and I need to get back to the lab before Nestor makes a rat's nest out of everything.

Until next time...

Tuesday, February 15, 2011

Your Brain on Steampunk

Going to SF/Fantasy conventions can be dangerous. Giving in to our inner fan is certainly liberating, but you have this incredible let-down when you leave the Con and return to Mundania. In the case of Dragon*Con it can be even more hazardous – to your eyes (costumes), your ears (concerts) and your mind (due to all of those ideas and concepts fizzing around in there). The following is a consequence of Dragon*Con 2009, and departs from the usual format of The Lab Rats' Guide to the Brain.

The Brain is Steampunk.

For years popular media has latched onto the idea that the human brain is best described as an organic form of electronic computational device. Our movies and TV are full of references to the brain as computer, from "The Terminal Man" to "The Matrix" in which it is seemingly easy to connect a brain to electronics and expand the brain into some type of super computer. Even in the medical and neuroscience fields we talk in terms of neural circuits and neural computation as if we could easily replace parts of the brain with chips and transistors and all would be right with the world.

But the brain is not electronics. It’s not even digital. Modern computers are built on binary logic. The smallest computing element is the bit – on or off, no in between. Gang enough of these tiny switches together and we get bytes, words, and the internet. Build a device that can add and subtract enough bits and we get PCs, Macs and HAL 9000. Yet, we already know that the brain does *not* operate on a simple on-or-off basis. Sure, there's an event called an action potential which is all-or-nothing, but that refers only to how much voltage is produced. Action potentials can occur in singles, pairs, bursts, fast, slow, and from single neurons or billions at once. Any given neuron (single brain or nerve cell) is not exposed to a simple "on" or "off" input signal, and it doesn't produce one, either.

Unlike a computer, there is no one-to-one relationship between the input bits and the output bits in the brain. If you trace a single memory bit through the central processing unit of a computer, you find that that bit is manipulated in isolation from all other bits; sure, it can be added, subtracted, multiplied and divided, but you can track that same bit through every operation from input to output. Not so in the brain. Each neuron receives inputs from hundreds to thousands of other neurons, and in turn projects its own outputs to a similar spread of targets. In addition, those inputs can be any mix of excitatory (on) or inhibitory (off) signals. They don't even have to be "all the way on" or "all the way off"; there is plenty of room for "in-between."
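To make that concrete, here is a toy model neuron in a few lines of code – entirely my own illustration, with made-up weights – showing how graded, mixed excitatory and inhibitory inputs get summed before the one all-or-nothing output:

```python
# A minimal "weighted sum plus threshold" neuron. Inputs can be any
# strength; weights can be positive (excitatory) or negative
# (inhibitory). Only the final decision to fire is all-or-nothing.

def neuron_output(inputs, weights, threshold=1.0):
    drive = sum(i * w for i, w in zip(inputs, weights))
    return drive >= threshold   # fire an action potential, or don't

weights = [0.6, 0.6, -0.9]      # two excitatory inputs, one inhibitory
print(neuron_output([1.0, 1.0, 0.0], weights))  # True: excitation wins
print(neuron_output([1.0, 1.0, 1.0], weights))  # False: inhibition vetoes
```

Even in this cartoon, the interesting part is the "in-between": the same neuron gives different answers depending on the *mix* of inputs, not on any single wire being on or off.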

So the brain is not digital. Does that mean it is an analog computer? The answer is: maybe yes, maybe no. Long before ENIAC ushered in the digital age, there were mechanical analog devices in common use for astronomy, navigation and ballistics. Early electrical analog computers required dedicated wiring to connect networks of resistors and capacitors to produce variable voltage outputs. In an analog computer, voltage might represent altitude, a resistor – the wind, a capacitor – distance. It's similar in the brain – the rate at which a neuron "fires" (produces action potentials) can represent the angle at which the elbow is bent, while the difference between whether one neuron or another fires can signify whether it is the right or left elbow. Modern neuroscience has returned to the concepts of analog computing to produce better models of neurons and brain neural function.

But I digress. I actually want to convince you that neurons are not bits and transistors, but steam, clockwork and lightning. To do so I have to reiterate the basic operation of a neuron.

Neurons are cells in the body that have a special function: to collect many different types of input, combine them, and produce an output that can be projected to a distant part of the brain or body. The first basic principle is that the outer covering (membrane) of these cells serves to separate molecules that have a charge. When salt (NaCl) is dissolved in water, it separates into positively charged sodium (Na+) and negatively charged chloride (Cl-). We call these ions, and if they can be sorted and separated, we have an electrical gradient that can be tapped to do some work. Neurons have an ingenious way to separate ions – in this case by actively pumping sodium (Na+) and positively charged potassium (K+) ions, with the Na+ on the outside and the K+ on the inside. Once separated, the K+ can freely drift back into the cell through tiny pores in the neuron membrane, but sodium cannot. The corresponding negatively charged chloride (Cl-) mostly stays on the outside of cells, since there's a corresponding pool of negatively charged protein inside the cell to balance the K+. The result is that when you measure the electrical charge across a neuron membrane, you find that the inside of a neuron is slightly negative with respect to its surroundings.

Keep in mind that these ions are all dissolved in water and you have the "steam" part of the steampunk brain. Add in the enzymes that will actively pump Na+ out of and K+ into the neuron as long as they have energy, and you have the clockwork. But what about the lightning?

Remember all that sodium pumped out of the neuron and not allowed to flow back in? The potassium can freely cross the membrane, and does, but is generally held in against its high internal concentration by the overall negative charge inside the neuron. Not so for sodium. Both the high concentration and the positive charge outside the neuron would push sodium into the cell if only it could pass through the neuron's membrane. Enter the sodium channel. There are pores that will allow sodium to enter a neuron, but they are normally closed. Ironically, these guardians of the neuron's electrical charge are themselves opened by small amounts of positive voltage. Opening even a single channel allows enough Na+ into a neuron to change its voltage – thus opening adjacent sodium channels and spreading the voltage outward from the initial entry point. When the sodium channels are organized into a tube or pipe-like structure – such as an axon, the main component of a neuron for transmitting signals over distances – the electrical signal can quickly flash in a single direction to a destination microns to meters away. Here we have our lightning.

A combination of gates which close channels after a short interval of time (one to two milliseconds) and the sodium/potassium pump recharges our neuron just like a Tesla coil to repeatedly produce these lightning-like "action potentials" in as little as 5 milliseconds. At the end of the axon, internal sacs or "vesicles" hold chemicals that are released when the action potential reaches the end of the line. The "neurotransmitter" chemical thus released starts the whole process in the next neuron by chemically operating still other ion-selective channels, producing "steam" and "lightning" in cell after cell.

The net result of millions and billions of Steampunk neurons acting together is even more steam, clockwork and lightning. Neurons can connect into networks that oscillate and reverberate just like the pendulum of a clock, producing a background of timing signals useful for operating muscles, producing hormonal rhythms, and separating sights and sounds in time. The analog nature of converging input and diverging output for any given neuron means that the brain itself is a mass of interconnections that defy all but the most sophisticated of wiring models. Then again, neurons and brain activity are about so much more than just the electrical signals. Neurons themselves are responsible for releasing a variety of chemicals into the blood and body which regulate temperature, attention, hunger, and thirst. Each of these chemical messengers provides yet another type of communication among neurons, and even the condition of the body feeds back to affect the operation of the brain.

So the brain is Steampunk. It's as good an analogy as any other, and better than a simple digital computer model. The entire concept captures not only the complexity, but the elegance of a totally interconnected system in which bubbling liquids, hissing steam, ticking clocks and flashing sparks all have their place. I'm sure that Agatha Heterodyne would certainly agree.

Monday, February 14, 2011

A Piece of the ACTION (Potential)

Sorry for the brief hiatus, I had to participate in a major review of research projects last week, and it required travel. All seems to have gone well, and we can resume discussion of How the Brain Works. For this week, topics include Action Potentials: how neurons make and harness electricity; "Your Brain is Steampunk" and then a look at the mailbag.

Figure 1
The following is taken from my lectures to medical and graduate students. The artwork is mine; I'll not excuse it, but it also can't be blamed on anyone else. I have simplified, but I hope not so much that the material loses its meaning for an informed public audience. For a humorous view, take a look at the comic that Eeyore pointed out to me when it was published. It is a very accurate depiction, with the punchline "(oversimplified)".

Let's first start with a depiction of the neuron membrane (Figure 1). I recall a great Star Trek alien that referred to humans as "Great ugly bags of mostly water." It's true, and neurons are no different. Each neuron has a thin membrane of lipids (fats) and proteins that encloses a salt solution. The chemical composition of the salt solution is different inside and outside the neuron, as shown at right. The plus (+) and minus (-) signs indicate the "charge" of the ions, and it does not quite even out, allowing the inside of the neuron to have a slight negative charge. Note the detail at the left of Figure 1 – these ions do *not* naturally cross the neuron membrane. They have to have channels.

The critical component of the solution for our discussion right now is sodium, Na+. Yes, too much sodium in the diet is a bad thing, but sodium is an essential chemical in the body. You will notice the concentrations are listed as "mM". That's "millimolar." A one "molar" concentration is about 23 grams of sodium per liter of water, or 3.1 ounces per gallon. "Milli" means 1/1000; therefore, 145 millimolar sodium is 0.145 x 3.1 = 0.45 ounces per gallon of water. You could *barely* taste that concentration of salt in water, so it really isn't much, and most people consume enough salt in the foods they eat and water they drink without the need to add salt for *nutritional* purposes. Flavor is another story.
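
For readers who want to check the arithmetic, here is a quick sketch in Python, using sodium's molar mass of roughly 23 grams per mole and standard unit conversions:

```python
# Converting a millimolar concentration into everyday units.
GRAMS_PER_MOLE_NA = 23.0        # molar mass of sodium, g/mol
LITERS_PER_GALLON = 3.785
OUNCES_PER_GRAM = 1.0 / 28.35

def mM_to_g_per_liter(conc_mM, grams_per_mole):
    """'Milli' means 1/1000, so 145 mM = 0.145 mol/L."""
    return (conc_mM / 1000.0) * grams_per_mole

def g_per_liter_to_oz_per_gallon(g_per_l):
    return g_per_l * LITERS_PER_GALLON * OUNCES_PER_GRAM

na = mM_to_g_per_liter(145.0, GRAMS_PER_MOLE_NA)  # about 3.3 g/L
print(g_per_liter_to_oz_per_gallon(na))           # about 0.45 oz/gal
```
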
Figure 2

So, sodium has a higher concentration outside the neuron than in, and it has an electrical charge. We can calculate the resulting electrical "potential" (voltage) simply by asking how much electrical energy would be required to prevent the positively charged sodium from entering the neuron and *balance* the concentrations at the same level as if no diffusion were possible (see Figure 2 at left).

Figure 3
We do know what this value is, and can calculate it as shown in Figure 3 at right. The equation uses R – the gas constant, T – temperature (in kelvins), F – Faraday's constant (to account for the charge of the ion), "ln" – the natural logarithm function, and the concentrations of sodium inside and out (the square brackets are chemists' shorthand for "concentration of"). This is called the "Nernst Equation" and we abbreviate it as "E" to indicate that it is the voltage at which the concentration is at "Electrical Equilibrium."
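
A rough sketch of that calculation in Python, using body temperature and a typical textbook value of about 12 mM for sodium inside the neuron (the inside concentration is an assumption here, not a number from the figure):

```python
import math

R = 8.314      # gas constant, J/(mol*K)
T = 310.0      # body temperature, K (37 degrees C)
F = 96485.0    # Faraday's constant, C/mol

def nernst(conc_out_mM, conc_in_mM, charge=1):
    """Equilibrium potential in millivolts for an ion (Nernst Equation)."""
    return 1000.0 * (R * T) / (charge * F) * math.log(conc_out_mM / conc_in_mM)

E_Na = nernst(145.0, 12.0)  # 145 mM outside; ~12 mM inside is an assumed value
print(f"{E_Na:.1f} mV")     # roughly +66 mV, the value used below
```
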

However, the most important parts are that (A) sodium has to flow through a channel to get into the neuron, and (B) that channel is normally closed. Thus when we *do* open the channel, we can get 66 millivolts worth of electrical energy out of this neuron (ionic current) by allowing sodium ions to diffuse inward until the electrical charge builds up enough to balance the concentration difference.

That's the complicated part – chemically. Now for the electrical part.

Figure 4
The way to open these sodium channels is with a slight positive electrical charge. We call that "depolarization." When a channel is depolarized, it allows sodium ions into the cell. The extra positive ions in the cell cause the electrical charge across the membrane to become more positive (at rest it sits at about -70 millivolts). This extra positive charge also does a few other things: it diffuses out into the neuron, it attracts the negatively charged ions away from the inside of the membrane, and it causes the channel to shut itself off (Figure 4 at the left). Once sodium stops flowing, the neuron can reset everything by allowing other positively charged ions to flow, and by pumping the sodium back out of the cell.
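
The open, shut-itself-off, and reset cycle described above can be sketched as a toy state machine. The threshold and voltages below are illustrative placeholders, not measured physiology:

```python
# A toy voltage-gated sodium channel: closed -> open -> inactivated -> closed.
RESTING_MV = -70.0    # approximate resting membrane potential
THRESHOLD_MV = -55.0  # illustrative depolarization threshold

class SodiumChannel:
    def __init__(self):
        self.state = "closed"

    def step(self, membrane_mv):
        if self.state == "closed" and membrane_mv >= THRESHOLD_MV:
            self.state = "open"          # depolarization opens the channel
        elif self.state == "open":
            self.state = "inactivated"   # the channel shuts itself off
        elif self.state == "inactivated" and membrane_mv <= RESTING_MV:
            self.state = "closed"        # reset once the membrane repolarizes

channel = SodiumChannel()
channel.step(-50.0)   # depolarized past threshold
print(channel.state)  # open
channel.step(-30.0)
print(channel.state)  # inactivated
channel.step(-70.0)
print(channel.state)  # closed again, ready to fire
```
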

Figure 5
All of these stages happen in sequence, and the sequence causes a distinct electrical current to be recorded outside the neuron (Figure 5, right). We call this the "action potential" and it has a very characteristic shape – show this to any electrophysiologist or neuroscientist, and they can immediately tell you that it is a neural action potential. The very small depolarization caused by sodium entering through a single channel is enough to depolarize adjacent channels and make them go through the same process. Those channels in turn trigger other channels, and so on until we run out of neuron. What makes all of this work is that these channels are exceedingly small. A square centimeter or inch of neural membrane would contain millions of sodium channels, and they are especially concentrated in the "axon" – the part of the neuron that connects to other neurons and conducts information in the form of electrical pulses.

Our last diagram, Figure 6 (left), shows a typical neuron, with the long (in some cases, meters long) axon. In this case, the electrical pulse has traveled halfway down the axon. Sodium ions are entering at the red arrows. As sodium diffuses *down* the axon, it will eventually depolarize the membrane and open the sodium channels at the blue arrow. "Upstream" from the action potential, behind the red arrows, the channels have closed and the neuron is pumping ions back to their normal concentrations.
Figure 6
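
That chain reaction of channels triggering their neighbors can be sketched as a toy simulation. The "refractory" state stands in for the closed-and-pumping channels behind the wavefront, and the number of channels and steps is arbitrary:

```python
# Toy axon: a row of channels, each "resting", "open", or "refractory".
# An open channel depolarizes its resting neighbors, then becomes
# refractory (pumping sodium back out), so the wave only travels forward.
def simulate_axon(n, start=0, steps=None):
    states = ["resting"] * n
    states[start] = "open"
    history = [states[:]]
    for _ in range(steps or n):
        nxt = states[:]
        for i, s in enumerate(states):
            if s == "open":
                nxt[i] = "refractory"
                for j in (i - 1, i + 1):  # depolarize adjacent channels
                    if 0 <= j < n and states[j] == "resting":
                        nxt[j] = "open"
        states = nxt
        history.append(states[:])
    return history

for row in simulate_axon(6, start=0, steps=5):
    print(row)  # the "open" entry marches down the axon, step by step
```
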

The whole process takes about 2 to 5 milliseconds, and the neuron is ready to "fire" again. Action potentials travel at about 10 meters per second. That's maybe 25 miles per hour, although the *really* long neurons are specially insulated to speed that up 10 times or faster. Yes, signals from the brain to hands and feet travel at over 250 mph! And they do it over and over and over again – as much as 100 times per second, every minute, every hour, every day.
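
The speed figures above, as back-of-the-envelope arithmetic (the tenfold speed-up is the text's figure; exact values will vary by fiber type):

```python
# Converting conduction velocities from meters per second to miles per hour.
MPH_PER_MPS = 2.23694  # 1 m/s is about 2.24 mph

unmyelinated = 10.0              # m/s, as in the text
myelinated = unmyelinated * 10   # insulation speeds conduction ~10x

print(unmyelinated * MPH_PER_MPS)  # about 22 mph
print(myelinated * MPH_PER_MPS)    # about 224 mph
```

For comparison, the fastest myelinated fibers are usually quoted at around 100-120 meters per second, and 120 m/s works out to roughly 268 mph, which is where "over 250 mph" comes from.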

That's an amazing system, and it is all biologically based. The *information* content comes from changing the timing, the frequency, and the specific connections between neurons in a complex pattern not unlike the way a laser show makes complex patterns out of a single beam of light that simply turns off and on very fast.

Sound familiar? Kind of like a computer making information out of just ones and zeros? Well, perhaps, but that's a matter for the next blog.

Thursday, February 10, 2011

*Can* we build it?

The serious side of yesterday’s blog is the implied question – Is it enough simply to put together as many processing units as a human (or other animal) brain has?

Short answer … no.

A few months ago I was at a conference where a speaker was proud of the fact that current supercomputers contain 50-100 million “gates” or transistors, the supposed equivalent of 50-100 million neurons – the size of a cat brain.  By 2015 they anticipate supercomputers of up to 500 million “neurons.”

“That’s the size of a primate brain!” was the claim.

Sure.  It is.  But it’s not enough.

Remember the description of the input and output connections for neurons?  Tens, hundreds, even thousands of connections per neuron!  Thus it’s not enough to simulate the half-billion neurons; rather, it is necessary to simulate the connections between neurons – three orders of magnitude more numerous.
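
The back-of-the-envelope arithmetic, taking the text's half-billion neurons and the high end of "thousands of connections per neuron":

```python
# Scale of the problem: it is the connections, not the neurons, that count.
neurons = 500_000_000           # the "primate-sized" supercomputer claim
connections_per_neuron = 1_000  # high end of "tens, hundreds, even thousands"

synapses = neurons * connections_per_neuron
print(f"{synapses:.0e}")  # 5e+11 -- half a trillion connections to simulate
```
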

See, it is the *connections* that do the *real* processing in the brain.  There is only a limited range of activity available to a neuron – either it “fires” (an action potential, the electrical discharge that travels from one end of the neuron to another) or it doesn’t.  One or zero.  Sounds very binary, very computer-like, right?  Well, not really.  If the brain operated as strictly on-off switches, it would be *easy* to mimic a brain with a digital computer [but that’s a blog for another day].  However, much greater variability is available in the connections between neurons.  Connection strength allows for weak, strong, and all of the variables in between.  Instead of a billion transistor digital computer, we’d need a trillion connection *analog* computer to provide the same simulation. 
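
The analog-connection idea can be sketched as a toy weighted-sum neuron; the weights and threshold below are made-up illustrative values:

```python
# The output is all-or-none (fire or don't), but the *connections* are analog:
# each input carries a continuously variable strength (weight).
def neuron_fires(inputs, weights, threshold=1.0):
    """Binary output computed from analog connection strengths."""
    drive = sum(i * w for i, w in zip(inputs, weights))
    return drive >= threshold

spikes = [1, 1, 0, 1]             # which input neurons fired (one or zero)
weights = [0.9, 0.15, 0.6, 0.05]  # connection strengths: weak through strong

print(neuron_fires(spikes, weights))  # True: 0.9 + 0.15 + 0.05 = 1.1 >= 1.0
```

The same spike pattern with different weights gives a different answer, which is exactly why simulating only the neurons, and not the connection strengths, misses the real processing.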

However, even that is not enough.  Neurons are far from “dumb.”  Each neuron is a processing unit capable of modifying both its own inputs and outputs.  That processing is dependent on the neuron’s own activity levels, thus we now need not just a trillion *connections*, we need a trillion CPUs.

Now, let’s add some more complexity – Neurons come in different sizes and shapes, use different neurotransmitters, and can be inhibitory or excitatory.  Connections can’t just be random; they need to be (A) specific to a brain area, (B) specific to a function, and (C) specific to a neurotransmitter/receptor combination – oh, and there can be more than one type of neurotransmitter and receptor in a given neuron – in fact, it’s pretty much guaranteed.
Does this mean that modeling the mammalian brain is impossible?  No.  Just complex.  There will come a day when there really is a computer with enough processors and connections to model a brain.  In the meantime, there are a few tricks that can reduce complexity in a model.  One of those is the use of nonlinear systems analysis.  The beauty of a nonlinear model is that it derives mathematical equations that transform input signals to outputs.  The inner complexity is captured in the math; only the input and output need to be known.
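
The black-box idea can be illustrated with a stand-in transfer function. The sigmoid and its parameters below are hypothetical, standing in for whatever equation a real nonlinear fit would produce:

```python
import math

# Input-output (black box) modeling: instead of simulating every channel
# and synapse, fit one function that maps input directly to output.
def fitted_transfer(stimulus):
    """Firing rate (spikes/s) as a nonlinear function of stimulus strength.
    The saturating sigmoid shape and the constants are illustrative only."""
    max_rate = 100.0
    return max_rate / (1.0 + math.exp(-(stimulus - 5.0)))

for s in (0.0, 5.0, 10.0):
    print(round(fitted_transfer(s), 1))  # low, half-maximal, near-saturated
```

All the machinery inside the "bag of mostly water" is collapsed into one equation, which is precisely what makes the approach tractable.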

Other techniques include biological modeling with slices and cultures of tissue that form neuron-like networks.  As processors get smaller, with ever-increasing numbers of processing cores, our technology *is* approaching the ability to model small brains.  Whether such a model “wakes up” and starts demanding cheese remains to be seen.