
NOTICE: Posting schedule is irregular. I hope to get back to a regular schedule as the day-job allows.


Monday, April 29, 2013

Monday FUNNY: The Institute of Official Cheer


James Lileks does not consider himself a journalist... although he has worked as one.  He is a writer and a chronicler of the humorous and strange.

He's also an internet icon.

Lileks' website features the humorous "Institute of Official Cheer" (http://www.lileks.com/institute/index.html), where you will find such oddities as:

The Gallery of Regrettable Food, in which you can find "The Core Sample from A Geological Age When Vegetables Ruled the Earth!"  http://www.lileks.com/institute/gallery/vegetables/3.html



From Interior Desecrations: 


It's the Nautilus, and a giant octopus has grabbed hold of the viewing port!  (Either that, or a trio of shaved monkeys pressed up against the camera lens!)

Don't miss The Art of Art Frahm (1950s), in which we learn of the mysterious Celery Effect that causes the elastic in ladies' undies to fail (and mysterious breezes to swirl their skirts...).

Check out The Gobbler... The Grooviest Motel in Wisconsin!  http://www.lileks.com/institute/motel/index.html

A chronicle of the odd and absurd, The Institute is a good way to distract yourself when you have that important term paper or grant to write! 

Run along, Kids, and marvel at the wonderful world of (old) advertising!

Friday, April 26, 2013

NEWS and COMMENT: Back from the Dead

Can a medical procedure "resurrect" a person who is clinically dead?

Dr. Sam Parnia says yes.  [http://www.guardian.co.uk/society/2013/apr/06/sam-parnia-resurrection-lazarus-effect]

I saw references to the article via three different sources - the New Scientist RSS feed, Instapundit, and my friends at Baen's Bar.  It's a fascinating subject, and one I couldn't wait to cover - except for the fact that I already had content scheduled.  So, it may be a week or so after the fact, but here goes:

To quote the article:
[Dr. Sam] Parnia is head of intensive care at the Stony Brook University Hospital in New York. If you'd had a cardiac arrest at Parnia's hospital last year and undergone resuscitation, you would have had a 33% chance of being brought back from death. In an average American hospital, that figure would have fallen to 16% and (though the data is patchy) roughly the same, or less, if your heart were to have stopped beating in a British hospital.
How is this possible?

Well, many years ago, I was discussing the effects of cerebral ischemia with a colleague.  Ischemia results when blood flow is blocked and oxygen delivery to the cells is reduced (or stopped).  Some tissues are more sensitive to low oxygen than others.  Muscle tissue contains stores of glucose and oxygen that allow it to function with lowered oxygen - it has to, because some exertion requires more energy in "real time" than the blood can supply.  The brain doesn't have those stores, and within 5 minutes of low or no blood flow, the most sensitive neurons begin to die.

"But even then," my colleague informed me, "the problem is less about the lack of blood flow and oxygen, but what happens when the blood flow is restored!"  Sudden restoration of blood flow causes neurons to release potassium and calcium which in turn can be toxic to neurons if they build up in the fluid around the cells.  [See: http://teddysratlab.blogspot.com/2011/02/piece-of-action-potential.html for details of the delicate chemical balance needed for brain cell function.] Neurons require glucose, oxygen and water in order to function, but they also need the blood to flow and remove lactic acid, CO2 and other metabolic products.

As an aside, cells in our bodies break down glucose via the Citric Acid Cycle (or Krebs cycle).  The result is not actually very much of the energy molecule ATP - only a handful of ATP per glucose molecule comes directly from these steps - but rather a lot of intermediate products, which are then metabolized via "oxidative phosphorylation" (OXPHOS) to form the bulk of the cell's ATP.  OXPHOS requires oxygen; when oxygen supplies are low, such as during prolonged exertion or exercise, ATP can still be produced in lower quantities, leading to increased CO2 and lactic acid byproducts.  It is the lack of ATP and the buildup of lactic acid that produces fatigue and muscle pain.  Reduced blood flow (ischemia) and the consequent reduced oxygen is responsible for the sharp chest pains of a heart attack.

So, the solution to surviving the cessation of the heartbeat is to keep the brain (and body) oxygenated.  It also helps to provide a means of removing the metabolites.

A key element of Parnia's work is hyperbaric oxygen therapy - not just increasing the concentration of oxygen in the air, but increasing the air pressure as well to force more oxygen to the tissues.  A colleague of mine has demonstrated that hyperbaric oxygen improves the rehabilitation after a stroke as long as it can be applied within 30 minutes of the stroke.  It is his goal to get hyperbaric chamber-equipped ambulances into common use to aid stroke victims.

A second element of "resurrection therapy" is to keep blood flowing.  Parnia uses an "extracorporeal membrane oxygenator" (ECMO) to oxygenate the blood, coupled to what is commonly termed a "heart-lung machine" to provide pressure to the blood and force it through the circulatory system.  Another technique is an LVAD - left ventricular assist device - which assists the heart in pumping blood and is becoming a common therapy for heart disease in which the left ventricle is unable to produce enough pressure to keep the blood moving to all tissues. While intended only to assist the heart until a more permanent solution (i.e. heart transplant) can be performed, LVADs are quite effective in maintaining life - to the point that one patient with an LVAD was found to eventually have no heartbeat at all!  The patient's own heart had failed, but the LVAD kept the blood flowing!  [http://www.popsci.com/science/article/2012-02/no-pulse-how-doctors-reinvented-human-heart?page=3].

The third major element of preserving the "life" in living tissue is cooling.  It is very well known that cooling of the body slows down the enzymes that break down cells - so if we couple oxygen, circulation and cooling, we have a way to suspend the breakdown of neurons until normal blood flow and oxygenation can be restored.

Sam Parnia claims we need a new definition of death: even "clinical death" is not irreversible as long as the cells can be maintained and revived.  But what about the "mind" and "soul"?  Frankly, the latter is beyond my capacity to answer, but as to the former, suspending the activity of neurons - or even not suspending it, since we are providing support therapy in the form of oxygen and blood flow - is not much different from going into the low-activity states of unconsciousness or coma.  The mental capability after "resurrection" depends only on how much degradation was allowed to occur - minimize that, and the effects of "death" are also minimized.

It is a heartening outcome, and may mean that stroke and traumatic brain injury are much more treatable than previously thought.  It probably won't prolong life or reverse the effects of aging, but it can preserve a life from a premature end.

In the meantime -  let's all try our best not to become test cases for Resurrection Therapy!



Tuesday, April 23, 2013

Missing a couple of days...


...yes, I know.  I had business travel and burned through my buffer.  I still have Friday's NEWS and COMMENT feature scheduled, but I was out of luck finding a Monday FUNNY in time, and I still haven't written the next installment on writing research grants.

A friend of mine claims that she posts "brain-sweepings" when that happens.  I don't know about that, but I will make it up to y'all with the following link:

http://www.sciencedaily.com/releases/2013/04/130422154756.htm

Antibody Transforms Stem Cells Directly Into Brain Cells

Apr. 22, 2013 — In a serendipitous discovery, scientists at The Scripps Research Institute (TSRI) have found a way to turn bone marrow stem cells directly into brain cells.

Oh yes, neat stuff indeed.  I'll cover this in more depth next Friday.

Also, MIT Technology Review has posted their "10 Breakthrough Technologies of 2013"...


A colleague of mine is mentioned, along with the project we are working on.

I'll see you in a few days!

Friday, April 19, 2013

NEWS & COMMENT: Bad Neuroscience? Say it isn't so…




…It isn't so.

There's a popular press release making the rounds right now, and it is summed up in this headline from The Register: "Most brain science papers are neurotrash"


– I've also seen similar interpretations in "Science Codex": "Reliability of neuroscience research questioned"


– and The Guardian: "Unreliable neuroscience? Why power matters"


The original research, entitled "Power failure: why small sample size undermines the reliability of neuroscience," by Katherine S. Button et al., appeared in the April 10, 2013 issue of Nature Reviews Neuroscience, a quite respectable journal featuring reviews in the field of neuroscience.  This report, by researchers from the University of Bristol, Stanford University School of Medicine, the University of Virginia, and Oxford University, set out to review whether research reports in the field of Neuroscience analyze data in a manner that is statistically reliable.

What they conclude is that in the data they sampled, the "statistical power" of the analyses is low.  In fact, the statistics suggest that the sampled studies are very likely either to accept a hypothesis as true – when it is not – or to miss confirming a true hypothesis.  However – and this is very important – the study does not apply this conclusion to the field of neuroscience as a whole.  In short – headlines implying that the study condemns an entire field of science are false.

It is important to understand from the start that in the field of scholarly scientific publication, we classify research articles as either "primary publications" or "review articles."  A primary publication consists of a report of data that (ideally) has never been published before, while a review consists of references to primary articles for the purpose of summarizing previous findings or comparing results from different studies.  In recent years, a new form of review article has arisen – the meta-analysis.  A meta-analysis takes the data gathered and reported by other labs and attempts to find new information by applying complex mathematical and/or statistical analyses – uncovering effects that would either be too small to detect in any single study, or not evident until one collects data from multiple sites.

As a good scientist, rather than rely on the press releases and reviews, I went to the original publication.  Button et al. started with a literature search for meta-analyses in Neuroscience published in 2011.  Anyone who wants to find out what has already been published in a field can perform an internet search for published articles.  They used Web of Science, but that requires a subscription; I prefer the National Library of Medicine's MEDLINE service, accessible through PubMed (http://www.ncbi.nlm.nih.gov/pubmed).

Their search for keyword "neuroscience" + keyword "meta-analysis" + publication year "2011" yielded 246 articles.  They then had to sort through those articles for ones that included meta-analyses and provided enough information on the data used to allow calculation of "statistical power."  I'll talk more about statistical power later, but first, let's put this in perspective:  

Their search returned just 246 articles – yet what can we get from a Medline search? 
• First, let's look at publication year 2011: 1,002,135 articles.
• Articles with the keyword "neuroscience" in 2011: 14,941 articles.
• Articles with keyword "meta-analysis" (and variations) published in 2011: 9,099.
• Articles with both "neuroscience" and "meta-analysis" keywords, published in 2011: 128.

So, my search returned fewer articles than Button et al.'s – in many ways that is good, because it means that my numbers are conservative.  It also means that their analysis applies to 246 articles out of about 15,000 Neuroscience articles published in 2011!
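For readers who would like to try this kind of search programmatically, here is a minimal Python sketch using NCBI's public E-utilities interface to PubMed.  It is an illustrative reconstruction, not the exact queries behind the numbers above – the field tags are standard PubMed search syntax, and the counts returned today will differ because the database is continually updated.

    # Minimal sketch: counting PubMed hits via NCBI E-utilities (illustrative only).
    import requests

    ESEARCH_URL = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

    def pubmed_count(term):
        """Return the number of PubMed records matching a search term."""
        params = {"db": "pubmed", "term": term, "retmode": "json", "retmax": 0}
        reply = requests.get(ESEARCH_URL, params=params, timeout=30)
        reply.raise_for_status()
        return int(reply.json()["esearchresult"]["count"])

    # [dp] is PubMed's "date of publication" field tag.
    print(pubmed_count("2011[dp]"))                                    # everything from 2011
    print(pubmed_count("neuroscience AND 2011[dp]"))                   # + keyword "neuroscience"
    print(pubmed_count("neuroscience AND meta-analysis AND 2011[dp]")) # + keyword "meta-analysis"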

Before one condemns an entire field of science, one should consider that the same criticism regarding lack of statistical power can be leveled at the condemnation itself: the authors started with only 1.6% of all Neuroscience papers published in a single year!  From that starting point, they still rejected 4 out of 5 papers, applying their analysis to only 0.3% of the possible Neuroscience papers from 2011 – a statistical mess of a totally different type.  It is also important to point out that of the 14,941 Neuroscience articles listed for 2011, 2,670 were review articles, leaving 12,271 primary publications to which the statistics of meta-analysis are totally irrelevant!

But what does "statistical power" really mean?

When designing experiments, scientists need to determine ahead of time how many samples they need to perform valid statistical tests.  The Power Function (D = fP × σ / √n) relates the size of the effect to be measured to the population standard deviation and the number of subjects.  In the equation above: D = the difference in means that we want to consider a "real" effect; fP = a constant from the Power Function table (found in most statistics textbooks), selected for a particular power level; σ = the anticipated standard deviation (a measure of randomness) of the measurements that I am making; and n = the number of subjects I will study (or measurements I will make).  For animal behavior, I like to work with a Power of 90%.  The fP constant grows very rapidly as Power approaches 100%, so 90% is quite reasonable; fP for 90% Power = 3.6.

A real-world example of the calculation of statistical power: 
Given n = 10 neurons per group, σ = 0.5 Hz standard deviation in neuron firing rate, and fP = 3.6, the minimum difference (D) in firing rate that I can reliably detect as significant is D = 3.6 × 0.5 / √10 ≈ 0.57 Hz.

Put another way, if my analysis says that two groups of 10 neurons each have significantly different mean firing rates, and the difference between those means is at least 0.57 Hz, then I can be confident that 90% of the time I have reached the correct conclusion, but that there is still a 10% chance that I am wrong.  However, if I increase my n, decrease my σ, or look for a larger D, the statistical power increases and I can be much more confident in my results.

Power functions are also very useful in deciding how many subjects to test or measurements to make – fixing D at 0.5 Hz and solving for n gives n = (fP × σ / D)² = (3.6 × 0.5 / 0.5)² ≈ 13, so at least 13 neurons must be included in each group to detect a 0.5 Hz difference at 90% Power.  The Power Function is the foundation of experimental design, and is the basis for justifying how many subjects to test and what is considered a "statistically significant result."
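If you want to play with these numbers yourself, here is a minimal Python sketch of the two calculations above.  It assumes the simple textbook form of the power function used in this post (D = fP × σ / √n); a real study design should use a full power-analysis routine from a statistics package.

    # Minimal sketch of the power-function arithmetic described in this post.
    import math

    F_P_90 = 3.6  # Power Function table constant for 90% power (from the text)

    def min_detectable_difference(f_p, sigma, n):
        """Smallest difference in means, D, reliably detectable with n subjects."""
        return f_p * sigma / math.sqrt(n)

    def required_n(f_p, sigma, d):
        """Smallest group size n needed to reliably detect a difference of d."""
        return math.ceil((f_p * sigma / d) ** 2)

    print(min_detectable_difference(F_P_90, 0.5, 10))  # ~0.57 Hz with n = 10
    print(required_n(F_P_90, 0.5, 0.5))                # 13 per group for D = 0.5 Hz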

While I do not dispute the results of Button et al. with respect to meta-analyses, their results cannot be applied to primary publications without additional consideration.  In fact, I feel that the authors raise quite valid concerns... about meta-analyses.  Good experimental design is a standard part of the research ethics that every author confirms when submitting an article for publication.  In addition, most primary publications look for rather large effects (mean differences) and can do so with relatively small group numbers (group sizes of 6-10 are not uncommon).  However, meta-analyses by their very nature are looking for small effects that are otherwise missed in small groups – otherwise it would not be necessary to combine data sets to perform the meta-analysis. 

There are other factors at work that highlight both the good and the bad in scientific research, but this need not be one of them.  Put in perspective, this article in Nature Reviews Neuroscience sounds a cautionary note regarding the need for better statistical planning in meta-analyses.  What the article does not do is state that all, or even many, Neuroscience articles share the same flaw.  In particular, given that this caution applies to making unwarranted conclusions, it behooves headline writers and journalists to avoid making the same type of mistake in the course of reporting!

Monday, April 15, 2013

Monday FUNNY: More funny scientist cartoons


Two cartoons I ran across this week.

Come on, you know you want to find out if it would work!


I'm such a geek, I once got one of those "executive pacifiers" that was broken... and fixed it.  You know what I'm talking about - five steel balls suspended in a row; you swung one to the side, and when it hit the remaining four, it kicked out the one on the far end.  Anyway, my mother managed a card and gift shop and had one with broken fishing line holding a couple of the balls.  I restrung it, and kept fiddling with string length and position until I got it to work again.  I still have it, and it still works.

The following is very true: 



For most of my career I have had an opposite number who disagrees with my techniques and even some of my results.  He's a nice guy, and when we can avoid talking shop (arguing!) we get along rather well.  Sort of like many friendships.  The only difference between this cartoon and my own experience is that it would be much more likely to be beer than wine.

Stay calm and keep laughing!

Friday, April 12, 2013

NEWS: The B.R.A.I.N. Initiative


On Tuesday, April 2, President Obama announced:
The BRAIN initiative — short for Brain Research through Advancing Innovative Neurotechnologies — is modeled after the Human Genome Project, in which the federal government partnered with philanthropies and scientific entrepreneurs to identify and characterize the nearly 25,000 genes that make up human DNA.

"A human brain contains almost 100 billion neurons making trillions of connections," Obama said Tuesday as he outlined the initiative in the East Room of the White House. In the absence of a detailed map of the brain's complex circuitry and operating instructions that could help troubleshoot when the brain's wiring goes awry, scientists often grope in the dark for therapies that can treat Alzheimer's or autism or to reverse the effects of a stroke, Obama said. "So there is this enormous mystery waiting to be unlocked."

http://www.latimes.com/news/science/la-sci-brain-initiative-20130403,0,417413.story

---

I've received a few inquiries about the announcement - this is my field, and it is of great interest to me, but there are many specifics I really can't discuss due to confidentiality and conflict of interest.  However, I will say that this is a great concept and I really want to see this carried out.

"But haven't we already done this?  What about the 'Decade of the Brain'?"

The 1990s were indeed the 'Decade of the Brain,' promoting advances in science and medicine related to the brain.  Research funded during that period is responsible for current surgical and medicinal treatments for many brain diseases.  The essential work to develop neurally activated prosthetics was greatly advanced during that period, as was the science behind Deep Brain Stimulation, transcranial magnetic stimulation (TMS), magnetoencephalography, and my own work in neural prosthetics.

However, much as the Human Genome Project decoded the 'language' of genes, the greatest need - both to continue the advances in brain science and to transform them from theory into knowledge - is an understanding of the 'language' of the brain.  In essence, finding all of the codes that the brain uses to represent different types of information.  We have a pretty good start on this: we understand the coding of sensory inputs - how sight, sound, smell, taste and touch are represented in the primary sensory cortex.  We also understand how the brain signals muscles to move, so we have the basic input and output codes.

What we don't have is all of the internal coding.  What constitutes the neural code for understanding of written words? ...the sense of comfort we receive from loved ones? ...the wonder of an enjoyable book or movie? ... of dreams?

The language of the brain - a map not just of anatomy, not just of connections, but of knowledge and information.  The BRAIN initiative will combine new funding with existing projects (which is why I can't speak except in generalities).  The uncertainties come from continuing the funding in years to come - the commitments are only for this year, and the sum of $300 million or so may not go very far when it comes to mapping a hundred billion neurons and tens or hundreds of trillions of connections.

But we have to try.  We have made great progress - this is a way to make even more. 




Wednesday, April 10, 2013

The GUIDE: How to Write a Research Grant Proposal: Part 4 - Experimental Design


Continuing with grant preparation: we've picked our type of funding, selected a funding agency, and written the Specific Aims.  Now we need to move on to the essence of the proposal - the experiments.

How to Write a Research Grant Proposal:  Part 4 - Experimental Design

Actually, before we get to the experiments, we have two intermediate sections that together take up less than two of the 12 pages allocated to the Research Strategy - every proposal must include a background section that justifies why the research is important, and a section that explains why this research is innovative.  These are very important sections, and they are also key contributions to another part of the application package required for all federal grant applications (the "SF424" form): the "Narrative" and "Summary" of the grant, which are published for public access and need to briefly explain the 'what and why' of the grant for the public.  Because the Background & Significance and Innovation sections justify why the research should be done - that it is not repetitive or wasteful, and how it applies to a public need - I'll spend a little more time on the concept later.

Now on to "Approach", i.e. experiments...

In general, the experiments should match the Specific Aims.  In the last installment, I explained that the Aims are the goals of the project.  The Experiments are how we will reach those goals, so there should be a clear relationship between Aims and Experiments.  Maybe you want to number each experiment separately and state how they address the Aims - i.e. "Experiments 1 & 2 address Aim 1, Expts. 3 & 4 address Aim 2, Experiment 5 addresses Aim 3."  Or you may make the relationship one-to-one: "Experiment 1 addresses Aim 1... 1a will test..., 1b will confirm..., 1c will manipulate..."  I prefer the latter arrangement; that way the reviewers can immediately see how the Aims will be tested by the experiments.

Description of Experiments needs to include the following:  Rationale - Hypothesis - Specific procedures.  However, I was taught to use the following structure:


  • Rationale
  • Hypothesis
  • Relevant Preliminary Results
  • Specific Procedures 
  • Predicted outcomes, potential problems, and alternatives
Rationale: 'Why' are we doing this experiment?  What part of the problem or history of prior research makes us think this experiment will add to our knowledge or answer the research questions?


Hypothesis: As with any good scientific inquiry, we need a hypothesis such as "We hypothesize that stimulating hippocampus within 30 seconds of onset of a seizure will stop the seizure."  Implied, then, is the Null Hypothesis - that stimulation does not stop a seizure.  Thus we have something specific to test - and test it we will!  


Relevant Preliminary Results:  This is not what others have done, but what we ourselves have done.  It may be a test of a drug we developed, or a demonstration that our stimulation patterns don't cause seizures.  This is the place to show that the techniques we will use can be successfully applied to the problem and will produce a result that proves or disproves the hypothesis.


Specific Procedures:  Here's where we specify the testing groups: which drugs, which tests, how many groups (statistics!), how many subjects per group.  We generally don't go into General Methods here - in fact, many grant applications don't have enough room for General Methods - so we need to cite our published papers and articles which do include the methods we use.  


Predicted outcomes, potential problems, and alternatives:  In this last section we need to demonstrate to the reviewers that we have a plan - that we are approaching this as scientists.  What do we really expect to see, and what do we plan to do if we don't get the result we expect?  What confounding problems could arise ("Stimulation causes hiccups!"), and how will we either eliminate or work around them?  Many grant applications fail because inexperienced grant writers propose experiments with a high chance of failure and have no plan for turning failure into an advancement of general scientific knowledge.

I find that each experiment takes 2-3 pages to write.  Since we used 3 pages of our 12-page limit on Specific Aims, Significance, Innovation and the introduction to Approach, that means we practically have room for 3-4 experiments.  If designed well, those experiments should take 3-5 years to complete; thus, we propose our grants for 3-5 years of funding.  If you have complex experiments, it's a good idea to include a timeline of how and when each part will be started and expected to finish.  If there's any room left over, spend it on descriptions of Methods - especially those unique to this proposal.

This finishes the "Research Strategy" section of the grant proposal - considered by many to be the "meat" of the proposal.  It is, but there are many important sections yet to come, such as:

How to Write a Research Grant Proposal:  Part 5 - Research Subjects: Human and/or Animal 
 
See you next time!