
Ask Them Anything about fMRI

By Sara Kassabian

Updated on October 14, 2015.

In advance of this month’s 2015 Society for Neuroscience Annual Meeting, the PLOS Neuro Community today hosted a ‘PLOS Science Wednesday’ Ask Me Anything (AMA) on reddit Science. The AMA featured neuroscientists Ben Inglis and Jean-Baptiste (JB) Poline from the UC Berkeley Brain Imaging Center discussing functional magnetic resonance imaging (fMRI). Read the completed AMA here.

For background on this subject and to read excerpts from an earlier interview with Ben Inglis by Emilie Reas, scroll down.

PLOS will be at #SfN15, visit us at Booth #115 & tell us what you asked our fMRI experts on 10-14 PLOS Science Wednesday.

Functional magnetic resonance imaging (fMRI) is a neuroimaging technique that uses MRI technology to measure activity in the brain by recording changes in blood flow. As outlined in a previous post, fMRI offers advantages over other neuroimaging tools through its “superb spatial resolution, non-invasiveness, safety and minimal preparation time.” Yet, despite gaining popularity since the 1990s, the technique is still subject to several limitations, including “relatively poor temporal resolution, susceptibility to various signal artifacts and relying on a signal that only roughly approximates the activity of neurons.” These potential confounders, together with the relative newness of the technology, mean fMRI results are highly susceptible to misinterpretation and misrepresentation by researchers.
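
To make the blood-flow measurement concrete: in most analyses the recorded BOLD signal is modeled as the convolution of brief neural events with a slow hemodynamic response function (HRF). The Python sketch below is a toy illustration of that idea, using an assumed double-gamma HRF shape and made-up timing parameters; it is not taken from any particular analysis package.

```python
import numpy as np
from scipy.stats import gamma

# Toy model: the BOLD signal is commonly described as the convolution of
# brief neural events with a slow hemodynamic response function (HRF).
# The double-gamma shape below is one common approximation; the parameter
# values are illustrative, not prescriptive.

def double_gamma_hrf(t, peak=6.0, undershoot=16.0, ratio=6.0):
    """Double-gamma HRF sampled at times t (in seconds)."""
    hrf = gamma.pdf(t, peak) - gamma.pdf(t, undershoot) / ratio
    return hrf / hrf.max()

tr = 2.0                              # repetition time (seconds)
t = np.arange(0, 32, tr)              # support of the HRF
hrf = double_gamma_hrf(t)

# Two brief stimuli in a 100-second run, sampled once per TR
n_scans = 50
events = np.zeros(n_scans)
events[[10, 30]] = 1.0                # stimuli at 20 s and 60 s

predicted_bold = np.convolve(events, hrf)[:n_scans]
print(predicted_bold.round(2))
# The predicted response rises and peaks several seconds after each event,
# which is one reason fMRI's temporal resolution is coarse relative to the
# underlying neural activity.
```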

Ben Inglis and Jean-Baptiste (JB) Poline have made it their mission to improve the methods behind fMRI acquisition and analysis to mitigate the damage caused by these potential confounders. Inglis is an MRI physicist who tracks fMRI news and shares tips and tricks for improving fMRI experiments during acquisition on his popular blog, practiCal fMRI. Poline is a scientist at UC Berkeley whose research focuses on statistical analysis methods for brain imaging and brain imaging genetics, as well as neuroinformatics. In a recent PLOS ONE article, titled “Orthogonalization of Regressors in fMRI Models,” Poline explained how the use of a linear model in fMRI analysis often leads to collinearity between regressors, a common pitfall in fMRI analyses.
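
To see why collinearity is such a pitfall, the short Python sketch below builds a toy design with two highly correlated regressors and then orthogonalizes one against the other. It is an illustrative example only, not code from the PLOS ONE paper, and the simulated data and effect sizes are made up: the point is that orthogonalization leaves the overall fit unchanged but quietly reassigns the shared variance, changing how the parameter estimates must be interpreted.

```python
import numpy as np

# Toy demonstration of collinearity in a linear model, in the spirit of the
# orthogonalization discussion above. Illustrative only; not code from the
# PLOS ONE paper, and the data are simulated.

rng = np.random.default_rng(0)
n = 200

x1 = rng.standard_normal(n)
x2 = 0.9 * x1 + 0.1 * rng.standard_normal(n)   # strongly correlated with x1
y = 1.0 * x1 + 1.0 * x2 + rng.standard_normal(n)

def ols(X, y):
    """Ordinary least-squares parameter estimates via the pseudoinverse."""
    return np.linalg.pinv(X) @ y

# Collinear design: the shared variance cannot be cleanly attributed to
# either regressor, so the individual estimates are unstable.
X = np.column_stack([x1, x2])
print("collinear design betas:     ", ols(X, y).round(2))

# Orthogonalize x2 with respect to x1 (keep only the residual of x2 after
# regressing it on x1), then refit.
x2_orth = x2 - x1 * (x1 @ x2) / (x1 @ x1)
X_orth = np.column_stack([x1, x2_orth])
print("orthogonalized design betas:", ols(X_orth, y).round(2))

# The beta for x1 now absorbs the variance it shared with x2, while the
# second beta reflects only the unique part of x2. The fitted values are
# identical; only the interpretation of the parameters has changed.
```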

Poline and Inglis appeared on the PLOS Science Wednesday ‘Ask Me Anything’ (AMA) on reddit Science to answer questions about neuroscience, fMRI neuroimaging, and how researchers can maximize the rigor of their science using fMRI technology. The AMA took place on Wednesday, October 14, 2015.


In January of 2015, PLOS Neuro Community Editor Emilie Reas sat down with Ben Inglis, who blogs as “practiCal fMRI”, to discuss his work as an fMRI physicist and some common mistakes that follow fMRI acquisition. Selected portions of the Q&A are included below, while the complete interview can be found here.

ER: You recently wrote an outstanding summary of some of the potential confounds for fMRI. Could you briefly touch on some of the major concerns, including the less obvious ones that even experienced researchers may not be aware of?

BI: I think most people doing fMRI have a very good awareness of the major practical issues and the ways to screw up. My worry is that awareness of the problems doesn’t necessarily translate into changes in approach. As a field we haven’t been especially demanding when it comes to methods validation, for example. I could give you a long list of methods – including some very recent ones – that have been deployed in neuroscience without the sort of rigorous testing that I’d like to see beforehand. Sometimes we seem to have decided that good enough is good enough. Motion correction – an oxymoron if you’re a cynic, a wildly optimistic term if you’re a realist – is a good example. We are aware of several limitations of rigid body realignment, e.g. it interacts with slice timing correction, and under-reports brain motion in the presence of strong (scanner-based) contrast such as receive field heterogeneity, yet there are no calls to cease and desist with its use until we have completely determined its consequences. Why is this? Surely it must be because the costs of imperfect methods are low yet the costs of slowing down are high.
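
As a quick aside for readers unfamiliar with what rigid body realignment actually estimates: it fits six parameters per volume, three translations and three rotations. The Python sketch below (an editorial illustration, not code from Inglis or from any realignment package) builds that six-parameter transform and shows that a single small head movement displaces voxels near the edge of the brain more than voxels near the center, so a summary set of motion parameters hides a lot of voxel-level detail.

```python
import numpy as np

# Minimal sketch of the six-parameter rigid-body model that realignment
# tools estimate per volume: three translations and three rotations.
# Editorial illustration only; not code from any realignment package.

def rigid_body_affine(tx, ty, tz, rx, ry, rz):
    """4x4 affine for translations (mm) and rotations (radians) about x, y, z."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    A = np.eye(4)
    A[:3, :3] = Rz @ Ry @ Rx
    A[:3, 3] = [tx, ty, tz]
    return A

# A small head movement: 1 mm translation plus a 0.5 degree rotation.
A = rigid_body_affine(1.0, 0.0, 0.0, np.deg2rad(0.5), 0.0, 0.0)

center_voxel = np.array([0.0, 0.0, 0.0, 1.0])   # at the rotation center
edge_voxel = np.array([0.0, 60.0, 0.0, 1.0])    # ~60 mm from the center

print("center displacement (mm):", np.linalg.norm((A @ center_voxel - center_voxel)[:3]).round(3))
print("edge displacement (mm):  ", np.linalg.norm((A @ edge_voxel - edge_voxel)[:3]).round(3))
# The same six parameters imply different displacements at different voxels,
# so a single motion summary hides substantial voxel-level variation.
```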

I believe we can improve our game massively. It involves more education/training (hence my blog), acquiring more data (especially pilot studies, then a first real investigation followed by a replication experiment), and occasionally not using the latest, greatest method until it has been fully verified. (I wrote a blog post on deciding where in your experiment to place the risks and the novelty.) But these steps imply a slower timescale for an fMRI study, and I can’t control that. I am, however, hopeful that the tide is turning and that there is a new generation of young principal investigators and graduate students who know they can and should be doing better. Statistical methods have been under increasing scrutiny in the past few years. I encourage the field to put the acquisition and pre-processing steps under the same microscope. If that means you make your MRI physicist squeal, so be it.

ER: Although the flexibility of fMRI is one of its great advantages, its ambiguity also opens the door for a certain level of “abuse”, if you will. Unfortunately, fMRI can all too easily be misinterpreted to support sometimes unfounded conclusions. Is there a way scientists can make our methods and reporting more rigorous to avoid this?

BI: Recent moves towards post-publication peer review – whether instantaneous and occasionally flippant as on Twitter, or more nuanced commentaries as on blogs, and of course on dedicated websites like PubPeer – will eventually raise the bar and collectively they will force improved rigor. It’s great to see the new tools of social media blending with mainstream facilities such as PubMed Commons to attain a fundamentally novel way of evaluating and disseminating science. So, by all means run an underpowered study, use a flawed control, or double-dip for your stats. But be on notice that you’ll learn of your transgression within days after it appears online. (Make that within hours if you accompany your precious new study with an embellished press release!) The flaws will be noted for the entire world to see, too, which is just as it should be. Sometimes a review might be as trivial as pointing out a typographical error or an incorrect unit, but if that one tiny review saves subsequent readers time and effort then it’s entirely worth it. Until recently, nobody was going to bother to write to a journal’s editor every time a small mistake was discovered. I would argue that everyone now has a moral responsibility to note in a permanent fashion, e.g. on PubPeer, whenever they find a mistake in a paper.

I’ve also chosen to focus on education, recognizing that I’m likely preaching to the choir for the time being. JB Poline and Matthew Brett, both also at Berkeley, have been leading the charge on the stats and processing front. They favor a direct instructional model whereas I prefer to put things out there today that can be read by anyone, anywhere at any time in future. But otherwise we are in agreement that abuse – accidental or intentional – is best mitigated by better education. There are many resources online that can be used. Here are two good ones: UCLA’s Advanced Neuroimaging summer program and NIH’s fMRI Course.

ER: Lastly, what one or two key pieces of advice would you give to a new researcher conducting their first fMRI experiment?

BI: Take your time, and pilot like crazy! Get a solid grounding in methods and don’t be in a rush to go “solve the brain.” It’s great that you’re eager but recognize that you will almost certainly do something wrong by trying to do an experiment before you have a decent grounding in the physics, the physiology, experimental design and statistics.

It’s a massive amount to learn and become competent in using. There is nothing wrong with apprenticing with experienced people until you’re ready to pursue your own experiment. It may take years rather than months.

Next, I encourage people to pilot everything. More often than not an attempt at a “real experiment” unearths some fundamental problems that turn the experiment into a de facto pilot experiment. I would also like to see more people use extensive piloting as the basis of their own replication attempts. But replication is part of a cultural shift related to your earlier question about rigor, and I recognize that there are limitations based on time and money. Still, considered, extensive piloting should help reduce errors and enhance the reproducibility of your fMRI findings. Pilot experiments are also an incredible teacher.

As noted, PLOS will be at the 2015 Society for Neuroscience conference in Chicago (#SfN15) from October 17-21. Visit us at Booth #115. If you tell us what question you asked our fMRI experts on PLOS Science Wednesday, we’ll be happy to give you a free PLOS Neuro t-shirt.
