
PLOS BLOGS The Official PLOS Blog

fMRI under the Microscope: An interview with MRI Physicist PractiCal fMRI

By Emilie Reas, PLOS Neuroscience Community Editor

Since its development in the early 1990s, functional magnetic resonance imaging (fMRI) has grown in popularity to become one of the most commonly used techniques to image activity in the human brain. This rapid growth is due largely to several advantages that set it apart from other neuroimaging tools, including superb spatial resolution, non-invasiveness, safety and minimal preparation time. However, it also has several critical limitations, including relatively poor temporal resolution, susceptibility to various signal artifacts and reliance on a signal that only roughly approximates the activity of neurons.

Although the development of fMRI has considerably advanced our understanding of human cognitive function, its powers have been misinterpreted to suggest that fMRI can be used to “read minds”, that particular concepts or functions are located in specific brain regions, or that a particular activity pattern causes some behavior. fMRI has become one of the hottest, but also most controversial, neuroimaging methods in recent years, and is unlikely to fall from this pedestal any time soon. Ongoing advances in fMRI data acquisition, processing and analysis methods continue to refine our understanding of how fMRI can and should be used to study human brain function.


Here to discuss the state of fMRI, including its limitations, practical concerns, and future development, is PractiCal fMRI, an MRI physicist at the UC Berkeley Brain Imaging Center.


To start off, what inspired you to become an MRI physicist? Was there an “aha!” moment when you realized this was your dream career, or did things just fall into place as you explored your interests?

I’d originally planned to be an organic chemist but found myself migrating into physical chemistry as my undergrad degree progressed. I was particularly taken with nuclear magnetic resonance (NMR) spectroscopy and how it could be used to determine molecular structures without doing any chemistry whatsoever. In my final undergrad year I opted to do a research project. Buried among the dozens of proposals on chemical analysis was an obscure project called simply “NMR imaging.” About ten seconds after I saw how a magnetic field gradient could be used to encode spatial information in an intact, living organism I knew I didn’t want to do anything else. Ever. It felt like magic. It still does sometimes, even decades after completing my PhD thesis on in vivo NMR spectroscopy of the brain.

Whenever I’m baffled by some fancy new pulse sequence I like to remind myself that the entire process of MRI is achieved by nothing but magnetic fields. These magnetic fields may be big or small, they may be constant or vary with the complexity of a Mozart symphony, but at the end of the day it’s all magnetic fields! I’m sure there are other disciplines that can make a valid claim for being “Most like magic” but I reckon MRI wins the title.


You recently wrote an outstanding summary of some of the potential confounds for fMRI. Could you briefly touch on some of the major concerns, including the less obvious ones that even experienced researchers may not be aware of?

I think most people doing fMRI have a very good awareness of the major practical issues and the ways to screw up. My worry is that awareness of the problems doesn’t necessarily translate into changes in approach. As a field we haven’t been especially demanding when it comes to methods validation, for example. I could give you a long list of methods – including some very recent ones – that have been deployed in neuroscience without the sort of rigorous testing that I’d like to see beforehand. Sometimes we seem to have decided that good enough is good enough. Motion correction – an oxymoron if you’re a cynic, a wildly optimistic term if you’re a realist – is a good example. We are aware of several limitations of rigid body realignment, e.g. it interacts with slice timing correction, and under-reports brain motion in the presence of strong (scanner-based) contrast such as receive field heterogeneity, yet there are no calls to cease and desist with its use until we have completely determined its consequences. Why is this? Surely it must be because the costs of imperfect methods are low yet the costs of slowing down are high.

I believe we can improve our game massively. It involves more education/training (hence my blog), acquiring more data (especially pilot studies, then a first real investigation followed by a replication experiment), and occasionally not using the latest, greatest method until it has been fully verified. (I wrote a blog post on deciding where in your experiment to place the risks and the novelty.) But these steps imply a slower timescale for an fMRI study, and I can’t control that. I am, however, hopeful that the tide is turning and that there is a new generation of young principal investigators and graduate students who know they can and should be doing better. Statistical methods have been under increasing scrutiny in the past few years. I encourage the field to put the acquisition and pre-processing steps under the same microscope. If that means you make your MRI physicist squeal, so be it.


Over the past several years there’s been a lot of research devoted to understanding how the BOLD signal relates to hemodynamic, neural or metabolic activity. Given your knowledge of this literature as well as the physics and physiology underlying the BOLD signal, how confident are you that fMRI can accurately approximate neural activity? 

I have a secular opinion of fMRI’s ability to approximate neural activity. The “vein not brain” issue is one limitation that doesn’t actually worry me that much. Indeed, it’s remarkable how much neural information seems to be reported in vascular changes.

Experiments at high field in animals have shown that vascular changes localize down to the columnar and laminar levels of the mammalian cortex. As to the neurovascular coupling and the aspects of metabolism that drive our signals, I suspect that the overall relationships we observe will change as the length scale is changed. What we see in a voxel with 3 mm sides is a blurred spatial average of changes that might be separable (in principle) with 100-micron resolution. It’s something we can continue to strive for. Even the (admittedly sluggish) temporal aspects of neurovascular coupling don’t worry me yet because at present we need hundreds of milliseconds to encode spatial information in MRI. In my opinion, the limitations today are primarily in the physics, that is, in the signal-to-noise ratio (SNR), rather than in the physiology. These are rich areas for future research.
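The scale gap he describes is easy to quantify. A minimal sketch of the arithmetic, assuming cubic, isotropic voxels (the specific numbers below are illustrative, taken from the 3 mm and 100-micron figures in the interview):

```python
# How many 100-micron sub-volumes are averaged into a single 3 mm fMRI voxel?
# Illustrative arithmetic only; assumes cubic, isotropic voxels.

voxel_side_mm = 3.0      # typical whole-brain fMRI voxel edge length
subvoxel_side_mm = 0.1   # 100-micron scale achievable in animal studies

per_edge = voxel_side_mm / subvoxel_side_mm   # sub-voxels along one edge
per_voxel = per_edge ** 3                     # sub-voxels per voxel

print(f"{per_voxel:.0f} sub-volumes averaged into each voxel")  # 27000
```

So each standard voxel blurs together tens of thousands of potentially distinct 100-micron compartments, which is why a spatially averaged signal can mask structure that high-resolution animal studies resolve.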


Although the flexibility of fMRI is one of its great advantages, its ambiguity also opens the door for a certain level of “abuse”, if you will. Unfortunately, fMRI can all too easily be misinterpreted to support sometimes unfounded conclusions. Is there a way scientists can make our methods and reporting more rigorous to avoid this?

Recent moves towards post-publication peer review – whether instantaneous and occasionally flippant as on Twitter, or more nuanced commentaries as on blogs, and of course on dedicated websites like PubPeer – will eventually raise the bar and collectively they will force improved rigor. It’s great to see the new tools of social media blending with mainstream facilities such as PubMed Commons to attain a fundamentally novel way of evaluating and disseminating science. So, by all means run an underpowered study, use a flawed control, or double-dip for your stats. But be on notice that you’ll learn of your transgression within days after it appears online. (Make that within hours if you accompany your precious new study with an embellished press release!) The flaws will be noted for the entire world to see, too, which is just as it should be. Sometimes a review might be as trivial as pointing out a typographical error or an incorrect unit, but if that one tiny review saves subsequent readers time and effort then it’s entirely worth it. Until recently, nobody was going to bother to write to a journal’s editor every time a small mistake was discovered. I would argue that everyone now has a moral responsibility to note in a permanent fashion, e.g. on PubPeer, whenever they find a mistake in a paper.

I’ve also chosen to focus on education, recognizing that I’m likely preaching to the choir for the time being. JB Poline and Matthew Brett, both also at Berkeley, have been leading the charge on the stats and processing front. They favor a direct instructional model whereas I prefer to put things out there today that can be read by anyone, anywhere at any time in future. But otherwise we are in agreement that abuse – accidental or intentional – is best mitigated by better education. There are many resources online that can be used. Here are two good ones: UCLA’s Advanced Neuroimaging summer program and NIH’s fMRI Course.


Recent advances are making it feasible to image the brain at increasingly higher resolutions, and researchers are continuing to push these limits. Do you feel that there’s a ceiling to what fMRI will be able to resolve?

We are still an order of magnitude, possibly two orders of magnitude, from attaining in humans what can be done in animals today. We polarize only a few spins per million using the biggest magnets we can construct and this means low SNR compared to other spectroscopic techniques. Ultimately, SNR is the barrier to doing anything. If we had SNR that was 100-fold higher we could make a serious attempt at detecting neuronal currents in routine, whole brain studies (although we would then have to circumvent BOLD contamination, too). But even a five-fold improvement in SNR is inconceivable for whole human brains today. We may be able to get a five-fold improvement in small patches of brain, however. Perhaps we should adjust our expectations so that we can be satisfied by localized gains and not insist on 100-micron resolution across an entire human brain.
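His “few spins per million” figure follows from the Boltzmann (thermal equilibrium) polarization of proton spins. A minimal sketch of that calculation for an assumed 3 T clinical field at body temperature, using standard physical constants (the field strength and temperature are my illustrative choices, not from the interview):

```python
import math

# Thermal equilibrium polarization of proton spins in a static field B0:
#   P = tanh(gamma * hbar * B0 / (2 * k * T))
# In the high-temperature limit this is tiny, hence MRI's low SNR.

GAMMA_PROTON = 2.675e8   # proton gyromagnetic ratio, rad s^-1 T^-1
HBAR = 1.0546e-34        # reduced Planck constant, J s
K_BOLTZMANN = 1.381e-23  # Boltzmann constant, J K^-1

def proton_polarization(b0_tesla: float, temp_kelvin: float) -> float:
    """Fractional excess of spins aligned with the field."""
    x = GAMMA_PROTON * HBAR * b0_tesla / (2 * K_BOLTZMANN * temp_kelvin)
    return math.tanh(x)

p = proton_polarization(3.0, 310.0)  # 3 T scanner, ~body temperature
print(f"polarization ~ {p * 1e6:.1f} parts per million")  # ~ 10 ppm
```

Roughly ten spins per million contribute net signal at 3 T, which is why he frames SNR, rather than physiology, as the binding constraint.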

If there are to be major breakthroughs for whole head functional imaging then it may involve moving away from proton spins as the reporting agents. Proton spin has much to recommend it, of course. It is ubiquitous, incorporated into water it crosses the blood-brain barrier freely, and it is non-toxic. But we can only do so much to boost the SNR from protons. Unless and until we can change the reporting agent I think the state-of-the-art animal fMRI literature demonstrates what we can aspire to do with proton spin. Just don’t ask me what the new reporting agents will look like. As I read it that is one of the goals of the BRAIN Initiative.


Do you expect that developments in human neuroimaging will be based mostly on refinement of current tools or do you predict any major paradigm shifts in our approaches?

Well, this is a tricky one. If you look at the bulk of whole brain fMRI studies conducted today, the spatiotemporal performance of the scanner is only incrementally better than what we were doing a decade ago. We still use voxels with typical dimensions around 3 mm and we cover the brain once every two seconds. If we use simultaneous multi-slice (aka multiband) methods we can accelerate the rate to perhaps two whole brain volumes per second and voxels as small as 2 mm. Going beyond these performance limits is incredibly difficult, ultimately due to the SNR. Unless and until someone develops a way to use something other than proton spin as our signal generator, I think we can expect refinement of current tools and small incremental gains in performance.
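The two acquisition regimes he quotes can be compared with back-of-envelope arithmetic. A sketch assuming a nominal brain volume of about 1.3 liters (an illustrative figure, not from the interview) and the voxel sizes and volume rates given above:

```python
# Back-of-envelope comparison of standard vs multiband whole-brain fMRI.
# Assumed brain volume (~1.3 L = 1.3e6 mm^3) is illustrative only.

BRAIN_VOLUME_MM3 = 1.3e6

def regime(voxel_side_mm: float, volumes_per_sec: float):
    """Return (voxels per volume, voxel samples acquired per second)."""
    voxels = BRAIN_VOLUME_MM3 / voxel_side_mm ** 3
    return voxels, voxels * volumes_per_sec

std_vox, std_rate = regime(3.0, 0.5)  # 3 mm voxels, one volume per 2 s
mb_vox, mb_rate = regime(2.0, 2.0)    # 2 mm voxels, two volumes per second

print(f"standard:  {std_vox:,.0f} voxels/volume, {std_rate:,.0f} samples/s")
print(f"multiband: {mb_vox:,.0f} voxels/volume, {mb_rate:,.0f} samples/s")
```

Even this roughly order-of-magnitude jump in sampled data per second is, as he notes, an incremental gain against the orders of magnitude separating human from animal fMRI resolution.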

A major paradigm shift will probably require new physics, something that is impossible to predict. I wouldn’t wait for inspiration to hit! I’ll take this opportunity to note that much of what we do presently for fMRI was decided by the clinical applications of MRI, not by neuroscience. Our MRI scanners are produced to do radiology. This is both good (ease of access/use) and bad (clinically-oriented hardware and software) for doing neuroscience. So the one major cultural shift I want to see develop is a move back to the development of scientific instruments for the brain, again something the BRAIN Initiative is supposed to be fostering.


Lastly, what one or two key pieces of advice would you give to a new researcher conducting their first fMRI experiment?

Take your time, and pilot like crazy! Get a solid grounding in methods and don’t be in a rush to go “solve the brain.” It’s great that you’re eager but recognize that you will almost certainly do something wrong by trying to do an experiment before you have a decent grounding in the physics, the physiology, experimental design and statistics.

It’s a massive amount of material to learn and become competent with. There is nothing wrong with apprenticing with experienced people until you’re ready to pursue your own experiment. It may take years rather than months.

Next, I encourage people to pilot everything. More often than not an attempt at a “real experiment” unearths some fundamental problems that turn the experiment into a de facto pilot experiment. I would also like to see more people use extensive piloting as the basis of their own replication attempts. But replication is part of a cultural shift related to your earlier question about rigor, and I recognize that there are limitations based on time and money. Still, considered, extensive piloting should help reduce errors and enhance the reproducibility of your fMRI findings. Pilot experiments are also an incredible teacher.

You can read more from PractiCal fMRI about the “nuts and bolts” of fMRI at his blog, or follow him on Twitter at @practiCalfMRI.
