
PLOS BLOGS The Official PLOS Blog

Improving Reproducibility in Animal Modeling of Psychiatric Disorders

By Anand Gururajan

When a new drug fails to make the grade for regulatory approval, fingers more often than not point at the weaknesses of our animal models, especially in the context of drug discovery in psychiatry. Disorders like schizophrenia, depression and autism are so complex that it is impossible to anthropomorphise rats and mice as schizophrenic, depressed or autistic. Our models are based on what we understand about the aetiology and pathophysiology of these disorders, so it stands to reason that if our understanding is limited, so is the potential of our animal models. Nevertheless, despite all the criticisms of animal models, I make the bold prediction that they are, and will forever be, indispensable in the study of psychiatric disorders. But one issue that significantly constrains the validity of our animal models, and subsequently how much we can learn from them, is reproducibility. Creating animal models of aspects of psychiatric disorders is a difficult task. Each lab has its own unique protocol to create them, but when that same protocol is used in another lab, with or without ‘seemingly’ minor alterations, the phenotype can change significantly or disappear altogether. They are frustratingly fragile.

One example is the use of isolation rearing paradigms to generate rodent models of aspects of schizophrenia. One of the phenotypes associated with this paradigm is a disruption in prepulse inhibition (PPI) of the acoustic startle response, a paradigm used to assess sensorimotor gating. Disrupted sensorimotor gating is a feature of psychiatric disorders such as schizophrenia and autism. In a study by Weiss et al. (1999), isolation rearing of Wistar rats for 12 weeks post-weaning did not produce a robust deficit in PPI. Several years later, Rosa et al. (2005) showed that a 10-week isolation period post-weaning did produce a deficit in PPI. This result is somewhat counterintuitive: you would expect a longer isolation rearing protocol to have a more robust effect, but clearly this wasn’t the case. The two studies also differed in technical aspects of PPI testing, which are just as important as the length of the isolation rearing protocol. So it’s quite clear that methodological differences can make it difficult to reproduce findings, and this is almost always a point of discussion in every manuscript. Within labs, standard operating procedures are essential for improving reproducibility, but between labs reproducibility remains a significant problem. Each lab will obviously stick to what works, but the limited reproducibility in the field overall has an impact on the rate at which we make meaningful breakthroughs. This issue has received considerable attention in the neuroscience community over the last few weeks.

Five years ago, the ARRIVE (Animal Research: Reporting of In Vivo Experiments) guidelines were developed in the UK; they encourage researchers to be more transparent about how they conduct their experiments (Kilkenny et al. 2010). Last November, a consortium of journals produced their own set of NIH-endorsed guidelines for the publication of preclinical research. Nature was one of them and was in fact the first to address the issue head-on, back in 2013, developing a checklist for journals to use when assessing the quality of manuscripts. What both sets of guidelines actively promote, and what I personally advocate very strongly, is transparency in methodology. Yes, the introduction and the discussion are important, but given how serious the issue of reproducibility has become, the focus should be on methodology and on making experimental data publicly available. PLOS Biology recently published a perspective piece by Baker et al. (2014) stating that journals are still not enforcing these guidelines, and the NIH has received backlash from some journals, which describe the need to meet these guidelines as onerous. They have argued that it will make it difficult to recruit peer reviewers, lengthen the peer review process itself and distract readers from the scientific message in the paper. We need to actively debate these issues.


So what do we need to start seeing more of in terms of methodology? A quick look through the Nature group’s guidelines shows, by and large, things that most of us already include in our reporting: species/strain, age, and so on. But on top of that, we should also make efforts to inform readers how we decided on the choice of species/strain. In terms of the number of animals used, a determination of study power should be included. Simply stating the total number of animals used at the very beginning and not mentioning individual group numbers later on in the manuscript is, I think, unacceptable, especially if groups have uneven numbers. For husbandry, details on the cage size, bedding type, room climate and how often animals are handled should be provided. For behavioural testing procedures, information on habituation to the setup, the time of day at which the test is conducted, the lighting conditions and whether the experimenter is inside or outside the testing room should be included. In terms of experimental design, we should aim to replicate experiments at least twice within labs, and perhaps more often between labs, to ensure that what we are observing in our models is a legitimate phenomenon and not something anecdotal. When it comes to analysis, if outliers are to be removed, the criterion should be established beforehand, not afterwards to ‘tidy up’ data sets. In the discussion, a brief description could be included of resource constraints, limitations or unforeseen circumstances that could have influenced experimental outcomes (e.g. construction work down the corridor from behavioural testing rooms). These suggestions are not exhaustive; the list goes on, and I guess that creates a whole new problem: when does it become too much? And even if we do manage to tick all the boxes, what guarantee is there that we will improve reproducibility? I think it’s too soon to tell (maybe another 5 years?), but there is no doubt about the need for all of us in the field to be transparent and to harmonise, to some extent, how we work in the preclinical laboratory. A focus on reproducibility will instil a much-needed sense of confidence in drug companies, funding bodies and regulatory agencies that we are taking a serious and proactive approach to improving our understanding of psychiatric disorders as well as fine-tuning our drug discovery strategies in this area.
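To make the study-power point concrete, here is a minimal sketch of an a priori sample-size calculation for a two-group comparison, using the standard normal-approximation formula for a two-sample test of means. The effect size, alpha and power values below are illustrative assumptions, not drawn from any particular study.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate animals needed per group for a two-sided, two-sample
    comparison of means, via the normal approximation:
        n = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2
    where d is Cohen's d (standardised effect size)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for a two-sided test
    z_beta = z.inv_cdf(power)           # quantile for the desired power
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# Illustrative: a 'large' effect (d = 0.8) at alpha = 0.05 and 80% power
print(n_per_group(0.8))  # → 25 per group
```

Reporting the assumed effect size alongside the resulting group sizes lets readers judge whether a null result reflects a genuine absence of effect or simply an underpowered design.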

References

Weiss IC, Feldon J, Domeney AM (1999). Isolation rearing-induced disruption of prepulse inhibition: further evidence for fragility of the response. Behavioural Pharmacology 10:139-49

Rosa ML, Silva RC, Moura-de-Carvalho FT, Brandao ML, Guimaraes FS, Del Bel EA (2005). Routine post-weaning handling of rats prevents isolation rearing-induced deficit in prepulse inhibition. Brazilian Journal of Medical and Biological Research 38:1691-6

Kilkenny C, Browne WJ, Cuthill IC, Emerson M, Altman DG (2010). Improving bioscience research reporting: the ARRIVE guidelines for reporting animal research. PLOS Biology 8: e1000412

Nature Editorial (2013). Announcement: Reducing our irreproducibility. Nature 496: 398

Baker D, Lidster K, Sottomayor A, Amor S (2014). Two years later: Journals are not yet enforcing the ARRIVE guidelines on reporting standards for pre-clinical animal studies. PLOS Biology 12:e1001756 

Any views expressed are those of the author, and do not necessarily reflect those of PLOS.

Anand Gururajan is a postdoctoral researcher in the Department of Anatomy & Neuroscience, University College Cork, Ireland. His current research focuses on the role of microRNAs in psychiatric disorders.

 

Discussion
  1. Issues with reproducibility mainly stem from the models themselves being fundamentally flawed, in two fatal respects.

    Firstly, the almost universal use of rats and mice is based on the assumption that the common psychiatric disorders are universal mammalian features that can be modeled in any species of this class.

    This is not the case, and whilst it is possible to build valid animal models, the models themselves and the choice of species used in them have to be far more rational than they have been to date if progress is to be made.

    Secondly, all the current models are defined as models of X solely on the basis of their sensitivity to drugs used to treat X in the clinic; i.e., they are all models with predictive validity only.

    This introduces a terminal logical flaw, a tautology, whose effect is to lock the drug discovery process into an iterative loop capable only of producing further variants of the drugs we already have.

    Predictive validity therefore prevents progress rather than facilitating it, and this alone could have been responsible for the spectacular lack of progress in psychiatric drug research over the past 60 years.

    These points are expanded on in this J Psychopharm paper

    https://www.academia.edu/2240421/The_failure_of_the_antidepressant_drug_discovery_process_is_systemic

    The conclusion is that we have little choice but to stop and start again from a new place, and that new place has to focus on the disorders themselves rather than the drugs used to treat them.

    Big Pharma has taken the first step, by abandoning psychiatric drug research circa 2005-2010, but has yet to be persuaded to try a new approach. The concern is that it may take them some time to do this, because people haven’t stopped becoming mentally ill just because the major drug companies are no longer looking for new ways to treat them.

    More about that here

    https://theconversation.com/why-big-pharma-is-not-addressing-the-failure-of-antidepressants-41868
