The ARRIVE guidelines 2.0: updated guidelines for reporting animal research
Journal: BMJ Open Science, June 2021
Abstract:
Reproducible science requires transparent reporting. The ARRIVE guidelines (Animal Research: Reporting of In Vivo Experiments) were originally developed in 2010 to improve the reporting of animal research. They consist of a checklist of information to include in publications describing in vivo experiments to enable others to scrutinise the work adequately, evaluate its methodological rigour and reproduce the methods and results. Despite considerable levels of endorsement by funders and journals over the years, adherence to the guidelines has been inconsistent, and the anticipated improvements in the quality of reporting in animal research publications have not been achieved. Here, we introduce ARRIVE 2.0. The guidelines have been updated and information reorganised to facilitate their use in practice. We used a Delphi exercise to prioritise and divide the items of the guidelines into two sets, the 'ARRIVE Essential 10', which constitutes the minimum requirement, and the 'Recommended Set', which describes the research context. This division facilitates improved reporting of animal research by supporting a stepwise approach to implementation. This helps journal editors and reviewers verify that the most important items are being reported in manuscripts. We have also developed the accompanying Explanation and Elaboration document, which serves (1) to explain the rationale behind each item in the guidelines, (2) to clarify key concepts and (3) to provide illustrative examples. We aim, through these changes, to help ensure that researchers, reviewers and journal editors are better equipped to improve the rigour and transparency of the scientific process and thus reproducibility.
Why good reporting is important

In recent years, concerns about the reproducibility of research findings have been raised by scientists, funders, research users and policy makers.1,2 Factors that contribute to poor reproducibility include flawed study design and analysis, variability and inadequate validation of reagents and other biological materials, insufficient reporting of methodology and results, and barriers to accessing data.3 The bioscience community has introduced a range of initiatives to address the problem, from open access and open practices to enable the scrutiny of all aspects of the research,4,5 through to study preregistration to shift the focus towards robust methods rather than the novelty of the results,6,7 as well as resources to improve experimental design and statistical analysis.8–10

Transparent reporting of research methods and findings is an essential component of reproducibility. Without this, the methodological rigour of the studies cannot be adequately scrutinised, the reliability of the findings cannot be assessed and the work cannot be repeated or built on by others. Despite the development of specific reporting guidelines for preclinical and clinical research, evidence suggests that scientific publications often lack key information and that there continues to be considerable scope for improvement.11–18 Animal research is a good case in point: poor reporting impacts on the development of therapeutics, and irreproducible findings can spawn an entire field of research or trigger clinical studies, subjecting patients to interventions unlikely to be effective.2,19,20

In an attempt to improve the reporting of animal research, the ARRIVE guidelines (Animal Research: Reporting of In Vivo Experiments) were published in 2010. The guidelines consist of a checklist of the items that should be included in any manuscript that reports in vivo experiments, to ensure a comprehensive and transparent description.21–30 They apply to any area of research using live animal species and are especially pertinent to describe comparative research in the laboratory or other formal test setting. The guidelines are also relevant in a wider context, for example, for observational research, studies conducted in the field and where animal tissues are used. In the 10 years since publication, the ARRIVE guidelines have been endorsed by >1000 journals from across the life sciences. Endorsement typically includes advocating their use in guidance to authors and reviewers. However, despite this level of support, recent studies have shown that important information as set out in the ARRIVE guidelines is still missing from most publications sampled. This includes details on randomisation (reported in only 30%-40% of publications), blinding (reported in only approximately 20% of publications), sample size justification (reported in <10% of publications), and animal characteristics (all basic characteristics reported in <10% of publications).11,31,32

Evidence suggests that two main factors limit the impact of the ARRIVE guidelines. The first is the extent to which editorial and journal staff are actively involved in enforcing reporting standards. This is illustrated by a randomised controlled trial at PLOS ONE, designed to test the effect of requesting a completed ARRIVE checklist in the manuscript submission process. This single editorial intervention, which did not include further verification from journal staff, failed to improve the disclosure of information in published papers.33 In contrast, other studies using shorter checklists (primarily focused on experimental design) with more editorial follow-up have shown a marked improvement in the nature and detail of the information included in publications.34–36 It is likely that the level of resource required from journals and editors currently prohibits the implementation of all the items of the ARRIVE guidelines.

The second issue is that researchers and other individuals and organisations responsible for the integrity of the research process are not sufficiently aware of the consequences of incomplete reporting. There is some evidence that awareness of ARRIVE is linked to the use of more rigorous experimental design standards37; however, researchers are often unfamiliar with the much larger systemic bias in the publication of research and in the reliability of certain findings and even of entire fields.33,38–40

This lack of understanding affects how experiments are designed and grant proposals prepared, how animals are used and data recorded in the laboratory and how manuscripts are written by authors or assessed by journal staff, editors and reviewers.

Approval for experiments involving animals is generally based on a harm-benefit analysis, weighing the harms to the animals involved against the benefits of the research to society. If the research is not reported in enough detail, even when conducted rigorously, the benefits may not be realised, and the harm-benefit analysis and public trust in the research are undermined.41 As a community, we must do better to ensure that, where animals are used, the research is well designed, rigorously analysed and transparently reported. Here, we introduce the revised ARRIVE guidelines, referred to as ARRIVE 2.0. The information included has been updated, extended and reorganised to facilitate the use of the guidelines, helping to ensure that researchers, editors and reviewers, as well as other relevant journal staff, are better equipped to improve the rigour and reproducibility of animal research.

Introducing ARRIVE 2.0

In ARRIVE 2.0, we have improved the clarity of the guidelines, prioritised the items, added new information, and generated the accompanying Explanation and Elaboration (E&E) document to provide context and rationale for each item42 (also available at https://www.arriveguidelines.org). New additions comprise inclusion and exclusion criteria, which are a key aspect of data handling and prevent the ad hoc exclusion of data43; protocol registration, a recently emerged approach that promotes scientific rigour and encourages researchers to carefully consider the experimental design and analysis plan before any data are collected44; and data access, in line with the FAIR Data Principles (Findable, Accessible, Interoperable, Reusable).45 S1 Table summarises the changes.

The most significant departure from the original guidelines is the classification of items into two prioritised groups, as shown in tables 1 and 2. There is no ranking of the items within each group. The first group is the ‘ARRIVE Essential 10’, which describes information that is the basic minimum to include in a manuscript, as without this information, reviewers and readers cannot confidently assess the reliability of the findings presented. It includes details on the study design, the sample size, measures to reduce subjective bias, outcome measures, statistical methods, the animals, experimental procedures, and results. The second group, referred to as the ‘Recommended Set’, adds context to the study described. This includes the ethical statement, declaration of interest, protocol registration and data access, as well as more detailed information on the methodology such as animal housing, husbandry, care and monitoring. Items on the abstract, background, objectives, interpretation and generalisability also describe what to include in the more narrative parts of a manuscript.

Table 1

ARRIVE Essential 10
Item 1: Study design. For each experiment, provide brief details of study design including:
  1. The groups being compared, including control groups. If no control group has been used, the rationale should be stated.

  2. The experimental unit (e.g. a single animal, litter, or cage of animals).

Item 2: Sample size.
  1. Specify the exact number of experimental units allocated to each group, and the total number in each experiment. Also indicate the total number of animals used.

  2. Explain how the sample size was decided. Provide details of any a priori sample size calculation, if done.

Item 3: Inclusion and exclusion criteria.
  1. Describe any criteria used for including and excluding animals (or experimental units) during the experiment, and data points during the analysis. Specify if these criteria were established a priori. If no criteria were set, state this explicitly.

  2. For each experimental group, report any animals, experimental units, or data points not included in the analysis and explain why. If there were no exclusions, state so.

  3. For each analysis, report the exact value of n in each experimental group.

Item 4: Randomisation.
  1. State whether randomisation was used to allocate experimental units to control and treatment groups. If done, provide the method used to generate the randomisation sequence.

  2. Describe the strategy used to minimise potential confounders such as the order of treatments and measurements, or animal/cage location. If confounders were not controlled, state this explicitly.

Item 5: Blinding. Describe who was aware of the group allocation at the different stages of the experiment (during the allocation, the conduct of the experiment, the outcome assessment, and the data analysis).
Item 6: Outcome measures.
  1. Clearly define all outcome measures assessed (e.g. cell death, molecular markers, or behavioural changes).

  2. For hypothesis-testing studies, specify the primary outcome measure, i.e. the outcome measure that was used to determine the sample size.

Item 7: Statistical methods.
  1. Provide details of the statistical methods used for each analysis, including software used.

  2. Describe any methods used to assess whether the data met the assumptions of the statistical approach, and what was done if the assumptions were not met.

Item 8: Experimental animals.
  1. Provide species-appropriate details of the animals used, including species, strain and substrain, sex, age or developmental stage, and, if relevant, weight.

  2. Provide further relevant information on the provenance of animals, health/immune status, genetic modification status, genotype, and any previous procedures.

Item 9: Experimental procedures. For each experimental group, including controls, describe the procedures in enough detail to allow others to replicate them, including:
  1. What was done, how it was done, and what was used.

  2. When and how often.

  3. Where (including detail of any acclimatisation periods).

  4. Why (provide rationale for procedures).

Item 10: Results. For each experiment conducted, including independent replications, report:
  1. Summary/descriptive statistics for each experimental group, with a measure of variability where applicable (e.g. mean and SD, or median and range).

  2. If applicable, the effect size with a confidence interval.

Explanations and examples for items 1–10 are available in the Explanation and Elaboration document42 and on the website at https://www.arriveguidelines.org.

ARRIVE, Animal Research: Reporting of In Vivo Experiments.
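Several of the Essential 10 items describe quantities that are computed rather than merely narrated: an a priori sample size (item 2), a randomisation sequence (item 4) and an effect size with a confidence interval (item 10). As a rough illustration of the kind of reproducible calculation the checklist asks authors to report, the following Python sketch uses only the standard library; the helper functions, their names and their parameters are hypothetical and are not part of the guidelines.

```python
# Illustrative sketch only; not part of ARRIVE. Shows reportable steps behind
# Essential 10 items 2 (sample size), 4 (randomisation) and 10 (results).
import math
import random
import statistics
from statistics import NormalDist


def n_per_group(delta, sd, alpha=0.05, power=0.8):
    """A priori sample size per group for a two-sample comparison of means
    (normal approximation); the inputs should be reported with the result."""
    z = NormalDist().inv_cdf
    n = 2 * ((z(1 - alpha / 2) + z(power)) * sd / delta) ** 2
    return math.ceil(n)


def randomise(units, groups, seed):
    """Allocate experimental units to groups by simple randomisation.
    Recording the seed makes the randomisation sequence reproducible."""
    rng = random.Random(seed)
    shuffled = list(units)
    rng.shuffle(shuffled)
    k = len(shuffled) // len(groups)
    return {g: shuffled[i * k:(i + 1) * k] for i, g in enumerate(groups)}


def effect_size(control, treated):
    """Difference in group means with a 95% CI (normal approximation)."""
    diff = statistics.mean(treated) - statistics.mean(control)
    se = math.sqrt(statistics.variance(control) / len(control)
                   + statistics.variance(treated) / len(treated))
    return diff, (diff - 1.96 * se, diff + 1.96 * se)
```

Reporting the seed, the power-calculation inputs and the CI method alongside the resulting numbers is what makes these items verifiable by a reviewer.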

Table 2

ARRIVE Recommended Set
Item 11: Abstract. Provide an accurate summary of the research objectives, animal species, strain and sex, key methods, principal findings, and study conclusions.
Item 12: Background.
  1. Include sufficient scientific background to understand the rationale and context for the study, and explain the experimental approach.

  2. Explain how the animal species and model used address the scientific objectives and, where appropriate, the relevance to human biology.

Item 13: Objectives. Clearly describe the research question, research objectives and, where appropriate, specific hypotheses being tested.
Item 14: Ethical statement. Provide the name of the ethical review committee or equivalent that has approved the use of animals in this study, and any relevant licence or protocol numbers (if applicable). If ethical approval was not sought or granted, provide a justification.
Item 15: Housing and husbandry. Provide details of housing and husbandry conditions, including any environmental enrichment.
Item 16: Animal care and monitoring.
  1. Describe any interventions or steps taken in the experimental protocols to reduce pain, suffering, and distress.

  2. Report any expected or unexpected adverse events.

  3. Describe the humane endpoints established for the study, the signs that were monitored, and the frequency of monitoring. If the study did not have humane endpoints, state this.

Item 17: Interpretation/scientific implications.
  1. Interpret the results, taking into account the study objectives and hypotheses, current theory, and other relevant studies in the literature.

  2. Comment on the study limitations, including potential sources of bias, limitations of the animal model, and imprecision associated with the results.

Item 18: Generalisability/translation. Comment on whether, and how, the findings of this study are likely to generalise to other species or experimental conditions, including any relevance to human biology (where appropriate).
Item 19: Protocol registration. Provide a statement indicating whether a protocol (including the research question, key design features, and analysis plan) was prepared before the study, and if and where this protocol was registered.
Item 20: Data access. Provide a statement describing if and where study data are available.
Item 21: Declaration of interests.
  1. Declare any potential conflicts of interest, including financial and non-financial. If none exist, this should be stated.

  2. List all funding sources (including grant identifier) and the role of the funder(s) in the design, analysis, and reporting of the study.

Together with the Essential 10, the Recommended Set represents best reporting practice. Explanations and examples for items 11–21 are available in the Explanation and Elaboration document42 and on the website at https://www.arriveguidelines.org.

ARRIVE, Animal Research: Reporting of In Vivo Experiments.

Revising the guidelines has been an extensive and collaborative effort, with input from the scientific community carefully built into the process. The revision of the ARRIVE guidelines has been undertaken by an international working group—the authors of this publication—with expertise from across the life sciences community, including funders, journal editors, statisticians, methodologists and researchers from academia and industry. We used a Delphi exercise46 with external stakeholders to maximise diversity in fields of expertise and geographical location, with experts from 19 countries providing feedback on each item, suggesting new items and ranking items according to their relative importance for assessing the reliability of research findings. This ranking resulted in the prioritisation of the items of the guidelines into the two sets. Demographics of the Delphi panel and full methods and results are presented in Supporting Information S1 Delphi and S1 Data. Following their publication on bioRxiv, the revised guidelines and the E&E were also road tested with researchers preparing manuscripts describing in vivo studies, to ensure that these documents were well understood and useful to the intended users. This study is presented in Supporting Information S1 Road Testing and S2 Data.

While reporting animal research in adherence to all 21 items of ARRIVE 2.0 represents best practice, the classification of the items into two groups is intended to facilitate the improved reporting of animal research by allowing an initial focus on the most critical issues. This allows journal staff, editors and reviewers to verify more easily that the items have been adequately reported in manuscripts. The first step should be to ensure compliance with the ARRIVE Essential 10 as a minimum requirement. Items from the Recommended Set can then be added over time and in line with specific editorial policies until all the items are routinely reported in all manuscripts. ARRIVE 2.0 is fully compatible with and complementary to other guidelines that have been published in recent years. By providing a comprehensive set of recommendations that are specifically tailored to the description of in vivo research, the guidelines help authors reporting animal experiments adhere to the National Institutes of Health standards43 and the minimum standards framework and checklist (Materials, Design, Analysis and Reporting).47 The revised guidelines are also in line with many journals’ policies and will assist authors in complying with information requirements on the ethical review of the research,48,49 data presentation and access,50–52 statistical methods51,52 and conflicts of interest.53,54

Although the guidelines are written with researchers and journal editorial policies in mind, it is important to stress that researchers alone should not have to carry the responsibility for transparent reporting. Funders’, institutions’ and publishers’ endorsement of ARRIVE has been instrumental in raising awareness to date; they now have a key role to play in building capacity and championing the behavioural changes required to improve reporting practices. This includes embedding ARRIVE 2.0 in appropriate training, workflows and processes to support researchers in their different roles. While the primary focus of the guidelines has been on the reporting of animal studies, ARRIVE also has other applications earlier in the research process, including in the planning and design of in vivo experiments. For example, requesting a description of the study design in line with the guidelines in funding or ethical review applications ensures that steps to minimise experimental bias are considered at the beginning of the research cycle.55

Conclusion

Transparent reporting is clearly essential if animal studies are to add to the knowledge base and inform future research, policy and clinical practice. ARRIVE 2.0 prioritises the reporting of information related to study reliability. This enables research users to assess how much weight to ascribe to the findings and, in parallel, promotes the use of rigorous methodology in the planning and conduct of in vivo experiments,37 thus increasing the likelihood that the findings are reliable and, ultimately, reproducible.

The intention of ARRIVE 2.0 is not to supersede individual journal requirements but to promote a harmonised approach across journals to ensure that all manuscripts contain the essential information needed to appraise the research. Journals usually share a common objective of improving the methodological rigour and reproducibility of the research they publish, but different journals emphasise different pieces of information.5658 Here, we propose an expert consensus on information to prioritise. This will provide clarity for authors, facilitate transfer of manuscripts between journals, and accelerate an improvement of reporting standards.

Concentrating the efforts of the research and publishing communities on the ARRIVE Essential 10 items provides a manageable approach to evaluate reporting quality efficiently and assess the effect of interventions and policies designed to improve the reporting of animal experiments. It provides a starting point for the development of operationalised checklists to assess reporting, ultimately leading to the development of automated or semi-automated artificial intelligence tools that can detect missing information rapidly.59
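The "operationalised checklist" idea above can be caricatured in a few lines. The sketch below is a deliberately naive keyword screen, not any published tool: the item labels and patterns are invented for illustration, and real compliance checkers rely on trained text-mining models rather than regular expressions.

```python
# Naive illustration of an operationalised reporting check: flag Essential 10
# items with no matching keyword in a manuscript's methods text.
# Hypothetical patterns; not a real ARRIVE compliance tool.
import re

CHECKS = {
    "randomisation (item 4)": r"\brandomi[sz]",
    "blinding (item 5)": r"\bblind",
    "sample size justification (item 2)": r"\b(sample size|power (analysis|calculation))\b",
}


def screen(methods_text):
    """Return the checklist items with no matching keyword in the text."""
    text = methods_text.lower()
    return [item for item, pattern in CHECKS.items()
            if not re.search(pattern, text)]
```

Even this toy version shows why automated screening scales better than manual checklist verification: the per-manuscript cost of a pass like this is negligible compared with editorial follow-up.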

Improving reporting is a collaborative endeavour, and concerted effort from the biomedical research community is required to ensure maximum impact. We welcome collaboration with other groups operating in this area, as well as feedback on ARRIVE 2.0 and our implementation strategy.

Supporting information

S1 Table

Noteworthy changes in ARRIVE 2.0.

This table recapitulates noteworthy changes in the ARRIVE guidelines 2.0, compared to the original ARRIVE guidelines published in 2010.

S1 Delphi

Delphi methods and results.

Methodology and results of the Delphi study that was used to prioritise the items of the guidelines into the ARRIVE Essential 10 and Recommended Set.

S1 Data

Delphi data.

Tabs 1, 2, and 3: Panel members’ scores for each of the ARRIVE items during rounds 1, 2, and 3, along with descriptive statistics. Tab 4: Qualitative feedback, collected from panel members during round 1, on the importance and the wording of each item. Tab 5: Additional items suggested for consideration in ARRIVE 2.0; similar suggestions were grouped together before processing. Tab 6: Justifications provided by panel members for changing an item’s score between round 1 and round 2.

S2 Data

Road testing data.

Tab 1: Participants’ demographics and general feedback on the guidelines and the E&E preprints. Tab 2: Outcome of each manuscript’s assessment and justifications provided by participants for not including information covered in the ARRIVE guidelines.

S1 Road Testing

Road testing methods and results.

Methodology used to road test the revised ARRIVE guidelines and E&amp;E (as published in preprint) and how this information was used in the development of ARRIVE 2.0.

S1 Annotated Byline

Individual authors’ positions at the time this article was submitted.


Acknowledgements

The authors would like to thank the members of the expert panel for the Delphi exercise and the participants of the road testing for their time and feedback. We are grateful to the DelphiManager team for advice and use of their software. The authors would like to thank the late Doug Altman for his contribution to this project; Doug was a dedicated member of the working group and his input to the guidelines’ revision has been invaluable. This article was originally published in PLOS Biology, https://doi.org/10.1371/journal.pbio.3000410, under a CC-BY license.

Funding

This study was funded by the National Centre for the Replacement, Refinement and Reduction of Animals in Research (NC3Rs).


Experimental Design and Reporting, NC3Rs, London, UK
William Harvey Research Institute, London, UK
Queen Mary University of London, London, UK
Taylor & Francis Group, London, UK
ICF, Durham, North Carolina, USA
Opinion, Nature, San Francisco, California, USA
School of Education, University of Bristol, Bristol, UK
Life Sciences, PLOS ONE, Cambridge, UK
School of Biological Sciences, University of Bristol, Bristol, UK
Quest Center for Transforming Biomedical Research, Berlin Institute of Health, Berlin, Germany
Department of Experimental Neurology, Charite Universitatsmedizin Berlin, Berlin, Germany
National Heart and Lung Institute, Imperial College London, London, UK
Centre for Evidence Synthesis in Global Health, Clinical Sciences Department, Liverpool School of Tropical Medicine, Liverpool, UK
Clinical and Experimental Sciences, University of Southampton, Southampton, UK
Tasmanian School of Medicine, University of Tasmania, Hobart, Tasmania, Australia
Data Sciences & Quantitative Biology, Discovery Sciences, R&D, AstraZeneca PLC, Cambridge, UK
Prioris.ai, Ottawa, Ontario, Canada
Animal Welfare, NC3Rs, London, UK
Open Science, Hindawi Limited, London, UK
Academic Lead for Research Improvement and Research Integrity, University of Edinburgh, Edinburgh, UK
Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, UK
Academia Europaea Knowledge Hub, Cardiff University, Cardiff, UK
Policy, Ethics and Governance, Medical Research Council, London, UK
Department of Anesthesiology, University of Florida College of Medicine, Gainesville, Florida, USA
Faculty of Medicine and Health, The University of Sydney, Sydney, New South Wales, Australia
Research Quality, National Institute of Neurological Disorders and Stroke, Bethesda, Maryland, USA
Janssen Pharmaceutica NV, Beerse, Belgium
Veterinary Public Health Institute, Vetsuisse Faculty, University of Bern, Bern, Switzerland
Corresponding author: Nathalie Percie du Sert, Head of Experimental Design and Reporting, NC3Rs, London, UK. Email: nathalie.perciedusert@nc3rs.org.uk
Re-use permitted under CC BY
This is an open access article distributed in accordance with the Creative Commons Attribution 4.0 International (CC BY 4.0) licence, which permits others to copy, redistribute, remix, transform and build upon this work for any purpose, provided the original work is properly cited, a link to the licence is given, and indication of whether changes were made. See: https://creativecommons.org/licenses/by/4.0/.


Footnotes

Competing interests AA: editor in chief of the British Journal of Pharmacology. WJB, ICC and ME: authors of the original ARRIVE guidelines. WJB: serves on the Independent Statistical Standing Committee of the funder CHDI foundation. AC: Senior Editor, PLOS ONE. AC, CJMcC, MM and ESS: involved in the IICARus trial. ME, MMcL and ESS: have received funding from NC3Rs. ME: sits on the MRC ERPIC panel. STH: chair of the NC3Rs board, trusteeship of the BLF, Kennedy Trust, DSRU and CRUK, member of Governing Board, Nuffield Council of Bioethics, member Science Panel for Health (EU H2020), founder and NEB Director Synairgen, consultant Novartis, Teva and AZ, chair MRC/GSK EMINENT Collaboration. VH, KL, EJP and NPdS: NC3Rs staff, role includes promoting the ARRIVE guidelines. SEL and UD: on the advisory board of the UK Reproducibility Network. CJMcC: shareholdings in Hindawi, on the publishing board of the Royal Society, on the EU Open Science policy platform. UD, MM, NPdS, CJMcC, ESS, TS and HW: members of EQIPD. MM: member of the Animals in Science Committee, on the steering group of the UK Reproducibility Network. NPdS and TS: associate editors of BMJ Open Science. OHP: vice president of Academia Europaea, editor in chief of Function, senior executive editor of the Journal of Physiology, member of the Board of the European Commission’s SAPEA (Science Advice for Policy by European Academies). FR: NC3Rs board member, shareholdings in GSK. FR and NAK: shareholdings in AstraZeneca. PR: member of the University of Florida Institutional Animal Care and Use Committee, editorial board member of Shock. ESS: editor in chief of BMJ Open Science. SDS: role is to provide expertise and does not represent the opinion of the NIH. TS: shareholdings in Johnson & Johnson. SA, MTA, MB, PG, DWH, and KR declared no conflict of interest.

Contributed by

Author contributions

NPdS: conceptualisation, data curation, formal analysis, funding acquisition, investigation, methodology, project administration, resources, supervision, visualisation, writing—original draft, writing—review and editing; VH: data curation, investigation, methodology, project administration, resources, writing—original draft; SEL, EJP: writing—review and editing; KL: investigation, project administration, writing—review and editing; AA, SA, MTA, MB, WJB, AC, ICC, UD, ME, PG, STH, DWH, NAK, CJMcC, MM, OHP, FR, PR, KR, ESS, SDS, TS, HW: investigation, methodology, resources, writing—original draft, writing—review and editing.

Provenance and peer review Not commissioned; internally peer reviewed.
