fNIRS reproducibility varies with data quality, analysis pipelines, and researcher experience

Meryem A Yücel, Robert Luke, Rickson C Mesquita, Alexander von Lühmann, David M A Mehler, Michael Lührs, Jessica Gemignani, Androu Abdalmalak, Franziska Albrecht, Iara de Almeida Ivo, Christina Artemenko, Kira Ashton, Pawel Augustynowicz, Aahana Bajracharya, Elise Bannier, Beatrix Barth, Laurie Bayet, Jacqueline Behrendt, Hadi Borj Khani, Lenaic Borot, Jordan A Borrell, Sabrina Brigadoi, Kolby Brink, Chiara Bulgarelli, Emmanuel Caruyer, Hsin-Chin Chen, Christopher Copeland, Isabelle Corouge, Simone Cutini, Renata Di Lorenzo, Thomas Dresler, Adam T Eggebrecht, Ann-Christine Ehlis, Sinem B Erdogan, Danielle Evenblij, Talukdar Raian Ferdous, Victoria Fracalossi, Erika Franzén, Anne Gallagher, Christian Gerloff, Judit Gervain, Noy Goldhamer, Louisa K Gossé, Ségolène M R Guérin, Edgar Guevara, S M Hadi Hosseini, Hamish Innes-Brown, Isabell Int-Veen, Sagi Jaffe-Dax, Nolwenn Jégou, João Figueiredo Pereira, Bettina Sorger, et al.

Research output: Contribution to journal › Article › Academic › peer-review

Abstract

As data analysis pipelines grow more complex in brain imaging research, understanding how methodological choices affect results is essential for ensuring reproducibility and transparency. This is especially relevant for functional Near-Infrared Spectroscopy (fNIRS), a rapidly growing technique for assessing brain function in naturalistic settings and across the lifespan, yet one that still lacks standardized analysis approaches. In the fNIRS Reproducibility Study Hub (FRESH) initiative, we asked 38 research teams worldwide to independently analyze the same two fNIRS datasets. Despite using different pipelines, nearly 80% of teams agreed on group-level results, particularly when hypotheses were strongly supported by literature. Teams with higher self-reported analysis confidence, which correlated with years of fNIRS experience, showed greater agreement. At the individual level, agreement was lower but improved with better data quality. The main sources of variability were related to how poor-quality data were handled, how responses were modeled, and how statistical analyses were conducted. These findings suggest that while flexible analytical tools are valuable, clearer methodological and reporting standards could greatly enhance reproducibility. By identifying key drivers of variability, this study highlights current challenges and offers direction for improving transparency and reliability in fNIRS research.
Original language: English
Article number: 1149
Number of pages: 17
Journal: Communications Biology
Volume: 8
Issue number: 1
DOIs
Publication status: Published - 4 Aug 2025

Keywords

  • Spectroscopy, Near-Infrared/methods, standards
  • Reproducibility of Results
  • Humans
  • Data Accuracy
  • Brain/diagnostic imaging, physiology
  • Research Personnel
