Open Scientific Practices

Empirical studies have shown that many research findings in basic and preclinical biomedicine (Begley & Ioannidis, 2015), pharmacology (e.g. Lancee et al, 2017), psychology (e.g. Open Science Collaboration, 2015) and economics (Ioannidis et al, 2017) cannot be independently replicated. One of the main reasons seems to be that the original findings were, at least partly, based on questionable research practices, such as

  • analyzing the data until the desired results are found (p-hacking, data torture or researcher degrees of freedom; e.g. Simmons et al, 2011; Wicherts et al, 2016),
  • selective reporting of results (publication bias or file drawer problem; e.g. John et al, 2012),
  • incorrectly reporting exploratory analyses as confirmatory (post hoc hypothesizing or HARKing; Kerr, 1998),
  • collecting more data to reach smaller p-values (optional stopping; e.g. Simmons et al, 2011),
  • conducting low powered studies (typically by employing too small samples; e.g. Button et al, 2013),
  • non-transparent reporting (e.g. hiding that questionable practices were employed; e.g. Simmons et al, 2012).

Questionable research practices are not only unethical but also counterproductive to knowledge accumulation (Nosek & Bar-Anan, 2012). We waste time and money trying to build upon results that gave the impression of being more promising than they actually were. The development and refinement of interventions that could potentially save lives, or improve quality of life, are thereby hampered.

These problems were described as early as 1830 by Charles Babbage (Reflections on the Decline of Science in England), showing that bad scientific practices have been employed for a very long time. However, we are currently seeing the rise of an academic revolution (De Groot / Wagenmakers et al, 1954/2014): since 2011, empirical scientists have been speaking up against the questionable practices and have joined forces to create real change. The proposed solutions are collectively known as open scientific practices.

Early career scientists (graduate students and postdocs) still experience pressure from supervisors, other senior scientists and peers to employ questionable practices (e.g. Bullied Into Bad Science; Mobley et al, 2013). Social exclusion and frustration have already led too many whistleblowers to leave academia. As of 2017, universities still tend to leave it up to individual scientists and students to take on the struggle for good science. Due to this lack of institutional responsibility, we currently see examples of researchers whose academic reputations and careers are destroyed because (1) they were inadequately trained, and (2) their academic employers are not yet well enough informed to handle situations in which questionable research practices are disclosed (e.g. the cases of Dr. Jodi Whitaker, Dr. Jens Förster and Prof. Brian Wansink).

IGDORE was founded to promote and enable good scientific practices. We want to harbour and protect whistleblowers in science. We want to be an institution that takes responsibility for educating our affiliated scientists and students in good scientific practices, both because it is the only ethical way to conduct science and because we want to protect their future careers from harm. In addition, we want to make the transition toward transparent science easier and more efficient for everyone: our scientific openness support is available to all scientists and students, wherever they are and regardless of affiliation (if any).