06/28/2024

What To Do When Your Hypothesis Is Wrong? Publish!

17:08 minutes

Sarahanne Field, editor in chief at the Journal Of Trial And Error. Credit: Sander Martens

Most scientific studies that get published have “positive results,” meaning that the study supported its hypothesis. Say you hypothesize that a honeybee will favor one flower over another, and your research backs that up. That’s a positive result.

But what about the papers with negative results? If you’re a researcher, you know that you’re much more likely to disprove your hypothesis than validate it. The problem is that there aren’t a lot of incentives to publish a negative result.

But some argue that this bias toward publishing only papers with positive results is worsening existing issues in scientific research and publishing, and could prevent future breakthroughs.

And that’s where the Journal of Trial and Error comes in. It’s a scientific publication that only publishes negative and unexpected results. And the team behind it wants to change how the scientific community thinks about failure, in order to make science stronger.

Guest host Anna Rothschild talks with Dr. Sarahanne Field, editor in chief of the Journal of Trial and Error and assistant professor in behavioral and social sciences at the University of Groningen.



Segment Guests

Sarahanne Field

Dr. Sarahanne Field is editor in chief of the Journal of Trial and Error and an assistant professor at the University of Groningen in the Netherlands.

Segment Transcript

ANNA ROTHSCHILD: This is Science Friday. I’m Anna Rothschild.

On a show like this, we feature a lot of studies that have positive results. Say you hypothesize that a honeybee will favor one flower over another, and your research backs that up. That’s a positive result. But what about the papers with negative results? If you’re a researcher, you know that you’re much more likely to disprove your hypothesis than validate it, but there aren’t a lot of incentives to go out and publish your failed experiments.

Now, some argue that this bias to only publish papers with positive results is hindering scientific research and preventing future breakthroughs. That’s where the Journal of Trial and Error comes in. It’s a scientific publication that only publishes papers with negative or unexpected results, and the team behind it wants to change how the scientific community thinks about failure in order to make science better.

Here to tell us more is my guest, Dr. Sarahanne Field, editor in chief at the Journal of Trial and Error and assistant professor in behavioral and social sciences at the University of Groningen in the Netherlands. Welcome to Science Friday.

SARAHANNE FIELD: Thank you. I’m so happy to be here.

ANNA ROTHSCHILD: So you’re a researcher. For those listening who aren’t in science, how much more likely is it that you end up with a negative than a positive result at the end of your study?

SARAHANNE FIELD: Honestly, I would say about one in every two studies is negative.

ANNA ROTHSCHILD: Right. I mean, is that frustrating as a researcher?

SARAHANNE FIELD: No because for me, honestly, it’s expected. This is part of the scientific process. It’s iterative. It’s not linear, as some people might expect. It doesn’t go from hypothesis to method to result to lovely finding on social media as simply as one might expect. There’s a lot of trial and error involved. And so having an unexpected or maybe a disappointing finding is kind of expected in and of itself.

ANNA ROTHSCHILD: Right. And also, I mean, this is something you’re devoting your life to, so even a negative result is a new piece of information.

SARAHANNE FIELD: Absolutely, which is why we’re doing what we’re doing. We want to make sure that we can learn from the mistakes that we make in science because a lot of the time, that’s what they are. They’re mistakes that we’ve made. They’re mistakes that we can learn from. They have information value.

ANNA ROTHSCHILD: And just for our audience to know, you can spend years of research on a study and get a negative result.

SARAHANNE FIELD: Absolutely. Yep.

ANNA ROTHSCHILD: So you’ve said you want to highlight the ugly parts of science. I like that. Why is that?

SARAHANNE FIELD: Well, the gap between what is researched and what is published is much bigger than the lay public might expect. So we do loads and loads of studies, and only a portion of those, the ones that come out really pretty, are published. But what that means is that we end up with a scientific literature that is all the pretty stuff and none of the stuff that went wrong, and the problem with that is that we can’t learn from the things that went wrong.

So you have researchers that are going out and doing the same studies over and over again when, actually, other people have gone before them and have failed to find results, but they just didn’t publish them. So people are wasting time and money, which in research are very scarce resources, to try and do studies that will never come out right anyway. And so it’s a shame that we can’t learn from those, so we attempt to close this gap between what we research and what we publish.

ANNA ROTHSCHILD: In the past, have you as a researcher tried publishing negative results and run up against pushback?

SARAHANNE FIELD: No, and I am very rare in saying that. The good thing about the discipline that I am in, which is metascience, meaning that we use the scientific method itself to study science, is that we’re very familiar with the things that can go wrong, and we’re very open to publishing negative and unexpected results compared to a traditional, older discipline in science. So I’m actually really lucky, and, in fact, I’ve published null results quite successfully and easily.

ANNA ROTHSCHILD: Yeah, that’s quite rare. Among your colleagues maybe in different disciplines, how often are they sort of turned away from publishing in a journal if they want to publish something negative?

SARAHANNE FIELD: Oh, constantly, constantly, and the majority of disciplines have this issue, which is exactly where our journal comes in, because there are so many good studies, and I mean good-quality, well-designed studies that should be informative, that can’t be published simply because they don’t support the hypothesis that was in the study. And so there’s an enormous, enormous amount of literature that should be out there that’s not. So this is very common.

ANNA ROTHSCHILD: Why don’t journals tend to publish negative results?

SARAHANNE FIELD: They’re not sexy. They don’t get as much readership. On social media, you see a really cool-looking study that says something along the lines of, drink a glass of red wine and you don’t have to go to the gym today. That’s so much cooler than saying, we have no idea if red wine has any positive effect on the body. The former is so much cooler to read than the latter, right?

And it’s also just the case that reviewers, the people who look into these papers and check them for mistakes and that kind of thing, can make a bit more sense of a study that looks clean and tidy and linear. Sometimes when you have a null result or an unexpected result, you kind of go, huh? What does this mean? It takes a lot of work to analyze and understand what you can learn from that. Whereas if you see a nice, easy, shiny study that went exactly as planned, it’s like, well, yeah, we know what to conclude from this.

ANNA ROTHSCHILD: I mean, and then there’s this issue on the flip side where researchers also don’t want to publish negative studies. Why is that?

SARAHANNE FIELD: Absolutely. So you’re talking about what’s called reporting bias. And, indeed, most researchers know that it’s so much harder to get a negative result published. So you conduct a study, and it went pear-shaped for one reason or another, and you think, oh, God. I can either spend two years of my life trying to get this thing published, which may never happen, or I can just turf it, start over again, give it another go, try a different variable, and waste a lot less time. The thing is, a lot of us are having trouble getting tenure and making sure we have job stability, so we’re really incentivized to have a good CV. And what makes a good scientific CV? At this point, publications. So you want to spend your time working on something that’s going to get published rather than something that’s going to cost you time and effort and never get on the scientific record anyway.

ANNA ROTHSCHILD: And it takes a long time to actually pitch and submit to a journal only to then be turned down. So you want to send out the stuff that you feel like has a good chance of actually making it, right?

SARAHANNE FIELD: Absolutely.

ANNA ROTHSCHILD: What’s your pitch to researchers to go ahead and try to publish those negative studies anyway?

SARAHANNE FIELD: These kinds of negative studies have information value. So what I would say is, if you have a good-quality study, a study that’s been designed well, that has a good sample, meaning a lot of participants, because that’s going to be more informative than a small participant pool, and it just went pear-shaped for one reason or another, write it up properly like you would a normal study. Provide a good reflection on what might have gone wrong, and then send it right on over to the Journal of Trial and Error.

But let me just provide a quick caution. We don’t want to work with authors to produce bad studies that went wrong but no one knows why. We’re not like Rumpelstiltskin where we turn straw into gold. We do want to work with something that’s high quality. So I would just provide that as well.

But not only do we have the Journal of Trial and Error. There’s also another journal that operates in this space, the Journal of Articles in Support of the Null Hypothesis. So that’s another outlet you can publish in if you’re thinking about this.

ANNA ROTHSCHILD: Let’s talk about some of the use cases and the future of this. So there’s been a lot of talk about integrating machine learning and AI into research, which can, in theory, analyze a vast trove of data and pull out new insights that would have been too laborious to find by hand in the past, and there’s a lot of promise here for big breakthroughs in medicine, for example. How accurate can machine-learning models be if they’re only trained on positive data?

SARAHANNE FIELD: It’s a real concern. Say you’re part of the lay public, and say you have access to the published record, the studies that have been published. What you would theoretically learn is a portion of the full story. You would be learning about a subset of what’s really going on.

And the exact same thing is the case for any kind of machine-learning method where the model is being trained on what’s available. The model is basically being trained on a completely biased data set. And that’s a massive problem that concerns the pants off me, to be honest, because I can see so many implications of this, and I don’t know how to get around it.

ANNA ROTHSCHILD: Could you give a specific example of a study that might use machine learning in this way, where negative results would be really useful?

SARAHANNE FIELD: I think anything that has to do with drug trials, for example. There would be potential, I think, for using very large-scale data to get insights from existing trials of drugs that the FDA will eventually end up approving. As it currently stands, so many drug trials are conducted, and loads of them come back negative. Say you conduct five trials on a particular drug. One of those will be positive and show efficacy of the drug, and the other four will not.

So I can imagine, in a case like this, you would need that model to really learn as much as possible about the true state of a drug’s efficacy, and it just cannot, because the information is not there. And so it gets a completely false sense of which drugs actually have efficacy, simply because of missing information. I mean, if you think about drug trials, that’s such a heavy thing, with such huge importance and impact for the public, and the likelihood of a drug actually working is so low, but the published record doesn’t let us see that.
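To make the arithmetic of that concrete, here is a minimal simulation of publication bias. Every number in it (trial size, a zero true effect, a publish-only-significant rule) is a hypothetical assumption invented for illustration, not a figure from the interview.

    import random
    import statistics

    random.seed(42)

    TRUE_EFFECT = 0.0   # assumption for the sketch: the drug truly does nothing
    N_PER_ARM = 50      # hypothetical participants per trial arm
    N_TRIALS = 1000     # hypothetical number of trials run across the field

    def run_trial():
        """Simulate one two-arm trial; return the observed effect and
        whether it clears a crude significance bar (about 2 standard errors)."""
        drug = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N_PER_ARM)]
        placebo = [random.gauss(0.0, 1.0) for _ in range(N_PER_ARM)]
        effect = statistics.mean(drug) - statistics.mean(placebo)
        se = (statistics.variance(drug) / N_PER_ARM
              + statistics.variance(placebo) / N_PER_ARM) ** 0.5
        return effect, effect > 2 * se

    results = [run_trial() for _ in range(N_TRIALS)]
    all_effects = [eff for eff, _ in results]
    published = [eff for eff, significant in results if significant]

    # The full record is roughly honest; the published subset is not.
    print(f"mean effect, all {N_TRIALS} trials: {statistics.mean(all_effects):+.3f}")
    print(f"mean effect, {len(published)} published trials: {statistics.mean(published):+.3f}")

Averaged over every trial, the estimated effect hovers near zero, but averaged over only the trials that cleared the significance bar, it looks solidly positive. A model trained only on that published subset would learn that the drug works.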

ANNA ROTHSCHILD: Yeah, there are just huge gaps. I understand that when the Journal of Trial and Error first started in 2018, even the editorial team were having trouble getting researchers to submit negative results to the journal. Has that changed?

SARAHANNE FIELD: Yes, it definitely has. Part of the reason that they had so much trouble at the start, and I say “they” because I didn’t join until 2021, is that it was so bizarre for researchers to think about actively submitting their negative results to a journal that was open to receiving them. It’s a matter of getting the message out and saying, hey, we’re an outlet that actually wants your studies that went wrong, not one you’ll have to fight tooth and nail to even get a review from. It’s just so far outside what literally decades of research culture says you should expect. So it’s very avant-garde in that way, and it still is.

Back in 2018, we were only about seven years out of the crisis of confidence, or the reproducibility crisis. It wasn’t until that really started to blow up that we started to say, hey, there’s value in null results. There’s value in unexpected findings. And then to go from there to, hey, we can actually publish them and learn from them. So it’s a matter of academia being slow on the uptake and just getting the word out: hey, there’s an outlet that does this.

ANNA ROTHSCHILD: Are more mainstream journals now also encouraging people to submit their negative results?

SARAHANNE FIELD: Absolutely. Yeah, we’re seeing that a little bit as well, at least in some of the metascience journals. We have a lot of journals that also accept the registered report format. I’m not sure if that’s come up on SciFri before, but registered reports are a type of article in which you’ve preregistered your plans for a study, and that plan has been peer reviewed before data is collected.

What that means is that if you plan a really good study and you get unexpected results, they’ll probably get published anyway. We actually give a guarantee called an IPA, an in-principle acceptance. Sounds like a beer, but it’s even better: we will publish your study regardless of what you find.

What that means is that a lot of journals now are publishing null findings simply because those authors had an IPA, a promise: we will publish this study, if it’s of good quality, regardless of the results. And so, by default, more null results and unexpected findings are actually being published, and in good outlets too.

ANNA ROTHSCHILD: That’s so great. Yeah, it really is better than a beer.

This was one example of a sort of change in the publishing industry. What other sort of larger structural forces or incentives in scientific publishing need to change in order for more negative results to be published?

SARAHANNE FIELD: I think one thing that really needs to change is how we see the research life cycle. Like I said earlier in the piece, we think of science in a very linear way, and certainly the public thinks of it this way too: we have theory and hypotheses, and that flows into a plan for a method. And then you collect data, and that’s your results, and then you interpret the results, and then you publish it. That’s how, at least at the start of our careers, we’re taught it goes.

But, in fact, the whole story in real life is very different. There’s a lot of stuff that goes wrong, a lot of stuff that just doesn’t quite work out, a lot of choices that get made and changed throughout the process. And what this means is that the final published study that is supposed to reflect that process is very static. It’s very linear, and it gives a very kind of stiff quality to a process that’s actually quite living, dynamic, and very iterative.

And so what I think needs to change, and something that is changing, is the publishing format itself. So instead of these 20- or 30-page PDFs that we read, there are other options for publishing. Modular publishing is one example, where you publish modules, literally chunks of a study that are all connected together, and that’s your study. They each have DOIs. They’re versioned. And they break up this static block of text into a more living and dynamic structure that better reflects the research process.
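As a rough sketch of that idea, here is one way a modular study record could be represented. The module kinds, DOIs, and fields are hypothetical, invented for illustration; no particular platform’s actual schema is implied.

    from dataclasses import dataclass, field

    @dataclass
    class Module:
        """One citable, versioned chunk of a study (hypothetical structure)."""
        doi: str                                        # each module gets its own DOI
        kind: str                                       # e.g. "theory", "method", "analysis"
        version: int = 1
        links: list[str] = field(default_factory=list)  # DOIs of the modules it builds on

    # A study assembled from connected modules instead of one static PDF.
    study = [
        Module(doi="10.0000/demo.theory", kind="theory"),
        Module(doi="10.0000/demo.method", kind="method",
               links=["10.0000/demo.theory"]),
        Module(doi="10.0000/demo.analysis", kind="analysis",
               links=["10.0000/demo.method"]),
    ]

    # Revising one step after review bumps that module's version
    # without republishing the rest of the study.
    study[-1].version += 1
    print([(m.kind, m.version) for m in study])

The point of a structure like this is that each chunk stays independently citable and revisable, which is what makes the record dynamic rather than static.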

ANNA ROTHSCHILD: So your publication calls itself an open access journal redefining failure. What have you personally learned about redefining failure over your years of being the editor in chief? Do you look at failure differently now?

SARAHANNE FIELD: I absolutely do. I’ve been in academia for about 10 years now, and I was part of the group of people, the majority of scientists, who were sort of worried that studies wouldn’t pan out the way I’d like them to. And although working in metascience in that time has helped me come away from that mindset, now that I’m actually face to face, so to speak, with some of these failed studies, and having to really draw information out of them, I’m really starting to embrace the messiness of science instead of shying away from it, and going, hey, this is actually really cool. It’s fun. It’s interesting. It’s challenging.

ANNA ROTHSCHILD: I love that.

SARAHANNE FIELD: Yeah. For scientists who are listening, when you see initiatives like ours, like the Journal of Trial and Error, if there’s any possibility to donate to these causes, that’s really important. We’re a diamond open access journal, for example, and the only reason we don’t have to charge anyone anything is that we’re helped by donations. So if you’re a scientist and you see an initiative that you think is really important, consider seeing how you can support it and help it keep running, because one thing we really struggle with in metascience, and in these kinds of initiatives, is getting the support to continue doing what we’re doing.

ANNA ROTHSCHILD: Well, thank you so much for taking the time to explain this to us, Sarahanne.

SARAHANNE FIELD: No worries, not at all.

ANNA ROTHSCHILD: Dr. Sarahanne Field is editor in chief at the Journal of Trial and Error.

Copyright © 2024 Science Friday Initiative. All rights reserved. Science Friday transcripts are produced on a tight deadline by 3Play Media. Fidelity to the original aired/published audio or video file might vary, and text might be updated or amended in the future. For the authoritative record of Science Friday’s programming, please visit the original aired/published recording. For terms of use and more information, visit our policies pages at http://www.sciencefriday.com/about/policies/

Meet the Producers and Host

About D. Peterschmidt

D. Peterschmidt is a producer, host of the podcast Universe of Art, and composes music for Science Friday’s podcasts. Their D&D character is a clumsy bard named Chip Chap Chopman.
