The replication games: what are they, and how they may help solve a crisis : Planet Money

The world of science has been stuck in an existential crisis over whether we actually know the things we thought we knew. Re-running an old study today doesn't always yield the same result. Same with re-enacting old experiments. Collectively, this is known as the "replication crisis."

Economist Abel Brodeur has come up with one way to help fix this crisis: he’s invented an internationally crowdsourced surveillance system, designed to keep social scientists honest. He calls it the “Replication Games.” 

This episode was hosted by Mary Childs and Alexi Horowitz-Ghazi. It was produced by James Sneed and Emma Peaslee, with help from Willa Rubin. It was edited by Jess Jiang, fact-checked by Sam Yellowhorse Kesler, and engineered by Ko Takasugi-Czernowin. Alex Goldmark is Planet Money’s executive producer. 


Don't hate the replicator, hate the game

Transcript

[COIN SPINNING]

ANNOUNCER: This is Planet Money from NPR.

MARY CHILDS: Alexi Horowitz-Ghazi.

ALEXI HOROWITZ-GHAZI: Mary Childs.

CHILDS: Yes. You and I took a little trip up to scenic Montreal, one of the jewels of French Canada, for a little Planet Money mission.

HOROWITZ-GHAZI: Yes, we did. And even though I was a little bit sad that that mission did not entail joining the maple harvest or, you know, like, infiltrating a poutine cartel--

CHILDS: Mm, next time.

HOROWITZ-GHAZI: --dare I say, next time-- it did have much bigger implications for anybody and everybody whose life is impacted by science, which I think is basically all of us.

CHILDS: I think that's right, yeah. We were there to meet a guy named Abel Brodeur. Abel's this very energetic economics professor in his late 30s at the University of Ottawa. And we found him bounding around the halls of this modernist school building in downtown Montreal. He was getting ready to host an event he's become sort of famous for, something called the Replication Games.

ABEL BRODEUR: It's getting exciting now.

HOROWITZ-GHAZI: How are you feeling?

BRODEUR: I'm feeling good. It's the beginning of the event, so this is the moment I'm full of energy and full of enthusiasm. And seven hours from now, it's going to be a different conversation.

HOROWITZ-GHAZI: Abel is going to be tired in seven hours, because at a replication game, he is running around between 16 teams of three to five people in a kind of hackathon. People will work all day to replicate recently published social science papers, to reproduce the results and see if the findings hold up.

CHILDS: Because ever since technology has made it easy to crunch data, we've been able to go back and check old research. And it turns out, it wasn't great. Rerunning an old study today, a lot of the time, does not yield the same result. The research no longer proves its conclusion. And the same thing often happens when we reconduct whole experiments. Altogether, these problems have become known as the "replication crisis."

HOROWITZ-GHAZI: A lot of people across academia have been trying to fix this, so we can trust research, so we can actually know what we know. And this event, the Replication Games, it's part of Abel's attempt to help solve this crisis.

BRODEUR: The idea is to change norms through monitoring. And just giving a small percentage, a small chance that we will monitor--

HOROWITZ-GHAZI: Yeah.

BRODEUR: --can massively change the behavior of everyone-- you know, change the way they behave, change the way they code, change the way they do research. So that's the goal.

CHILDS: After a few minutes, we head into a big lecture hall where Abel takes center stage.

BRODEUR: All right, folks, we're going to get started. Welcome to the Replication Games. Thanks for being here in Montreal with us. Let's get started. Today, we have 16 papers that are being reproduced. A couple of small things--

HOROWITZ-GHAZI: Around the room, dozens of social scientists are gazing up at Abel, looking a little bit nervous. Most of them have come from across Canada, and most of them are first-timers who now have to undergo this kind of awkward initiation rite.

BRODEUR: I'm gonna put the music because I know you guys need, like, you know, a bit of motivation. But you need to do the body movement. Everybody has to do it. All right? Does this sound good?

[CLAPPING]

BRODEUR: So we do it. I need you to do-- it's very easy.

CHILDS: Abel starts didactically clapping like an elder millennial camp counselor, and his audience joins in.

BRODEUR: Guys, thank you so much for being here. I hope you enjoy. This should be fun. And thanks, everyone.

[MUSIC PLAYING]

HOROWITZ-GHAZI: Hello, and welcome to Planet Money. I'm Alexi Horowitz-Ghazi.

CHILDS: And I'm Mary Childs. Over the past couple decades, the world of science has been stuck in an existential crisis over whether we know the things we think we know. It started in psychology, spread to medicine and economics. And now, people across disciplines are trying to figure out how to solve it.

HOROWITZ-GHAZI: Today on the show, the story of one economist, how he set out to learn what exactly has broken in the way social scientists create new knowledge and how he came up with his own daring and kind of wacky way to help fix it by building an internationally crowdsourced surveillance system to keep social scientists honest.

HOROWITZ-GHAZI: OK, so the replication crisis has been a pretty big deal for almost 20 years at this point. We've covered it on Planet Money before. The story of how economist Abel Brodeur first encountered the problem and why he set out to help fix it begins back in 2011.

CHILDS: Abel was getting his master's in economics. And he was writing a paper on whether smoking bans in restaurants and workplaces actually made people smoke less. He collected this huge data set.

BRODEUR: I had, like, amazing data from the CDC, which is public. I had smoking prevalence at the county level.

HOROWITZ-GHAZI: Abel says that all the established research at the time indicated that smoking bans were hugely effective, that they'd gotten lots of people to stop smoking.

CHILDS: But when Abel crunched his numbers?

BRODEUR: I was finding absolutely no effect. None. It was like nobody stopped smoking. I've played with the data for six months, and I find nothing.

HOROWITZ-GHAZI: And Abel was trying to make a name for himself in academia, which means getting his research published in an academic journal. And it's harder to get published if you find no effect, especially given that the existing literature did show an effect. So what Abel needed was something statistically significant.

CHILDS: For the statistically uninitiated, significant means that if there were really no effect, a result like this would turn up by chance less than 5% of the time. That is the cutoff for whether your findings count or not.

BRODEUR: There's this 95%, 5% cutoff that really matters. We're obsessed with these thresholds.
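That cutoff is easy to see in code. Here's a minimal sketch (illustrative only: the z-test and the specific z values are assumptions for the example, not anything from Abel's research; the one-to-three-star convention is the standard one the episode describes):

```python
import math

def p_value_two_sided(z):
    """Chance of a result at least this extreme if there were truly
    no effect (two-sided z-test)."""
    return math.erfc(abs(z) / math.sqrt(2))

# The cutoff the episode describes: below 5%, a result is called
# "statistically significant" -- and the smaller the p-value, the
# more stars it earns in the published table.
ALPHA = 0.05

for z in (1.5, 1.96, 2.58, 3.3):
    p = p_value_two_sided(z)
    stars = "*" * sum(p < cut for cut in (0.05, 0.01, 0.001))
    print(f"z = {z:<4}  p = {p:.4f}  significant: {p < ALPHA}  {stars}")
```

A z of 1.96 lands just under the 5% line, which is exactly the kind of just-barely-significant result the threshold obsession rewards.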

HOROWITZ-GHAZI: So Abel kept tinkering with his data set, changing his computer code to contort the data one way and then another, until eventually, one day, he found a way to analyze one subset of his data that gave him what he'd been looking for, a result demonstrating that smoking bans had decreased smoking and a result that was significant.

BRODEUR: It's like, there you go. I was so happy. I was in the library. I was just-- yeah, I was like, significant! I was so happy.

CHILDS: Finding a significant result meant that if his paper was published, he would get to put a little asterisk or star next to his results. And the more statistically significant the result, the more stars you got to claim.

HOROWITZ-GHAZI: But Abel's happiness did not last long. Because the more he thought about how he'd gotten that significant result, the more it started to seem like it was working against the whole goal of social science-- you know, to actually discover true, new knowledge about human behavior.

CHILDS: For example, policymakers need to know whether smoking bans work to make sound policy decisions. But here he was torturing the data to match the preconceived hypothesis. He thought--

BRODEUR: This is so stupid. What am I doing? I'm writing a piece saying that smoking bans are decreasing smoking prevalence, because I managed to find one that worked. I was like, this is dumb. I'm doing something wrong.

HOROWITZ-GHAZI: Abel ultimately decided not to use his tortured results. He wrote up his paper showing that he'd found no effect, even if it meant his paper was less exciting. And at first, he thought what he'd done to his data might have just been a one-off mistake on his part.

BRODEUR: But then you start talking to other students, and like, people are like, oh, yeah, that's how you publish.

HOROWITZ-GHAZI: Abel started to see that this was a problem of incentives. In order to advance their careers, academics have to publish papers in peer-reviewed journals. And the journals want to publish work that's statistically significant and novel. These papers can win big prizes and define new research agendas for decades.

CHILDS: But because of all that, people were doing what he had done-- trimming and squeezing and coaxing the data towards significant results. And that can easily cross over into a kind of data manipulation called P-hacking, P as in probability. And Abel says, it can happen almost subconsciously.

BRODEUR: Because the project took, like, three or four years of back and forth between co-authors' discussion. Then six months later, you go back. You exclude again these other people, or you do something different. And then over time, all these decisions, actually, when you look at it from the outside, it's like, this is crazy what you've done.

HOROWITZ-GHAZI: To figure out how widespread this problem might be, Abel decided to research the research. He and a couple of his colleagues scraped the significance data from a bunch of the top academic journals, the distribution of stars that published researchers had racked up. And when they looked at the distribution, they found a noticeable hump just above that 5% significance threshold. Now, some of this could be because some people whose research only hit 6% didn't bother submitting. But it could also be because some researchers were tweaking their data analysis to just barely get results that would be more likely to get published.

CHILDS: But when Abel and his colleagues started submitting their research for publication, they got a resounding series of nos. Academic publishing seemed hesitant to open up an empirical reckoning. After a few years, they did manage to publish their paper in 2016. They called it "Star Wars-- The Empirics Strike Back." Do you get it?

HOROWITZ-GHAZI: [IMITATING WOOKIEE]

CHILDS: Oh. You definitely get it. Thank you, Alexi. So Abel puts aside this whole idea of an empirical reckoning, and he moves on to other economic projects. He gets tenure. And eventually, he learns that his little paper has become kind of a sleeper hit.

BRODEUR: It took a long time before I realized, actually, the paper was, like, well-known before people started talking to me at conferences, like, are you the Star Wars guy? That's the moment. Like, I needed someone senior to tell me, like, no, this is really important, what you're doing.

HOROWITZ-GHAZI: There had been efforts to solve parts of the replication crisis. Some of the top journals had started asking their contributors to release replication packages with their papers. That's basically the data and code they'd use to find their results. And researchers were also starting to pre-register their hypotheses before actually doing the research, so that if the data didn't support it, they couldn't futz around and pretend like they'd been looking for something else all along.

CHILDS: For his part, Abel wondered if there was anything he could do. Like, not just study the problem, but actually help fix it.

BRODEUR: How do I change the incentives? How do I potentially have an impact on the norms, how people do research? The second I think about the norms, I think about, oh, it needs to be large-scale. Nobody's going to change their behavior if it's a small-scale thing. So it needs to be big.

HOROWITZ-GHAZI: Journals do have peer-review systems where they try to poke holes in research, but they didn't always totally get under the hood to scrutinize all the code and data, so researchers weren't necessarily worried that their stuff would get checked.

BRODEUR: A nice analogy, I think, is imagine you go on a date. You might shave. You might take care of your body. You might take care of yourself-- a bit of deodorant, you know, perfume maybe, if it's your thing. You're going to make an effort to look prettier than you are usually. The other person fully understands that this is a nice version of you or fully aware of that, but I don't know by how much. And perhaps it's not. Or maybe you made a massive effort. And usually, you're a disaster. You never clean nothing. So when, you know, you go to the apartment, it's like, oh, my goodness, this is your apartment? So research is a bit like this.

CHILDS: The published research is the cleaned-up version.

BRODEUR: So when I see a published paper, I know it's been, you know-- it's beautiful. It looks nice. But there's an information asymmetry. I don't know how dirty it is, actually.

HOROWITZ-GHAZI: Abel thought one thing that might help this problem was to make researchers care as much about the cleanliness of their data analysis as the significance of their results. And to do that, he'd have to go full-on Room Raiders on people's published papers, to shine a fluorescent spotlight on the backrooms of their research. If you could take all of the data that somebody had gathered for a given paper and meticulously retrace their coding steps, you could see if it was possible to replicate their findings. You could make sure there weren't any errors, conscious or unconscious, in what they'd done.

CHILDS: But first, you'd have to get the code. People weren't in the habit, then, of publishing all their data and code. And when he emailed researchers asking, nobody responded. So he decided to create an official-seeming institution.

BRODEUR: It needs to be a big institution with a website with tons of famous people on it. And when you send the email, people will be like, what the hell is this thing? I need to respond. It's legit.

CHILDS: So in 2022, he creates a website for a thing he starts calling the Institute for Replication.

BRODEUR: A friend of mine, his wife did the logo for free, like a design. Like, you know what I mean? Like, just bare bones.

HOROWITZ-GHAZI: He recruits some serious famous economists for the board to put on his legit-looking website. And pretty soon, he does start to get responses to some of his emails. He's able to get some data sets and coding packages. And he convinces some colleagues and junior researchers to start doing some replications one by one, in exchange for a co-author credit on one big paper.

CHILDS: So Abel can get the data and the code, but there's still a second problem, which was the question of scale. Replicating one paper at a time was not going to do much to change the system. What he needed was to create the sense within the academic community that anybody's work could be checked at any time.

HOROWITZ-GHAZI: It's like an IRS for the ivory tower.

BRODEUR: So then I thought, OK, we need to mass-reproduce journals. So then I was like, OK, I need to get maybe a few hundred replications or reproductions per year. Then I'm thinking, how do you do that?

HOROWITZ-GHAZI: The answer, Abel says, came to him kind of by accident. Around the time he got his Potemkin website up and running, he got an unrelated invitation to a conference in Oslo, to a couple of seminars. He was planning his trip about a month ahead of time. And he noticed that he had seminars on a Wednesday and on a Friday.

BRODEUR: And I thought, like, what the hell do I-- am I going to do on Thursday? Like, I've never been to Oslo. I'm sure it's pretty and nice. But a full day, like, I'm going to walk around, and then I'm going to have, like, six, eight hours just to relax. So I just emailed the person who invited me, and I said, could we just, like, do a small workshop?

CHILDS: It would just be, like, 10, maybe 15 people. Abel posted about it on social media.

BRODEUR: You can come to Oslo. It should be fun. If you come, you're going to get co-authorship to a meta paper. We're going to reproduce papers. Let's have fun. And then-- I don't know-- like, 70, 80 people ended up registering really fast. I closed the registration because I have no money. We don't have food. I didn't tell the guy it would be 80. I said it would be 10.

HOROWITZ-GHAZI: So Abel is sitting there a couple months before the conference with this sudden, unexpected surge of interest and no plan.

BRODEUR: I have 80 people, some coming from Ireland, others coming from Sweden, others coming from France. Like, what do I do with these people?

CHILDS: He starts collecting papers that people could replicate. And he puts everyone into teams by their field-- health economics, development economics.

BRODEUR: The first time, I had no idea what was going on. I was super stressed.

HOROWITZ-GHAZI: He had no idea what was going to happen, what they would find. Abel heads to Oslo and convenes the first ever replication game in October of 2022. And when he checks in on one of the first teams of replicators working on the first paper--

BRODEUR: I go talk to them, and they're like, Abel, there's a problem. Like, there's tons of duplicates. I'm like, what? He's like, yeah, one of the data sets, like, there's tons of people with the same age. And then I come back later on, and it's like, OK, 75% of one data set, everybody is 62 years old, all women, all living in the same village, all doing the same thing. It's the same. It's duplicates. And it's a paper about the inequality. If everybody is the same, there is no inequality. And that was driving some of the mechanism.

CHILDS: The underlying data upon which this entire paper rested had been merged improperly, like a big copy-and-paste error. To Abel, this was disconcerting.
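A check like the one that caught this takes only a few lines of code. A sketch with made-up rows (the ages and villages here are hypothetical, just echoing the episode's description):

```python
from collections import Counter

# Hypothetical survey rows: (age, sex, village, occupation).
# In a healthy data set these should mostly be distinct people.
rows = [
    (62, "F", "Village A", "farming"),
    (62, "F", "Village A", "farming"),
    (62, "F", "Village A", "farming"),
    (35, "M", "Village B", "trading"),
]

counts = Counter(rows)
dupes = sum(n - 1 for n in counts.values() if n > 1)
share = dupes / len(rows)
print(f"{dupes} duplicate rows out of {len(rows)} ({share:.0%})")
```

When a merge goes wrong, a check like this lights up immediately: half the "people" in this toy data set are copies of one 62-year-old woman.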

BRODEUR: And I was like, oh, boy. That's the first paper. That's the first game. What did I create? It's going to be like this all the time, people finding crazy mistakes. And did I just open a can of worms that actually most papers are just, like, terrible, full of crazy coding errors?

HOROWITZ-GHAZI: Abel was a little afraid he might be about to discover that all papers were full of worms and that science wasn't real. But luckily--

BRODEUR: By the end of the day, like, many teams had, like, good day. Everything was clean and so on. And I was like, oh, it's, like, not terrible.

CHILDS: He could relax. It turns out most of the papers were not terrible. Even better, with that first event in Oslo, Abel had found a way to crowdsource this massive academic auditing project, essentially, for free.

[MUSIC PLAYING]

CHILDS: If he could host enough replication games every year, he just might be able to scare the social sciences into acting right.

HOROWITZ-GHAZI: But what actually happens on the ground during these things? After the break, we enter the 51st Replication Game.

CHILDS: So we are at a replication game in real life in Montreal. Abel Brodeur says that the game part is a little bit of a branding exercise. There are no winners or prizes. It's more like an all-day hackathon.

HOROWITZ-GHAZI: The teams are mostly economists with a few groups of psychologists, and they've already chosen the papers they'll focus on. Using just what they have in the replication package, they will have seven hours to check the code, examine the decisions their paper's authors made, and see if the results reproduce. And then they'll report on whatever they find, so it'll be out there on the record, whether that's a nothingburger or a bombshell.

CHILDS: After everyone claps their rendition of "We Will Replicate You," the researchers start streaming out of the lecture hall, and we run after them.

HOROWITZ-GHAZI: Jolene!

JOLENE HUNT: Yeah.

HOROWITZ-GHAZI: Can I talk to you for a sec?

HUNT: Yeah.

HOROWITZ-GHAZI: Hey. I'm Alexi.

HUNT: Oh, hi, Alexi.

HOROWITZ-GHAZI: Just set the scene for me. Like--

HUNT: So we just finished clapping a cheesy opening song. And we're about to split up into rooms.

CHILDS: The groups are scattering into classrooms across the building to start digging into their papers. Economics PhD student Jolene Hunt and her team are looking at a paper about education. They're all education economists. And so Jolene has sort of a pedagogical view of the day.

HUNT: In PhDs, we often don't get a chance to actually work together.

HOROWITZ-GHAZI: Yeah.

HUNT: We're usually just kind of on your own, in your silo. And then, like, you talk to each other when you're having problems. But it'll be nice to actually work together and see if my friends are actually any good at their jobs. [LAUGHS]

CHILDS: Rolling up their sleeves, getting down to the actual coding. Because they're only going to have seven hours, each group has a little list of the things they've decided they're going to try to get through today. There's one group, led by a guy named Thibaut Duprey, who is sitting alert and ready to unpack a paper about pensions in different countries.

THIBAUT DUPREY: Essentially, the paper focuses on 10, something, countries, but then the data set seems to have a few more countries in there. So why some countries were included, others were not? What if you drop a few countries out of the data set? Maybe there's something to be explored there.

HOROWITZ-GHAZI: And we wanted to understand the stakes for the day-- you know, why people would attend this event to do a full day of, like, manual economic labor for no dollars? So we asked them.

CHILDS: What are you doing here today?

FERYEL LASSOUED: Well, we're trying to see if we can replicate the results from a paper that is-- that took a look into the effects of negotiation.

CHILDS: I've started with a group in the lecture hall, huddled around their laptops. Feryel Lassoued is a researcher at the University of Saskatchewan. And she's in a group of economists focused on agriculture, with Qixia Wu from the University of Ottawa. You want to find that the paper checks out.

QIXIA WU: Mm, yes, you can-- you can think like that. Yeah.

CHILDS: OK. In terms of your personal incentives, would it be cooler to find, like, oh, no, this paper's messed up?

HOROWITZ-GHAZI: Feryel starts laughing, seemingly at the premise of the question.

CHILDS: You're laughing so hard. Why? Why are you--

[LAUGHTER]

LASSOUED: That's mean!

WU: I don't know, like, how to answer it.

LASSOUED: I feel bad for Diego and Juan here.

HOROWITZ-GHAZI: Those are the authors of the paper.

CHILDS: Do you know them?

LASSOUED: No. No.

CHILDS: You just have sympathy for them.

LASSOUED: Yeah, because we've been-- we've all been in their shoes.

CHILDS: OK, fair. But we go up to another group, and they're kind of like, duh.

FELIX FOSU: Yeah, we are trying to find something. [LAUGHS]

CHILDS: That's Felix Fosu, a postdoc at Queen's University. His group is digging into a paper about cartels in Mexico. I tell him what the other researchers said, that maybe it isn't very nice to want to find something terribly wrong in someone else's research. But it seems like, to Felix, I have now misunderstood things in the opposite direction.

FOSU: No, we definitely want to find something. Yeah.

CHILDS: Why?

FOSU: I think replication is something that we have to take very important in economics. We need to make sure that our results are indeed claiming what they claim to be. We need to know what works and what does not work.

HOROWITZ-GHAZI: Now, regardless of their specific goals, the actual work of replication is divided into two main phases. Phase one is the same for every team, pure and simple replication. They will all check the paper's code, the programmed instructions that take some raw data and put it into a bunch of tables that comprise the foundations for the paper's conclusions.

CHILDS: So now, each team takes the original code, copy and pastes it, and basically hits Enter to see if it runs.

HOROWITZ-GHAZI: And one type of mistake that they might find is if the code is really broken. They might find that when they push the button, the code just doesn't run. The computer just says "Error."

CHILDS: Or another kind of mistake they might find, maybe the code runs-- great-- but it spits out a different answer than what the authors wrote-- not so great. Or maybe the raw data is messed up in some way, like cells merged or transposed or erased or accidentally filled down the whole column.
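Phase one boils down to a numeric diff. A minimal sketch (the variable names and values are made up; the -18.114 just echoes the coefficient checked later in the episode, and the tolerance allows for rounding in the printed table):

```python
# Phase one in miniature: re-run the analysis, then compare each number
# against the published table.
published = {"price_effect": -18.114, "firm_size_effect": 2.301}

def mismatched(published, rerun, tol=1e-3):
    """Names of table entries that are missing or differ on the re-run."""
    return [name for name, value in published.items()
            if name not in rerun or abs(rerun[name] - value) > tol]

# What the replicators' re-run of the original code produced.
rerun = {"price_effect": -18.11432, "firm_size_effect": 2.30096}
problems = mismatched(published, rerun)
print("replicates cleanly" if not problems else f"check these: {problems}")
```

An empty mismatch list is the boring, happy outcome; a missing variable or a number that won't match is where the real work starts.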

HOROWITZ-GHAZI: So we asked the agriculture team to show us exactly what they are doing.

CHILDS: So I can't code. I don't know what I'm looking at. What am I looking at?

WU: Well, actually, it's kind of nothing here because I just started.

HOROWITZ-GHAZI: This is Qixia again. The paper her team picked, by Diego and Juan Pablo, is about the price of eggs at big firms versus small firms, how much pricing control they have.

CHILDS: I look at her laptop over her shoulder.

WU: So what you can see here is the variables they have. We have the firms. We have the price. We have the day and month and year.

CHILDS: Now, Qixia pulls out her iPad to scroll through the published paper.

WU: So we're going to firstly check whether we can perfectly reproduce all the numbers and using the original data and codes. If I can run part of this, maybe you can see.

CHILDS: OK, she's pushing a little blue arrow, a little Play button.

WU: So basically, if I run this code, you will see the results.

CHILDS: Oh, a little box appeared in a different window.

WU: Yes. So if you check minus 18

CHILDS: Minus 18.11432, and I'm looking at the published version. It says minus 18.114, star, star, star.

WU: Yeah. So they're basically exactly the same.

CHILDS: It's the same.

WU: Yeah, it's the same. That's good, you know.

CHILDS: So we have a win.

WU: Yeah. Yeah, that's one. And we have more to check.

CHILDS: A lot more. But we got one. That's great.

HOROWITZ-GHAZI: Qixia will keep plugging in all the data and checking the results, though, so far, it looks like the paper is checking out.

CHILDS: And if the paper passes the whole first phase, if the code does spit out all the answers that the author said it would, then the replicators move on to phase two, robustness checks.

WU: For robustness checks, we kind of, like, change some parts of the model to see whether the original conclusion, it still kind of makes sense.

CHILDS: This phase is less objective and requires more context and thought. It requires the economist to consider the questions that the paper authors didn't think of or didn't write about, the decisions the authors made, and the decisions they could have made, but didn't. It's like trying to see the negative space in and around the paper.

HOROWITZ-GHAZI: The kind of things they might find in this phase? You know, did the author say that this data set represents something it doesn't? Did they use an appropriate data set? And did they use that data in a way that made sense? Did they include or exclude certain specifications or factors in order to have a result that looked exciting?

CHILDS: There are infinite potential decisions that researchers make or don't make. And the replicators have limited time, so they're not going to be able to consider and analyze everything. They're just going to get through as much as they can.
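One common robustness check is leave-one-out: drop each unit in turn and see whether significance survives. A sketch with hypothetical numbers (the per-cartel "effects" and the simple z-test are assumptions for illustration, not the paper's actual model):

```python
import math

def p_value(sample):
    """Two-sided z-test of 'mean differs from zero' (sd taken as 1) --
    a stand-in for whatever regression a real paper would run."""
    n = len(sample)
    z = (sum(sample) / n) * math.sqrt(n)
    return math.erfc(abs(z) / math.sqrt(2))

# Hypothetical per-cartel effect estimates: one extreme unit (the last)
# carries the whole result.
effects = [0.1, -0.2, 0.3, 0.0, 0.2, -0.1, 6.0]

full_p = p_value(effects)  # significant on the full sample
fragile = [i for i in range(len(effects))
           if p_value(effects[:i] + effects[i + 1:]) >= 0.05]

print(f"full-sample p = {full_p:.3f}")
print(f"dropping unit(s) {fragile} flips the result past p = 0.05")
```

If removing a single unit flips the finding from significant to not, the headline conclusion is resting on one observation, which is exactly the kind of fragility replicators go hunting for.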

HOROWITZ-GHAZI: And as the hours start to tick by, it becomes clear that most teams are not turning up major issues, until mid-afternoon. We check in with this one group looking at a paper about government policies.

SIMON PRIVO: The basic premise is, when people trust the government, do they tend to comply with policy more?

HOROWITZ-GHAZI: This is Simon [? Prevo. ?] He's an econ master's student and a public sector researcher. The paper found that when people trust in government, they comply with policies more readily. So those policies cost the government less money. And Simon and his teammates are now trying to unravel a mystery. Because when they went to look at the raw data that underlies the paper's findings, it looked a little funny. This is Scott Morrier, another econ master's student on the team.

SCOTT MORRIER: There was a folder called Raw for the raw data, but the files were all labeled Clean. So we were a bit, like, confused to how-- you know, it was counterintuitive, right? So [? Florian ?] downloaded the data straight from the source and followed the instructions to create the one data set.

CHILDS: They recreated what should be the same data set, following the instructions that the authors left. They ran the code.

MORRIER: And then that's when we started getting the errors because variables were missing. And then as we kept going through, we kept finding more variables that-- that were being used in the regression, but weren't necessarily included in the supposedly what is meant to be the raw data set.

HOROWITZ-GHAZI: Some variables are missing from the raw data set. The authors seem to have used data in their analysis that they did not account for. Not good.

CHILDS: And then we visited the group looking at that paper about cartel behavior in Mexico. That group has found something, too.

FOSU: So in this paper, they look at the presence of different cartels.

HOROWITZ-GHAZI: They tell us the paper looks at 20 cartels and data about what types of crimes were happening and when, to see if cartels changed the types of crime they did after the government ramped up a big war on drugs.

FOSU: What we found so far is that if you exclude one of the cartels, then the results become insignificant over here.

CHILDS: So it's just the one cartel making the results?

FOSU: One cartel making the results.

UNIDENTIFIED SPEAKER: So if you remove only one, then the results collapse, right?

CHILDS: Oh, no. You found something.

FOSU: Yep.

HOROWITZ-GHAZI: They found something in the first test they tried.

CHILDS: Is that luck? Would you call that luck?

FOSU: No, I think it's something that we thought about it. Like, that's why we placed it one on the list. We thought it's a good place to search, so partly luck, but partly because we thought about it carefully. Yeah.

CHILDS: That sounds like not luck.

FOSU: [CHUCKLES]

CHILDS: They're going to keep investigating. And depending on what they find, this paper is maybe not passing this phase, the robustness check phase. Can you draw a big, sweeping conclusion about the effectiveness of a war on drugs from a change in just one cartel? They suspect this paper will not hold up.

HOROWITZ-GHAZI: Over lunch, the cartel team starts puzzling through, like, how does this sort of thing even happen?

DAVID BENATIA: We have to be honest. For sure, when you do these kind of papers, you do these kind of things, right? You check whether-- when you have these-- you know, you do these type of robustness checks.

HOROWITZ-GHAZI: David Benatia, a professor on the team, says this is a robustness check that he would have tried if he had been the author. At the end of the day, our researchers limp back into the auditorium to present what they'd all found.

BRODEUR: So the way we like to finish is to give each team about one minute to tell us how your day went, the different challenges you faced. Maybe we can start from the beginning, move around.

SPEAKER 3: We didn't find anything too major. There was a lot of missing variables and attrition. So--

SPEAKER 4: --did replicate well, like, all the code run, but--

SPEAKER 5: Everything ran fine. We tried to poke holes in it, but we couldn't really do it.

HOROWITZ-GHAZI: For the 71 replicators in the Montreal game, 14 teams got to uphold science by double-checking some published work. They spent a day coding with their friends and peers, learned some new coding hacks, and new ways to make choices in research. And they'll get a little authorship credit on a meta paper in a real journal.

CHILDS: The other two teams, the group who discovered the missing numbers, the cartels group, they've gotten, like, a toxic golden ticket. Now they'll get to write their report-- polite and formal, but nonetheless, kind of a bombshell-- saying just how flawed the research is.

HOROWITZ-GHAZI: Maybe that makes a splash, and everyone thinks they're brilliant. Or maybe it makes a splash, and everyone hates them.

CHILDS: Next, Abel will write an email to the authors, a somewhat standardized note saying, hey, here's who we are and what we do. We found some mistakes in your paper. Would you like to respond? He does not assume nefarious intentions. And the authors get an opportunity to try to fix the problem and prepare their formal response before anything goes public.

HOROWITZ-GHAZI: And because Abel handles it from his position at the Institute for Replication, it doesn't feel so personal. And the replicators have a little bit of insulation.

CHILDS: We asked Felix from the cartels group what this might mean for him as a more junior person, a person earlier in his career. It's kind of throwing rocks towards the top of the profession. He'd wanted to find something, and now he has.

FOSU: I think it's good work that we are doing, but what the implications are, I don't know. [LAUGHS]

CHILDS: We'll find out.

LASSOUED: Yeah, we'll find out later.

HOROWITZ-GHAZI: So after a few months, Abel sends his neutral-toned official email to the authors of the paper that Felix and his team had replicated in Montreal, saying that the code had worked, but that they found the results don't hold up.

CHILDS: And for the authors of that paper, getting that email--

GIACOMO BATTISTON: When we opened that email, we were actually happy, because we actually read, "Your paper replicates."

CHILDS: This is Giacomo Battiston, a researcher at the ROCKWOOL Foundation in Berlin and one of the four co-authors of the paper. He says they were thrilled to have their coding results publicly validated.

HOROWITZ-GHAZI: And when it came to the bigger problem, the fact that their results had fallen apart when their replicators removed that one cartel--

BATTISTON: We were not particularly worried about the content, because it was kind of self-evident that this was not really challenging.

CHILDS: Not really challenging their findings, because they think the replicators misunderstood the basic hypothesis of their study. They say they started with this idea that there was this one big, new cartel in Mexico, Los Zetas, and it had been doing a lot of crimes, generating a lot of data points. Here's another author, Marco Le Moglie, a researcher at Bocconi University in Milan.

MARCO LE MOGLIE: When we start to think about this project, actually, we had in mind that the specific cartel of Los Zetas.

HOROWITZ-GHAZI: They say they set out to investigate if the cartel Los Zetas had changed the types of crimes they did after the war on drugs, and their paper succeeded at proving that.

CHILDS: What the Montreal replicators did, in the opinion of the paper authors, was to remove the main part of the data set and then say the conclusion was broken. You can do that, but why would you?

PAOLO PINOTTI: To be blunt, it doesn't make any sense.

HOROWITZ-GHAZI: That is Paolo Pinotti, a professor also at Bocconi University. He said it was like doing a study on the effect of spreadsheets on productivity and then saying, oh, but the results don't hold up if you exclude Microsoft Excel.

CHILDS: We looked at their paper, and to be fair to the replicators, the original paper does not say explicitly, hey, it's just Los Zetas we're focusing on. The data from Los Zetas is lumped in with several other new cartels. So if the paper authors meant to study the behavior of just Los Zetas, that was never quite spelled out.

HOROWITZ-GHAZI: Mary, when we first rocked up to the Replication Games back in May, I think we were both excited at the idea that we might watch some junior economists uncover some major problem with a published paper in real-time. But Abel had a different take when we asked him about the problems that the teams there had uncovered, like the team, for example, that had found issues in the government trust paper. That seems like success.

BRODEUR: But success, it depends what you define as success.

HOROWITZ-GHAZI: Well, the process working as it's supposed to.

BRODEUR: I mean, in a world in which science works, I think this should have been picked up before it's published, cited, and disseminated. So I don't think it's a success, right?

HOROWITZ-GHAZI: That's fair.

CHILDS: These papers they're replicating have been published, meaning they got past journal referees, professional economists who were supposed to be gatekeeping the quality of what they publish. Some of the top journals do check that the code runs. They press Play. But in the government trust case, the journal referees apparently didn't catch that numbers were missing. But when the paper said, oh, the documentation is in the replication package, it was pointing to nothing.

HOROWITZ-GHAZI: The journal declined to comment, though they said they have a robust process to investigate concerns.

BRODEUR: To me, this is a failure of the system-- which is fine. There's always going to be failures. I just think that the rate of failures is higher than what a lot of people think.

HOROWITZ-GHAZI: Yeah.

BRODEUR: And it shouldn't happen that often.

HOROWITZ-GHAZI: In every replication game so far, they have found something, though not yet any career-ending fraud. It's more like major data or coding errors or robustness fails.

CHILDS: So the broader system is still broken, even after putting on more than 50 games and replicating about 300 papers.

HOROWITZ-GHAZI: Still, there are signs that the games are having an effect. Several replication gamers told us their experience here will change how they do their research, because they know that their papers, too, might someday end up under Abel's spotlight.

CHILDS: Abel says the more games he can put on, the more the rest of the academic world will start to shift, because the evidence shows that people don't actually change their behavior based on the severity of the potential punishment, like losing their job or public shaming, or whatever. They change behavior based on the odds of enforcement, the odds of actually getting caught. Just the idea that someone might walk through their apartment one day, that's enough of a threat to keep it clean.

[MUSIC PLAYING]

CHILDS: Hey, listeners, what are you doing on the evening of Monday, April 6? Are you free? Because if you are, I think you should come to the 92nd Street Y to hang out with me and some of my friends. It is the debut stop on our 12-city book tour to celebrate the publication of our first ever book, Planet Money-- A Guide to the Economic Forces That Shape Your Life. Every stop on this tour will be unique with different hosts and guests. And if you get a ticket, you can get a tour-exclusive tote bag with your purchase while supplies last. So at the 92nd Street Y on Monday, April 6, it'll be me, Amanda Aronczyk, Darian Woods, book author Alex Mayyasi, and the economist Emily Oster, who is most famous, I think, for letting pregnant women know that they can actually drink coffee. So please come and bring your very best economic questions for us. We can't wait to hang out. Find the show nearest to you at the link in the show notes, or go to planetmoneybook.com. And thank you.

HOROWITZ-GHAZI: If you want to hear more about the replication crisis, we've done a few episodes about it and the efforts to fix it. We'll link to those in the show notes.

CHILDS: If you want to support our work, you can donate at npr.org/donate. And thank you.

HOROWITZ-GHAZI: This episode was produced by Emma Peaslee and James Sneed, with help from Willa Rubin. It was edited by Jess Jiang, fact-checked by Sam Yellowhorse Kesler, and engineered by Ko Takasugi-Czernowin. Alex Goldmark is our executive producer. I'm Alexi Horowitz-Ghazi.

CHILDS: And I'm Mary Childs. This is NPR. Thanks for listening.

Copyright © 2026 NPR. All rights reserved. Visit our website terms of use and permissions pages at www.npr.org for further information.

Accuracy and availability of NPR transcripts may vary. Transcript text may be revised to correct errors or match updates to audio. Audio on npr.org may be edited after its original broadcast or publication. The authoritative record of NPR’s programming is the audio record.