We are all teeming with bacteria that help us digest food or fight disease, but two people might play host to a very different array of bacteria due to diet, where they live, hobbies or even medical histories.
As a result, scientists have struggled to work out which bacteria are linked to disease and which protect against it. Studies that compare people's bacterial communities, collectively known as the microbiome, to explore what that variation means can disagree with one another, in part because they analyzed different groups or didn't sample enough people.
Susan Holmes, a professor of statistics, thinks that teasing out which differences matter for disease and which are simply differences between people comes down to careful statistics and to repeating studies.
At the recent American Association for the Advancement of Science conference in Washington, D.C., Holmes spoke with Stanford Report about the challenges she and her colleagues face when analyzing microbiome data. She argues that scientists should be open about how their studies are done and collaborate to revisit old data to ensure their findings are real.
What do you like about combining your expertise in statistics with the microbiome?
I love learning new things, new science. I also like working on open problems where people know little, and I like teaching. I feel that teaching statistics to biologists is not a waste of time, even if they think they hate it. The people I work with, I like to teach them how to analyze their own data. Usually what happens is they send me the data, and I do a first analysis. I will do all the “dirty” work preparing the data, and then they can run the script and change a few things, make the picture nicer and so on; it’s very empowering. I also like the collaboration where they’re teaching me biology. I forget it all because the names are so hard, but I’m teaching them statistics through this exchange. I have pursued this route in a recent book I published with Wolfgang Huber on reproducible workflows for biologists called Modern Statistics for Modern Biology.
Recently, scientists have been talking more about reproducing scientific results to make sure they hold up. What are the challenges of reproducing results?
The first level is really statistical. If I had the same data, but I was a different statistician and I made different choices – and there are lots of choices you could make – would my results be the same? Would I decide that the most important factor was whether you drank soda or not, and would the same type of bacteria be impacted?
The most important kind of reproducibility is often what we call replicability, where we look at data from a different study. For instance, for a study we did about preterm birth, one of the reviews of the first paper that got published in PNAS said, “Well, yes, but you have all women from the Bay Area. They are mostly Caucasian and Hispanic. That is a problem and it might be an ethnic link.” So, we did a follow-up study, which was an exact replication with all the same design elements, but it was in African American women from Alabama.
We found the same strains of specific bacteria associated with preterm birth, but because it was a replication it was much harder to publish. We also made the analysis reproducible for everybody by publishing all the data and all our code at the Stanford Digital Repository, and people have been rerunning it.
Why do you think researchers have struggled with reproducibility?
There’s been a movement which has said that most research is wrong. It’s making people feel they’re doing something wrong, but that’s not the problem. The problem is that the publication system pushes you, because you can only publish if you get a good, that is, small, p-value [a number from a statistical test indicating how likely results at least as extreme would be if chance alone were at work]. Researchers then massage the data until they get the p-value they need, and then the result is not reproducible. But suppose we were much more transparent and said, “You’re allowed to publish things that are significant or not significant, because it’s useful down the road; just publish all your data and the code you used for the analyses.” If you’re transparent about what you’re doing, there’s much less opportunity to shoehorn the data into some wrong conclusion.
I feel that people misuse summaries in statistics. They feel as if statistics is going to summarize everything into one value, as if one p-value is going to summarize five years of work. It’s ridiculous. Everything is multidimensional, it’s complex. But if we could publish more of the negative results and all of the data, we would advance science much faster, because people would get insight from the negative results.
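To make that concrete, here is a minimal simulation sketch, entirely made up rather than drawn from Holmes's studies: if many bacterial taxa are screened and only the smallest p-values are reported, some "significant" findings appear purely by chance and will not reproduce.

```python
# Illustrative sketch with simulated (made-up) data: screening many taxa when
# no true differences exist still yields some small p-values by chance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_taxa = 200        # hypothetical number of bacterial taxa screened
n_per_group = 20    # hypothetical samples per group

false_hits = 0
for _ in range(n_taxa):
    group_a = rng.normal(size=n_per_group)   # no real difference between groups
    group_b = rng.normal(size=n_per_group)
    _, p_value = stats.ttest_ind(group_a, group_b)
    if p_value < 0.05:
        false_hits += 1

print(f"{false_hits} of {n_taxa} taxa look 'significant' by chance alone")
# Roughly 5% of taxa clear the 0.05 threshold even though nothing is real,
# which is why reporting only the "winners" produces irreproducible results.
```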
What do you think is one of the challenges for reproducible microbiome research?
The biggest source of variability in the microbiome is the person-to-person variability. It’s a problem if you’re looking for causality. That’s a red flag word for us – causality – meaning something about the bacterial community causes some disease. You actually don’t know whether it’s the bacteria or whether the bacteria are a sign of something that happened before. It’s very much individualized, so everybody’s history matters.
For instance, the person-to-person variability is huge for people who travel a lot, people who have eaten all kinds of different food. Diet, history, genetics, culture – all of these things make for a patchwork of causes for this person-to-person variability, but it makes clinical trials very hard. When we do trials, we match people based on their gender, social status, age, but you can’t match everything: their travel history, eating habits, family environment.
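A rough sketch of that point, again with entirely simulated numbers: when the spread between people dwarfs the spread within a person, comparisons across unmatched people largely measure the people rather than the exposure of interest.

```python
# Illustrative sketch with simulated (made-up) log-abundances: between-person
# variation dominates within-person variation, as in many microbiome datasets.
import numpy as np

rng = np.random.default_rng(1)
n_people, n_samples_each = 30, 5

person_baseline = rng.normal(0.0, 2.0, size=n_people)           # person effect
noise = rng.normal(0.0, 0.5, size=(n_people, n_samples_each))   # within-person
log_abundance = person_baseline[:, None] + noise

between_var = np.var(log_abundance.mean(axis=1), ddof=1)
within_var = np.mean(np.var(log_abundance, axis=1, ddof=1))
print(f"between-person variance ~ {between_var:.2f}, within-person ~ {within_var:.2f}")
# With numbers like these, a modest effect of diet or disease is easily buried
# under, or confounded with, differences between the people themselves.
```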
How hard is it, then, to prove that bacteria in the microbiome are causing a disease or certain symptoms?
Well, you can prove it in animal models. We also do what are called longitudinal studies. For example, we follow the same patient over a period of time and then we change their diet or give them antibiotics. If you do a perturbation study on that same patient, the patient serves as their own control; it’s already a step in the right direction.
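A minimal sketch of that design, using hypothetical before-and-after measurements rather than real study data: pairing each patient with their own baseline removes the large person-to-person differences from the comparison.

```python
# Illustrative sketch with simulated (made-up) data: a perturbation study in
# which each patient's post-perturbation sample is compared with their own baseline.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_patients = 15

before = rng.normal(5.0, 2.0, size=n_patients)                # baseline log-abundance
after = before - 0.8 + rng.normal(0.0, 0.6, size=n_patients)  # shift after perturbation

# The paired signed-rank test looks only at within-patient change, so the
# patient-to-patient baseline differences cancel out of the comparison.
statistic, p_value = stats.wilcoxon(before, after)
print(f"paired Wilcoxon p-value for the within-patient change: {p_value:.4f}")
```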
Of course, causality is very hard to prove because there are all these moving parts. The immune system, genetics, the bacteria – everything comes in. It’s a complex system. It’s like this horrible tangled ball and a thread that you’re trying to pull out. It’s a mess, and you pull out one thread at a time, but everything is really tied together.
What do you envision for the future in microbiome research?
I’d say that for the future, I’m most excited to bring to bear what the host immune system is doing. In animal studies the immune system is different from that of humans, so careful clinical trials have to be run to pinpoint the important interactions between the host immune system and the microbiome under controlled conditions. Some researchers specialize in the microbiome and others specialize in the immune system, and few people are studying both systems together. I think these new interdisciplinary teams are going to make the future discoveries at that interface.
Source: Stanford University