
The “Evidence” Against the “Evidence” – 11 Reasons You Can’t Believe Every Nutritional Study You Read
I consider myself to be an evidence-based clinical nutritionist and researcher. I’ve been around long enough, and witnessed too many fads and excessively hyped superfood marketing campaigns, to know that scientific studies are a vital component in assessing the efficacy (or otherwise) of foods and supplements.
And yet, I am always acutely aware that the “evidence” is not always accurate or reliable, and that experience in clinical practice will often refute the claims made in supposedly reputable studies.
So that you, too, might better understand and question the latest headlines, I offer this discussion of why it is important to approach the “evidence” with an open mind and common sense.
A paper written some 10 years ago by the esteemed Stanford epidemiologist John Ioannidis was titled “Why Most Published Research Findings Are False”. The paper examined how issues long ingrained in the scientific process, combined with the way we currently interpret statistical significance, mean that most published findings are likely to be incorrect.
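To make the statistical part of that argument concrete, here is a minimal sketch in Python. The numbers for power and prior probability are illustrative assumptions, not figures taken from Ioannidis’ paper; the point is simply that when most tested hypotheses are wrong and studies are underpowered, a p < 0.05 threshold still lets through a literature in which most “significant” findings are false.
```python
# A rough sketch of the "positive predictive value" argument.
# All three numbers below are illustrative assumptions, not figures from the paper.
alpha = 0.05   # conventional false-positive threshold (p < 0.05)
power = 0.40   # assumed statistical power of a typical underpowered nutrition study
prior = 0.10   # assumed fraction of tested hypotheses that are actually true

true_positives = power * prior          # real effects that reach significance
false_positives = alpha * (1 - prior)   # null effects that reach significance by chance

ppv = true_positives / (true_positives + false_positives)
print(f"Share of 'significant' findings that reflect a real effect: {ppv:.0%}")
# With these assumptions, only about 47%, i.e. most published positives would be false.
```
Under those assumed numbers, fewer than half of the statistically significant results correspond to real effects, and that is before bias, selective reporting and flexible analysis are taken into account.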
Richard Horton, the editor of the prestigious medical journal The Lancet, also recently noted: “Much of the scientific literature, perhaps half, may simply be untrue.” He blames “small sample sizes, tiny effects, invalid exploratory analyses, and flagrant conflicts of interest, together with an obsession for pursuing fashionable trends of dubious importance.” Horton lamented: “Science has taken a turn towards darkness.”
Potentially flawed research is still the lesser of two evils (the other evil being no research at all), but let’s examine some of the reasons why nutritional studies can be flawed, and what to look for.
- Industry-funded studies tend to be distorted and biased. (Well, that’s hardly surprising.) In 2015, researcher Marion Nestle examined 152 industry-funded nutrition studies; 140 of them (a whopping 92%) produced favourable results for the company funding them. An earlier beverage nutrition study in 2007 (Does Industry Sponsorship Undermine the Integrity of Nutrition Research?) made similar findings. The report stated: “The odds that a paper would report a favorable outcome were four to eight times higher when the study was funded by the manufacturer of the beverages in question than when the study was not funded by industry … When an industry is the major sponsor of research on its own product, unfavorable effects of that product are less likely to be investigated. The next step down the slope is adjustment of designs. The dosage of the product and the nature of control treatments may be adjusted so as to increase the chance that the study will demonstrate benefits of the product or that adverse effects will not reach statistical significance. Also, unfavorable data may be deemed less relevant and may be left out of the abstract and the press release, or out of the paper itself. Finally, the whole publication may be cancelled or seriously delayed when the outcome is disappointing to the sponsor.” In more recent years, John Ioannidis’ argument has received support from multiple fields. Several years ago, the pharmaceutical company Amgen attempted to replicate the “landmark publications” in the field of cancer drug development for a report published in Nature. It was spectacularly unsuccessful: 47 of the 53 results (almost 89 per cent) could not be replicated. When another drug company, Bayer, attempted a similar project on drug target studies, 65 per cent of the studies could not be replicated.
- Financial interests distort results. (Again, not surprising.) The mainstream media and the internet are quick to publicise the findings of nutritional studies without reporting who funded them. A recent study declaring that chocolate is “scientifically proven to help with fading concentration” was funded by Hershey. Goji berry studies have been famously funded by a major supplement manufacturer. On a more serious note, tobacco companies have a long history of funding fraudulent health research, described by the World Health Organization as “the most astonishing systematic corporate deceit of all time”.
- Ego distorts results. Scientists, in particular, have a tendency to want to prove their own theories rather than disprove them. It is commonly accepted that a scientist seeking a particular result from an experiment will be more likely to achieve a “successful” result than one who is impartial about the outcome.
- Publication bias. Journals like to publish studies with positive, exciting, newsworthy results. Researchers like to submit studies with positive, exciting, newsworthy results. So two things happen: journals tend to ignore “boring” or “negative” papers, and researchers who obtain negative results are far less likely to write up the study and submit it for publication. (After all, researchers rely on funding – and who wants to give funding to a researcher who doesn’t produce “sexy” results?) But negative results are vitally important for the scientific process, future research and more objective assessments.
- Small study groups provide false positives. Big studies cost big dollars and big amounts of time. That’s why you’ll often find study groups of just 10–20 people. The smaller the sample size, the less emphasis we should place on the results; the larger the sample size, the more meaningful the results and the more likely they are to apply to the larger population. Small studies are far more likely to produce statistically significant results that are, in fact, false positives (see the sketch after this list), so they should be considered with caution. This is not to suggest that the results of small studies should be ignored. They can provide interesting information and be the launching pad for larger studies. But they should not be an influential factor when it comes to deciding what to eat or how to manage your health.
- No control group. The absence of a control group is usually seen in small studies; it keeps costs down and is more likely to produce positive findings. To accurately measure the effects of an experimental intervention, it is imperative to include a “control group” – a group of people who do not receive the intervention (and who are preferably unaware of whether they are receiving the “real thing” or a “dummy”). Without a control group to compare against the group that received the experimental intervention, it is not possible to determine what caused any change.
- Manipulated control groups. The presence of a control group doesn’t necessarily make a good study; the control group has to be a realistic one. As an example, a 2015 study – “Effects of oatmeal and corn flakes cereal breakfasts on satiety, gastric emptying, glucose, and appetite-related hormones” – claimed that eating oatmeal for breakfast promoted feelings of fullness compared with the control group. But fullness in comparison to what? Cornflakes? You must be kidding – most of us could have predicted that outcome without an expensive study to back it up. It might have been more meaningful to compare oatmeal to bacon and eggs, toast and Vegemite, or at least another whole-grain cereal.
- Demographics. Studies often neglect to provide details about the participants: whether they were men or women, their ages, their ethnicity, and so on. Results can vary significantly depending on these variables.
- Nutrient Synergy. Many nutrition researchers study the effects of specific nutrients – not the whole food from which they are derived – and base their verdict about a food’s value on these results. Foods are much, much more than single nutrients. Almonds and pumpkin seeds are much more than linoleic acid. Bananas are much more than potassium. Yoghurt is not just a probiotic. Garlic is not just an antibiotic. People consume whole foods – not isolated nutrients. Hence, these studies do little to provide us with accurate information about the effects of eating these foods.
- The Source. Species, sub-species, cultivars, country, climate, terrain, organic, wild-crafted, commercially-grown, genetically-modified – all these variables can affect the nutrient profile of any given plant. Most studies do not specify these aspects, or compare the differences.
- Short-term studies. Most nutritional studies are short-term. We don’t usually know if the results are sustainable, and we don’t usually know what the long-term benefits (or risks) might be. A two-week study doesn’t really translate to real-world living.
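As promised above, here is a minimal simulation sketch of the small-sample problem. The share of true effects, the effect size and the sample sizes are all assumptions chosen purely for illustration, not figures from any real trial; the point is simply that when power is low, a larger share of the “significant” results that do appear are false positives.
```python
# Minimal simulation: why "significant" results from small studies deserve extra caution.
# All parameters below are illustrative assumptions, not data from real nutrition trials.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_studies = 10_000     # simulated studies per scenario
prior_true = 0.2       # assumed fraction of hypotheses with a real (modest) effect
effect_size = 0.3      # assumed standardised effect for the "true" hypotheses
alpha = 0.05

def false_positive_share(n_per_group: int) -> float:
    """Among significant results, what fraction actually came from a null effect?"""
    sig_true = sig_false = 0
    for _ in range(n_studies):
        real_effect = rng.random() < prior_true
        shift = effect_size if real_effect else 0.0
        control = rng.normal(0.0, 1.0, n_per_group)
        treatment = rng.normal(shift, 1.0, n_per_group)
        _, p_value = ttest_ind(treatment, control)
        if p_value < alpha:
            if real_effect:
                sig_true += 1
            else:
                sig_false += 1
    return sig_false / (sig_true + sig_false)

for n in (15, 200):
    share = false_positive_share(n)
    print(f"{n:>3} people per group: ~{share:.0%} of 'significant' findings are false positives")
```
Under these assumptions, the small studies produce a literature in which most of the “significant” findings are spurious, while the larger studies bring that share down sharply, which is why small-sample results are best treated as a starting point rather than a verdict.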
The Verdict – Nutrition Evidence
Open your mind before you open your mouth or your wallet.
The only way to know whether something works for you is to try it (assuming there is no medical reason why you shouldn’t). The fact that a published study “worked” or “didn’t work” does not by default make that food, supplement or diet “good” or “bad”, and it does not mean the results will be the same for you. Regardless of the findings, the topic probably still requires more research.
And the best research is often first-hand, by trying it out for yourself.