We all like to think we are more logical than we actually are. At some point, most of us have entered a discussion about a health or fitness subject and been quick to cite a study that confirms our own bias. Then we have perhaps been rebutted with an equally supportive study favoring our opponent's perspective. Or maybe it has been pointed out that the study or article we've cited doesn't come from a reputable source. Sound familiar?
It is easy to find research that will support, or that can be manipulated to support, almost any claim. If you try hard enough, you can draw conclusions from plenty of research to sell something, and people will buy into it.
What Is Broscience?
“Broscience” is the misrepresentation of actual research. It is cherry-picking tiny portions of research to support an idea while ignoring the data that don't support it. It is the opposite of science, and we fall for it all the time. We all have our instincts, our own subjective experiences, logical reasoning, and anecdotal evidence when it comes to health and fitness. There are also ancient pearls of wisdom and alternative treatments, refined through the ages by trial and error, that science is only now starting to confirm, deny, or has yet to examine. Health and fitness is still partly art and partly science.
Science doesn’t always get it right, but it still seems to be our most reliable method to determine which direction to take.
With all the information and all the experts out there, it is hard to know what to look for—especially when it comes to our health and fitness. I asked my friend and client Nada Cvijanovic, PhD—you know, a legit scientist doing research in laboratories—for a little help for us fitness laypeople.
The Nutshell of the Scientific Method
There are many definitions of the scientific method, and it has evolved significantly over time from the days of Aristotle, who was one of the first to propose the necessity of empirical evidence, not just abstract logical reasoning, in acquiring knowledge. But in a nutshell, the scientific method is exactly that: a method. It provides a framework for observing phenomena in the world, using existing knowledge to formulate testable questions (hypotheses) to explain new observations, and then testing these hypotheses using systematic approaches to reach a conclusion that either supports, or doesn't support, the original hypothesis. Only once enough information has been collected from many, many tested hypotheses can an observation be accepted as a scientific theory.
Pros to the scientific method:
- Involves logical reasoning backed up by empirical observation—meaning it can be measured and verified
- Can be replicated
- Controlled (i.e., the independent variable is the thing you change, and you observe its effect on the dependent variable, the thing you are interested in, to try to establish a causal link between the two)
Cons to the scientific method:
- Outcomes may not be generalizable, because things done in the lab often cannot be replicated in the “real world.”
- Not everything observed can be tested (e.g., astronomical events can be speculated about but only measured indirectly).
- Although it has measures to control bias, it is not immune to it (and thus objectivity is sometimes questionable).
- The rigor involved in testing, particularly human testing and the “gold standard” of randomized controlled trials, means that progress can unfortunately be slow, and many people may miss out on potentially lifesaving interventions because the science hasn't caught up yet.
As I have mentioned in other articles, what you need to be asking yourself these days is: what isn't bullshit? Considering whether the views presented are facts or opinions should play a big part in making better decisions about our health and fitness. The thing is, few people take the time to read and follow up on the actual research behind claims, articles, and books.
Know What to Look For
Because the majority of people are not going to read the original studies, you need to know what to look for when reading about science, especially in “mainstream” media.
Sensationalism creeps into almost every piece of news these days. It's like the saying “sex sells,” except in this case it's sensationalism that sells. Nobody wants to hear a news story saying, “Breaking news: study shows that we now know slightly more about something, but it's going to take another 10 years or so just to be sure.” So be wary of grand, sweeping exaggerations like “new study shows that eating KFC improves heart health in the morbidly obese.” As a general rule, if it sounds too good to be true, it is. Science is a slow and methodical process. Big discoveries take time because, as mentioned above, they must be built on existing knowledge; they don't happen overnight. Most science is small incremental steps that build up to a wealth of knowledge. So look for an authority, and for material that is verifiable from other sources.
Look at what was studied. Was it humans? Animals? Cells? The latter two are valid research models and necessary for developing our understanding of basic science, but you must be careful in drawing conclusions about humans from studies not performed on humans. As Andrew Lock rightly points out in an old article on lower back rehab and functional training, a study based on the soleus muscle of a brain-damaged cat probably isn't enough evidence to cite in support of a particular level of transversus abdominis activation for lower back rehab in humans.
Was the study peer-reviewed? Any legitimate study is peer-reviewed by two or more scientists, generally within the field (although not always), and goes through checkpoints before being accepted for publication. The “impact factor” of a journal reflects how widely it is read and gives an idea of the journal's credibility; Nature, for example, is a well-known science journal with a high impact factor. However, a small impact factor can also reflect a highly specialized field, which by default has a select few readers. Check also for a date stamp suggesting the information is current.
Look at who sponsored the studies, or where the money came from. This information is easy to find and must be declared in any legitimate publication. While it is easy to dismiss findings based on who is paying for the research (people demonize pharmaceutically sponsored studies all the time), there are many legitimate companies sponsoring important research.
Given the dire state of government-funded research, particularly in Australia, this is often the only option many scientists have to do their work and be paid for it. Most people don't realize that scientists spend much of their time applying for grants, which is where their salaries come from. No grants equals no job. Or you could have money for your project but no salary, or vice versa. It can get quite complicated. It pays (pun intended) to look at who is sponsoring the research, because it can give you an idea of vested interests and potential bias in the findings. Consider the author's reasons for publishing the information.
Negative results are important, too. Don't neglect the studies that don't show anything. What do I mean? There is a bias towards positive results (e.g., something happened or changed because of an intervention), but negative results are just as important because they still add to existing knowledge. Often authors ignore the data that don't support the idea being put forward and present only the data that appear to support their beliefs. If you read the quoted research more closely, it sometimes contradicts the recommendations. Check that the information is complete, comprehensive, and verifiable through other sources.
Does the study have an appropriate research design? For those who want to head further down the science-geek rabbit hole, appropriate research design is the most critical factor in determining the validity and integrity of a study, though studies can be terribly difficult to understand unless you are in the field. If the study design is flawed, then the results generated from it are also inherently flawed and must be interpreted (if they can be at all) with much greater caution.
Approach Information with Care
A lot of what I see people sharing on Facebook is not from what I would consider a reliable source, but rather emotional pieces written by somebody with no authority that, when researched further, show a misunderstanding of the facts as reported by other, more credible sources. This is usually done to sell something, including a particular point of view.
Not only that, but now everyone can contribute to published information, which reduces the integrity of the information being made available. I could write a big article on the benefits of an all-bread diet based on no real facts at all, and guaranteed, someone out there in the world, with their own confirmation biases, would cling to it and tout it as proof. We need to be increasingly careful about which information we trust, given how easily information is distributed these days.