Neal D. Goldstein, PhD, MBI



Sep 25, 2018

Causation, association, and the timid epidemiologist

I recently reviewed an article for a well-regarded pediatrics journal where the authors used some fairly strong language to suggest that an exposure in their article was causally related to an outcome. I made the suggestion to soften the language, essentially replacing words like "causes" with "is associated with" or "correlates with." When the journal sent the notification to the authors - a minor revise and resubmit - a second reviewer echoed my comments. Oddly, I felt reassured I had done a proper review, perhaps even vindicated in my mind.

Then this commentary came out. In it, Hernan articulates how we in epidemiology have a fear of the C-word (causation), when, in fact, the majority of research questions we ask in the field are causal questions. Why, he asks, should we shy away from stating in a causal framework what we readily state in a predictive framework?

For clarity, causal models are used when asking a research question such as, "Does X cause Y?" whereas a predictive model is used to ask a research question like, "Is X associated with Y?" In epidemiology, as in most sciences, we are interested in causal factors so that we can promote health through intervention, and therefore we bring a sound theoretical approach to our statistical analyses. We also use specialized analytical techniques such as inverse probability weighting, G-computation, agent-based models, and so on to specifically test causal hypotheses, often grounded in counterfactual theory. Contrast this with a predictive model, in which you are searching for pure correlations. These types of models are common with high-dimensional data sources and can be addressed through automated variable selection routines or machine learning approaches. In a causal model, we go beyond associations: we seek to know exactly how each variable may be related to the outcome. Unlike in a predictive model, we have a strong theoretical basis for the variables entered into the final model.
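To make the distinction concrete, here is a minimal sketch, purely illustrative and not drawn from the reviewed article: it simulates a confounded exposure-outcome relationship and contrasts an inverse probability weighted estimate of the causal effect with the crude, purely predictive association. The simulated data, effect sizes, and choice of Python's statsmodels are my own assumptions.

```python
# Illustrative contrast of a causal (IPW) estimate vs. a crude predictive association,
# using simulated data where the true causal effect of X on Y is known.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 5000

# Simulate a confounder C, a binary exposure X that depends on C, and an outcome Y
c = rng.normal(size=n)
p_x = 1 / (1 + np.exp(-0.8 * c))             # exposure probability depends on the confounder
x = rng.binomial(1, p_x)
y = 1.0 * x + 2.0 * c + rng.normal(size=n)   # true causal effect of X on Y is 1.0

# --- Causal approach: inverse probability weighting ---
# Model the exposure given the confounder (propensity score), then weight by its inverse
ps_model = sm.Logit(x, sm.add_constant(c)).fit(disp=0)
ps = ps_model.predict(sm.add_constant(c))
weights = np.where(x == 1, 1 / ps, 1 / (1 - ps))

# Weighted regression of Y on X approximately recovers the simulated causal effect (~1.0)
ipw_model = sm.WLS(y, sm.add_constant(x), weights=weights).fit()
print("IPW estimate of causal effect:", round(ipw_model.params[1], 2))

# --- Predictive approach: crude association, ignoring the confounder ---
crude_model = sm.OLS(y, sm.add_constant(x)).fit()
print("Crude (predictive) association:", round(crude_model.params[1], 2))
```

In the weighted analysis the coefficient on the exposure approximates the simulated causal effect, while the crude model mixes the confounder's contribution into the exposure's, which is precisely the distinction between answering "Does X cause Y?" and "Is X associated with Y?"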

After reading Hernan's commentary I felt like I had done a disservice to the authors of the article and to the readers of the journal. While the authors did not explicitly frame their work as building a causal versus a predictive model, I think they clearly implied the former, and their write-up indicated as much: the research question was causal, the methods were causal*, the results were written without interpretation, and the discussion returned to a causal framework.

*Although the authors used standard regression approaches, they did not employ automated variable selection or hunt for pure correlations; they had a clear exposure in mind. I suppose one could argue that, without the causal inference approaches enumerated earlier, this is not a true causal analysis, but in my opinion that does not render their work non-causal.

In reflecting upon this, I feel a lot has to do with our training in a traditional epidemiology program. We spend an inordinate amount of time critiquing literature, searching out confounders, dealing with bias, and so forth, so at the end of the day we are shy about making any causal claims. This timidity is clearly shared across our field: witness the second reviewer of the article I mentioned, and the very impetus for Hernan's commentary. The only time we see concrete causal language is in a randomized controlled trial, yet these results can be unreproducible and the effect may hold only in the contrived study population. Is a poorly designed and conducted RCT more causally sound than a well-designed and conducted observational study? I think few would argue yes.

Acknowledging that perhaps we should do more to stand up for our work, and not shy away from stronger causal language, I began thinking of the corollary to Hernan's thesis: What if we did use more causal language? Is this just a debate over semantics, or advocacy for causal analytic approaches? What are the potential detriments? To answer these questions, we need to turn to how epidemiological evidence is used in public health. Often this takes the form of health-related policy (for public health) or clinical guidelines (for medical practice). The degree to which we are willing to accept uncertainty about causality can inform the type of policy, or level of evidence, used in dissemination. If the issue is a quantitative one - we assume a causal relationship of 60% risk when in actuality the relationship is only 40% causal - then the effect of the incorrect quantification is minimal in terms of its potential for detrimental impact: we are still calling the public's attention to a risk factor. On the other hand, if the issue is a qualitative one - we assume a protective factor when in actuality the factor under inquiry magnifies risk for disease - then we are not only doing the public a disservice but potentially causing harm.

A well-known example of this is the risk of changes to breast tissue, and with them breast cancer, as a result of hormone replacement therapy (HRT) for peri-menopausal women. Initially the benefits were thought to outweigh the risks (in the Nurses' Health Study, HRT was associated with lower rates of cardiovascular disease), and many women initiated HRT. After the Women's Health Initiative detected an increase in breast cancers among women undergoing hormone therapy, the recommendation for HRT was reversed. (This is a gross simplification of the science; currently the decision to undergo HRT is multi-factorial, and the American Cancer Society does not have a position on either side.) But these types of examples are plentiful. Think of the changing science on coffee consumption, alcohol use, the benefits of eggs, probiotics, multivitamins, and so on.

Often, in high-impact studies such as these, there is a press release and substantial media coverage. It is easy to fall victim to what some have called "science by media" and overstate (or understate) results. The harm may come not from the original scientific study, but rather from its dissemination. Does the language used by the epidemiologist in the article, therefore, ultimately have an impact?

[At this point I was pretty cynical, and I shelved this article for a while. Then I read this, which made me think perhaps the field is losing its focus. I dusted off the article and appended this conclusion.]

The idea of epidemiology as a soft science and a "form of journalism" perhaps (indirectly) ties in with Hernan's thesis about our fear of the C-word. That is, if we apply euphemism and hand-waving to our results, we are (again indirectly) enabling this idea of epidemiology as a "less exact" science. The key contributions of our field have happened because of, or in some cases perhaps in spite of, the language we have used to present our findings. The fact that this conversation even occurs is because of, again quoting this article, the "highly complex science" we undertake. If the causal factor of interest is multidimensional, such as the construct of race or socioeconomic status, then it is more difficult to use causal language, because what does it actually mean to say that low SES causes disease? There are many layers to that statement that require unpacking, and it is easier to state that low SES is associated with disease.

In closing, reminders in the literature such as the one Hernan shared are useful to the field. These messages are not new, but they need to be said and repeated. More investigators should apply a causal framework and use causal language to describe their findings, and they should not be chastised for doing so by reviewers. If we are asking a causal question, then a causal answer is required. Science moves incrementally, with multiple studies necessary to effect a public health (or clinical) practice change. Perhaps by taking a more assertive approach in representing our work (through greater use of the word "causes") we can move beyond being a soft science.


Cite: Goldstein ND. Causation, association, and the timid epidemiologist. Sep 25, 2018. DOI: 10.17918/goldsteinepi.

