A new study of e-cigarettes’ efficacy in quitting smoking has not only pitted some of vaping’s most outspoken scientific supporters against one of its fiercest academic critics, but also illustrates many of the pitfalls facing researchers on the topic and those – including policy-makers – who must interpret their work.
The furore has erupted over a paper published in The Lancet Respiratory Medicine and co-authored by Stanton Glantz, director of the Center for Tobacco Control Research and Education at the University of California, San Francisco, along with a former colleague – Sara Kalkhoran, now of Harvard Medical School, who is in fact named as first author but does not enjoy Glantz’s fame (or notoriety) in tobacco control and vaping circles.
Their research sought to compare the success rates in quitting combustible cigarettes of smokers who vape and smokers who don’t: put simply, to find out whether use of e-cigarettes is correlated with success in quitting, which could well imply that vaping can help you give up smoking. To achieve this they performed a meta-analysis of 20 previously published papers. That is, they didn’t gather any new data on actual smokers or vapers, but rather tried to combine the results of existing studies to see whether they converge on a likely answer. This is a common and well-accepted method of extracting truth from statistics in many fields, although – as we’ll see – it’s one fraught with challenges.
Their headline finding, promoted by Glantz himself online as well as through the university, is that vapers are 28% less likely to quit smoking than non-vapers – a conclusion which would suggest that vaping is not only ineffective as an aid to quitting smoking, but actively counterproductive.
The result has, predictably, been uproar from e-cigarettes’ supporters in the scientific and public health community, especially in Britain. Among the gravest charges are those levelled by Peter Hajek, the psychologist who directs the Tobacco Dependence Research Unit at Queen Mary University of London, who called the Kalkhoran/Glantz paper “grossly misleading”, and by Carl V. Phillips, scientific director of the pro-vaping Consumer Advocates for Smoke-Free Alternatives Association (CASAA) in the United States, who wrote that “it is obvious that Glantz was misinterpreting the data willfully, rather than accidentally”.
Robert West, another British psychologist and the director of tobacco studies at a centre run by University College London, said “publication of this study represents a major failure of the peer review system in this journal”. Linda Bauld, professor of health policy at the University of Stirling, suggested the “conclusions are tentative and likely incorrect”. Ann McNeill, professor of tobacco addiction at the National Addiction Centre at King’s College London, said “this review is not scientific” and added that “the information included about two studies I co-authored is either inaccurate or misleading”.
But what, precisely, are the problems these eminent critics find in the Kalkhoran/Glantz paper? To answer some of that question, it’s necessary to go beneath the sensational 28%, and examine what was studied and how.
Meta-analysis is a seductive idea. If (say) you have 100 separate studies, each of 1,000 individuals, why not combine them to create – in effect – a single study of 100,000 people, whose results should be far less susceptible to any distortions that might have crept into an individual investigation?
(This could happen, for example, by inadvertently selecting participants with a greater or lesser propensity to give up smoking due to some factor not considered by the researchers – a case of “selection bias”.)
Of course, the statistical side of a meta-analysis is rather more sophisticated than simply averaging out the totals, but that’s the general idea. And even from that simplistic outline, it’s immediately apparent where problems can arise.
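To make the idea concrete, here is a minimal sketch of the classic fixed-effect (inverse-variance) approach to pooling odds ratios across studies. The three studies and their confidence intervals below are invented purely for illustration and bear no relation to the papers Kalkhoran and Glantz actually analysed.

```python
import math

# Hypothetical (invented) per-study results: odds ratio for quitting
# among vapers vs non-vapers, with a 95% confidence interval each.
studies = [
    (0.60, 0.40, 0.90),   # (OR, CI lower bound, CI upper bound)
    (0.85, 0.55, 1.30),
    (0.70, 0.45, 1.10),
]

weights, weighted_logs = [], []
for or_, lo, hi in studies:
    log_or = math.log(or_)
    # Recover the standard error from the 95% CI width on the log scale.
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
    w = 1 / se**2            # inverse-variance weight: precise studies count more
    weights.append(w)
    weighted_logs.append(w * log_or)

# The pooled estimate is a weighted average of the per-study log odds ratios.
pooled_log_or = sum(weighted_logs) / sum(weights)
pooled_or = math.exp(pooled_log_or)
print(f"Pooled odds ratio: {pooled_or:.2f}")  # about 0.70 for these numbers
```

The key design choice is the weighting: larger, more precise studies pull the pooled estimate towards themselves, which is exactly why flaws shared by the big input studies propagate straight into the headline number.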
If its results are to be meaningful, the meta-analysis must somehow take account of variations in the design of the individual studies (they may define “smoking cessation” differently, for example). If it ignores those variations, and tries to shoehorn all the results into a model that some of them don’t fit, it’s introducing its own distortions.
Moreover, if the studies it’s based on are inherently flawed in any way, the meta-analysis – however painstakingly conducted – will inherit those flaws.
This is a charge made by the Truth Initiative, a U.S. anti-smoking nonprofit which takes a generally unfavourable view of e-cigarettes, about a previous Glantz meta-analysis which came to similar conclusions to the Kalkhoran/Glantz study.
In a submission last year to the U.S. Food and Drug Administration (FDA), responding to that federal agency’s request for comments on its proposed electronic cigarette regulation, the Truth Initiative noted it had reviewed many studies of e-cigarettes’ role in cessation and concluded they were “marred by poor measurement of exposures and unmeasured confounders”. Yet, it said, “many of them have been included in a meta-analysis [Glantz’s] that claims to demonstrate that smokers who use e-cigarettes are less likely to quit smoking compared with those who do not. This meta-analysis simply lumps together the errors of inference from all these correlations.”
It added that “quantitatively synthesizing heterogeneous studies is scientifically inappropriate and the findings of such meta-analyses are therefore invalid”. Put bluntly, don’t mix apples with oranges and expect to get an apple pie.
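The “apples and oranges” objection has a standard statistical expression: Cochran’s Q and the I² statistic, which estimate how much of the variation between study results reflects genuine heterogeneity rather than chance. A minimal sketch, again with invented numbers:

```python
import math

# Hypothetical log odds ratios and standard errors for five studies
# (invented purely for illustration).
log_ors = [-0.51, 0.18, -0.92, 0.41, -0.11]
ses     = [0.20, 0.25, 0.22, 0.30, 0.18]

weights = [1 / se**2 for se in ses]
pooled = sum(w * y for w, y in zip(weights, log_ors)) / sum(weights)

# Cochran's Q: weighted squared deviations from the pooled estimate.
q = sum(w * (y - pooled) ** 2 for w, y in zip(weights, log_ors))
df = len(log_ors) - 1
# I^2: the share of total variation attributable to between-study
# heterogeneity rather than sampling error (floored at zero).
i_squared = max(0.0, (q - df) / q) * 100
print(f"Q = {q:.1f}, I^2 = {i_squared:.0f}%")
```

By the usual convention, an I² above roughly 75% (as in this invented example) indicates high heterogeneity – the territory in which critics argue that pooling the studies at all becomes “scientifically inappropriate”.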
Such doubts about meta-analyses are far from rare. Steven L. Bernstein, professor of health policy at Yale, echoed the Truth Initiative’s points when he wrote in The Lancet Respiratory Medicine – the same journal that published the Kalkhoran/Glantz work – that the studies included in their meta-analysis were “mostly observational, often without a control group, with tobacco use status assessed in widely disparate ways”, though he added that “this is no fault of [Kalkhoran and Glantz]; abundant, published, methodologically rigorous studies just do not exist yet”.
So a meta-analysis can only be as good as the studies it aggregates, and drawing conclusions from it is only valid if the studies it’s based on are constructed in similar ways to one another – or, at least, if any differences are carefully compensated for. Of course, such drawbacks also apply to meta-analyses that are favourable to e-cigarettes, such as the famous Cochrane Review from late 2014.
Other criticisms of the Kalkhoran/Glantz work go beyond the drawbacks of meta-analyses in general, and focus on the specific questions posed by the San Francisco researchers and the ways they tried to answer them.
One frequently-expressed concern has been that Kalkhoran and Glantz were studying the wrong people, skewing their analysis by not accurately reflecting the true number of e-cig-assisted quitters.
As CASAA’s Phillips points out, the e-cigarette users in the two scholars’ number-crunching were all current smokers who had already tried e-cigarettes when the studies of their quit attempts started. Thus the research by its nature excluded those who had started vaping and quickly given up smoking; if such people exist in large numbers, counting them would have made e-cigarettes seem a much more successful route to smoking cessation.
A different question was raised by Yale’s Bernstein, who observed that not all vapers who smoke are trying to give up combustibles. Naturally, people who aren’t trying to quit won’t quit, and Bernstein noted that when these people were excluded from the data, it suggested “no effect of e-cigarettes, not that e-cigarette users were less likely to quit”.
Excluding some who did manage to quit – while including people who have no intention of quitting anyway – would certainly seem likely to affect the results of research purporting to measure successful quit attempts, although Kalkhoran and Glantz argue that their “conclusion was insensitive to a wide range of study design factors, including whether the study population consisted only of smokers interested in quitting smoking, or all smokers”.
But there is a further, slightly cloudy area which affects much science – not just meta-analyses, and not just these particular researchers’ work – and which, importantly, is often overlooked in media reporting, as well as by institutions’ publicity departments.