Effects of Nutritional Supplements and Dietary Interventions on Cardiovascular Outcomes: An Umbrella Review and Evidence Map
Comments
Trials of nutritional supplements – do not ignore the subgroups
Other subgroups have been identified in trials of nutrients. Nutrients often interact with genes, and common genetic polymorphisms, such as ApoE ε4, can influence the response to nutrients such as vitamin B12 (2) and omega-3 fatty acids (3). The biological effect of a nutrient may also be influenced by common medications; for example, antiplatelet drugs modify the effect of B vitamins on stroke (4). Nutrients also interact with each other: the beneficial effect of B vitamins on the brain, for example, requires adequate omega-3 fatty acid status (5).
It is important that trials of nutritional supplements recognise the heterogeneity of the population and stratify by subgroup, in case one or another subgroup responds differently. Until this is done, we should not dismiss the use of supplements for improving health outcomes, since we may well be denying benefit to sections of the population.
1. Morris MC, Tangney CC. A potential design flaw of randomized trials of vitamin supplements. JAMA. 2011;305(13):1348-9. doi: 10.1001/jama.2011.383
2. Vogiatzoglou A, Smith AD, Nurk E, Drevon CA, Ueland PM, Vollset SE, et al. Cognitive function in an elderly population: interaction between vitamin B12 status, depression, and apolipoprotein E ε4: the Hordaland Homocysteine Study. Psychosom Med. 2013;75(1):20-9. doi: 10.1097/PSY.0b013e3182761b6c
3. Hennebelle M, Plourde M, Chouinard-Watkins R, Castellano CA, Barberger-Gateau P, Cunnane SC. Ageing and apoE change DHA homeostasis: relevance to age-related cognitive decline. Proc Nutr Soc. 2014;73(1):80-6. doi: 10.1017/s0029665113003625
4. Arshi B, Ovbiagele B, Markovic D, Saposnik G, Towfighi A. Differential effect of B-vitamin therapy by antiplatelet use on risk of recurrent vascular events after stroke. Stroke. 2015;46(3):870-3. doi: 10.1161/strokeaha.114.006927
5. Jernerén F, Elshorbagy AK, Oulhaj A, Smith SM, Refsum H, Smith AD. Brain atrophy in cognitively impaired elderly: the importance of long-chain omega-3 fatty acids and B vitamin status in a randomized controlled trial. Am J Clin Nutr. 2015;102(7):215-21. doi: 10.3945/ajcn.114.103283
Does statistics in meta-analyses matter?
The CI is narrower with the random-effects (RE) model used by Khan and coworkers than with the fixed-effect (FE) model used in the Cochrane review (2). Indeed, in some situations the Hartung-Knapp/Sidik-Jonkman (HKSJ) RE model can yield a narrower CI than the FE model, and an ad hoc modification has been introduced to address this (3, 4). In contrast, with this modification the 95% CI would be 0.34 to 2.36 based on a reanalysis of the three studies [Analysis 1.2 in (2)]. On the other hand, confidence intervals can also be too wide, especially when there are few studies (4).
The between-study heterogeneity was estimated to be zero, which is implausible even for outcomes such as all-cause mortality, based on what we know from a large sample of meta-analyses conducted within Cochrane reviews (5). Therefore, I conducted a Bayesian RE meta-analysis with the R bayesmeta package, using a prior distribution for the between-study variance of 0.030 [i.e., all-cause mortality as the outcome and a non-pharmacological vs. non-pharmacological comparison, according to (5)]. A uniform prior was used for the mean effect. In a sensitivity analysis, a normal prior centred on RR 1 with a standard deviation of 0.35 on the log-RR scale (i.e., 5% probability of the RR lying outside the interval 0.5 to 2.0) was used.
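For readers who wish to reproduce this kind of analysis, a minimal sketch in R (bayesmeta package) is given below. The log risk ratios and standard errors are hypothetical placeholders rather than the values from Analysis 1.2, and the prior-construction call assumes the package's TurnerEtAlPrior() helper with outcome and comparator labels as I recall them; it is an illustration of the approach, not the analysis code itself.
library(bayesmeta)
# Hypothetical log risk ratios and standard errors for the three trials;
# the actual inputs must be taken from Analysis 1.2 of the Cochrane review.
y     <- c(-0.15, 0.05, -0.30)
sigma <- c(0.40, 0.55, 0.60)
# Informative heterogeneity prior from Turner et al. (5); the outcome and
# comparator labels below are assumptions -- see ?TurnerEtAlPrior for the exact strings.
tp <- TurnerEtAlPrior("all-cause mortality", "non-pharmacological", "non-pharmacological")
# Primary analysis: (improper) uniform prior on the mean log-RR.
fit <- bayesmeta(y = y, sigma = sigma, tau.prior = tp$dprior)
# Sensitivity analysis: normal prior on the log-RR with mean 0 and SD 0.35
# (about 5% prior probability of an RR outside 0.5 to 2.0).
fit.sens <- bayesmeta(y = y, sigma = sigma, tau.prior = tp$dprior,
                      mu.prior.mean = 0, mu.prior.sd = 0.35)
print(fit)              # posterior summaries for tau and mu (log-RR scale)
fit$pposterior(mu = 0)  # posterior probability of benefit, P(log-RR <= 0)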
The Bayesian RE model yielded a posterior mean RR (95% credible interval) of 0.90 (0.53 to 1.54), based on a reanalysis of the three studies [Analysis 1.2 in (2)]. There was a 65% probability that salt reduction has some benefit, but also a 35% probability of some harm, in terms of premature deaths. In the sensitivity analysis, the posterior mean RR was 0.94 (95% credible interval, 0.62 to 1.42).
In conclusion, the robustness of meta-analytic results should be examined with more than one statistical method. The certainty of evidence for salt reduction and all-cause mortality in normotensive persons should be downgraded.
References
1. Khan SU, Khan MU, Riaz H, et al. Effects of Nutritional Supplements and Dietary Interventions on Cardiovascular Outcomes: An Umbrella Review and Evidence Map. Ann Intern Med. 2019 Jul 9. doi: 10.7326/M19-0341.
2. Adler AJ, Taylor F, Martin N, Gottlieb S, Taylor RS, Ebrahim S. Reduced dietary salt for the prevention of cardiovascular disease. Cochrane Database Syst Rev. 2014;12:CD009217.
3. Wiksten A, Rücker G, Schwarzer G. Hartung-Knapp method is not always conservative compared with fixed-effect meta-analysis. Stat Med. 2016;35:2503-15.
4. Röver C, Knapp G, Friede T. Hartung-Knapp-Sidik-Jonkman approach and its modification for random-effects meta-analysis with few studies. BMC Med Res Methodol. 2015;15:99.
5. Turner RM, Jackson D, Wei Y, Thompson SG, Higgins JP. Predictive distributions for between-study heterogeneity and simple methods for their application in Bayesian meta-analysis. Stat Med. 2015;34:984-98.
Subgroup Analyses: Take It with a Pinch of Salt
Randomized controlled trials are generally powered to examine the efficacy or safety of an intervention in a heterogeneous group of patients, and subgroup analyses are usually performed post hoc (1). Multiple subgroup analyses can generate false-positive results, which may further complicate interpretation when analyses are based on dichotomized variables, such as low versus high nutrient status in this case (1). Because such a process can lead to divergent results, one way to ensure that the probability of a false-positive finding does not exceed 5% is to correct for multiple testing (e.g., Bonferroni adjustment of P values) (2). Although prespecifying subgroups in a trial protocol lends some credibility to subgroup results, even prespecified analyses carry a higher risk of false-positive findings secondary to multiple testing and should be interpreted cautiously (1). For instance, subgroup analyses based on baseline 25-hydroxyvitamin D levels were specified a priori in the VITAL trial, but formal adjustment of P values or confidence intervals for multiple hypothesis testing was not done. Regardless, vitamin D supplementation showed a null effect compared with control among participants with baseline serum 25-hydroxyvitamin D levels <20 vs. ≥20 ng/mL, or below vs. above the median of 31 ng/mL (P for interaction = 0.75 and 0.42, respectively) (3).
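As a minimal illustration of such a correction (with purely hypothetical P values, not drawn from any trial), a Bonferroni adjustment can be applied in R as follows:
# Bonferroni adjustment of four hypothetical subgroup P values
p <- c(0.01, 0.04, 0.20, 0.75)
p.adjust(p, method = "bonferroni")  # each P is multiplied by the number of tests (capped at 1)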
In view of these issues, generating subgroup analyses based on trial-level information in meta-analyses can be extremely misleading. Therefore, we refrained from generating subgroup analyses and focused on providing broad-based evidence for nutritional supplements and dietary interventions (4). We stated in our discussion that “because the focus of our study was to provide broad-based evidence for various nutritional supplements and dietary interventions using existing meta-analyses and trial-level information, we could not analyze interventions according to important subgroups, such as sex, body mass index, lipid values, blood pressure thresholds, diabetes, and history of CVD.” We also acknowledged the limitations related to health and socioeconomic status and to the interventions themselves, as well as the lack of dose-response analyses, and discussed our results in the context of these limitations. For instance, in the case of folate supplementation, we discussed that it remains unclear whether the benefit of folate supplementation can be generalized to the US population, which has folate fortification. This umbrella review examined the effects of “routine use of supplements and dietary interventions on cardiovascular outcomes” and highlights that the current evidence on the effects of these interventions is suboptimal.
1. Wang R, Lagakos SW, Ware JH, Hunter DJ, Drazen JM. Statistics in medicine--reporting of subgroup analyses in clinical trials. N Engl J Med. 2007;357(21):2189-94.
2. Jafari M, Ansari-Pour N. Why, When and How to Adjust Your P Values? Cell J. 2019;20(4):604-7.
3. Manson JE, Cook NR, Lee IM, et al. Vitamin D Supplements and Prevention of Cancer and Cardiovascular Disease. N Engl J Med. 2019;380(1):33-44.
4. Khan SU, Khan MU, Riaz H, et al. Effects of Nutritional Supplements and Dietary Interventions on Cardiovascular Outcomes: An Umbrella Review and Evidence Map. Ann Intern Med. 2019;171(3):190-8.
Deciphering discrepancy in meta-analytic results: The model matters
The considerably wider CI in the Cochrane review immediately raises suspicion. The Cochrane review used a fixed-effect model and Khan and colleagues used a random-effects model, and it does not make sense for a fixed-effect model to yield a wider CI than a random-effects model fitted to the same data.
This oddity is due to Khan and colleagues' use of the Hartung-Knapp adjustment. In many cases, use of the Hartung-Knapp adjustment achieves its intended effect: more conservative estimates (i.e., estimates with wider CIs). However, this is not always the case, which has understandably resulted in some recommending sensitivity analyses if one uses the Hartung-Knapp adjustment, or perhaps even an "adaptive meta-analysis", where the underlying data drive decisions about which analytic model to present as the primary model.(3,4)
Repeating the analysis with Khan and colleagues' methods but excluding the Hartung-Knapp adjustment yields results identical to the Cochrane review (analyses carried out with R version 3.6.1 and meta version 4.9-6; data and analytic code available on request). The consistency of the results is expected given that no statistical heterogeneity is identified in the analysis. The fact that τ² = 0 is also probably why the Hartung-Knapp adjustment resulted in a narrower CI than the fixed-effect model in the Cochrane review.(3,4) Results without the Hartung-Knapp adjustment should be considered more reliable here.
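To make this comparison concrete, a sketch of this type of analysis with the meta package is shown below (argument names as in version 4.9-6); the event counts are hypothetical placeholders, not the Cochrane data, so this is illustrative only.
library(meta)
# Hypothetical 2x2 data for three trials (placeholders, not the Cochrane data)
ev.e <- c(5, 3, 8);  n.e <- c(200, 150, 300)
ev.c <- c(7, 4, 9);  n.c <- c(210, 160, 290)
# Random-effects model, Paule-Mandel tau^2, with the Hartung-Knapp adjustment
m.hk   <- metabin(ev.e, n.e, ev.c, n.c, sm = "RR", method.tau = "PM", hakn = TRUE)
# The same model without the Hartung-Knapp adjustment
m.nohk <- metabin(ev.e, n.e, ev.c, n.c, sm = "RR", method.tau = "PM", hakn = FALSE)
# When tau^2 is estimated as 0, the random-effects (inverse-variance) CI without HK
# matches the corresponding fixed-effect inverse-variance CI, whereas the HK interval
# (t distribution, df = k - 1) may be either wider or narrower.
summary(m.hk); summary(m.nohk)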
I encourage Khan and colleagues to consider conducting additional analyses for their results where relevant based on the above finding. This might be considered an example of "vibration of effects" due to model specification, and although it is not severe enough to produce a Janus phenomenon, the inferential difference between Khan and colleagues' results and the Cochrane review's results feels much more substantial than a "vibration".(5)
References
1. Khan SU, Khan MU, Riaz H, Valavoor S, Zhao D, Vaughan L, et al. Effects of Nutritional Supplements and Dietary Interventions on Cardiovascular Outcomes: An Umbrella Review and Evidence Map. Ann Intern Med. 2019 Aug 6;171(3):190.
2. Adler AJ, Taylor F, Martin N, Gottlieb S, Taylor RS, Ebrahim S. Reduced dietary salt for the prevention of cardiovascular disease. Cochrane Database Syst Rev. 2014 Dec 18;(12):CD009217.
3. Wiksten A, Rücker G, Schwarzer G. Hartung-Knapp method is not always conservative compared with fixed-effect meta-analysis. Stat Med. 2016;35(15):2503-15.
4. Jackson D, Law M, Rücker G, Schwarzer G. The Hartung-Knapp modification for random-effects meta-analysis: a useful refinement but are there any residual concerns? Stat Med. 2017;36(25):3923-34.
5. Patel CJ, Burford B, Ioannidis JPA. Assessment of vibration of effects due to model specification can demonstrate the instability of observational associations. J Clin Epidemiol. 2015;68(9):1046-58.
The Choice of Meta-Analytic Model: Setting the Goals of the Analyses
Bayesian approaches in meta-analysis offer some advantages, such as the ability to include external evidence (from other data or from expert opinion), the ability to adjust for bias (from outside sources), and overall model flexibility. Bayesian approaches are also helpful in dealing with missing data and a limited number of studies (1). However, the Bayesian framework requires some strong assumptions, the most controversial of which is the specification of reliable prior distributions. Different priors can yield inconsistent results. In this scenario, what constitutes a reasonable prior for the effect of salt reduction on all-cause mortality is unsettled.
In terms of the HK adjustment, this method assumes that variances are derived from small samples and constructs confidence limits based on the t distribution. Thus, the HK adjustment usually generates conservative (wider) confidence intervals (CIs), and the CIs shrink toward the fixed-effect results when the data are more homogeneous (2). The adjustment is thought to be more appropriate for meta-analyses with a small number of studies, binary outcomes, and a high degree of heterogeneity (3, 4).
Knowing that we were dealing with substantial clinical and methodological heterogeneity, a small number of studies, and binary outcomes, we set a priori the RE Paule-Mandel (PM) model with the HK adjustment when the number of trials was <10. We can certainly argue back and forth for a long time about which model is best. However, our decision was made a priori, and repeating the analysis with a different model would be a data-driven approach that opens the door to biased personal beliefs and data dredging. There is no gold standard and no single preferred approach (5). In the original paper, we noted the moderate certainty of the estimated effects, and we acknowledge that different results provided by different models can limit the certainty of the evidence.
REFERENCES
1. van de Schoot R, Kaplan D, Denissen J, Asendorpf JB, Neyer FJ, van Aken MAG. A gentle introduction to Bayesian analysis: applications to developmental research. Child Dev. 2014;85(3):842-60.
2. Wiksten A, Rucker G, Schwarzer G. Hartung-Knapp method is not always conservative compared with fixed-effect meta-analysis. Stat Med. 2016;35(15):2503-15.
3. Veroniki AA, Jackson D, Viechtbauer W, Bender R, Bowden J, Knapp G, et al. Methods to estimate the between-study variance and its uncertainty in meta-analysis. Res Synth Methods. 2016;7(1):55-79.
4. Hartung J, Knapp G. On tests of the overall treatment effect in meta-analysis with normally distributed responses. Stat Med. 2001;20(12):1771-82.
5. Cornell JE, Mulrow CD, Localio R, Stack CB, Meibohm AR, Guallar E, et al. Random-effects meta-analysis of inconsistent effects: a time for change. Ann Intern Med. 2014;160(4):267-70.
Sticking with a hammer when you find out you are working with a screw is not the best choice – The analytic tool should be appropriate for the data.
They note that I believe a "RE [random-effects] model without Hartung-Knapp (HK) adjustment should have been considered" for the analysis in question (salt reduction in patients without hypertension for the outcome of all-cause mortality). They are correct about this. However, it seems they missed or misunderstood the concepts in my original letter. The HK adjustment has several advantages and should be considered accordingly in meta-analyses.[1,2] However, it is – like anything else – a tool, and people must use tools appropriately, even if their original belief about the "best" tool turns out to be incorrect.[1,2] As noted in my original letter, the HK adjustment typically yields more conservative estimates (i.e., wider confidence intervals). However, for the analysis in question, it does exactly the opposite. This effect is so pronounced that Dr. Khan and colleagues' analysis (with a Paule-Mandel RE estimator and the HK adjustment) yields a narrower confidence interval than the fixed-effect (FE) analysis in the Cochrane review.[3,4] Although I noted this originally, it bears repeating, because this simple observation makes my point: that the same data analyzed with a FE model would yield a wider confidence interval than with a RE model lacks even face validity, and my original letter explains why this is likely occurring.
Rather than engaging with the concerns in my letter, Dr. Khan and colleagues seem to disregard the importance of ensuring that model specifications are appropriate for the data at hand; instead, they suggest this "opens the door to biased personal beliefs and data dredging". Although presumably inadvertent, this constructs a "straw man" that acts as a "red herring". It is clear from my original letter that I am not calling for an "anything goes" framework; what I am calling for is avoidance of inappropriately stringent adherence to a priori specifications when it becomes clear that those specifications are inappropriate for the data at hand. If you tell everyone you are going to use a hammer because you anticipate needing to drive in nails, and you later find out you actually have one or more screws to put in place, using the hammer on the screws simply because that is what you said you would use is imprudent.
References
1. Jackson D, Law M, Rücker G, Schwarzer G. The Hartung-Knapp modification for random-effects meta-analysis: a useful refinement but are there any residual concerns? Stat Med. 2017;36(25):3923-34.
2. Wiksten A, Rücker G, Schwarzer G. Hartung-Knapp method is not always conservative compared with fixed-effect meta-analysis. Stat Med. 2016;35(15):2503-15.
3. Khan SU, Khan MU, Riaz H, Valavoor S, Zhao D, Vaughan L, et al. Effects of Nutritional Supplements and Dietary Interventions on Cardiovascular Outcomes: An Umbrella Review and Evidence Map. Ann Intern Med. 2019 Aug 6;171(3):190.
4. Adler AJ, Taylor F, Martin N, Gottlieb S, Taylor RS, Ebrahim S. Reduced dietary salt for the prevention of cardiovascular disease. Cochrane Database Syst Rev. 2014 Dec 18;(12):CD009217.
DATA FLAWS
Khan et al report no effect of reducing dietary saturated fat (SFA) on myocardial infarction, CV mortality, or total mortality in RCTs. However, these findings are based on a previously published meta-analysis of 15 RCTs with a median duration of 4.7 years that was viewed as weak by the authors of the 2017 AHA Presidential Advisory on Dietary Fats because it omitted a high-quality RCT yet included others limited by short durations, low adherence, or dietary contamination with trans fats (1,2). Despite these limitations, the cited review did demonstrate a significant 17% reduction in the relative risk (RR) of major CV events (RR, 0.83; 95% CI, 0.72-0.96) that was even greater when dietary SFA was replaced by polyunsaturated fat (PUFA), a finding that led the original authors to conclude that this “appears to be a useful strategy” for reducing CV risk but that Khan et al do not mention. Similarly, Khan et al fail to acknowledge that the meta-analysis of RCTs of the Mediterranean diet that they cite as having no impact on CV outcomes showed significant reductions in the RR of major vascular events (RR, 0.69; 95% CI, 0.55-0.86) and stroke (RR, 0.66; 95% CI, 0.48-0.92) (3).
The authors also identify the use of “data only from RCTs and their meta-analyses” as one of the strengths of their study. This view fails to recognize the totality of the diet-heart disease science base, which also includes carefully conducted basic science, metabolic, translational, and prospective cohort studies that have yielded largely concordant data, including evidence that replacing dietary SFA with PUFA, or adhering to a Mediterranean dietary pattern, reduces CV and/or total mortality. The view that RCTs are the best way to examine the effects of dietary exposures on CV outcomes also conflicts with the views of noted nutritional epidemiologists (4) and of the ACC/AHA, who recognize that “many important clinical questions addressed in guidelines do not lend themselves to clinical trials.”
Thus, we believe this meta-analysis contains flaws and omissions. Moreover, both the analysis and editorial ignore sound epidemiologic, cohort and translational nutrition evidence, and downgrade its importance in the prevention of CV disease, in stark contrast to expert panels of the AHA/ACC, the Dietary Guidelines Advisory Committee, and the National Lipid Association, all of whom report strong and consistent evidence that a healthy dietary pattern decreases CV risk, particularly when SFA are replaced by PUFA.
REFERENCES
1. Sacks FM, Lichtenstein AH, Wu JHY, et al. Dietary fats and cardiovascular disease. Circulation. 2017. doi: 10.1161/CIR.0000000000000510
2. Hooper L, Martin N, Abdelhamid A, et al. Reduction in saturated fat intake for cardiovascular disease. Cochrane Database Syst Rev. 2015;(6):CD011737. doi: 10.1002/14651858.CD011737
3. Liyanage T, Ninomiya T, Wang A, et al. Effects of the Mediterranean diet on cardiovascular outcomes: a systematic review and meta-analysis. PLoS One. 2016;11:e0159252. [PMID: 27509006] doi: 10.1371/journal.pone.0159252
4. Satija A, Stampfer MJ, Rimm EB, Willett W, Hu F. Are large simple trials the solution for nutrition research? Adv Nutr. 2018;9:378-87. doi: 10.1093/advances/nmy030
Choosing a Random-effects Estimator in the Post-DerSimonian-Laird Era
Kivelä’s (4) and Mayer’s (5) critiques center on the effect of reduced salt intake on all-cause mortality among normotensive patients reported in a Cochrane review by Adler et al. (6) The estimated treatment effect is based on 3 relatively small trials. The estimated between-study variance is τ² = 0 (95% CI, 0.00 to 0.55). Kivelä (4) and Mayer (5) correctly point out that the standard HKSJ adjustment can seriously underestimate the uncertainty relative to the fixed-effect or DerSimonian-Laird (DL) random-effects method in this particular case. This occurs because the factor (q) used to adjust the pooled variance can be less than 1 under these circumstances, shrinking rather than increasing the variance estimate. Knapp and Hartung (2) recommend using a modified HKSJ that sets q* = max{1, q} to avoid this problem. The modified HKSJ with the PM estimator yields RR = 0.90 (95% CI, 0.35 to 2.36), which is identical to the fixed-effect confidence interval based on the t distribution with df = k − 1.
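To make the adjustment explicit, the sketch below (plain R, hypothetical inputs) computes the HKSJ scaling factor q from the random-effects weights 1/(s² + τ²) and applies the modification q* = max{1, q}; it is illustrative only, not the code used for any published analysis.
hksj_modified <- function(y, s, tau2, level = 0.95) {
  # y: study log effect sizes; s: their standard errors; tau2: between-study variance
  k  <- length(y)
  w  <- 1 / (s^2 + tau2)               # random-effects weights
  mu <- sum(w * y) / sum(w)            # pooled estimate
  q  <- sum(w * (y - mu)^2) / (k - 1)  # HKSJ scaling factor
  se <- sqrt(max(1, q) / sum(w))       # modified HKSJ standard error, q* = max{1, q}
  tcrit <- qt(1 - (1 - level) / 2, df = k - 1)
  c(estimate = mu, lower = mu - tcrit * se, upper = mu + tcrit * se)
}
# With tau2 = 0 and q < 1, this reduces to the fixed-effect standard error
# combined with a t critical value on k - 1 degrees of freedom.
exp(hksj_modified(y = c(-0.15, 0.05, -0.30),  # hypothetical log-RRs
                  s = c(0.40, 0.55, 0.60),    # hypothetical standard errors
                  tau2 = 0))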
It is difficult to obtain robust estimates of τ² with k < 5 trials, and small-sample adjustments like the modified HKSJ are often viewed as very conservative. So, what are our alternatives? The answer depends on the conditions in the data. Bayesian methods incorporate uncertainty in estimating both the average treatment effect and the between-study variance, and they allow us to use informative priors for the model parameters. Informative priors based on established evidence are recommended for meta-analyses summarizing rare events and/or a small number of trials. Sensitivity analyses over a range of reasonable priors are essential to establish the robustness of Bayesian estimated treatment effects. Other potential estimators include higher-order profile likelihood methods that adjust for small samples (3).
The HKSJ combined with a robust between-study variance estimator tends to have the best performance with k < 10 studies, except in the case where k < 5 and τ² = 0 (3). We believe the conditions in the data should guide decisions about the best estimator to use in any meta-analysis, and that sensitivity analyses are necessary to assess the robustness (certainty) of any finding. The modified HKSJ, Bayesian, and profile likelihood methods appear to perform well in a number of circumstances, particularly when the conditions in the data pose problems for the HKSJ. We recommend that the authors consider conducting additional analyses using one or more of these methods and that they reevaluate the robustness of findings where there were very few studies with no apparent heterogeneity.
References
1. Khan SU, Khan MU, Riaz H, et al. Effects of Nutritional Supplements and Dietary Interventions on Cardiovascular Outcomes: An Umbrella Review and Evidence Map. Ann Intern Med. 2019;171:190-198.
2. Knapp G, Hartung J. Improved tests for a random effects meta-regression with a single covariate. Stat Med. 2003; 22: 2693-2710.
3. Veroniki AA, Jackson D, Bender R, et al. Methods to calculate uncertainty in the estimated overall effect size from a random-effects meta-analysis. Res Syn Meth. 2019; 10: 23-43.
4. Kivelä JM. Does statistics in meta-analyses matter? Ann Intern Med. 2019.
5. Mayer M. Deciphering discrepancy in meta-analytic results: The model matters. Ann Intern Med. 2019.
6. Adler AJ, Taylor F, Martin N. et al. Reduced dietary salt for the prevention of cardiovascular disease. Cochrane Database Syst Rev. 2014; 12: CD009217.
John E. Cornell PhD
Annals Associate Editor, Statistics
Cynthia Mulrow MD, MSc
Annals Senior Deputy Editor
Tackling the Claims of Data Flaws
They further pointed out that we failed to acknowledge the relative risk (RR) reduction in major adverse cardiovascular events (MACE: RR, 0.83 [0.72-0.96]; Analysis 1.3 in Hooper et al.) (2), which was even greater when dietary SFA was replaced by polyunsaturated fatty acids (PUFA), as well as the significant reductions in the RR of MACE and stroke with the Mediterranean diet reported by Liyanage et al. (4). Here, Aspry and colleagues clearly missed the purpose of an umbrella review, which is to re-estimate data abstracted from meta-analyses and individual studies using more robust statistical methodology and to generate an evidence map based on the certainty and quality of evidence. While we clearly stated that the abstracted data were re-estimated using the Paule-Mandel (PM) method with the Hartung-Knapp small-sample adjustment (<10 trials), the reviewers also failed to recognize that our prespecified secondary endpoint was coronary heart disease (CHD) rather than MACE. Additionally, we did not specify analyses based on replacement of dietary fats with other fat modifications. Accordingly, Hooper et al. did not show a significant risk reduction in CHD events with reduced SFA intake (RR, 0.87 [0.74-1.03]; Analysis 2.5), which was consistent with our results.
However, for the sake of argument, we reanalyzed MACE using a random-effects (PM method) model with the data from Analysis 2.5 in Hooper et al. to revalidate our conclusion regarding the lack of cardiovascular benefit with reduced SFA (RR, 0.80 [0.62-1.02]; Figure 1.A). Furthermore, and interestingly, the argument of an RR reduction from replacing SFA with PUFA (RR, 0.73 [0.58-0.92]) versus replacement with monounsaturated fatty acids, carbohydrate, or protein hangs on a P value for interaction of 0.14 (Table 9; Hooper et al.). In the same framework, and consistent with our report, Liyanage et al. showed no significant benefit of the Mediterranean diet on CHD (RR, 0.73 [0.51-1.05]). Again, we reanalyzed stroke and MACE, and the results remained nonsignificant for the Mediterranean diet (stroke: RR, 0.65 [0.39-1.11]; MACE: RR, 0.48 [0.10-2.28]; Figure 1.B and C, respectively). That said, we did compare our results with prior meta-analyses and acknowledged, where necessary, that changes in results were due to different statistical methodology. We also acknowledged limitations of the included RCTs, including the variable duration of follow-up across studies.
Finally, Aspry and colleagues suggested that we ignored the wealth of observational data in nutritional science. We purposefully excluded epidemiological studies because of their lack of a priori hypotheses, residual confounding and incomplete adjustment for confounders, and recall bias in dietary measurement, all of which can lead to seriously inaccurate conclusions (5). We believe that, in the presence of quality RCTs, shaping guidelines based on observational studies can be extremely misleading, and our umbrella review was an attempt to fill this evidence-free zone (6).
REFERENCES
1. Khan SU, Khan MU, Riaz H, Valavoor S, Zhao D, Vaughan L, et al. Effects of Nutritional Supplements and Dietary Interventions on Cardiovascular Outcomes: An Umbrella Review and Evidence Map. Ann Intern Med. 2019;171(3):190-8.
2. Hooper L, Martin N, Abdelhamid A, Davey Smith G. Reduction in saturated fat intake for cardiovascular disease. Cochrane Database Syst Rev. 2015;(6):CD011737.
3. Turpeinen O, Karvonen MJ, Pekkarinen M, Miettinen M, Elosuo R, Paavilainen E. Dietary prevention of coronary heart disease: the Finnish Mental Hospital Study. Int J Epidemiol. 1979;8(2):99-118.
4. Liyanage T, Ninomiya T, Wang A, Neal B, Jun M, Wong MG, et al. Effects of the Mediterranean Diet on Cardiovascular Outcomes-A Systematic Review and Meta-Analysis. PLoS One. 2016;11(8):e0159252.
5. Ioannidis JPA. The Challenge of Reforming Nutritional Epidemiologic Research. JAMA. 2018;320(10):969-70.
6. Nissen SE. U.S. Dietary Guidelines: An Evidence-Free Zone. Ann Intern Med. 2016;164(8):558-9.
Figure legends
Figure 1. Effects of Reduced Saturated Fat (SFA) Intake on Major Adverse Cardiovascular Events (MACE), and of the Mediterranean Diet on MACE and Stroke
Sensitivity Analyses on Cardiovascular Effects of Nutritional Supplements and Dietary Interventions: Taking A Fresh Look
Accordingly, we performed sensitivity analyses for all interventions and outcomes where k ≥ 2 but < 5 and τ² = 0, using the modified HKSJ with the PM method. Statistical analyses were conducted using the “meta” commands in Stata version 16. Consistent with the already published statistical plan (6), statistical significance was set at 0.05 and effect sizes were reported as risk ratios (RRs) with 95% CIs. We used the I² statistic to estimate the extent of unexplained heterogeneity; I² greater than 50% was considered a high degree of between-study heterogeneity.
Twenty-nine estimates were re-analyzed. As expected, the CIs widened with the adjustment. Two estimates changed from protective to nonsignificant: 1) all-cause mortality with a low-salt diet in normotensive participants (RR, 0.90 [0.34, 2.36]; low certainty of evidence); and 2) cardiovascular mortality with a low-salt diet in hypertensive patients (RR, 0.67 [0.28, 1.64]; low certainty of evidence). There was no important change in conclusions or certainty of evidence for the other 27 estimates (Table).
Sensitivity Analyses and Implications for Outcomes
These sensitivity analyses further validate the lack of cardiovascular effects of various nutritional supplements and dietary interventions. The editorialists, Drs. Pandey and Topol, highlighted the controversial finding of better mortality outcomes with a low-salt diet in view of the limited evidence (7). We also discussed the inconsistent data supporting the cardiovascular benefits of a low-salt diet (6). Therefore, in view of these sensitivity analyses, the certainty of evidence regarding the cardiovascular effects of reduced salt intake should be downgraded. Essentially, our updated findings support the 2019 National Academies consensus study report that there is insufficient evidence to establish estimated average requirements (EARs) or recommended dietary allowances (RDAs) for sodium (8). Meta-analyses with a small number of included studies remain a challenge. It is a difficult decision to choose between the a priori selected model and a new, post hoc, data-driven model. In this case, the new model likely yields the more conservative and appropriate results.
References
1. Veroniki AA, Jackson D, Bender R, Kuss O, Langan D, Higgins JPT, et al. Methods to calculate uncertainty in the estimated overall effect size from a random-effects meta-analysis. Res Synth Methods. 2019;10(1):23-43.
2. Knapp G, Hartung J. Improved tests for a random effects meta-regression with a single covariate. Stat Med. 2003;22(17):2693-710.
3. Cornell JE, Mulrow CD. Choosing a Random-effects Estimator in the Post-DerSimonian-Laird Era. Ann Intern Med. 2019.
4. Kivelä JM. Does statistics in meta-analyses matter? Ann Intern Med. 2019.
5. Mayer M. Deciphering discrepancy in meta-analytic results: The model matters. Ann Intern Med. 2019.
6. Khan SU, Khan MU, Riaz H, Valavoor S, Zhao D, Vaughan L, et al. Effects of Nutritional Supplements and Dietary Interventions on Cardiovascular Outcomes: An Umbrella Review and Evidence Map. Ann Intern Med. 2019;171(3):190-8.
7. Pandey AC, Topol EJ. Dispense With Supplements for Improving Heart Outcomes. Ann Intern Med. 2019;171(3):216-7.
8. National Academies of Sciences, Engineering, and Medicine. Dietary Reference Intakes for Sodium and Potassium. Washington, DC: National Academies Press; 2019.