Research and Reporting Methods, 2 June 2015

The PRISMA Extension Statement for Reporting of Systematic Reviews Incorporating Network Meta-analyses of Health Care Interventions: Checklist and Explanations

    Abstract

    The PRISMA statement is a reporting guideline designed to improve the completeness of reporting of systematic reviews and meta-analyses. Authors have used this guideline worldwide to prepare their reviews for publication. In the past, these reports typically compared 2 treatment alternatives. With the evolution of systematic reviews that compare multiple treatments, some of them only indirectly, authors face novel challenges for conducting and reporting their reviews. This extension of the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-analyses) statement was developed specifically to improve the reporting of systematic reviews incorporating network meta-analyses.

    A group of experts participated in a systematic review, Delphi survey, and face-to-face discussion and consensus meeting to establish new checklist items for this extension statement. Current PRISMA items were also clarified. A modified, 32-item PRISMA extension checklist was developed to address what the group considered to be immediately relevant to the reporting of network meta-analyses.

    This document presents the extension and provides examples of good reporting, as well as elaborations regarding the rationale for new checklist items and the modification of previously existing items from the PRISMA statement. It also highlights educational information related to key considerations in the practice of network meta-analysis. The target audience includes authors and readers of network meta-analyses, as well as journal editors and peer reviewers.

    Systematic reviews and meta-analyses are fundamental tools for the generation of reliable summaries of health care information for clinicians, decision makers, and patients. Systematic reviews provide information on clinical benefits and harms of interventions, inform the development of clinical recommendations, and help to identify future research needs. In 1999 and 2009, respectively, groups developed the Quality of Reporting of Meta-Analyses (QUOROM) statement (1) and the Preferred Reporting Items for Systematic Reviews and Meta-analyses (PRISMA) statement (2, 3) to improve the reporting of systematic reviews and meta-analyses. Both statements have been widely used, and coincident with their adoption, the quality of reporting of systematic reviews has improved (4, 5).

    Systematic reviews and meta-analyses often address the comparative effectiveness of multiple treatment alternatives. Because randomized trials that evaluate the benefits and harms of multiple interventions simultaneously are difficult to perform, comparative effectiveness reviews typically involve many studies that have addressed only a subset of the possible treatment comparisons. Traditionally, meta-analyses have usually compared only 2 interventions at a time, but the need to summarize a comprehensive and coherent set of comparisons based on all of the available evidence has led more recently to synthesis methods that address multiple interventions. These methods are commonly referred to as network meta-analysis, mixed treatment comparisons meta-analysis, or multiple treatments meta-analysis (6–8). In recent years, there has been a notable increase in the publication of articles using these methods (9). On the basis of our recent overview (10) of reporting challenges in the field, as well as findings from our Delphi exercise involving researchers and journal editors, we believe that reporting guidance for such analyses is sorely needed.

    In this article, we describe the process of developing specific advice for the reporting of systematic reviews that incorporate network meta-analyses, and we present the guidance generated from this process.

    Development of the PRISMA Network Meta-analysis Extension Statement

    We followed an established approach for this work (11). We formed a steering committee (consisting of Drs. Hutton, Salanti, Moher, Caldwell, Chaimani, Schmid, Thorlund, and Altman); garnered input from 17 journal editors, reporting guideline authors, and researchers with extensive experience in systematic reviews and network meta-analysis; and performed an overview of existing reviews of the reporting quality of network meta-analyses to identify candidate elements important to report in network meta-analyses (10). We also implemented an online Delphi survey of authors of network meta-analyses in mid-2013 (215 invited; 114 responses [53% response rate]) by using Fluid Surveys online software (Fluidware, Ottawa, Ontario, Canada) to determine consensus items for which either a new checklist item or an elaboration statement would be required, and to identify specific topics requiring further discussion.

    Next, we held a 1-day, face-to-face meeting to discuss the structure of the extension statement, topics requiring further consideration, and publication strategy. After this meeting, members of the steering committee and some of the meeting participants were invited to contribute specific components for this guidance. All participants reviewed drafts of the report.

    Scope of This Extension Statement

    This document provides reporting guidance primarily intended for authors, peer reviewers, and editors. It may also help clinicians, technology assessment practitioners, and patients understand and interpret network meta-analyses. We also aim to help readers develop a greater understanding of core concepts, terminology, and issues associated with network meta-analysis.

    This document is not intended to be prescriptive about how network meta-analyses should be conducted or interpreted; considerable literature addressing such matters is available (6, 12–51). Instead, we seek to provide guidance on important information to be included in reports of systematic reviews that address networks of multiple treatment comparisons. For specific checklist items where we have suggested modification of instructions from the PRISMA statement, we have included examples of potential approaches for reporting different types of information. However, approaches other than those presented here may also be feasible.

    How to Use This Document

    This document describes modifications of checklist items from the original PRISMA statement for systematic reviews incorporating network meta-analyses. It also describes new checklist items that are important for transparent reporting of such reviews. We present an integrated checklist of 32 items, along with elaborations that demonstrate good reporting practice. The elaboration (Appendix) describes each item and presents examples for new or modified items. Although new items have been added in what was deemed the most logical place in the core PRISMA checklist, we do not prescribe an order in which these must be addressed. The elaboration also includes 5 boxes that highlight methodological considerations for network meta-analysis.

    The Table presents the PRISMA network analysis checklist that authors may use for tracking inclusion of key elements in reports of network meta-analyses. The checklist has been structured to present core PRISMA items and modifications of these items where needed, as well as new checklist items specific to network meta-analysis. Checklist items are designated "New Item" in the main text if they address a particular aspect of reporting that is novel to network meta-analyses; these are labeled S1 through S5. The heading "Addition" indicates discussion of an issue that was covered by the original PRISMA statement but requires additional considerations for reviews incorporating network meta-analyses. Examples with elaborations have been provided for checklist items in these 2 categories.

    Table. Checklist of Items to Include When Reporting a Systematic Review Involving a Network Meta-analysis

    What Is a Treatment Network?

    Systematic reviews comparing the benefits and harms of multiple treatments are more complex than those comparing only 2 treatments. To present their underlying evidence base, reviews involving a network meta-analysis commonly include a graph of the network to summarize the numbers of studies that compared the different treatments and the numbers of patients who have been studied for each treatment (Figure 1). This network graph consists of nodes (points representing the competing interventions) and edges (adjoining lines between the nodes that show which interventions have been compared among the included studies). The sizes of the nodes and the thicknesses of the edges in network graphs typically represent the amounts of respective evidence for specific nodes and comparisons. Sometimes, additional edges are added to distinguish comparisons that may be part of multigroup studies that compare more than 2 treatments.

    Figure 1. Overview of a network graph.

    A network graph presenting the evidence base for a hypothetical review of 4 interventions is shown. Treatments are represented by nodes and head-to-head studies between treatments are represented by edges. The sizes of edges and nodes are used to visually depict the available numbers of studies comparing interventions and the numbers of patients studied with each treatment.
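
    For readers who want to construct such a graph themselves, the following sketch (in Python, using the networkx and matplotlib packages; all treatment labels and counts are hypothetical) illustrates the convention of sizing nodes by the number of randomized patients and edges by the number of studies.

        # Minimal sketch of a network graph for a hypothetical 4-treatment network.
        # Node size is proportional to the number of randomized patients per treatment;
        # edge width is proportional to the number of studies per direct comparison.
        import matplotlib.pyplot as plt
        import networkx as nx

        patients = {"A": 1200, "B": 800, "C": 650, "D": 300}                    # hypothetical totals
        studies = {("A", "B"): 6, ("A", "C"): 4, ("B", "C"): 2, ("A", "D"): 1}  # hypothetical counts

        G = nx.Graph()
        for treatment, n in patients.items():
            G.add_node(treatment, patients=n)
        for (t1, t2), k in studies.items():
            G.add_edge(t1, t2, n_studies=k)

        pos = nx.circular_layout(G)
        nx.draw_networkx(
            G, pos,
            node_size=[G.nodes[t]["patients"] for t in G.nodes()],
            width=[G.edges[e]["n_studies"] for e in G.edges()],
        )
        plt.axis("off")
        plt.savefig("network_graph.png", dpi=300)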

    The graphs also allow readers to note particular features of the shape of a treatment network. This includes the identification of closed loops in the network; a closed loop is present in a treatment network when 3 or more comparators are connected to each other through a polygon, as in Figure 1 for treatments A, B, and C. This shows that treatments A, B, and C have all been compared against each other in existing studies, and thus each comparison in the closed loop (AB, AC, BC) is informed by both direct and indirect evidence (see the Box for definitions of direct and indirect evidence and Figure 2 for a graphical representation of terms in the Box).
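
    To make the distinction between direct and indirect evidence concrete, write $d_{XY}$ for the relative treatment effect of Y versus X (for example, a log odds ratio). In the closed loop of Figure 1, the comparison of B and C can be estimated indirectly from the A-versus-B and A-versus-C studies using the standard adjusted indirect comparison, whose variance is the sum of the variances of the two direct estimates:

        \hat{d}_{BC}^{\mathrm{ind}} = \hat{d}_{AC}^{\mathrm{dir}} - \hat{d}_{AB}^{\mathrm{dir}},
        \qquad
        \operatorname{Var}\bigl(\hat{d}_{BC}^{\mathrm{ind}}\bigr) = \operatorname{Var}\bigl(\hat{d}_{AC}^{\mathrm{dir}}\bigr) + \operatorname{Var}\bigl(\hat{d}_{AB}^{\mathrm{dir}}\bigr).

    Because B-versus-C studies also exist in this loop, the direct and indirect estimates can be compared with each other (to assess consistency) or combined (to yield a mixed estimate), as discussed under item S2.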

    Box. Terminology: Reviews With Networks of Multiple Treatments

    Figure 2. Graphical overview of the terminologies that are related to the study of treatment networks.

    Terms are discussed further in the Box. Top. Adjusted indirect treatment comparison of treatments B and C based on studies that used a common comparator, treatment A. Middle. A network of 8 treatments and a common comparator, with a mix of comparisons against the control treatment and a subset of all possible comparisons between active treatments. Bottom. A treatment network similar to that shown in the middle panel, but with study data available for an additional 4 comparisons in the network, which form closed loops.

    Discussion

    All phases of the clinical research cycle generate considerable waste, from the posing of irrelevant questions to the use of inappropriate study methods, poor reporting, and inadequate dissemination of the completed report. Poor reporting is not an esoteric issue: it can lead to the use of biased estimates of an intervention's effectiveness and thus affect patient care and decision making. Journals regularly publish new evidence regarding some aspect of inadequate reporting (52). Improving the completeness and transparency of research reporting is low-hanging fruit for reducing waste, which may partly explain the rise in the development of reporting guidelines (53, 54) and of such initiatives as the EQUATOR Network.

    The PRISMA statement was aimed at improving the reporting of traditional pairwise systematic reviews and meta-analyses; it has been endorsed by hundreds of journals and editorial groups. Some extensions have been developed, including PRISMA for reporting abstracts (55) and equity (56). Other extensions are in various stages of development, including those for individual patient–data meta-analyses and harms.

    Here, we describe a PRISMA extension for reporting network meta-analyses, which includes a 32-item checklist and flow diagram. This extension adds 5 new items that authors should consider when reporting a network meta-analysis, as well as 11 modifications to existing PRISMA items. Some of these are minor, whereas others are more complex, such as items 20 and 21, which ask authors to describe the results of individual studies and the corresponding syntheses thereof.

    For network meta-analysis, in which it is likely that more studies and treatments will be included compared with traditional pairwise reviews, this added reporting might require authors to prepare several supplemental files as part of the manuscript submission process. Journal editors will need to make allowances for these additional materials.

    Certain modifications included in some of the checklist items (for example, assessment of model fit, rationale for lumping of interventions, and presentation of tabulated study characteristics) involve considerations that are equally applicable to traditional meta-analyses of 2 treatments. Although it could be suggested that these do not warrant listing as modifications, we believe this is worthwhile; several of these items were not explicitly addressed in the PRISMA statement and could be more commonly encountered when dealing with networks of treatments. Several coauthors of this reporting guidance are also members of the authorship team of the PRISMA statement and will bring these items forward when the PRISMA statement is updated in the future.

    Optimally, we would like journals to endorse this extension in much the same way they have done for the PRISMA statement. Endorsement is probably best achieved through unambiguous language in the journal's instructions to authors; example wording is provided in the Appendix.

    Endorsement is important, but it is less potent without implementation. At the simplest level, implementation can involve asking authors to populate the PRISMA network meta-analysis checklist with appropriate text from their report, and not accepting a submission unless this is provided. Some editors, particularly those of smaller journals, where most systematic reviews are published (57), may perceive endorsement and implementation as a barrier to receiving reports of network meta-analyses. There are few data to support this perception. Editors can promote reporting guideline endorsement and implementation as an important way to improve the completeness and transparency of what they publish (58, 59), thus upholding one of the central tenets of the Declaration of Helsinki (60). In addition, this will reduce waste in the reporting of research.

    There has been a steep upward trajectory in the publication of network meta-analyses (8, 9) and related methods research as the field rapidly gains momentum and interest. To help keep this PRISMA extension as up-to-date and evidence-based as possible, we invite readers to let us know about emerging evidence to help inform future updates.

    Appendix: The PRISMA Network Meta-analysis Extension Statement
    Title and Abstract
    Item 1: Title

    Addition

    Identify the report as including the evaluation of a network of multiple treatment comparisons (for example, "network meta-analysis").

    Examples

    Different combined oral contraceptives and the risk of venous thrombosis: systematic review and network meta-analysis. (61)

    Network meta-analysis on randomized trials focusing on the preventive effect of statins on contrast-induced nephropathy. (62)

    Elaboration

    Recent literature has documented the rapid increase in the publication of reviews incorporating networks of treatments and highlights a need to develop appropriate identification of such publications in literature databases (8). Consistent inclusion of the appropriate term in journal article titles will increase the ability to identify network meta-analyses.

    Item 2: Structured Summary

    Addition

    Guidance from the PRISMA statement is transferable to reviews incorporating network meta-analyses, although some additional considerations are worthy of inclusion. The abstract from a recent systematic review of treatments for prevention of asthma exacerbations by Loymans and colleagues (63) highlights these features.

    Examples

    Objective. To determine the comparative effectiveness and safety of current maintenance strategies in preventing exacerbations of asthma.

    Design. Systematic review and network meta-analysis using Bayesian statistics.

    Data Sources. Cochrane systematic reviews on chronic asthma, complemented by an updated search when appropriate.

    Eligibility Criteria. Trials of adults with asthma randomised to maintenance treatments of at least 24 weeks duration and that reported on asthma exacerbations in full text. Low dose inhaled corticosteroid treatment was the comparator strategy. The primary effectiveness outcome was the rate of severe exacerbations. The secondary outcome was the composite of moderate or severe exacerbations. The rate of withdrawal was analysed as a safety outcome.

    Results. 64 trials with 59,622 patient years of follow-up comparing 15 strategies and placebo were included. For prevention of severe exacerbations, combined inhaled corticosteroids and long acting β-agonists as maintenance and reliever treatment and combined inhaled corticosteroids and long acting β-agonists in a fixed daily dose performed equally well and were ranked first for effectiveness. The rate ratios compared with low dose inhaled corticosteroids were 0.44 (95% CrI 0.29 to 0.66) and 0.51 (0.35 to 0.77), respectively. Other combined strategies were not superior to inhaled corticosteroids and all single drug treatments were inferior to single low dose inhaled corticosteroids. Safety was best for conventional best (guideline based) practice and combined maintenance and reliever therapy.

    Conclusions. Strategies with combined inhaled corticosteroids and long acting β-agonists are most effective and safe in preventing severe exacerbations of asthma, although some heterogeneity was observed in this network meta-analysis of full text reports.

    Elaboration

    The inclusion of some additional information is worthwhile for systematic reviews that include network meta-analyses. The design or methods section of the structured abstract should mention that a network meta-analysis was conducted. Given that in some reviews treatment networks may be large and involve many pairwise comparisons between treatments, authors may summarize findings using estimates versus a particular treatment of interest (for example, the apparent "best" treatment, placebo, and so forth). When treatments are ranked by efficacy or safety (Appendix Box 1), it is also recommended that authors describe the relative effects. Selective focus on particular comparisons alone—for example, only those meeting statistical significance—should be avoided. Authors are also encouraged to briefly note any concerns (for example, violations of analytical assumptions as described in Appendix Boxes 2 and 3) that may have an important effect on the interpretation of findings.

    Appendix Box 1. Probabilities and Rankings in Network Meta-analysis
    Appendix Box 2. The Assumption of Transitivity for Network Meta-analysis
    Appendix Box 3. Network Meta-analysis and Assessment of Consistency of Findings
    Introduction
    Item 3: Rationale

    Addition

    Briefly state why consideration of a network of multiple treatments is essential to the review (63–66).

    Example

    Although progress has been achieved in the field and patients live longer, the relative merits of the many different chemotherapy and targeted treatment regimens are not well understood. Hundreds of trials have been conducted to compare treatments for advanced breast cancer, but because each has compared only two or a few treatments, it is difficult to integrate information on the relative efficacy of all tested regimens. This integration is important because different regimens vary both in cost and in toxicity. Therefore, we performed a comprehensive systematic review of chemotherapy and targeted treatment regimens in advanced breast cancer and evaluated through a multiple-treatments meta-analysis the relative merits of the many different regimens used to prolong survival in advanced breast cancer patients. (67)

    Elaboration

    Authors should briefly clarify to readers why a systematic review using a network meta-analysis approach was chosen to answer the research question. Possible rationales may include a lack of head-to-head randomized trials comparing treatments of interest, or the need to assess several treatments in developing a clinically meaningful understanding of the relative effectiveness or harms of different treatment options.

    Item 4: Objectives

    Guidance from the original PRISMA statement applies. State the research question being addressed in the systematic review in terms of the PICOS criteria (population, intervention, comparators, outcome[s], study design).

    Methods
    Item 5: Protocol and Registration

    Guidance from the original PRISMA statement applies. The protocol for the review should be registered.

    Item 6: Eligibility Criteria

    Addition

    The PRISMA statement outlines that authors provide a description of essential study characteristics (for example, PICOS details and duration of follow-up) and report characteristics (such as eligible publication years and eligible publication languages) that were used as eligibility criteria for the review. In network meta-analyses, authors should also clearly describe inclusion and exclusion criteria for treatment regimens (that is, nodes) and should provide justification when treatment nodes are merged to form single comparators (a practice sometimes described as "lumping" of interventions; see example below). Authors should describe the included treatments and adherence to and assessment of the transitivity assumption (Appendix Box 2).

    Example: Lumping of Interventions

    Our analysis classified fluids as crystalloids (divided into balanced and unbalanced solutions) and colloids (divided into albumin, gelatin, and low- and high-molecular weight hydroxyethyl starch [HES] [threshold molecular weight, 150 000 kDa]). We considered fluid balanced if it contained an anion of a weak acid (buffer) and its chloride content was correspondingly less than in 0.9% sodium chloride. The relevant analyses were a 4-node NMA [network meta-analysis] (crystalloids vs. albumin vs. HES vs. gelatin), a 6-node NMA (crystalloids vs. albumin vs. HES vs. gelatin, with crystalloids divided into balanced or unbalanced and HES divided into low or high molecular weight), and a conventional direct frequentist fixed effects meta-analytic comparison of crystalloids versus colloids. (68)

    Elaboration

    Often, one has to decide whether to lump or split treatments—that is, whether to combine different doses of the same drug, alternative forms of administration of the same drug, varying durations of administration, or different controls. Lumping requires treatments to have similar treatment effects, and although this technique is appropriate in some cases, it should be supported by a clear rationale when performed.

    Specification of the patient and study characteristics of interest should also be clarified in this section. Although this remains similar to guidance from the PRISMA statement, it is important to provide additional detail with regard to the interventions and comparators included to define the network structure. For example, older "legacy" treatments may no longer be considered relevant if they have been abandoned in clinical practice; however, their inclusion in the treatment network may be useful if they introduce connections to other treatments that are of primary interest. The most common example would be the inclusion of placebo, an intervention that will increase the amount of information available for many networks (32).

    Issues of transitivity (that is, the existence of comparable distributions of patient characteristics across studies in the treatment network [Appendix Box 2]) can be discussed when describing eligibility criteria. Ideally, all evidence comparing relevant interventions in the target population of interest should be included in order to provide clinically useful results. However, the larger the network, the more likely it becomes that some of its pieces may not be exchangeable, owing to important differences in effect-modifying factors (for example, specific patient population or study design features); that is, the assumption of transitivity may become more difficult to defend. Accordingly, authors are encouraged to report relevant information on potentially influential patient and study characteristics to inform readers' judgments about the assumption of transitivity. Arguments in favor of defining the evidence base in a way that maximizes the plausibility of transitivity have been outlined elsewhere (18, 69); however, these arguments are not shared by all meta-analysts. Known and well-validated effect modifiers are sparse in the medical literature, and therefore many meta-analysts feel that it is important to be maximally inclusive and allow the meta-analysis to explore for the presence of differences in effect sizes due to differences in potential effect modifiers.

    Item 7: Information Sources

    Guidance from the PRISMA statement regarding description of the information sources for a systematic review remains relevant for the reporting of network meta-analyses.

    Item 8: Search

    Guidance from the original PRISMA statement applies.

    Item 9: Study Selection

    Guidance from the original PRISMA statement applies.

    Item 10: Data Collection Process

    Guidance from the original PRISMA statement applies.

    Item 11: Data Items

    The guidance provided in the PRISMA statement remains applicable for network meta-analyses. Authors may also report whether additional information regarding possible effect modifiers was collected. This may be especially important in network meta-analyses involving interventions whose corresponding evidence base spans a broad time frame where co-interventions (or other aspects of care), diagnostic criteria, or other aspects of the patient population may have changed over time. Providing clarity of such information to readers will enhance their ability to appraise the validity of the network meta-analysis.

    Item S1 (New Item): Review of Network Geometry

    Describe the methods used to evaluate the geometry of the network of evidence and potential biases related to it. This should include how the evidence base has been graphically summarized.

    Example

    We analyzed published and unpublished randomized trials performed in patients with pulmonary hypertension. At the level of drug classes, we examined whether head-to-head comparisons are between agents in the same class or between agents in different classes. At the level of companies, we examined whether trials involve only agents (as active comparators or backbones) owned by the same company, or include treatments by different companies. In the networks of drug comparisons, each drug is drawn by a node and randomized comparisons between drugs are shown by links between the nodes. When a drug is compared against the same agent in different dose or formulation, this is represented by an auto-loop. In the networks of companies, nodes stand for companies and auto-loops around these nodes represent trials involving agents of a single company. Links between different nodes characterize trials comparing agents that belong to different companies. (70)

    Elaboration

    This new checklist item recommends that authors reporting network meta-analyses should evaluate the geometry (71) of the network (Appendix Box 4). Generation of a network graph is important and is of considerable help in reviewing network geometry. The graphical representation of all comparisons can help to determine whether a network meta-analysis is feasible (for example, whether the network of interventions is connected), and whether the network contains closed loops of treatments such that inconsistency (disagreement between the effects estimated from direct and indirect sources) can be assessed (Appendix Box 3) (72–74). The assessment of geometry can be qualitative (that is, a narrative summary of these features) and can optionally be supplemented with quantitative measures described elsewhere (33, 75) (Appendix Box 4).
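
    As an illustration of one such quantitative measure, the sketch below (Python; the counts are hypothetical) computes a simple diversity index, the probability of interspecific encounter (PIE), from the number of trials in which each treatment appears; this is offered only as an example of the kind of metric described in the cited literature, not as a required analysis.

        # Hedged sketch: diversity of a treatment network via the PIE index.
        # Values near 1 indicate trials spread evenly across treatments;
        # values near 0 indicate that a few treatments dominate the network.
        def pie_index(counts):
            n = sum(counts)
            if n < 2:
                raise ValueError("need at least 2 observations")
            return (n / (n - 1)) * (1 - sum((c / n) ** 2 for c in counts))

        trials_per_treatment = [13, 13, 5, 3, 2]   # hypothetical counts of trials per treatment
        print(round(pie_index(trials_per_treatment), 2))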

    Appendix Box 4. Network Geometry and Considerations for Bias

    Considerations can be made to address networks according to classes based on mechanism of action, line of treatment, sponsorship (as described in the above example), or other factors that may reflect biases in the choice of treatment comparisons made. For example, drug sponsors have little incentive to compare agents other than those they manufacture (76–78); drug treatments may not be compared against surgical or invasive treatments because they are used by different specialists (79); and first-line treatments for conditions such as neglected tropical diseases may not be adequately compared against each other (73).

    Item 12: Risk of Bias in Individual Studies

    As in the original PRISMA statement, researchers are encouraged to describe the level of assessment for each included study (at the study level itself, or for each outcome within the study) and the assessment tool used (for example, the Cochrane risk of bias tool [80]). They should also mention how findings from risk of bias assessments will be used to inform data analyses and interpretation.

    Item 13: Summary Measures

    Addition

    As outlined in the PRISMA statement, the chosen summary measures of effect to express comparisons between interventions (for example, odds ratios or mean differences) should be specified (81). Because the number of included studies can be considerably larger in network meta-analyses than in traditional meta-analyses, and because a single analysis can generate considerably more pairwise comparisons, modified approaches to summarize findings may be required and should be mentioned in the methods section of the review (see item 21, which includes examples of treatment-level forest plots, league tables, and others). Additional summary measures of interest, such as treatment rankings or surface under the cumulative ranking curve (Appendix Box 1), may be described in the main text or supplements as deemed appropriate. Guidance about how to draw interpretations for all summary measures should be provided.

    Example

    For each pairwise comparison and each outcome at each time point, we used odds ratios (OR) with 95% confidence intervals (95% CIs) as a measure of the association between the treatment used and efficacy. As the outcomes are negative, ORs >1 correspond to beneficial treatment effects of the first treatment compared with the second treatment.

    As a measure that reflects ranking and the uncertainty, we used the Surface Under the Cumulative RAnking curve (SUCRA) as described in Salanti 2011. This measure, expressed as percentage, showed the relative probability of an intervention being among the best options. (82)

    Elaboration

    Conventional effect measures (such as mean differences and odds ratios) that are also used in pairwise meta-analysis are the primary measures of comparative efficacy between pairwise comparisons of interventions. These should be reported with an associated measure of uncertainty, typically 95% CIs for frequentist analyses and 95% credible intervals (CrIs) for Bayesian analyses. An additional output of network meta-analysis may be a relative ranking of the competing interventions included in the meta-analysis. If authors include rankings, they need to describe the approaches and measures used to rank the treatments and how findings based on these measures are interpreted (Appendix Box 1). Reviewers who evaluate more than 1 outcome are encouraged to report the relative ranking for every outcome.

    Similar to guidance from the PRISMA statement, authors should keep in mind that differences in relative effects do not necessarily imply clinical or policy relevance. As such, reporting absolute differences alongside relative measures of effect may aid in interpretation of findings. Regarding probabilities associated with treatment rankings, authors are encouraged to report not only the probability of each intervention being best, but also a more complete presentation of rankings that includes the probability of being second best, third best, and so forth. This provides a picture of the uncertainty associated with the rankings.
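
    To illustrate how ranking probabilities and SUCRA values (Appendix Box 1) can be derived in practice, the sketch below (Python; the draws are simulated and purely hypothetical) computes the probability of each rank and the surface under the cumulative ranking curve from posterior samples of relative effects, assuming that lower values indicate better treatments.

        import numpy as np

        # Hypothetical posterior draws of relative effects vs. a common reference
        # (rows = draws, columns = treatments); lower values assumed better.
        rng = np.random.default_rng(1)
        draws = rng.normal(loc=[0.0, -0.4, -0.1], scale=0.15, size=(4000, 3))

        K = draws.shape[1]
        ranks = draws.argsort(axis=1).argsort(axis=1) + 1                  # rank 1 = best in each draw
        rank_prob = np.array([(ranks == r).mean(axis=0) for r in range(1, K + 1)])

        # SUCRA_k = average of the cumulative rank probabilities over ranks 1..K-1
        sucra = rank_prob.cumsum(axis=0)[:-1].mean(axis=0)

        print("P(best):", rank_prob[0].round(2))
        print("SUCRA  :", sucra.round(2))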

    Item 14: Planned Methods of Analysis

    Addition

    Although much of the guidance from the PRISMA statement applies, additional information is needed to enable complete understanding or replication of a network meta-analysis. In addition, the PRISMA statement did not discuss the reporting of considerations for Bayesian meta-analyses (Appendix Box 2).

    Example

    The network meta-analysis was based on a bayesian random effects Poisson regression model, which preserves randomised treatment comparisons within trials. The model uses numbers of patients experiencing an event and accumulated patient years to estimate rate ratios. The specification of nodes in the network was based on the randomised intervention or in case of strategy trials, such as COURAGE [Clinical Outcomes Utilizing Revascularization and Aggressive Drug Evaluation] or FAME-2 [Fractional flow reserve versus Angiography for Multi-Vessel Evaluation], on the intervention received by the majority of patients in a trial arm. Analyses were performed using Markov-Chain Monte-Carlo methods. The prior distribution for treatment effects was minimally informative: a normal distribution with a mean of 1 and a 95% reference range from 0.01 to 100 on a rate ratio scale. The prior for the between trial variance τ2, which we assumed to be equal across comparisons, was based on empirical evidence derived from semi-objective outcomes of head to head comparisons: a log normal distribution with a geometric mean of τ2 of 0.04 and a 95% reference range from 0.001 to 1.58. Rate ratios were estimated from the median and corresponding 95% credibility intervals from the 2.5th and 97.5th centiles of the posterior distribution. Convergence was deemed to be achieved if plots of the Gelman-Rubin statistics indicated that widths of pooled runs and individual runs stabilised around the same value and their ratio was around 1. (83)

    Elaboration

    Many network meta-analyses to date have used Bayesian methods for 2 reasons. First, much of the initial development of the technique (as well as related software) used a Bayesian approach. Second, Bayesian methods are often practical in complex or sparse data problems when non-Bayesian (frequentist) methods are not. Recently, statisticians have implemented non-Bayesian techniques in statistical software packages, such as Stata and R (84, 85). It is important to justify the assumptions made for the analyses, whatever inferential method is used.

    Regardless of the chosen approach, it is important to check that the model fits the data well. Bayesian models often make use of the deviance information criterion to compare models and assess overall goodness of fit (86). Non-Bayesian models often use hypothesis tests based on deviance statistics. Users of Bayesian models must describe and justify the prior distributions used and describe the method by which they checked for convergence of the Markov chain if using a Markov-chain Monte Carlo simulation of the posterior distribution (86, 87). Authors are also encouraged to report on additional considerations, including whether arm-based or contrast-based analyses are used, whether study effects are considered to be fixed or random, and so forth.
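
    To illustrate the contrast-based structure that underlies many of these models, the following sketch (Python; hypothetical two-arm studies only, fixed-effect, inverse-variance weighting) shows how a design matrix maps each observed comparison onto the basic parameters (effects of each treatment versus a common reference). It is intended only as a teaching aid and is not the Bayesian Poisson regression model quoted in the example above.

        import numpy as np

        # Hypothetical study-level log odds ratios (second vs. first listed treatment) and variances.
        comparisons = [("A", "B"), ("A", "B"), ("A", "C"), ("B", "C")]
        y = np.array([-0.50, -0.30, -0.20, 0.15])
        v = np.array([0.04, 0.06, 0.05, 0.08])

        basic = ["B", "C"]                          # basic parameters: effects vs. reference A

        # Design matrix: each observed contrast expressed in terms of the basic parameters.
        X = np.zeros((len(y), len(basic)))
        for i, (t1, t2) in enumerate(comparisons):
            if t2 != "A":
                X[i, basic.index(t2)] += 1.0
            if t1 != "A":
                X[i, basic.index(t1)] -= 1.0

        W = np.diag(1.0 / v)                        # inverse-variance weights (fixed-effect)
        cov = np.linalg.inv(X.T @ W @ X)
        d_hat = cov @ X.T @ W @ y                   # pooled log ORs of B and C vs. A
        se = np.sqrt(np.diag(cov))

        for name, est, s in zip(basic, d_hat, se):
            print(f"d(A,{name}): {est:.2f} (SE {s:.2f})")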

    Item S2 (New Item): Assessment of Inconsistency

    When performing a network meta-analysis, we rely on the assumption of consistency of treatment effects (that is, the equivalency of treatment effects from direct and indirect evidence [Appendix Box 3]) across the different comparisons in the network.

    Example

    Consistency was mainly assessed by the comparison of the conventional network meta-analysis model, for which consistency is assumed, with a model that does not assume consistency (a series of pairwise meta-analyses analysed jointly). If the trade-off between model fit and complexity favoured the model with assumed consistency, this model was preferred. Moreover, we calculated the difference between direct and indirect evidence in all closed loops in the network; inconsistent loops were identified with a significant (95% CrI that excludes 0) disagreement between direct and indirect evidence. A loop of evidence is a collection of studies that links treatments to allow for indirect comparisons; the simplest loop is a triangle formed by three direct comparison studies with shared comparators. (88)

    Elaboration

    It is generally recommended to evaluate the consistency assumption by using both global and local approaches (Appendix Box 3). At the network level, one can check this assumption statistically by fitting a pair of related network meta-analysis models and comparing how well they fit to the data: one analysis wherein the model assumes consistency of direct and indirect evidence, and a second where the model does not make this assumption. Deviance information criteria can be used as mentioned earlier to consider model fit (19). If the models have a similar fit to the data, one can argue that consistency seems to hold.

    To judge local consistency for particular contrasts of interventions that are part of a closed loop, one can use the method of Bucher and colleagues (89), or the node (or edge) splitting models presented by Dias and associates (16). The method of Bucher and colleagues assesses inconsistency in every available closed loop in the network separately and tests whether differences in treatment effects from direct and indirect evidence are present. Providing readers with a description of findings from an investigation to explore for inconsistency in the treatment network is important in order to shed light on the appropriateness of the assumption of consistency of evidence, which has implications for determining strength of confidence in the overall findings.
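
    As a minimal illustration of the approach of Bucher and colleagues for a single closed loop, the sketch below (Python; all estimates are hypothetical log odds ratios) derives the indirect estimate of C versus B through the common comparator A and contrasts it with the direct estimate using an approximate z-test.

        import math

        # Hypothetical direct estimates (log odds ratios) and their variances.
        d_AB, var_AB = -0.40, 0.05   # B vs. A
        d_AC, var_AC = -0.10, 0.06   # C vs. A
        d_BC, var_BC = 0.20, 0.07    # C vs. B (direct)

        # Indirect estimate of C vs. B through the common comparator A.
        d_BC_ind = d_AC - d_AB
        var_ind = var_AB + var_AC

        # Inconsistency factor: difference between direct and indirect evidence.
        diff = d_BC - d_BC_ind
        se_diff = math.sqrt(var_BC + var_ind)
        z = diff / se_diff
        p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))   # two-sided p value

        print(f"indirect log OR (C vs. B): {d_BC_ind:.2f}")
        print(f"inconsistency factor: {diff:.2f} (SE {se_diff:.2f}), z = {z:.2f}, p = {p:.3f}")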

    Item 15: Risk of Bias Across Studies

    Guidance from the PRISMA statement applies. Authors should describe efforts taken to assess the risk of bias of included studies that may affect the cumulative evidence under study. Classical methods used to assess the risk of bias of included studies, such as use of the Cochrane risk of bias tool, remain relevant and should be considered for each pairwise comparison in the treatment network; traditional approaches to presenting this information, as well as emerging approaches, such as color representation of bias in network diagrams (90), are possible. Given the complex structure of a network, identification of publication bias is more complex in a network meta-analysis owing to limited numbers of studies for each pairwise comparison, heterogeneity, and other limitations. Methods have been proposed that extend tests used in pairwise meta-analyses (for example, tests for asymmetry and excess significance) to the network setting. The applicability of tests that evaluate the entire network has to be carefully considered in each network (91–93).

    Item 16: Additional Analyses

    Addition

    The PRISMA statement notes that authors should describe all additional analyses that are performed to elucidate the robustness of primary findings, including meta-regressions, subgroup analyses, and sensitivity analyses. These and other efforts undertaken to establish the robustness of findings of a network meta-analysis should be described.

    Examples

    We considered how decisions to group glaucoma treatments could affect the transitivity assumption and interpretation of the analysis. (27) [See Appendix Figure 1.]

    Appendix Figure 1. Example figures: alternative geometries of a network of interventions for glaucoma.

    Example of alternative geometries of a treatment network for the treatment of glaucoma based on the splitting (A) versus lumping (B) of treatment regimens in the treatment network. A sensitivity analysis considering alternative geometries should be considered when lumping treatment nodes. Depending on quantity, results may be best in appendices. APRAC = apraclonidine; BETAX = betaxolol; BIMAT = bimatoprost; BRIM = brimonidine; BRIN = brinzolamide; CART = carteolol; DOR = dorzolamide; NO TRT = no treatment; NR = not reported; LATAN = latanoprost; LEVO = levobunolol; PL = placebo; TIMO = timolol; TRAV = travoprost.

    We a priori had selected allocation concealment, assessor blinding, treatment fidelity and imputation of numbers of responders as potentially important effect modifiers to be examined in sensitivity analyses to limit the included studies to those at low risk of bias. We conducted additional meta-regression analyses using random effects network meta-regression models to examine potential effect moderators such as the mean age of participants, the type of rating scales (clinician-rated versus self-rated), publication status (published versus dissertation), and therapy format (individual vs group). (94)

    Random effects network meta-analyses with informative priors for heterogeneity variances were conducted for the analyses. We also conducted fixed and random effects models with vague priors. (95)

    Elaboration

    Various types of sensitivity analyses may be conducted to study the robustness of findings from a network meta-analysis. For example, network meta-analysis may be conducted by using alternative formulations of the treatment network, as in the example above. These analyses may potentially change clinical interpretations. If analyzed with Bayesian models, results may be sensitive to the specification of prior distributions, particularly for variance parameters (39). Sensitivity and subgroup analyses, as well as meta-regression models adjusting for covariates (34), also can affect findings. These alternative models should be described and the sensitivity of results to them reported. Although these analyses should be noted in the main text, the results may, if extensive, need to be reported in supplements.

    The treatments of interest in a network meta-analysis should be specified a priori. However, peripheral treatments may be included if, for example, they are a standard reference treatment not of direct interest that can connect an otherwise sparse network. Empirical evidence suggests that inclusion or exclusion of treatment nodes can affect estimates and treatment rankings (32).

    Unless there is a clinical or analytical requirement in reference to the PICOS summary of the research question, the primary analysis should be restricted to specific doses of treatments and cotreatments. This is because lumping of different doses or cotreatments can introduce heterogeneity and inconsistency (96). However, as described earlier, where a class effect may be assumed or different doses are considered to have the same efficacy, sensitivity analyses that take the alternative geometries into account should be reported.

    Meta-regression analyses and subgroup analyses represent commonly used approaches to evaluating the effect of potential effect modifiers in traditional meta-analyses and remain applicable for network meta-analyses. The existing literature surveys methods for performing meta-regression analyses by using study-level covariates in network meta-analysis (20, 44), whereas subgroup analyses addressing the effect of effect modifiers, such as study-specific risk of bias (for example, low versus moderate to high risk of bias) or date of publication (for example, publication before versus after a particular year of interest), can be performed by repeating the analysis after limiting the network to include only studies meeting the criteria of interest.

    Bayesian analyses should address choice of the prior distribution by reporting sensitivity analyses, particularly for variance parameters, which often have a large effect on results (39, 97).

    Results
    Item 17: Study Selection

    As noted in the original PRISMA statement, there should be clear specification of the number of studies screened from the literature search, screened for eligibility from full-text reports, and subsequently included in the systematic review, with a corresponding flow diagram to summarize the study selection process.

    Item S3 (New Item): Presentation of Network Geometry

    A network meta-analysis comparing all interventions of interest forms a network of treatments that are connected to each other on the basis of the pattern of comparisons made among the trials included in the review. The treatment comparisons for which trial data exist for an outcome of interest should be presented and summarized in a graph that enables readers to easily appraise the structure of existing evidence.

    Example

    Appendix Figure 2 shows a network graph comparing antipsychotic agents for prevention of schizophrenia relapse (12).

    Appendix Figure 2. Example figure: presentation of network graph on antipsychotics for schizophrenia relapse.

    The size of treatment nodes reflects the number of patients randomly assigned to each treatment. The thickness of edges represents the number of studies underlying each comparison.

    Elaboration

    Figure 1 shows a generic example of a network graph that introduces its use as a visualization tool. The network graph in Appendix Figure 2 shows the evidence base comparing 9 treatments for prevention of relapse of schizophrenia (12). As mentioned earlier, the size of the treatment nodes reflects the proportionate numbers of patients randomly assigned to each of the treatments, whereas edge thickness indicates the number of studies supporting each comparison. Such visualizations can be generated by using statistical software, such as Stata and R (90), and can provide readers with insights on the evidence base under study (that is, the network geometry); these insights are discussed in items S1 and S4 and Appendix Box 4. It is optimal to illustrate these figures with as few overlapping lines as possible in order to facilitate interpretations regarding the network geometry. Network graphs can provide insight into parts of the evidence base that are informed by small versus large amounts of data, and thus can inform the consideration of interventions that may benefit from further research in terms of accumulating additional evidence.

    In cases where the network is small (for example, networks of 3 treatments for which data are present for all comparisons), provision of a table of the data and a short narrative description may be sufficient. Proportionate sizing of nodes and edges in a network diagram may not be desirable in cases where there are large divergences in the numbers of patients and studies across interventions, because they may produce network graphs that are difficult to interpret.

    Item S4 (New Item): Summary of Network Geometry

    Provide a summary of the structure of the evidence base constructed from study selection.

    Example

    A total of 2,545 pulmonary hypertension patients received active pulmonary hypertension medication. The studied agents were more commonly bosentan (n = 13 trials; patients receiving treatment = 633) and sildenafil (n = 13 trials; patients receiving treatment = 593). Placebo was used as the comparator arm in 38 studies (patients receiving placebo = 1,643). Of the patients that received placebo, 52 participants were part of crossover studies with sildenafil. The most frequently used comparisons were bosentan versus placebo (n = 11) and sildenafil versus placebo (n = 11). Studies that used placebo as the comparator arm (n = 38) were for the most part sponsored by the pharmaceutical company that owned the product (n  = 28 studies [74%]). The only two published head-to-head comparisons of different medications (sildenafil against bosentan) were not sponsored by pharmaceutical companies, but by the British Heart Foundation and the Italian Health Authority. (70)

    Elaboration

    Features of the network geometry, such as a lack of information in relation to specific treatments and comparisons in the network, should be described. Evaluations of network geometry may suggest specific biases related to the choice of treatments to be tested, their preferred (or avoided) comparisons, the effect of sponsoring on the selection of treatments and comparisons, and other biases that might affect the geometry of the network. These biases may have important implications for the strength of interpretation of the evidence.

    Authors may choose whether to report specific measures of geometry described in Appendix Box 4. The graphical presentation of a network (for example, Appendix Figure 3) can be supplemented with a table (or text) describing the number of patients, number of studies, and number of events for each comparison or node. In instances where there are low numbers of events or low power, results should be interpreted with caution (38). In these instances, alternative network configurations may be considered (for example, lumping of interventions). Additional empirical work to clarify the role of network structure for interpreting findings from network meta-analyses is likely to be helpful and may lead to more specific reporting guidance in the future.

    Appendix Figure 3. Example figure: network geometry of published and unpublished randomized studies on U.S. Food and Drug Administration–approved medications for pulmonary hypertension.

    Each intervention is shown by a circular node, with the same color used to group interventions which belong to the same drug class. An auto-loop represents studies where different doses of the same medication have been compared. IV = intravenous; SC = subcutaneous.

    Item 18: Study Characteristics

    As reflected in the PRISMA statement, authors should present the characteristics of all included trials (PICOS-related information, study time frame, sample size, patient demographics) in the systematic review. This still applies to reviews that evaluate a network of treatments. This is commonly accomplished through both a summary in the main text and tables that provide detailed information for all included studies. Authors may wish to structure information tables by using subheadings such that subgroups of trials included in the treatment network are presented together (for example, all A versus B trials, then all A versus C trials). Authors should especially try to report effect modifiers collected to monitor for variations in treatment effects that may have arisen owing to broad time frames of research, because these may be particularly important in judging the appropriateness of the transitivity assumption.

    Because systematic reviews incorporating network meta-analyses will often include data from studies of many different comparisons and many studies, authors should plan to make use of supplemental appendices (as described in a section below) in order to provide readers with adequate information for review of study characteristics.

    Item 19: Risk of Bias Within Studies

    As outlined in the PRISMA statement, we recommend that findings from risk of bias assessment of the included studies be reported at the level of the individual study, and not only in terms of aggregate counts of studies at lower or higher risk of bias. A summary of bias assessments presented in a table or graph format remains most convenient, and because network meta-analyses commonly include a large number of studies, this may be most simply summarized in an online supplemental appendix to the main report. An additional consideration may be to also present a network graph incorporating risk of bias coloring, as is commonly used in Cochrane systematic reviews (for example, where green indicates low risk of bias, red high risk of bias, and yellow unclear risk of bias) to demonstrate the perceived level of risk within different parts of the treatment network (90).

    Item 20: Results of Individual Studies

    Addition

    The PRISMA statement recommends that for each outcome studied, the summary outcome data for each study's intervention groups (such as number of events and sample size for binary outcomes, and mean, standard deviation, and sample size for continuous outcomes) be provided. Use of a forest plot is recommended as ideal for traditional meta-analyses. Some modifications are needed for systematic reviews incorporating a network meta-analysis.

    Example

    The Appendix Table presents an example of one possible approach to provision of data on mortality observed with five different interventions for treatment of left ventricular dysfunction (medical therapy, cardiac resynchronisation, implantable defibrillator, combined resynchronisation and defibrillator, and amiodarone) as described elsewhere (98).

    Appendix Table. Example Table: Presentation of Outcome Data, by Included Study*


    Elaboration

    For network meta-analyses in which many studies and many treatments may be considered, provision of outcome data at the study level in a forest plot or table in the main text may be unwieldy. Authors may alternatively report this information in one of several possible formats by using an online supplemental Web appendix (see the section on this topic below). This could include a table of data by study, provision of the data sets used for network meta-analyses (as shown in the example), or provision of forest plots that may have been prepared to study information within each of the edges of the treatment network. The sample tabular approach presented in the Appendix Table is intuitive; however, it can be inconvenient when dealing with many treatments or when outcomes are not counts, and varied approaches may be required.

    Item 21: Synthesis of Results

    Addition

    The PRISMA statement advocates reporting of the main results of the review, including findings from meta-analyses and the corresponding measures of heterogeneity. This guidance applies to reviews incorporating network meta-analyses, although some additions beyond conventional practice for pairwise meta-analysis are needed given the potentially sizable increase in the amount of data to present.

    Example

    Two examples of reporting of comparative treatment efficacy from a review comparing efficacy of treatments for multiple sclerosis with regard to progression of disability are presented in Appendix Figures 4 and 5 (82).

    Appendix Figure 4. Example figure: league table presenting network meta-analysis estimates (lower triangle) and direct estimates (upper triangle) of efficacy (disability progression over 36 months) of immunomodulators and immunosuppressants for multiple sclerosis.

    Treatments are reported in order of relative ranking for efficacy. Comparisons between treatments should be read from left to right, and their odds ratio is in the cell in common between the column-defining treatment and the row-defining treatment. Odds ratios less than 1 favor the column-defining treatment for the network estimates and the row-defining treatment for the direct estimates. IFN = interferon.

    Appendix Figure 5. Example figure: forest plot for efficacy (disability progression over 36 months) of immunomodulators and immunosuppressants for multiple sclerosis versus placebo.

    Summary estimates are reported for only a subset of all possible pairwise comparisons, namely active interventions versus placebo. Treatments are ranked according to their surface under the cumulative ranking values. OR = odds ratio; CrI = credible interval; IFN = interferon.

    Elaboration

    Reviews comparing 2 interventions commonly contain forest plots which present 1) the summary measures of effect for each included trial, and 2) the summary measure of effect generated by meta-analyzing data from the included trials (supplemented with outcome data from each trial, I2 values quantifying statistical heterogeneity between study-level summary measures, and so forth). Network meta-analyses may include large numbers of treatments (and thus many pairwise comparisons to summarize) as well as studies (and thus a burdensome number of study-level summaries to present).

    For these reasons, forest plots often summarize findings from a network meta-analysis inefficiently. Instead, a large number of treatment comparisons may require 1) an alternative visual, such as a league table (a tabular approach used to succinctly present all possible pairwise comparisons between treatments, as shown in Appendix Figure 4) or 2) emphasis on a subset of all possible treatment comparisons in forest plots or other graphs of summary estimates (Appendix Figure 5). A network meta-analysis may focus on reporting odds ratios of a specific new intervention of interest versus all older interventions, or on comparisons of each active intervention against placebo.

    Finally, the challenge of succinctly summarizing comparative efficacy and safety among multiple interventions has popularized the use of supplementary measures in the form of treatment rankings and relative probabilities of superiority (36). Appendix Figure 6 presents examples of tabular and graphical approaches to summarizing such information. Simultaneous presentation of the treatment hierarchy (based on ranking measures) and summary effect measures may be considered the most appropriate way of reporting the 2 outputs. League tables (Appendix Figure 4), which contain the competing treatments in the diagonal cells, can be used, with treatments ordered according to their hierarchy for the respective outcome. Because these figures can be challenging to interpret, authors should provide a clear description when they are used, to maximize transparency for readers.

    Appendix Figure 6. Examples: tabular (top) and graphical (bottom) reporting of treatment rankings regarding comparison of treatment-associated risks of grade 3 or 4 hematologic toxicities for resected pancreatic adenocarcinoma.

    Rankings nearer 1 suggest greater risk. 5-FU = 5-fluorouracil.

    In addition, forest plots showing the summary effects of all treatments versus a common reference intervention can be constructed so that they also convey the relative ranking of treatments, for example, by ordering the comparisons according to the values of a ranking measure, such as the surface under the cumulative ranking curve for each treatment (Appendix Figure 5). Full relative ranking results (such as the estimated ranking probabilities for all treatments) may be reported as supplementary material. Tan and colleagues (99) present additional considerations for the presentation of findings from network meta-analysis.
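
    For readers who compute such ranking measures themselves, the following minimal sketch in R (with hypothetical ranking probabilities, not data from any review cited here) illustrates how surface under the cumulative ranking curve (SUCRA) values can be derived from a matrix of ranking probabilities.

        # Illustrative only: SUCRA values from ranking probabilities,
        # where rows are treatments and columns are ranks 1..k.
        sucra <- function(rank_probs) {
          k <- ncol(rank_probs)                           # number of treatments (and ranks)
          cum_probs <- t(apply(rank_probs, 1, cumsum))    # cumulative ranking probabilities
          rowMeans(cum_probs[, 1:(k - 1), drop = FALSE])  # mean over ranks 1..(k - 1)
        }

        # Hypothetical ranking probabilities for 3 treatments
        p <- matrix(c(0.70, 0.20, 0.10,
                      0.25, 0.50, 0.25,
                      0.05, 0.30, 0.65),
                    nrow = 3, byrow = TRUE,
                    dimnames = list(c("A", "B", "C"), NULL))
        round(sucra(p), 2)  # values closer to 1 indicate better-ranked treatments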

    Item S5 (New Item): Exploration for Inconsistency

    Provide a description of findings from investigations performed to assess for the presence of inconsistency in the evidence base analyzed.

    Example

    The assumption of consistency was generally supported by a better trade-off between model fit and complexity when consistency was assumed than when it was not. Significant disagreement between direct and indirect estimates (inconsistency) was identified in only very few cases: for efficacy seven of 80 loops; for all-cause discontinuation three of 80 loops; for weight gain one of 62 loops; for extrapyramidal side-effects one of 56 loops; for prolactin increase three of 44 loops; for QTc prolongation two of 35 loops; and for sedation none of 49 loops were inconsistent (appendix pp 105-14). Data were double-checked and we could not identify any important variable that differed across comparisons in these loops. The number of included studies in the inconsistent loops was typically small, so the extent of inconsistency was not substantial enough to change the results. (88)

    Elaboration

    The approach to presenting inconsistency results depends on the method used to evaluate inconsistency. Results of global approaches (Appendix Box 5) are usually summarized in a single value, which can be the P value of a chi-square test from a chosen model (such as a design-by-treatment model [21], Q test for inconsistency [101], or composite test for inconsistency [102]), the value of the I2 measure for inconsistency, the difference in a measure of model fit or parsimony between consistency and inconsistency models (19), or the magnitude of the inconsistency variance (random inconsistency models [22, 28]). Such values can be reported in tables or graphs that are primarily used to present the summary effects from a network meta-analysis. Local approaches (including the loop-specific approach [103], node-splitting [16], or "net-heat" approach [104]) require the presentation of inconsistency estimates for each evaluated part of the network, which can result in very large tables or graphs (such as forest or matrix plots), particularly in the case of large networks.

    Appendix Box 5. Differences in Approach to Fitting Network Meta-Analyses.

    One option for a more concise presentation would be to show the inconsistency results only for loops or comparisons that might be possible sources of inconsistency on the basis of findings from statistical tests. Review authors should consider both global and local methods for the evaluation of inconsistency. More detailed reporting of findings from the use of local approaches to explore inconsistency should be included in the supplementary material.
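
    To make the distinction between global and local approaches concrete, the following sketch in R (using the netmeta package and its bundled Senn2013 example data set, not data from any review discussed here; all settings are illustrative only) shows one way such checks might be obtained. Analogous checks are available in other packages and frameworks.

        # Illustrative sketch only, assuming the netmeta package is installed.
        library(netmeta)
        data(Senn2013)   # bundled contrast-level example data: TE, seTE, treat1, treat2, studlab

        net <- netmeta(TE, seTE, treat1, treat2, studlab,
                       data = Senn2013, sm = "MD")

        decomp.design(net)  # global check: design-by-treatment interaction approach
        netsplit(net)       # local check: direct vs. indirect estimates for each comparison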

    Item 22: Risk of Bias Across Studies

    Guidance from the PRISMA statement remains applicable. Authors should present the results of any assessments made to explore the potential for risk of bias across included studies (see item 15). The incorporation of risk of bias assessments (and their effect across a network of treatments) into judgments of the strength, credibility, and interpretability of findings from a network meta-analysis is an area of current research. Recent publications have included efforts to achieve this objective (56), and works describing approaches to integrate strength of evidence have appeared in the literature (51, 105).

    Item 23: Results of Additional Analyses

    Addition

    The PRISMA statement suggests describing results obtained from additional subgroup analyses, sensitivity analyses, meta-regression analyses, or different models (for example, fixed versus random effects) that were performed as part of the systematic review. This remains applicable for network meta-analyses but may include additional considerations, such as alternative geometry.

    Example: Alternative Network Geometry

    Standard adjusted dose vitamin K agonist (VKA) (odds ratio 0.11 (95% credible interval 0.04 to 0.27)), dabigatran, apixaban 5 mg, apixaban 2.5 mg, and rivaroxaban decreased the risk of recurrent venous thromboembolism, compared with ASA [acetylsalicylic acid]. Compared with low dose VKA, standard adjusted dose VKA reduced the risk of recurrent venous thromboembolism (0.25 (0.10 to 0.58)).

    An appendix presents a detailed explanation for the potential discrepancy between ASA and placebo results. Results for most class level analyses also aligned with those reported previously in the treatment level analysis. Subgroup analyses, performed to account for heterogeneity due to study duration, yielded results that were more favourable for ASA than those obtained from the primary analysis. However, results for ASA were still less pronounced than those reported for other treatments (standard adjusted dose VKA, low intensity VKA, and dabigatran) that remained in the evidence network. Sensitivity analysis excluding ximelagatran from the analysis did not change the results reported. (95)

    Example: Subgroup Analysis

    Table 2 presents an investigation into potential sources of variation in people with diabetes in the network. Estimates of relative risk comparing sirolimus eluting stents with paclitaxel eluting stents depended to some extent on the quality of the trials, the length of followup, and the time of completion of patient recruitment (table 2), but 95% credibility intervals were wide and tests for interaction negative (P for interaction ≥0.16). The estimated relative risk of death when sirolimus eluting stents were compared with bare metal stents was greater when the specified duration of dual antiplatelet therapy was less than six months (2.37, 95% credibility interval 1.18 to 5.12) compared with six months or longer (0.89, 0.58 to 1.40, P for interaction 0.02), however. (106)

    Example: Meta-regression Analysis

    None of the regression coefficients of the meta-regression examining possible effect moderators turned out to be statistically significant [-0.024 (95% CI -0.056 to 0.006) for age, -0.899 (95% CI -1.843 to 0.024) for rating scale, -0.442 (95% CI -1.399 to 0.520) for publication status, and 0.004 (95% CI -0.798 to 0.762) for therapy format]. (94)

    Elaboration

    Performance of additional analyses retains an important role in establishing the robustness of findings from any meta-analysis. This includes consideration of various ways to structure the treatment network (such as lumping and splitting in relation to dose levels versus any exposure, method of administration, or exclusion of certain doses), accounting for the effect of covariates on summary effect measures (such as meta-regression or subgroup analysis), use of different statistical models (especially involving a Bayesian approach, where different prior distributions may be chosen), and so forth. Authors are encouraged to report findings from such analyses so that readers have all available information for judging robustness of primary findings. Use of supplemental appendices to the main text may be required to present this information.

    Selection of a statistical model for network meta-analyses where comparisons between treatments are largely based on single studies can also represent a challenge. We refer readers to the appendices of a recent review of antithrombotic agents, which illustrate a possible approach to reporting results when dealing with such a challenge (107).

    Discussion
    Item 24: Summary of Evidence

    The PRISMA statement recommends that authors provide a summary of the main findings obtained from the review with regard to each outcome assessed, and that this be done in a way that reflects consideration of the review's key audiences, including clinicians, researchers, and policymakers. This guidance remains entirely applicable to the reporting of network meta-analyses. As with traditional systematic reviews, mention of how findings are similar to or different from past network meta-analyses can be helpful for readers and is encouraged.

    Item 25: Limitations

    Addition

    The PRISMA statement recommends referral to limitations at the level of individual studies and outcomes in the review (including risk of bias concerns), as well as the review level. This guidance remains applicable in the context of network meta-analysis, with some potential modifications to address the nuances associated with network meta-analysis. A recent example studying pharmacotherapies for schizophrenia addresses such a collection of items (88).

    Example

    Our study has several limitations. The network could be expanded to old drugs such as perphenazine and sulpiride, which have had good results in effectiveness studies, but only a few relevant perphenazine trials have been done.

    Reporting of side-effects is unsatisfactory in randomised controlled trials in patients with psychiatric disorders, and some side-effects were not recorded at all for some drugs. The meta-regression with percentage of withdrawals as a moderator could not rule out all potential bias associated with high attrition in schizophrenia trials.

    Our findings cannot be generalised to young people with schizophrenia, patients with predominant negative symptoms, refractory patients, or stable patients, all of whom were excluded to enhance homogeneity as required by multiple-treatments meta-analysis. A funnel plot asymmetry was seen, which is not necessarily the expression of publication bias, but rather of higher efficacy in small trials than in larger ones, for various reasons. For example, sample size estimates for drugs with low efficacy might have needed higher numbers of participants to attain statistical significance than in trials with more effective drugs. However, accounting for trial size did not substantially change the rankings. Finally, because multiple-treatments meta-analysis requires reasonably homogeneous studies, we had to restrict ourselves to short-term trials. Because schizophrenia is often a chronic disorder, future multiple-treatments meta-analyses could focus on long-term trials, but these remain scarce. In any case, for clinicians to know to which drugs patients are most likely to respond within a reasonable duration such as 6 weeks is important. (88)

    Elaboration

    The risk for violating the assumption of transitivity may be increased in network meta-analyses when dealing with larger treatment networks or broad variation in dates of study performance (which may reflect important changes in co-medication use, improved expertise in disease management, modifications of diagnostic criteria or disease severity, or other factors). It is helpful for readers when the study authors provide insight on such information. Important considerations resulting from quantitative explorations for inconsistency of direct and indirect information should be noted, as should identified sources of inconsistency and the efforts taken to resolve them. Authors should also mention important changes in findings that may be related to sensitivity analyses, such as meta-regressions or modifications of the network structure. Weaknesses of the evidence base that informed the data analyses (for example, limited amounts of information from head-to-head trials, or high risk of bias for particular edges or comparisons within the network) are also worthy of mention. Subtle or moderate changes in characteristics of study populations that may have implications regarding to whom the results apply should also be noted.

    Item 26: Conclusions

    The PRISMA statement's guidance recommends stating an overall interpretation of the review's results while considering other related evidence, as well as a brief mention of the review's implications for future research. This guidance remains applicable for reviews including network meta-analyses.

    Item 27: Funding

    Sources of funding and related conflicts of interest should be stated, along with information about the involvement of funders, if any, in the design, analysis, and publication of the network meta-analysis. Traditional meta-analyses have long been influential tools for decision making and policy. Therefore, it is not surprising that potentially conflicted stakeholders may fund meta-analyses, and this remains a consideration for network meta-analyses.

    There is evidence that industry-sponsored meta-analyses tend to have more favorable conclusions than other meta-analyses (108, 109). Therefore, it is essential that reports summarizing reviews of networks of treatments describe in detail both their funding and any related potential conflicts of interest, and explain whether funders had any involvement in study design, analysis or interpretation of the results, drafting of the manuscript, or the decision to publish the results.

    Use of Supplemental Appendices for Complete Reporting of Network Meta-analyses

    Supplemental appendices are key tools for aiding reproducible research and ensuring transparent reporting of network meta-analyses. Given the nature of questions they address, reports of network meta-analyses often contain large amounts of information on methods used, evidence studied, and results produced. Transparent reporting of the data and the steps underpinning a network meta-analysis can thus be challenging. Journals often have limits on word counts for the text and on the number of tables and figures that may be included, and desire that information be distilled for their readership. They may require a "palatable" presentation focusing on main findings rather than on detailed reporting of data underpinning the review and explanation of the statistical modeling techniques used.

    Throughout this guidance, we have noted areas where authors might present information in supplements (for example, partial versus full reporting of summary estimates, study characteristics, or explorations of heterogeneity). Although we have attempted to provide comprehensive guidance on reporting network meta-analyses and we feel that the highlighted elements are needed to maximize their transparency, there will probably be a need to distribute this information between the main text and supplements differently, depending on the target journal. We suggest that readers consult good examples of reviews balancing reporting between main text and data supplements when considering the reporting of their own network meta-analyses. Future updates are likely to include further discussion on this aspect of presenting reviews that incorporate networks of treatments.

    Software for Implementing Network Meta-analysis

    Several software packages are available for implementing network meta-analysis. The choice of software package will depend on the statistical method under consideration.

    For Bayesian statistics, WinBUGS (Imperial College and Medical Research Council, London, United Kingdom) (110) is the most widely used software package, although JAGS (111) and OpenBUGS can also be used. The NICE Decision Support Unit (112) published a series of technical support documents with code for conducting network meta-analysis for various outcomes within a Bayesian framework.
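
    As one illustration of the Bayesian route (a minimal sketch with invented arm-level data, not code from the cited technical support documents), the R package gemtc can be used to call JAGS for a random-effects network meta-analysis of a binary outcome.

        # Minimal sketch, assuming the gemtc package and a local JAGS installation;
        # the arm-level data below are hypothetical.
        library(gemtc)

        arm_data <- data.frame(
          study      = c("s1", "s1", "s2", "s2", "s3", "s3"),
          treatment  = c("A", "B", "A", "C", "B", "C"),
          responders = c(12, 15, 9, 20, 14, 18),
          sampleSize = c(50, 50, 40, 45, 60, 60)
        )

        network <- mtc.network(data.ab = arm_data)
        model   <- mtc.model(network, likelihood = "binom", link = "logit",
                             linearModel = "random")
        result  <- mtc.run(model, n.adapt = 5000, n.iter = 20000)

        summary(result)           # posterior summaries of relative effects (log odds ratios)
        rank.probability(result)  # ranking probabilities for each treatment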

    The Web sites of the Multi-Parameter Evidence Synthesis Group (113) and the IMMA project (University of Ioannina, Ioannina, Greece) (114) also provide code that can be used to perform network meta-analysis in WinBUGS, OpenBUGS, or JAGS. These packages can be used directly, or indirectly via widely used general-purpose software, such as R (R Foundation for Statistical Computing, Vienna, Austria) (115–118), STATA (Stata Corp., College Station, Texas) (116), SAS (SAS Institute, Cary, North Carolina) (116), or Microsoft Excel (Microsoft Corp., Redmond, Washington) (116), or via specialized software packages, such as ADDIS (ADDIS, Groningen, the Netherlands) (117–119).

    For frequentist statistics, network meta-analyses can be conducted by using R (83, 120), STATA (121, 122), or SAS (26), whereas simple indirect comparisons can be conducted using the CADTH Indirect Treatment Comparison calculator (Canadian Agency for Drugs and Technologies in Health, Ottawa, Ontario, Canada) (123, 124). In addition to the statistical packages mentioned above, complementary software packages for developing graphical tools for network meta-analysis (90) and evaluating inconsistency (104) are available.
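
    For the frequentist route, a comparable minimal sketch in R using the netmeta package (again relying on its bundled Senn2013 example data rather than data from any review cited above) might look like the following; the league table and ranking outputs correspond to the presentation formats discussed under item 21.

        # Illustrative sketch only, assuming the netmeta package is installed.
        library(netmeta)
        data(Senn2013)

        net <- netmeta(TE, seTE, treat1, treat2, studlab,
                       data = Senn2013, sm = "MD", reference.group = "plac")

        summary(net)    # network estimates for all pairwise comparisons
        netleague(net)  # league table of relative effects (cf. Appendix Figure 4)
        netrank(net)    # P-score-based ranking of treatments
        forest(net)     # forest plot of all treatments vs. the reference group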

    Example Wording for Endorsing This PRISMA Extension

    [Journal name] requires a completed PRISMA 2015 network meta-analysis checklist as a condition of submission when reporting the results of a network meta-analysis. Templates can be found at [give hyperlink to location if relevant] or on the PRISMA Web site www.prisma-statement.org, which also describes other PRISMA extensions. You should ensure that your article, at minimum, reports content addressed by each item of the checklist. Meeting these basic reporting requirements will greatly improve the value of your network meta-analysis report and may enhance its chances for publication.

    References

    Comments

    Brian Hutton (1,2), Chris Cameron (1,3), David Moher (1,2) - 23 July 2015
    PRISMA Considerations and Searching the Literature
    We thank Drs. Ge, Tian, Li, and Yang (1, 2) for their interest in the PRISMA extension for network meta-analysis (3). They suggest some potential additional considerations regarding information sources and search strategies.

    While we agree with the practices Drs. Tian, Ge, and Li describe regarding searching, we feel these are equally applicable to traditional systematic reviews and are already addressed in the PRISMA Statement's explanation and elaboration article (4). The main intent of the PRISMA extension statement for reporting of network meta-analysis is to focus on items that were not addressed in PRISMA and that differ substantially from practices for traditional systematic reviews and meta-analyses.

    Regarding their first suggestion, we feel that establishing the need for a proposed review is equally important for traditional reviews, and in all cases searching for existing literature should be preceded by in-depth consideration of the clinical importance of the research question in a PICOS (Population-Intervention-Comparator(s)-Outcome(s)-Study design) framework.

    Regarding their second suggestion, we believe it has long been common for researchers undertaking reviews of multiple forms to inspect bibliographies of past reviews and included studies as a source for potentially relevant studies; the PRISMA Statement’s Explanation and Elaboration document addresses this issue in Item 7, suggesting ‘In addition to searching databases, authors should report the use of supplementary approaches to identify studies, such as hand searching of journals, checking reference lists, searching trials registries or regulatory agency Web sites, contacting manufacturers, or contacting authors.’(4)

    Lastly, we agree that the increased number of interventions in a network meta-analysis can heighten the challenge of completing the systematic search strategy for a review. However, this may also often be true of other reviews not involving a multi-treatment question; for example, reviews involving more than one indication of relevance, or reviews involving complex interventions. In these and other scenarios, we support the practice of peer review of literature searches to maximize their quality. This is addressed in Item 8 in the PRISMA Explanations and Elaborations Statement: ‘We encourage authors to state whether search strategies were peer reviewed as part of the systematic review process.’(4)

    Therefore, we support these practices noted by Tian, Ge, and Li. However, we feel their importance and existing use among researchers extend to many additional types of reviews, and that guidance from the PRISMA Statement remains highly relevant.

    The authors also suggest a potential need for a second guidance document addressing reporting for Bayesian network meta-analyses; we disagree with this perspective at this time. The examples and elaborations provided in our guidance address reporting considerations for the key items that were suggested, and we do not foresee a need for certain suggested components, such as specification of starting values or the number of iterations used (we already recommend provision of details for convergence assessment). We are unclear about the authors' intended meaning in suggesting sample size calculations; however, we hypothesize this is a reference to statistical power in network meta-analyses. We agree this can be of interest, and may be especially so for outcomes with few events. Some research has been conducted in this area (5), although additional research is needed to inform considerations for reporting guidance.

    We believe the current guidance provides a strong set of minimum reporting items for Frequentist and Bayesian NMAs, while authors are certainly encouraged to provide additional information of relevance to readers to support their reviews. As methodologies continue to evolve in this rapidly developing area, we will continue to gather materials for a possible future update of this extension statement which may include guidance that additional statistical considerations be reported.

    Brian Hutton, PhD; David Moher, PhD
    Ottawa Hospital Research Institute, Ottawa, Canada;
    University of Ottawa School of Epidemiology, Public Health and Preventive Medicine, Ottawa, Canada

    Chris Cameron, PhD
    Ottawa Hospital Research Institute, Ottawa, Canada;
    Cornerstone Research Group Inc., Burlington, Canada


    Reference List

    (1) Tian J, Ge L, Li L. Searching for previously published and unpublished or ongoing systematic reviews/meta-analyses is very important (Commentary). Ann Intern Med. 2015.

    (2) Ge L, Tian J, Li L, Yang K. The PRISMA Extension Statement for Statistical Analysis Reporting of Network Meta-Analysis is Needed (Commentary). Ann Intern Med. 14 June 2015.

    (3) Hutton B, Salanti G, Caldwell D, Schmid C, Chaimani A, Cameron C, Ioannidis J, et al. The PRISMA Extension Statement for Reporting of Systematic Reviews Incorporating Network Meta-analyses of Health Care Interventions: Checklist and Explanations. Ann Intern Med. 2015;162(11):777-84.

    (4) Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gotzsche P, Ioannidis J, Clarke M, Devereaux PJ, Kleijnen J, Moher D. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. Ann Intern Med. 2009;151(4):W65-94.

    (5) Thorlund K, Mills E. Sample size and power considerations in network meta-analysis. Syst Rev. 2012;1:41. doi:10.1186/2046-4053-1-41.



    Jin-hui Tian, PhD, Long Ge, MD, Lun Li, PhD - 22 June 2015
    Searching for previously published and unpublished or ongoing systematic reviews/meta-analyses is very important.
    The much-anticipated PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses) extension statement for reporting of network meta-analysis (NMA) has been published by Hutton and colleagues (1). We note that two items, "information sources" (Item 7) and "search" (Item 8), remain unchanged from the original PRISMA statement. However, searching for evidence for an NMA is more important and more complex than for traditional systematic reviews and pairwise meta-analyses (2). For example, the first step for an NMA is a thorough and rigorous search for previous systematic reviews/meta-analyses, to ensure that the research question has not already been addressed (2). In addition, the reference lists of previously published systematic reviews/meta-analyses should be tracked to avoid missing important studies. Unfortunately, only 40% of published NMAs searched the reference lists of previous systematic reviews/meta-analyses (3). Moreover, it is very important to have the quality of the searches for previous systematic reviews/meta-analyses peer reviewed by a specialist librarian, and to determine whether the conduct of the NMA is based on previous systematic reviews/meta-analyses. Therefore, more detailed guidance on evidence searching is needed for NMA reviewers.

    We declare that we have no conflicts of interest.

    Jin-hui Tian, PhD
    Evidence-Based Medicine Center of Lanzhou University, Lanzhou 730000, China.
    Key Laboratory of Evidence-based Medicine and Clinical Translational Research of Gansu Province, Lanzhou 730000.

    Long Ge, MD; Lun Li, PhD
    The First Clinical Medicine College of Lanzhou University, Lanzhou 730000, China;
    Evidence-Based Medicine Center of Lanzhou University, Lanzhou 730000, China;
    Key Laboratory of Evidence-based Medicine and Clinical Translational Research of Gansu Province, Lanzhou 730000.


    (1) Hutton B, Salanti G, Caldwell DM, Chaimani A, Schmid CH, Cameron C, et al. The PRISMA Extension Statement for Reporting of Systematic Reviews Incorporating Network Meta-analyses of Health Care Interventions: Checklist and Explanations. Ann Intern Med. 2015;162(11):777-84.
    (2) Golger S, Wright K. Searching for evidence. In: Biondi-Zoccai G, editor. Network meta-analysis: evidence synthesis with mixed treatment comparison. New York: Nova Science; 2014. p. 63-76.
    (3) Li L, Tian J, Tian H, Moher D, Liang F, Jiang T, et al. Network meta-analyses could be improved by searching more sources and by involving a librarian. J Clin Epidemiol. 2014;67(9):1001-7.

    Long Ge, Jin-hui Tian, Lun Li, Ke-hu Yang - 15 June 2015
    The PRISMA Extension Statement for Statistical Analysis Reporting of Network Meta-Analysis is Needed
    Hutton and colleagues (1) have published the PRISMA extension statement for reporting of network meta-analyses (NMAs). We believe this extension adds very important items to improve the reporting of NMAs. The validity of NMA results depends heavily on several key basic assumptions. Previous reviews have focused on the reporting of published NMAs; their results indicated serious reporting flaws, especially regarding assessment of assumptions and reporting of the statistical analyses applied (2). Tan et al. (3) established guidance based on 19 published NMAs to guide reporting of the statistical methods applied, but some key reporting items, such as convergence assessment, were missing. Therefore, detailed checklists specifically for the reporting and conduct of statistical analysis are needed. Based on the published literature, we suggest the following items for reporting the statistical analysis of Bayesian NMAs:

    Methods section:
    Details of any sample size calculation
    Direct comparison: assessment of heterogeneity, model for pooling data, summary measure, assessment of publication bias, sensitivity analysis, other analyses, software applied.
    Network meta-analysis: assessment of heterogeneity, adjustment for covariates, adjustment for multiple arms, code applied, selection of prior distributions, selection of fixed- or random-effects model, selection of consistency or inconsistency model, assessment of inconsistency, assessment of convergence, summary measure (including treatment ranking), assessment of publication bias, sensitivity analysis (based on prior distributions or otherwise), other analyses, software applied.

    Results section:
    Results of any sample size calculation
    Direct comparison: results of heterogeneity assessment, model applied, results of direct comparisons, publication bias assessment, sensitivity analyses, and other analyses.
    Network meta-analysis: methods and results of heterogeneity assessment, results of model fit testing, number of chains, starting values for sampling, number of iterations per chain, number of iterations used for final results, results of convergence assessment, prior distributions used, results of indirect comparisons, results of network meta-analysis, results of inconsistency assessment, results of ranking, publication bias assessment, sensitivity analyses, other analyses.

    The details for each item should also be described in published papers. However, these items are based only on the published literature. A Delphi survey and a face-to-face discussion and consensus meeting should be undertaken to develop another PRISMA extension statement for reporting of the statistical analysis and assumptions of NMA (PRISMA-S). We strongly believe that it would play an important part in improving the quality of NMAs.

    We declare that we have no conflicts of interest.

    Long Ge
    The First Clinical Medicine College of Lanzhou University, Lanzhou 730000, China;
    Evidence-Based Medicine Center of Lanzhou University, Lanzhou 730000, China.

    Jin-hui Tian, Lun Li, Ke-hu Yang*
    Evidence-Based Medicine Center of Lanzhou University, Lanzhou 730000, China;
    Key Laboratory of Evidence-based Medicine and Clinical Translational Research of Gansu Province, Lanzhou 730000.
    *[email protected]

    (1) Hutton B, Salanti G, Caldwell DM, Chaimani A, Schmid CH, Cameron C, et al. The PRISMA Extension Statement for Reporting of Systematic Reviews Incorporating Network Meta-analyses of Health Care Interventions: Checklist and Explanations. Ann Intern Med. 2015;162(11):777-84.
    (2) Li L, Tian JH, Yang KH. Current situation of reporting statement for network meta-analysis. Chin J Evid Based Pediatr. 2014;9(6):467-71.
    (3) Tan SH, Bujkiewicz S, Sutton A, Dequen P, Cooper N. Presentational approaches used in the UK for reporting evidence synthesis using indirect and mixed treatment comparisons. J Health Serv Res Policy. 2013;18(4):224-32.