Data Sources, Searches, and Study Selection
This update focuses on 2 of the key questions addressed in our living review (as modified to reflect the inclusion decision described in the previous paragraph). This review used rapid methods, primarily in the screening stages. Details of the plan for updating each question and a summary of our methods are described in our protocol (3). A final update of all key questions in this review will be produced in early 2022. The PROSPERO record for our original review is CRD42020207098.
For this update, our search strategies focused on identifying longitudinal controlled studies of risk for reinfection published before 22 September 2021. We searched Ovid MEDLINE ALL, the World Health Organization global literature database, ClinicalTrials.gov, COVID19reviews.org, and reference lists of reviews (Supplement Item 1).
We included longitudinal studies that compared the risk for reinfection for individuals who had a documented infection with SARS-CoV-2 (the “positive” cohort) with the risk for new infection in those with no prior infection (the “negative” cohort) (4). Studies in the general population, health care workers, college students, and long-term care facilities were eligible, as were registry-based studies of patients with a specific condition. Studies without an uninfected comparison cohort were ineligible.
We used the Joanna Briggs Institute cohort study checklist (5) to screen for methodological limitations that would almost certainly invalidate the study findings (Supplement Table 1). Using this tool, we excluded 2 studies (6, 7) that used invalid criteria to allocate participants to the positive and negative cohorts or did not follow participants for an adequate length of time for potential reinfection.
Data Extraction and Quality Assessment
We extracted the following information by study: study design, population, data sources, study inclusion and exclusion criteria, age, race, gender, comorbid conditions, immunoassay type and brand (when applicable), definition of reinfection, follow-up test type and frequency of follow-up testing, primary infection symptom status, waiting period (if applicable), counts for all infection events and nonevents, and main findings.
For included studies, we identified potential biases in the following 4 areas: sampling, cohort assignment, outcome ascertainment, and classification of potential cases of reinfection during follow-up. We abstracted information relevant to these methodological features from each study, recording variations in methods that could affect the observed effect. Considerations include the following.
Sampling. We assessed whether selection bias could arise from the data sources used to identify eligible persons. Selection bias could spuriously influence effect size if some groups were less likely to be recruited, if the cohorts were differentially enriched with persons who had unusual risk profiles, or if cohort inception was poorly delineated.
Cohort assignment. Within a given sample, the “positive” (infected) and “negative” (not infected) cohorts form the denominators for follow-up and analyses. To assess misclassification, we considered which tests were used (serologic, virologic, and clinical assessment), when they were done in relation to illness onset, and whether they were applied to all participants.
Outcome ascertainment. We assessed the methods used to ascertain new infections during follow-up, such as scheduled surveillance with PCR tests, clinical surveillance, or identification of cases in clinical care. In assessing ascertainment, we also considered whether surveillance for symptoms or access to medical evaluation differed among cohorts and (if applicable) adherence to scheduled testing. Bias could also occur if the follow-up period was too short.
Classification of potential cases of reinfection during the follow-up period. In most studies, reinfection was diagnosed when an individual had a positive result on a PCR test after a “waiting period” intended to give time for the initial episode to resolve clinically and virologically. Bias can occur if a positive PCR result due to persistent viral shedding is counted as a reinfection or if adjudication of reinfections is not equally rigorous in the positive and negative cohorts.
In each of these 4 categories, we identified methodological variations that are likely to be associated with higher or lower quality (risk of bias). In some cases, we did sensitivity analyses to assess how the overall protection estimate would change because of study-level factors. Such factors include study duration, the waiting period between cohort inception and the first reinfection assessment (8), median participant age, underlying prevalence (proxied by the proportion of new infections in the negative cohort), whether criteria for diagnosis of the initial infection would select only symptomatic infections, and whether serology, PCR, or both were used for cohort allocation.
In our original review and in each update, we report on studies identified by surveillance, particularly those that are not yet fully reported but may eventually be eligible for inclusion and those that are ineligible but can provide perspective on our results, such as uncontrolled studies of risk factors for reinfection in special populations or in the setting of emerging variants of concern. For this update, we summarize surveillance through 30 November 2021.
Data Synthesis and Strength of Evidence
The outcomes of interest were the effects of previous infection on the risk for symptomatic reinfection, risk for any reinfection, severity of reinfection, and duration of protection. These outcome metrics, termed “protection,” are analogous to the end points used in studies of vaccine efficacy (9). Here, however, incident infections detected during the follow-up period in the positive cohort are reinfections, and those in the negative cohort are primary infections. The category “any reinfection” includes asymptomatic persons in whom virus has been detected.
Although many studies reported hazard ratios or relative rates of infection per person-time (often adjusted for various factors), our meta-analysis used absolute counts of events in both groups to obtain a relative risk estimate. We subsequently found a high degree of concordance between our calculated risk estimates and the rates reported in studies.
The primary analyses focused on the magnitude of protection against reinfection, quantified as the proportion or percentage of prevented infections. Each included study provided counts of reinfected individuals from the positive cohort and newly infected individuals from the negative cohort, which together yield an estimate of protection from reinfection: the difference in the proportion of incident infections between the negative and positive cohorts, divided by the proportion observed in the negative cohort. We pooled these estimates via meta-analysis, both unstratified and stratified by population composition (general population, health care workers only, young adults only, or elderly persons only), to obtain combined effect estimates and corresponding 95% CIs. We used a continuity correction of 0.5 for 2 studies that reported 0 reinfections; this approach imparts a small but acceptable bias toward the null, leading to conservative inference. We generated uncorrected estimates for comparison. The empirical Bayes random-effects meta-analysis model was chosen for its robustness properties and low bias in small-sample settings (10, 11). Study heterogeneity within strata was assessed using the I² statistic (12), and heterogeneity across strata using the Cochran Qb statistic (13). Analysis was done using Stata, version 16.1 (StataCorp). (Supplement Item 2 provides further details.)
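The calculation described above can be sketched numerically. The review itself used Stata's empirical Bayes random-effects model; the Python sketch below is an illustrative stand-in that uses the simpler DerSimonian-Laird estimator instead, applies the 0.5 continuity correction when a cohort reports 0 events, and runs on hypothetical counts. The function names (`study_rr`, `pooled_protection`) are our own, not from the review.

```python
import math

def study_rr(pos_events, pos_n, neg_events, neg_n, cc=0.5):
    """Relative risk of infection (positive vs. negative cohort) and the
    standard error of its log, from raw counts. A continuity correction
    `cc` is added to every cell when either event count is 0."""
    if pos_events == 0 or neg_events == 0:
        pos_events += cc
        neg_events += cc
        pos_n += 2 * cc
        neg_n += 2 * cc
    rr = (pos_events / pos_n) / (neg_events / neg_n)
    se = math.sqrt(1 / pos_events - 1 / pos_n + 1 / neg_events - 1 / neg_n)
    return rr, se

def pooled_protection(studies):
    """Pool log relative risks with DerSimonian-Laird random effects.
    Returns (protection, 95% CI for protection, I^2); protection = 1 - RR."""
    y, v = [], []
    for counts in studies:
        rr, se = study_rr(*counts)
        y.append(math.log(rr))
        v.append(se ** 2)
    w = [1 / vi for vi in v]
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))  # Cochran Q
    df = len(y) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                 # between-study variance
    i2 = max(0.0, (q - df) / q) if q > 0 else 0.0  # I^2 heterogeneity
    wstar = [1 / (vi + tau2) for vi in v]
    mu = sum(wi * yi for wi, yi in zip(wstar, y)) / sum(wstar)
    se_mu = math.sqrt(1 / sum(wstar))
    lo, hi = math.exp(mu - 1.96 * se_mu), math.exp(mu + 1.96 * se_mu)
    # Protection = 1 - RR; the CI bounds swap under this transformation.
    return 1 - math.exp(mu), (1 - hi, 1 - lo), i2

# Hypothetical counts: (reinfections, positive N, new infections, negative N);
# the third study has 0 reinfections and triggers the continuity correction.
studies = [(10, 1000, 100, 1000), (5, 2000, 150, 2000), (0, 500, 40, 500)]
prot, ci, i2 = pooled_protection(studies)
```

For the first hypothetical study, RR = (10/1000)/(100/1000) = 0.10, that is, 90% study-level protection; pooling then combines the studies on the log-RR scale.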
For some factors, including demographic variables, symptom status, health behaviors, vaccination, and variants, we could not examine their quantitative impact on effect sizes within a meta-analytic framework because of inconsistent reporting among studies. We abstracted information from study-specific sensitivity analyses and regression analyses when available, and we summarize these findings qualitatively.
Study-level factors that might influence estimates of protection include study duration, the waiting period before the first reinfection assessment, median participant age, underlying prevalence (proxied by the proportion of new infections in the negative cohort), and rigor in assessing positivity of infection (for example, whether asymptomatic infections were identified by surveillance and whether validation testing was done). We assessed these visually for relationships with effect sizes using scatter plots and nonparametric mean-smoothing of trends. We used meta-regression techniques to estimate R² values for each potential factor that may explain between-study heterogeneity. We also produced L’Abbé and funnel plots as visual assessments of bias and sensitivity to study characteristics. The Harbord test was used to evaluate the evidence for asymmetry in the funnel plot (14).
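The score-based quantities underlying the Harbord test can be sketched as follows. This is a simplified illustration on hypothetical 2×2 tables: it computes, for each study, the score statistic Z and its hypergeometric variance V, then fits an unweighted regression of Z/√V on √V, where a nonzero intercept suggests funnel-plot asymmetry. The full Harbord test adds a formal significance test of the intercept, which is omitted here; the helper names (`score_stats`, `asymmetry_regression`) are our own.

```python
import math

def score_stats(a, n1, c, n2):
    """Score statistic Z and its variance V for one 2x2 table
    (hypergeometric mean and variance of the event count a)."""
    n = n1 + n2
    m = a + c  # total events across both cohorts
    z = a - n1 * m / n
    v = n1 * n2 * m * (n - m) / (n ** 2 * (n - 1))
    return z, v

def asymmetry_regression(tables):
    """Unweighted least-squares fit of Z/sqrt(V) on sqrt(V).
    Returns (intercept, slope); a nonzero intercept is the asymmetry
    signal probed by score-based small-study-effect tests."""
    xs, ys = [], []
    for t in tables:
        z, v = score_stats(*t)
        xs.append(math.sqrt(v))
        ys.append(z / math.sqrt(v))
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    sxx = sum((x - xbar) ** 2 for x in xs)
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sxx
    intercept = ybar - slope * xbar
    return intercept, slope

# Hypothetical example: three perfectly balanced studies (equal event
# proportions in both cohorts) give Z = 0 and hence no asymmetry signal.
tables = [(10, 100, 10, 100), (20, 200, 20, 200), (5, 50, 5, 50)]
intercept, slope = asymmetry_regression(tables)
```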
We graded the strength of evidence to describe our confidence in effect estimates as high, moderate, low, or insufficient. The assessment is based on our analysis of the study limitations, directness, consistency, precision, dose-response, plausible confounding, and strength of association (15).