2 What is Meta-Analysis?
A meta-analysis is more than just a summary of studies—it is a statistical method used to quantitatively synthesize the results of multiple independent studies addressing the same research question. Rather than treating each study in isolation, meta-analysis aggregates their findings, increasing overall statistical power and helping to identify consistent patterns, relationships, or effects.
In environmental health, where exposure definitions vary and populations are heterogeneous, meta-analysis offers a structured way to combine data while accounting for differences in study design. Done well, a meta-analysis can reveal signals that individual studies are too underpowered to detect—and provide more robust evidence for policy and practice.
Why Use Meta-Analysis in Environmental Health Research?
Environmental exposures like PM₂.₅ rarely affect only one study population. Meta-analysis allows us to:
- Increase statistical power by pooling data
- Estimate average effect sizes across different settings
- Explore heterogeneity (e.g., by region, trimester, or study design)
- Identify gaps or inconsistencies in the literature
- Strengthen causal inference by looking for dose-response trends or temporal patterns
For complex outcomes like preterm birth or stillbirth, where both biological mechanisms and social determinants play a role, meta-analysis can help disentangle patterns in a way that narrative reviews cannot.
Effect Sizes: OR, RR, HR – What’s the Difference?
In this project, you’ll encounter several types of effect estimates reported across studies:
- Odds Ratio (OR): Compares the odds of an event (e.g., preterm birth) occurring in an exposed group versus an unexposed group. ORs are commonly used in logistic regression and case-control studies.
- Risk Ratio (RR): Compares the probability (not the odds) of an outcome between exposed and unexposed groups. RRs are typically more intuitive and are used in cohort studies and randomized trials.
- Hazard Ratio (HR): Derived from survival analyses, HRs compare the instantaneous rate of the event between exposed and unexposed groups over the follow-up period. They account for time-to-event and censoring.
While these effect sizes are conceptually different, they often approximate each other—especially when the outcome is rare. In environmental health studies, adverse birth outcomes like stillbirth and preterm birth tend to occur infrequently in the general population, which means RRs and HRs can sometimes be treated as approximate ORs under certain assumptions.
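To see the rare-outcome approximation in numbers, here is a minimal Python sketch using a hypothetical 2×2 table (the counts are invented for illustration, not drawn from any study in the review):

```python
# Minimal sketch: OR vs RR from a hypothetical 2x2 table.
# Counts are invented; with a rare outcome the two measures nearly coincide.

def odds_ratio(a, b, c, d):
    """OR from a 2x2 table: a/b = events/non-events (exposed), c/d = events/non-events (unexposed)."""
    return (a / b) / (c / d)

def risk_ratio(a, b, c, d):
    """RR from the same table: compares probabilities rather than odds."""
    return (a / (a + b)) / (c / (c + d))

# Hypothetical cohort: 30 preterm births among 1,000 exposed pregnancies,
# 20 among 1,000 unexposed (the outcome occurs in only 2-3% of each group).
a, b = 30, 970   # exposed: events, non-events
c, d = 20, 980   # unexposed: events, non-events

print(f"OR = {odds_ratio(a, b, c, d):.2f}")   # ~1.52
print(f"RR = {risk_ratio(a, b, c, d):.2f}")   # 1.50 -> very close to the OR
```

If the outcome were common (say 30% of each group), the OR would drift noticeably further from the RR, which is why the rare-outcome assumption matters before treating the two measures interchangeably.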
Key Concepts You Need To Know
Term | Expanded Meaning & Context |
---|---|
Effect Size | A quantitative measure of the strength or direction of an association between exposure and outcome. In our project, we often use odds ratios (ORs). For example, OR = 1.2 indicates that the exposed group has 20% higher odds of the outcome compared to the unexposed group. Other effect sizes include risk ratios (RRs) and hazard ratios (HRs), which may be converted to ORs for comparability. |
Confidence Interval (CI) | The range of values within which we can be reasonably confident the true effect lies. A 95% CI means that if the study were repeated many times and an interval calculated each time, about 95% of those intervals would contain the true effect. Narrow CIs suggest precision; wide CIs suggest uncertainty. If the CI for an OR/RR/HR crosses 1.0, the result is not statistically significant at the conventional 5% level. |
Heterogeneity (I²) | A measure of inconsistency among studies included in a meta-analysis. It quantifies how much of the variation between study results is due to real differences (not chance). I² values range from 0% (no heterogeneity) to 100% (extreme heterogeneity). Moderate to high I² is expected in environmental health meta-analyses and typically justifies the use of a random effects model. |
Between-Study Variance (τ²) | The estimated variance of true effect sizes across studies in a random effects model. While I² describes how much heterogeneity there is, τ² quantifies the absolute variability. Higher τ² means more divergence between true effects, and it directly affects the width of confidence intervals. |
Weighting | In meta-analysis, each study contributes to the pooled effect based on its precision, which is often a function of its sample size and variance. In a fixed effects model, large studies dominate. In a random effects model, weighting accounts for both within-study and between-study variance, giving smaller studies relatively more influence than in fixed effects models. (A worked pooling sketch follows this table.) |
Fixed Effects Model | Assumes all studies are estimating the same underlying effect. Differences in results are attributed solely to sampling error. This model produces narrower confidence intervals and is appropriate only when studies are highly similar with low heterogeneity (I² ≈ 0%). |
Random Effects Model | Assumes the true effect size varies across studies and models that variation explicitly. It incorporates both within-study error and between-study variance (τ²). This model is more conservative and generalisable, and is appropriate when studies differ in population, exposure, setting, or methods—as is common in LMIC-based environmental health research. |
Publication Bias | A form of bias where studies with significant or positive findings are more likely to be published. This can skew the pooled estimate in a meta-analysis. Tools like funnel plots, Egger’s test, and Trim-and-Fill are used to detect and adjust for this bias. (An Egger’s test sketch follows this table.) |
Forest Plot | A graphical summary of individual study results and the overall pooled effect. Each study is represented by a line (confidence interval) and square (effect size and weight), while the pooled result is shown as a diamond. Forest plots help visualise consistency and heterogeneity. (A plotting sketch follows this table.) |
Funnel Plot | A scatterplot of effect size against study precision (often standard error). In the absence of bias, the plot should resemble a symmetrical inverted funnel. Asymmetry may indicate publication bias or small-study effects. |
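To make the Weighting, Fixed Effects, Random Effects, τ², and I² entries concrete, here is a minimal Python/NumPy sketch of inverse-variance pooling with a DerSimonian-Laird estimate of τ². The log odds ratios and standard errors are invented for illustration; a real analysis would normally use a dedicated package (for example R's metafor), but the underlying arithmetic is the same.

```python
import numpy as np

# Hypothetical study-level effects: log odds ratios and their standard errors
# (invented values, chosen so the studies disagree more than chance alone would explain).
log_or = np.array([0.10, 0.45, -0.05, 0.60, 0.25])
se     = np.array([0.10, 0.12,  0.08, 0.18, 0.11])

var = se ** 2
w_fixed = 1 / var                                   # inverse-variance (fixed effects) weights

# Fixed effects pooled estimate: between-study differences treated as sampling error only
pooled_fe = np.sum(w_fixed * log_or) / np.sum(w_fixed)

# Cochran's Q, then DerSimonian-Laird tau^2 and I^2
q = np.sum(w_fixed * (log_or - pooled_fe) ** 2)
df = len(log_or) - 1
c = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
tau2 = max(0.0, (q - df) / c)                       # between-study variance
i2 = max(0.0, (q - df) / q) * 100                   # % of variation beyond chance

# Random effects: add tau^2 to every study's variance, which pulls the weights
# closer together (smaller studies gain relative influence).
w_random = 1 / (var + tau2)
pooled_re = np.sum(w_random * log_or) / np.sum(w_random)
se_re = np.sqrt(1 / np.sum(w_random))
ci_low, ci_high = pooled_re - 1.96 * se_re, pooled_re + 1.96 * se_re

print(f"Fixed effects pooled OR:  {np.exp(pooled_fe):.2f}")
print(f"Random effects pooled OR: {np.exp(pooled_re):.2f} "
      f"(95% CI {np.exp(ci_low):.2f}-{np.exp(ci_high):.2f})")
print(f"tau^2 = {tau2:.3f}, I^2 = {i2:.1f}%")
```

With these invented values, I² comes out high and the random effects pooled estimate carries a wider confidence interval than the fixed effects one, which is exactly the behaviour described in the table.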
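The Forest Plot entry can likewise be sketched with matplotlib. This simplified version uses hypothetical ORs and confidence intervals, drawing a square with whiskers for each study and a dashed line at the null value; a full forest plot would also scale each square by its weight and draw the pooled estimate as a diamond.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical study results (odds ratios with 95% CIs) plus a pooled estimate.
studies = ["Study A", "Study B", "Study C", "Study D", "Pooled"]
or_vals = np.array([1.20, 1.35, 1.05, 1.50, 1.22])
ci_low  = np.array([0.95, 1.02, 0.88, 1.10, 1.08])
ci_high = np.array([1.52, 1.79, 1.25, 2.05, 1.38])

y = np.arange(len(studies))[::-1]                    # list studies from top to bottom

fig, ax = plt.subplots(figsize=(6, 3))
ax.errorbar(or_vals, y,
            xerr=[or_vals - ci_low, ci_high - or_vals],
            fmt="s", color="black", capsize=3)       # squares with CI whiskers
ax.axvline(1.0, linestyle="--", color="grey")        # null effect (OR = 1)
ax.set_xscale("log")                                 # ratio measures sit on a log scale
ax.set_yticks(y)
ax.set_yticklabels(studies)
ax.set_xlabel("Odds ratio (log scale)")
plt.tight_layout()
plt.show()
```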
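For the Publication Bias and Funnel Plot entries, Egger's test regresses each study's standardized effect (effect divided by its standard error) on its precision (one over the standard error) and asks whether the intercept differs from zero; a non-zero intercept suggests funnel plot asymmetry. A minimal sketch with invented values, assuming a recent SciPy is available:

```python
import numpy as np
from scipy import stats

# Hypothetical log odds ratios and standard errors (illustrative only).
log_or = np.array([0.18, 0.26, 0.05, 0.40, 0.12, 0.55, 0.30])
se     = np.array([0.10, 0.15, 0.08, 0.20, 0.12, 0.28, 0.18])

# Egger's regression: standardized effect against precision.
# With no small-study effects, the intercept should be close to zero.
z = log_or / se          # standardized effects
precision = 1 / se

res = stats.linregress(precision, z)      # intercept_stderr needs SciPy >= 1.6
t_stat = res.intercept / res.intercept_stderr
p_value = 2 * stats.t.sf(abs(t_stat), df=len(log_or) - 2)

print(f"Egger intercept = {res.intercept:.3f} (p = {p_value:.3f})")
# A small p-value suggests asymmetry (possible publication bias),
# though with only a handful of studies the test has limited power.
```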
We’ll have many conversations going forward about these ideas, but for now, think of a meta-analysis as a conversation among studies. Our job is to guide that conversation, statistically and thoughtfully, to produce a clearer picture than any single study can offer.