Open Access
November 2014
Standardization and Control for Confounding in Observational Studies: A Historical Perspective
Niels Keiding, David Clayton
Statist. Sci. 29(4): 529-558 (November 2014). DOI: 10.1214/13-STS453

Abstract

Control for confounders in observational studies was generally handled through stratification and standardization until the 1960s. Standardization typically reweights the stratum-specific rates so that exposure categories become comparable. With the development first of log-linear models, and soon afterwards of nonlinear regression techniques (logistic regression, failure-time regression) that the emerging computers could handle, regression modelling became the preferred approach, as was already the case with multiple regression analysis for continuous outcomes. Since the mid-1990s it has become increasingly clear that weighting methods are still often useful, and sometimes even necessary. Against this background, we describe the emergence of the modelling approach and the refinement of the weighting approach for confounder control.
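To make the reweighting idea concrete, here is a minimal sketch of direct standardization; the notation ($r_{ek}$ for the observed rate in exposure group $e$ and confounder stratum $k$, $w_k$ for the weight of stratum $k$ in a chosen standard population) is illustrative and not taken from the paper:

\[
r_{e}^{\mathrm{std}} \;=\; \frac{\sum_{k=1}^{K} w_{k}\, r_{ek}}{\sum_{k=1}^{K} w_{k}},
\qquad e = 0, 1.
\]

Because every exposure group is weighted by the same $w_k$, the standardized rate ratio $r_{1}^{\mathrm{std}}/r_{0}^{\mathrm{std}}$ compares the exposure categories on a common confounder distribution, so differences are no longer driven by differing stratum compositions.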

Citation


Niels Keiding, David Clayton. "Standardization and Control for Confounding in Observational Studies: A Historical Perspective." Statist. Sci. 29(4): 529-558, November 2014. https://doi.org/10.1214/13-STS453

Information

Published: November 2014
First available in Project Euclid: 15 January 2015

zbMATH: 1331.62287
MathSciNet: MR3300358
Digital Object Identifier: 10.1214/13-STS453

Keywords: $2\times2\times K$ table, causality, decomposition of rates, epidemiology, expected number of deaths, G. U. Yule, H. Westergaard, log-linear model, marginal structural model, National Halothane Study, odds ratio, rate ratio, transportability

Rights: Copyright © 2014 Institute of Mathematical Statistics
