RCTs stopped early for benefit are becoming more common, often fail to adequately report relevant information about the decision to stop early, and show implausibly large treatment effects, particularly when the number of events is small. These findings suggest clinicians should view the results of such trials with skepticism.
There is an accompanying editorial by Dr. Stuart Pocock, a statistician who specializes in the monitoring of RCTs.
Generally speaking, in RCTs, allowing nature enough time to take its course is what lets us separate variation due to chance alone from variation due to the intervention. Here is a heuristic argument: when estimating a quantity such as the average weight of 1-month-old babies, the uncertainty (standard error) of the estimate shrinks with the square root of the number of subjects. When estimating an event rate or a treatment effect on events, however, what drives the uncertainty is not the number of subjects enrolled but the number of events observed, and events accrue only as follow-up time passes. Stop a trial early, with few events on record, and the estimate is still loose enough for chance alone to masquerade as a large treatment benefit.
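To put rough numbers on that (a back-of-the-envelope sketch using standard approximations, not figures from the paper), the standard error of a mean falls like 1/sqrt(n), while for a hazard ratio it is the total event count D that matters:

    \mathrm{SE}(\bar{x}) \;=\; \frac{\sigma}{\sqrt{n}},
    \qquad
    \mathrm{SE}\!\left(\log \widehat{\mathrm{HR}}\right) \;\approx\; \sqrt{\tfrac{4}{D}}
    \quad \text{(D = total events, roughly equal allocation)}

    95\%\ \text{CI for HR} \;\approx\; \widehat{\mathrm{HR}} \times \exp\!\left(\pm\, 1.96\sqrt{\tfrac{4}{D}}\right)
    \;\Longrightarrow\;
    D = 20:\ \times/\div\; 2.4, \qquad D = 200:\ \times/\div\; 1.3

With only 20 events the 95% confidence interval spans a factor of about 2.4 on either side of the point estimate, so an apparent "halving of risk" is comfortably within reach of chance; with 200 events the factor shrinks to about 1.3.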
Another issue that arises with repeated interim analyses is that the more often we look, the more likely we are to be fooled by apparently impressive variations that are due to chance alone. There is a substantial body of statistical research on sequential methods, which allow interim analyses of clinical trials while still controlling type I and type II errors.
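A quick way to convince yourself of this is simulation. The sketch below is my own illustration, not code from the paper; the event probabilities, the look schedule, and the naive unadjusted test are all arbitrary choices. It simulates a two-arm trial with no true treatment effect, peeks at the data after every batch of patients, and stops the moment an unadjusted test looks "significant" in favour of treatment. A non-trivial fraction of purely null trials ends up stopped early "for benefit", and the risk ratios reported at that moment sit noticeably below 1 even though the true ratio is exactly 1.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    def simulate_trial(p_control=0.2, p_treat=0.2, n_per_look=100, n_looks=10):
        """One two-arm trial with an unadjusted test at every interim look.

        With p_control == p_treat the true effect is null, so any 'benefit'
        found at an interim look is chance alone. Returns (stopped_early,
        risk ratio at the moment of stopping or at the final look).
        """
        ec = et = nc = nt = 0
        rr = 1.0
        for _ in range(n_looks):
            ec += rng.binomial(n_per_look, p_control)
            et += rng.binomial(n_per_look, p_treat)
            nc += n_per_look
            nt += n_per_look
            pc, pt = ec / nc, et / nt
            pooled = (ec + et) / (nc + nt)
            se = np.sqrt(pooled * (1 - pooled) * (1 / nc + 1 / nt))
            z = (pt - pc) / se                   # negative when treatment 'wins'
            pval = 2 * stats.norm.sf(abs(z))     # naive test, no alpha-spending
            rr = pt / pc if pc > 0 else np.nan
            if pval < 0.05 and pt < pc:          # "stopped early for benefit"
                return True, rr
        return False, rr

    results = [simulate_trial() for _ in range(2000)]
    early_rrs = [rr for stopped, rr in results if stopped]
    print(f"null trials stopped early for 'benefit': {len(early_rrs) / len(results):.1%}")
    print(f"median risk ratio among those trials:    {np.median(early_rrs):.2f}")

Replacing the naive p < 0.05 rule with a group-sequential boundary (e.g. O'Brien-Fleming-style alpha-spending) that demands much stronger evidence at early looks is exactly what the sequential methods mentioned above are for.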
A watched pot never boils. Or not.