Abstract:
Sequential methods provide a formal framework by which clinical trial data can be
monitored as they accumulate. The results from interim analyses can be used either
to modify the design of the remainder of the trial or to stop the trial as soon as
sufficient evidence of either the presence or absence of a treatment effect is available.
The circumstances under which the trial will be stopped with a claim of superiority for
the experimental treatment must, however, be determined in advance so as to control
the overall type I error rate. One approach to calculating the stopping rule is the
group-sequential method. A relatively recent alternative to group-sequential approaches
is the adaptive design method. This latter approach provides considerable flexibility in
changes to the design of a clinical trial at an interim point. However, a criticism is that
the method by which evidence from different parts of the trial is combined means that
a final comparison of treatments is not based on a sufficient statistic for the treatment
difference, suggesting that the method may lack power.
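As an illustration of such a combination rule (one of several possibilities; the specific adaptive methods compared in this paper may differ), the weighted inverse normal method combines the stage-wise standardized test statistics \(Z_1\) and \(Z_2\) using pre-specified weights \(w_1\) and \(w_2\),
\[
Z = \frac{w_1 Z_1 + w_2 Z_2}{\sqrt{w_1^2 + w_2^2}},
\]
where the weights are fixed before the interim analysis. If the second-stage sample size is changed at the interim look, the fixed weights no longer match the realized information fractions, so \(Z\) is not the usual z-statistic based on the sufficient statistic for the treatment difference; this is the source of the potential loss of power noted above.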
The aim of this paper is to compare two adaptive design approaches with the group-sequential
approach. We first compare the form of the stopping boundaries obtained
using the different methods. We then focus on a comparison of the power of the
different trials when they are designed so as to be as similar as possible. We conclude
that all methods provide acceptable control of the type I error rate and power when the sample size
is modified on the basis of a variance estimate, provided no interim analysis is so small that
the asymptotic properties of the test statistic no longer hold. If an interim analysis is that small, the
group-sequential approach is to be preferred. Provided that asymptotic assumptions
hold, the adaptive design approaches control the type I error rate even if the sample
size is adjusted on the basis of an estimate of the treatment effect, showing that the
adaptive designs allow more modifications than the group-sequential method.