Medical research faces a chicken-and-egg problem: not enough people volunteer for studies of new treatments, partly because researchers can't promise that the studies will help them personally - but without enough volunteers, researchers can't evaluate new treatment options.
But a new "adaptive" way of designing medical studies could help. In a recent paper in the Journal of the American Medical Association, and in several clinical trials now being planned at the University of Michigan Health System and partner institutions, adaptive design has come to the fore.
Experts from U-M and other major medical centers say that the approach makes the most sense in situations where time is of the essence - such as emergency care - or where the medical stakes are high and there are few good treatment options - such as some forms of cancer. They also note there are plenty of situations where adaptive design isn't feasible or needed.
But for patients who are being asked to participate in research studies, an adaptive design could help tip the balance between saying yes and saying no. It could also help patients who enter trials have a clearer understanding of what the stakes are for them personally, not just for the generation of patients who will come after them.
"It takes more preparation for the researchers up front, and more sophisticated statistical analysis as the trial is going on, but in the end more study volunteers will be more likely to get the best option for them, and the results will still be scientifically sound," says William Meurer, M.D., a U-M emergency physician who is lead author of the recent JAMA viewpoint article.
The "adaptive" approach to clinical trial design centers around how patients are randomly assigned to one of the two or more groups in a study. In a non-adaptive trial, everyone who volunteers from the first patient to the last gets assigned with what amounts to a coin toss, and the groups end up being of similar size.
But in an adaptive trial, the trial's statistical algorithm constantly monitors the results from the first volunteers, and looks for any sign that one treatment is better than another. It doesn't tell the patients or the study doctors what they're seeing, but they do start randomly assigning slightly more patients into the group that's getting the treatment that is starting to look better. In other words, the trial "learns" along the way.
"It's a way of assigning patients at slightly less than random chance, allowing us to do what might be in the best interest of each patient as the trial goes along," says Meurer, an assistant professor of emergency medicine and neurology at the U-M Medical School.
By the end of the trial, one of the groups of patients will therefore be larger, which means the statistical analysis of the results will be trickier and the results might be a little less definitive. But if the number of patients in the trial is large, and if the difference between treatments is sizable, the results will still have scientific validity, Meurer says.
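To make the idea concrete, here is a minimal sketch (in Python) of one common way such an algorithm can work: a Bayesian scheme that tracks each arm's observed success rate and tilts the next patient's assignment odds toward the arm that currently looks better. This is purely illustrative - the class name, the Beta-Bernoulli model, and the minimum-probability floor are assumptions made for the sketch, not the algorithm used in SHINE or the other trials described here.

```python
import random

class AdaptiveTrialSketch:
    """Illustrative response-adaptive randomization for a two-arm trial
    with success/failure outcomes. Hypothetical; real trials use
    pre-specified, independently validated algorithms."""

    def __init__(self, min_prob=0.2):
        # Beta(1, 1) priors: before any data, both arms look the same.
        self.successes = {"A": 1, "B": 1}
        self.failures = {"A": 1, "B": 1}
        self.min_prob = min_prob  # never let an arm's chance fall below this

    def assignment_probability(self, n_draws=2000):
        """Estimate P(arm A has the higher success rate) by sampling the
        Beta posteriors, then clip so both arms keep a minimum probability."""
        wins_a = 0
        for _ in range(n_draws):
            p_a = random.betavariate(self.successes["A"], self.failures["A"])
            p_b = random.betavariate(self.successes["B"], self.failures["B"])
            wins_a += p_a > p_b
        prob_a = wins_a / n_draws
        return min(max(prob_a, self.min_prob), 1 - self.min_prob)

    def assign(self):
        """Randomly assign the next volunteer, tilted toward the arm that
        is currently looking better - still random, just not 50/50."""
        return "A" if random.random() < self.assignment_probability() else "B"

    def record(self, arm, success):
        """Update the running results once a patient's outcome is known."""
        if success:
            self.successes[arm] += 1
        else:
            self.failures[arm] += 1
```

The minimum-probability floor in this sketch reflects the point Meurer makes: assignments stay "slightly less than random chance" rather than funneling everyone into the early front-runner, so the comparison between groups remains scientifically meaningful.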
A clinical trial of post-stroke blood sugar treatment is one example of this kind of approach. It's being coordinated by the Neurological Emergencies Treatment Trials (NETT) network based at U-M, and conducted at dozens of centers including, in coming months, U-M's own emergency department and inpatient stroke unit.
The study, called SHINE, uses an adaptive method of assigning stroke survivors to a target blood sugar level in the first day after their stroke – with the goal of finding out how much impact blood sugar control has on how well the patients do overall. The study was designed by researchers at the University of Virginia, Medical College of Georgia, University of Texas Southwestern, and the NETT Statistical and Data Management Center at the Medical University of South Carolina.
Other adaptive-design emergency treatment studies now being planned at U-M with collaborators from across the country include one of therapeutic hypothermia after cardiac arrest, and one of hypothermia after spinal cord trauma. In both cases, cooling is already being used for these patients, but it's not yet known what length of cooling produces the best outcome. The team has also developed an adaptive comparative effectiveness trial to evaluate three different medications for stopping ongoing seizures in patients who have not responded to first-line treatment.
Adaptive design can be most powerful when time is of the essence - when patients or their loved ones are being asked to decide about joining a clinical trial in the midst of a health crisis - and when the difference between treatment options could be large, Meurer says. Pharmaceutical companies and medical device manufacturers have been quicker to adopt adaptive designs for their trials; academic centers, which conduct large numbers of non-industry trials, have lagged behind.
But when researchers just want to compare two standard treatments to make sure one isn't grossly inferior, or when they want to pinpoint the precise impact of a preventive measure (such as aspirin) across a large population (such as heart attack survivors), adaptive designs usually won't help, he notes.
"Adaptive design gives us the potential to get it right and put more people where the bang for the buck is, but still have the change be invisible to the physicians and staff carrying out the trial," Meurer says. "If a particular option helps patients about 10 percent more than other options, but the adaptive design's impact on the statistical results means that you can only say the effect is somewhere between 9 percent and 11 percent, the tradeoff is still worth it."
In addition to Meurer, the JAMA Viewpoint authors are Roger J. Lewis, M.D., Ph.D., of Harbor-UCLA Medical Center, and Donald Berry, Ph.D., of the University of Texas MD Anderson Cancer Center. NETT is headed by William Barsan, M.D., outgoing chair of the U-M Department of Emergency Medicine.
JAMA. 2012;307(22):2377-2378. doi:10.1001/jama.2012.4174