In two-thirds of the reports there was bias in the way adverse effects of the treatment were reported, with more serious side-effects (those with toxicities graded as III or IV) poorly reported. This was particularly the case in trials that showed a significant benefit for the treatment under investigation. Only 32% of articles gave details of the frequency of grade III or IV toxicities in the summary (known as the "abstract").
The authors of the study call on authors, journals and the experts who review articles for journals (peer reviewers) to be more rigorous in encouraging unbiased reporting of trial results and in enforcing guidelines.
Professor Ian Tannock, medical oncologist and senior scientist in the Division of Medical Oncology and Hematology at the Princess Margaret, who led the research, said: "Better and more accurate reporting is urgently needed. Journal editors and reviewers, who give their expertise on the topic, are very important in ensuring this happens. However, readers also need to critically appraise reports in order to detect potential bias. We believe guidelines are necessary to improve the reporting of both efficacy and toxicity."
Prof Tannock and his colleagues identified all randomised controlled, phase III clinical trials of breast cancer therapies published between January 1995 and August 2011. Out of a total of 568 articles, 164 were eligible for inclusion in the analysis. Phase III trials usually evaluate the efficacy and/or the best dose of a therapy that has already been tested in earlier, smaller trials, and they usually involve more patients than phase I or II trials. Often, they are the final stage that a drug or other therapy has to pass before it can be licensed for use in patients in normal clinical practice, outside the trial setting.
Trials always have a "primary endpoint" - the specific event that is measured at the end of the trial to see whether or not the given treatment works. The primary endpoint is decided before the study begins. Often it relates to overall survival: did more patients survive, or live longer, on the new treatment than on the existing standard treatment? However, there can also be "secondary endpoints"; these are additional events that are of interest to the investigators, but which the study has not been designed specifically to address, and for this reason investigators have to be cautious in analysing and drawing conclusions from them. Secondary endpoints can include how much longer patients on the new treatment live without the disease progressing, spreading to other parts of the body or recurring, compared with patients on the standard treatment, as well as the adverse side-effects of the treatment and patients' quality of life.
Prof Tannock and his colleagues defined bias as "inappropriate reporting of the primary endpoint and toxicity, with emphasis on reporting of these outcomes in the abstract". They defined spin as "the use of words in the concluding statement of the abstract to suggest that a trial with a negative primary endpoint was positive based on some apparent benefit shown in one or more secondary endpoints".
They found that 54 trials (33%) were reported as positive on the basis of secondary endpoints, despite not finding a statistically significant benefit in the primary endpoint. "These reports were biased and used spin in attempts to conceal that bias," write the authors. Of the 92 trials that showed no benefit for patients from the experimental therapy (a negative primary endpoint), 58% used secondary endpoints to suggest benefit from the treatment.
A total of 110 papers (67%) reported adverse side-effects of the experimental therapy in a biased manner. If a trial showed a benefit for the treatment (a positive primary endpoint), toxicities were more likely to be under-reported.
The first author of the study, Dr Francisco Vera-Badillo, clinical research fellow at the Princess Margaret, said: "We found a high incidence of biased reporting of the outcomes of clinical trials. In those with outcomes that were either negative or did not show a statistically significant benefit, spin was used frequently to influence positively the interpretation of the results, by focusing on apparent benefits from secondary endpoints.
"Where trials showed a positive outcome, the toxicities were less likely to be reported. A possible explanation for this could be that the investigators, sponsors or both, prefer to focus on the efficacy of the experimental treatment and downplay toxicity to make the results look more attractive."
The source of funding for trials (industry or academic) was not associated with bias or spin in the reporting of results and toxicities.
Most academic journals now require clinical trials to be registered before they start as a condition of publication. Many countries register them on either the US registry (ClinicalTrials.gov) or the European registry (clinicaltrialsregister.eu). Some of the studies analysed for the Annals of Oncology paper started before registration became compulsory. However, among those that were registered, the researchers found that some changed the primary endpoint between registration and publication of the results. "Among these trials, there was a trend towards change of the primary endpoint being associated with positive results, suggesting that it may be a strategy to make a negative trial appear positive," write the authors. "Trial registration does not necessarily remove bias in reporting outcomes, although it does make it easier to detect."