The Problem: Less to Do with Advertising, More to Do with Sponsored Trials
The most conspicuous example of medical journals' dependence on the pharmaceutical industry is the substantial income from advertising, but this is, I suggest, the least corrupting form of dependence. The advertisements may often be misleading [5,6] and the profits worth millions, but the advertisements are there for all to see and criticise. Doctors may not be as uninfluenced by the advertisements as they would like to believe, but in every sphere, the public is used to discounting the claims of advertisers.
The much bigger problem lies with the original studies, particularly the clinical trials, published by journals. Far from discounting these, readers see randomised controlled trials as one of the highest forms of evidence. A large trial published in a major journal has the journal's stamp of approval (unlike the advertising), will be distributed around the world, and may well receive global media coverage, particularly if promoted simultaneously by press releases from both the journal and the expensive public-relations firm hired by the pharmaceutical company that sponsored the trial. For a drug company, a favourable trial is worth thousands of pages of advertising, which is why a company will sometimes spend upwards of a million dollars on reprints of the trial for worldwide distribution. The doctors receiving the reprints may not read them, but they will be impressed by the name of the journal from which they come. The quality of the journal will bless the quality of the drug.
Fortunately for the companies funding these trials, but unfortunately for the credibility of the journals that publish them, these trials rarely produce results that are unfavourable to the companies' products [7,8]. In 1994, Paula Rochon and others examined all the trials they could find that had been funded by manufacturers of nonsteroidal anti-inflammatory drugs for arthritis [7]. They found 56 trials, and not one presented results unfavourable to the company that sponsored it. Every trial showed the company's drug to be as good as or better than the comparison treatment.
By 2003 it was possible to do a systematic review of 30 studies comparing the outcomes of studies funded by the pharmaceutical industry with those of studies funded from other sources [8]. Sixteen of the studies looked at clinical trials or meta-analyses, and 13 of these had outcomes favourable to the sponsoring companies. Overall, studies funded by a company were four times more likely to have results favourable to the company than studies funded from other sources. In the case of the five studies that looked at economic evaluations, the results were favourable to the sponsoring company in every case.
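To make concrete what "four times more likely" means when expressed as an odds ratio, here is a toy calculation. The counts are purely hypothetical, chosen only to illustrate the arithmetic; they are not data from the review cited above.

```python
# Illustrative odds-ratio calculation. The counts below are invented for
# demonstration and are NOT taken from the 2003 systematic review.
def odds_ratio(fav_sponsored, unfav_sponsored, fav_other, unfav_other):
    """Ratio of the odds of a favourable result in industry-sponsored
    studies to the odds in studies funded from other sources."""
    odds_sponsored = fav_sponsored / unfav_sponsored
    odds_other = fav_other / unfav_other
    return odds_sponsored / odds_other

# Suppose 40 of 50 sponsored studies were favourable (odds 40:10 = 4),
# versus 25 of 50 independently funded studies (odds 25:25 = 1).
print(odds_ratio(40, 10, 25, 25))  # 4.0
```

Note that an odds ratio of 4 does not mean four times as many favourable studies in absolute terms; in this sketch the sponsored studies are favourable 80% of the time versus 50%, yet the odds differ fourfold.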
The evidence is strong that companies are getting the results they want, and this is especially worrisome because between two-thirds and three-quarters of the trials published in the major journals (Annals of Internal Medicine, JAMA, Lancet, and New England Journal of Medicine) are funded by the industry. For the BMJ, it's only one-third: partly, perhaps, because the journal has less influence than the others in North America, which is responsible for half of all the revenue of drug companies, and partly because the journal publishes more cluster-randomised trials, which are usually not drug trials [9].
Why Do Pharmaceutical Companies Get the Results They Want?
Why are pharmaceutical companies getting the results they want? Why are the peer-review systems of journals not noticing what seem to be biased results? The systematic review of 2003 looked at the technical quality of the studies funded by the industry and found that it was as good as, and often better than, that of studies funded by others [8]. This is not surprising, as the companies have huge resources and are very familiar with conducting trials to the highest standards.
The companies seem to get the results they want not by fiddling the results, which would be far too crude and possibly detectable by peer review, but rather by asking the "right" questions, and there are many ways to do this [10]. Some of the methods for achieving favourable results are listed in the Sidebar, but there are many ways to hugely increase the chance of producing favourable results, and there are many hired guns who will think up new ways and stay one jump ahead of peer reviewers.
Then, various publishing strategies are available to ensure maximum exposure of positive results. Companies have resorted to trying to suppress negative studies [11,12], but this is a crude strategy, and one that should rarely be necessary if the company is asking the "right" questions. A much better strategy is to publish positive results more than once, often in supplements to journals, which are highly profitable to the publishers and have been shown to be of dubious quality [13,14]. Companies will usually conduct multicentre trials, and there is huge scope for publishing different results from different centres at different times in different journals. It's also possible to combine the results from different centres in multiple combinations.
These strategies have been exposed in the cases of risperidone [15] and ondansetron [16], but it's a huge amount of work to discover how many trials are truly independent and how many are simply the same results being published more than once. And usually it's impossible to tell from the published studies: it's necessary to go back to the authors and get data on individual patients.
Peer Review Doesn't Solve the Problem
Journal editors are becoming increasingly aware of how they are being manipulated and are fighting back [17,18], but I must confess that it took me almost a quarter of a century of editing for the BMJ to wake up to what was happening. Editors work by considering the studies submitted to them. They ask the authors to send them any related studies, but editors have no other mechanism for knowing what other unpublished studies exist. It's hard even to know about related studies that are published, and it may be impossible to tell that studies are describing results from some of the same patients. Editors may thus be peer reviewing one piece of a gigantic and clever marketing jigsaw, and the piece they have is likely to be of high technical quality. It will probably pass peer review, a process that research has anyway shown to be an ineffective lottery prone to bias and abuse [19].
Furthermore, the editors are likely to favour randomised trials. Many journals publish few such trials and would like to publish more: they are, as I've said, a superior form of evidence. The trials are also likely to be clinically interesting. Other reasons for publishing are less worthy. Publishers know that pharmaceutical companies will often purchase thousands of dollars' worth of reprints, and the profit margin on reprints is likely to be 70%. Editors, too, know that publishing such studies is highly profitable, and editors are increasingly responsible for the budgets of their journals and for producing a profit for the owners. Many owners, including academic societies, depend on profits from their journals. An editor may thus face a frighteningly stark conflict of interest: publish a trial that will bring US$100 000 of profit, or meet the end-of-year budget by firing an editor.
Journals Should Critique Trials, Not Publish Them
How might we prevent journals from being an extension of the marketing arm of pharmaceutical companies in publishing trials that favour their products? Editors can review protocols, insist on trials being registered, demand that the role of sponsors be made transparent, and decline to publish trials unless researchers control the decision to publish [17,18]. I doubt, however, that these steps will make much difference. Something more fundamental is needed.
Firstly, we need more public funding of trials, particularly of large head-to-head trials of all the treatments available for treating a condition. Secondly, journals should perhaps stop publishing trials. Instead, the protocols and results should be made available on regulated Web sites. Only such a radical step, I think, will stop journals from being beholden to companies. Instead of publishing trials, journals could concentrate on critically describing them.
This article is based on a talk that Richard Smith gave at the Medical Society of London in October 2004 when receiving the HealthWatch Award for 2004. The speech is reported in the January 2005 HealthWatch newsletter [20]. The article overlaps to a small extent with an article published in the BMJ [21].
Smith R (2005) Medical Journals Are an Extension of the Marketing Arm of Pharmaceutical Companies. PLoS Med 2(5): e138
1. Horton R (2004) The dawn of McScience. New York Rev Books 51(4): 7-9.
2. Angell M (2005) The truth about drug companies: How they deceive us and what to do about it. New York: Random House. 336 p.
3. Kassirer JP (2004) On the take: How medicine's complicity with big business can endanger your health. New York: Oxford University Press. 251 p.
4. Barbour V, Butcher J, Cohen B, Yamey G (2004) Prescription for a healthy journal. PLoS Med 1: e22. DOI: 10.1371/journal.pmed.0010022.
5. Wilkes MS, Doblin BH, Shapiro MF (1992) Pharmaceutical advertisements in leading medical journals: Experts' assessments. Ann Intern Med 116: 912-919.
6. Villanueva P, Peiro S, Librero J, Pereiro I (2003) Accuracy of pharmaceutical advertisements in medical journals. Lancet 361: 27-32.
7. Rochon PA, Gurwitz JH, Simms RW, Fortin PR, Felson DT, et al. (1994) A study of manufacturer-supported trials of nonsteroidal anti-inflammatory drugs in the treatment of arthritis. Arch Intern Med 154: 157-163.
8. Lexchin J, Bero LA, Djulbegovic B, Clark O (2003) Pharmaceutical industry sponsorship and research outcome and quality. BMJ 326: 1167-1170.
9. Egger M, Bartlett C, Juni P (2001) Are randomised controlled trials in the BMJ different? BMJ 323: 1253.
10. Sackett DL, Oxman AD (2003) HARLOT plc: An amalgamation of the world's two oldest professions. BMJ 327: 1442-1445.
11. Thompson J, Baird P, Downie J (2001) The Olivieri report. The complete text of the independent inquiry commissioned by the Canadian Association of University Teachers. Toronto: Lorimer. 584 p.
12. Rennie D (1997) Thyroid storm. JAMA 277: 1238-1243.
13. Rochon PA, Gurwitz JH, Cheung M, Hayes JA, Chalmers TC (1994) Evaluating the quality of articles published in journal supplements compared with the quality of those published in the parent journal. JAMA 272: 108-113.
14. Cho MK, Bero LA (1996) The quality of drug studies published in symposium proceedings. Ann Intern Med 124: 485-489.
15. Huston P, Moher D (1996) Redundancy, disaggregation, and the integrity of medical research. Lancet 347: 1024-1026.
16. Tramèr MR, Reynolds DJM, Moore RA, McQuay HJ (1997) Impact of covert duplicate publication on meta-analysis: A case study. BMJ 315: 635-640.
17. Davidoff F, DeAngelis CD, Drazen JM, Hoey J, Hojgaard L, et al. (2001) Sponsorship, authorship, and accountability. Lancet 358: 854-856.
18. De Angelis C, Drazen JM, Frizelle FA, Haug C, Hoey J, et al. (2004) Clinical trial registration: A statement from the International Committee of Medical Journal Editors. Lancet 364: 911-912.
19. Godlee F, Jefferson T (2003) Peer review in health sciences, 2nd ed. London: BMJ Publishing Group. 367 p.
20. Garrow J (2005 January) HealthWatch Award winner. HealthWatch 56: 4-5.
21. Smith R (2003) Medical journals and pharmaceutical companies: Uneasy bedfellows. BMJ 326: 1202-1205.