With drug makers spending $4.5 billion on direct-to-consumer (DTC) advertising, a 30% increase over the spending rate just two years ago, it is incumbent upon us to carefully consider the accuracy of these companies' claim that they are educating American consumers, helping them make informed medical decisions about the various prescription drugs available.
Friedman then compares the effectiveness of comparable antidepressants, given the ample marketing of the anti-psychotic drug Latuda, which is also used in the treatment of bipolar depression. He examines four different anti-psychotics used to treat bipolar depression (Prozac/Zyprexa, Seroquel, Symbyax, Latuda), their monthly cost without factoring in insurance, and the number of people who need to be treated in clinical trials for one individual to benefit from the drug. (None of the drugs provide information on the number of successful and unsuccessful clinical trials, so those columns contain only question marks for all four drugs.) The number of people requiring treatment for one to benefit ranges from 4 to 6 across the drugs, so there is limited variation on that dimension. However, while Prozac's monthly cost is $21.79, Seroquel's ranges from approximately $51 to $102, and Latuda's is $922. It is not at all uncommon to try different SSRIs and similar drugs to find the one that (potentially for idiosyncratic reasons) is most effective with the fewest side effects (e.g., Prozac vs. Zoloft), but when a medication is nearly 42 times more expensive, we expect a large differential in effectiveness. It is unclear whether that is what we are actually getting here, with many earlier medications apparently functioning the same or similarly and potentially being similarly effective.
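The cost comparison becomes even starker when combined with the number-needed-to-treat figures. As a rough sketch (the per-drug NNT values below are assumptions for illustration, since only the overall 4-6 range is reported), one can compute the monthly cost per patient who actually benefits:

```python
# Sketch: monthly cost per patient helped = monthly cost * NNT.
# Costs are from Friedman's table; the NNT assigned to each drug is a
# hypothetical value within the reported 4-6 range, for illustration only.
drugs = {
    "Prozac":   {"monthly_cost": 21.79,  "nnt": 4},  # NNT assumed
    "Seroquel": {"monthly_cost": 102.00, "nnt": 5},  # upper cost; NNT assumed
    "Latuda":   {"monthly_cost": 922.00, "nnt": 6},  # NNT assumed
}

for name, d in drugs.items():
    cost_per_responder = d["monthly_cost"] * d["nnt"]
    print(f"{name}: ${cost_per_responder:,.2f}/month per patient helped")

# Even if Latuda had the best NNT in the range, the raw cost gap dominates.
ratio = drugs["Latuda"]["monthly_cost"] / drugs["Prozac"]["monthly_cost"]
print(f"Raw cost ratio, Latuda vs. Prozac: {ratio:.1f}x")
```

Whatever NNT one assigns within the reported range, the roughly 42-fold cost gap swamps any plausible difference in effectiveness.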
Friedman offers some valuable suggestions about the information consumers would need to determine which medication to take and to weigh the costs and benefits of that decision. Among his recommendations are statistics on the rates at which people taking the drug experience various mild vs. serious side effects (are the cautions we hear on television about seizures or coma just protection against litigation over a very rare event, or a legitimate possibility to consider?), along with the results of the drug's clinical trials. He recommends that the FDA provide a universal scorecard for all new drugs revealing how each drug's cost and effectiveness compare to similar drugs already available, with the scorecard included in advertising and in pharmacy-provided safety information. Consumers would then have not just absolute information (e.g., the cost of a drug and its proper use) but also relative information, given the number of similar drugs on the market.
This is particularly important given the number of comparable medications available to American consumers. However, Friedman notes, "Drug companies have little incentive to make these comparisons. Why? Because a vast majority of 'new' drugs are really not new at all; instead, they are minor tweaks and modifications of older drugs, and therefore unlikely to substantially outperform them. For example, there are seven statins on the market that all lower cholesterol by the same mechanism, and eight antidepressants (selective serotonin reuptake inhibitors) that essentially work the same way." If "new" medications are rarely very new but may pose different costs and risks, such comparative, relative evaluation of drugs would be especially advantageous. The website eHealthMe makes a valuable advance toward this goal, compiling detailed data on patients' experiences of particular constellations of side effects with medications, as well as interactions between medications.
Friedman's additional recommendation that clinical trial information -- both positive and negative -- be provided to consumers speaks to a broader issue in medical research (as well as in many other fields, including political science, though the consequences there may be more limited than for the development and distribution of medications to the American public): selection bias in publication. Positive clinical trials (those reaching statistical significance at p < 0.05) are more likely to be published than negative ones, even though drug companies are required to register all trials. It is not difficult to understand drug companies' incentive to produce positive findings: they are spending enormous amounts of money to develop a drug and want to show that it works. And in other contexts, there is an attractiveness to formulating a hypothesis and finding that we were in fact correct in our assessment of the relationship between X and Y. Surveying various trials and investigations into publication bias, the World Health Organization found that the rate at which clinical trial results are published ranges from 36% to 93% (striking variation!), that positive findings were more likely to be published than negative or null findings, with an odds ratio of 3.90, and that positive findings were published sooner. Thus, if a medication is determined to be ineffective, we might not learn about it at all, or not in a timely fashion.
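To make concrete what an odds ratio of 3.90 implies, consider a small sketch (the 50% baseline publication rate for negative trials below is an assumed value for illustration, not a figure from the WHO survey):

```python
# What does a publication odds ratio of 3.90 mean in practice?
# Given a baseline publication probability for negative/null trials,
# the implied probability for positive trials follows from the OR.
def positive_rate(odds_ratio: float, negative_rate: float) -> float:
    """Publication probability for positive trials, given the odds ratio
    and the publication probability for negative trials."""
    neg_odds = negative_rate / (1 - negative_rate)
    pos_odds = odds_ratio * neg_odds
    return pos_odds / (1 + pos_odds)

# Assumed baseline: half of negative trials get published.
print(positive_rate(3.90, 0.50))  # ~0.796
```

Under that assumption, roughly 80% of positive trials would reach print while only half of negative ones do -- which is exactly how an evidence base comes to look rosier than the underlying trials.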
An October 2015 New York Times article examined how the effects of talk therapy have been overstated as a consequence of studies with poor results rarely being published in journals, leaving people's impressions unadjusted for unpublished findings of limited or null effects. Indeed, when the researchers accounted for this selection bias by tracking down the data from 13 unpublished clinical trials, the estimated benefits of talk therapy fell by approximately a quarter. And while the harm associated with engaging in therapy that may or may not be helpful is relatively small -- there is a clear financial investment, but therapy may still give someone a venue in which to talk and feel heard, even if it does not effectively manage the more severe symptoms at hand -- a 25% differential in effectiveness may matter a LOT when one is considering putting medication in one's body (again, with uncertain clusters of side effects, some serious) and spending hundreds of dollars per month to do so.
The Patient-Centered Outcomes Research Institute (PCORI), an independent nonprofit created in 2010 as part of the Affordable Care Act ("Obamacare"), has the mission of improving the availability and quality of medical evidence. Among its programs is comparative clinical effectiveness research (CER), which aims to remedy the incomplete nature of available clinical trial evidence, though clearly there is a long way to go. Increasing the number of venues in which to publish null findings would go a long way toward improving the quality of research in healthcare and beyond.