Your Money or Your Medicine: Corruption at the FDA
$56,000 a year for a new Alzheimer’s drug, aducanumab (Aduhelm), that has little if any clinical benefit and substantial risk. A 40% rise in the share price of its producer, Biogen. All this despite not a single positive vote from any of the 11 expert Food and Drug Administration (FDA) reviewers(1). Is there something wrong here, or is this just the way capitalism is supposed to work? Seems that way.
It is no surprise to any of our readers that the poor, disproportionately non-white and non-citizen, are deprived of the basic right to a healthy life in part because of the expense of medications. Many of us, however, think of government oversight agencies as moral and reliable, if not always competent. The story of the FDA shows not only the blatant influence of the private sector over its professional judgments, but also how that influence is steadily increasing. Although there has been some uproar over this most recent and egregious example of corruption, it remains to be seen whether any procedures will change or whether this drug will be withdrawn. As long as profits determine which drugs get developed and at what price they are sold, we will be deprived of medicines we need for many conditions: either drugs are not developed, or they are taken off the market because demand is too small to be profitable, or the price is too high for many patients to afford an available treatment. The story is similar to the disastrous lack of worldwide distribution of Covid vaccines, which is causing havoc even in our own country.
How We Got Here
The FDA came into existence in 1906 under President Theodore Roosevelt in order to protect the public from dangerous medications. It had been evident for over a century that regulation was needed against profiteers like those who peddled counterfeit versions of Edward Jenner’s smallpox vaccine, introduced in 1796. Several attempts to pass legislation holding drug manufacturers responsible for purity and efficacy came and went in the 1800s, but only in 1906 did the government gain the authority to regulate the sale and transport of medicines and foods. However, the courts made it difficult to uphold government standards, and many dangerous drugs slipped through, including an elixir of the sulfa drug sulfanilamide that killed more than 100 people in 1937. By the 1950s federal regulatory powers had been expanded to require pre-market review, ban false claims, require prescriptions for some drugs, and recall ineffective or dangerous medications. In the 1960s drug advertising was restricted and inspections of pharmaceutical plants began.
The evidence newly required for FDA approval came in three stages: a small trial in healthy volunteers to determine a safe dose (phase I), a second among patients to look for efficacy and side effects (phase II), and two large phase III trials to monitor benefits and safety. The data would then be reviewed by the FDA and sometimes by outside experts. These processes increased the time required for clinical trials, slowed drug approvals and, under the Hatch-Waxman Act of 1984, allowed generics to be tested before patents on brand-name drugs expired. All these factors threatened pharmaceutical profits, so drug companies began to bypass testing under rules that allowed them to do so if their product was essentially equivalent to an existing one, and to submit “data summaries” and “real world evidence” in lieu of formal clinical studies. Several harmful products were approved as a result(2).
A trend of major importance is that pharmaceutical companies now pay an ever larger share of the FDA’s costs, rising from 27% in 1993 to 75% in 2017. Many of the witnesses, as well as the experts, who come before the FDA also have their expenses, such as travel and housing, paid by the drug companies. In the late 1980s, AIDS activists were instrumental in demanding that the FDA make lifesaving drugs available, to the benefit of many. But over the past 30 years, so many new routes to faster approvals have been introduced that the FDA process is now the fastest in the world, and the lobbyists of the Pharmaceutical Research and Manufacturers of America continue to push for even more speed. Today there are accelerated approval, priority review, and breakthrough therapy pathways for fast approval, for which 68% of new drugs qualified from 2014 to 2016. Once a drug receives expedited approval, the manufacturer has over ten years to complete post-marketing studies(3).
A 2018 report in Science(4) studied how often the physician experts advising the FDA are paid by drug manufacturers. It found that 40 of 107 advisers serving from 2013 to 2016 received over $10,000 in post hoc earnings or research support; of those, 26 received over $100,000 and six over $1 million. Many of these advisers had received other funds from the same companies in the year before they advised on a product, which was disclosed in journal articles but not by the FDA. In addition, many advisers understand that they will receive benefits far into the future, such as speaking engagements, research funding, or consulting fees, that enhance their careers. Within the FDA, since industry fees began paying for drug reviews, only review teams that vote for approval have received agency awards(3). A physician colleague who once worked at the FDA tells me that reviewers who oppose a medication or device are often removed from the team. Today, 93% of the patient and consumer representatives invited to FDA hearings receive funds from drug companies(5).
One of many examples of the bad results of the abbreviated approval process is febuxostat (Uloric), a drug for gout, which was denied approval in 2005 and 2006 because patients suffered more serious heart problems than those on the standard generic therapy, allopurinol. Its producer, Takeda, then performed a larger trial in 2009 that showed no increased mortality, and Uloric was approved pending a post-marketing trial of 6,000 patients. After eight years, this study showed that patients on Uloric had a 22% higher risk of death and a 34% higher risk of heart-related death than those on allopurinol; only then did the FDA publish an alert and declare that it should be used only as a second-line drug(3).
The Case of Aducanumab
Alzheimer’s dementia affects about 35 million people in the world, 6 million in the US, and there are no effective medications, nor any new ones since 2003. Aducanumab, like other proposed Alzheimer’s drugs, decreases the clumps of amyloid in the brain that some researchers think are the cause of dementia, but the evidence that it helps cognition is thin. In March 2019, two phase III trials were stopped early because no therapeutic effect was found, but a later analysis by Biogen showed that a subgroup with early disease who received the highest dose, in only one of the trials, had some benefit. Ten of eleven independent expert panelists advising the FDA concluded that this result was insignificant and recommended against approval (the eleventh was uncertain), and three have since resigned. In addition, 40% of trial participants developed brain swelling which, although not always dangerous, would require frequent MRIs to evaluate. Nonetheless, the FDA approved the drug last month through its accelerated approval program, which allows Biogen a decade to complete larger studies. Meanwhile, Biogen announced that the once-a-month intravenous infusion will cost $56,000 per person, which would mean revenue of $17 billion per year if only 5% of US Alzheimer’s patients take the drug. Its share price rose 40%(1).
It’s no surprise to learn that Biogen executives had been meeting with FDA officials, including the director of the neuroscience division, since 2019 to find a pathway to approval. These discussions violated FDA protocols, but the secret effort was important enough to Biogen to have its own code name, “Project Onyx.” Public Citizen, a government watchdog group, has called for the resignation of top FDA officials(6).
So controversial is the FDA approval of aducanumab that the Cleveland Clinic, a major medical institution, will not dispense it after a team of its own experts reviewed the evidence. The Mount Sinai Health System in New York will also not use the drug until the FDA’s ties with Biogen are investigated. Six state affiliates of Blue Cross/Blue Shield are refusing to cover the drug, and other insurers are suggesting the price be cut tenfold. Medicare is considering what to do; its decision could be pivotal both for Biogen and for Medicare itself, which could be bankrupted by the cost.
Who Wins and Who Loses?
As we have said, this story is really not so surprising. The pharmaceutical industry is the most profitable in the US; its 35 largest companies made a gross profit of $8.6 trillion from 2000 to 2018(7). Americans spend more on prescription drugs annually, about $1,200 per person, than anyone else in the world. Even when insurance pays many of these costs, the high prices cause premiums to rise. Even the manufacturers of generics, which are supposed to keep prices down, are being investigated for collusion. High prices also mean that insurers may restrict drug usage to the sickest patients, as happened with hepatitis C.
Medicaid, the plan for the poorest insured patients, pays less than market rates to drug makers. However, Medicare, which insures the vast majority of those over 65, is forbidden by law from negotiating drug prices with manufacturers. The 30% of US residents who are uninsured or underinsured are out of luck entirely, unable to afford many medicines, while the rest pay an average of 14% of their drug costs out of pocket. Overall, 20% of people say they do not fill or finish prescriptions because of the cost(8).
Just as with the inadequate state of housing, wages, education and all services for most US workers, the solution will ultimately lie in remaking the whole system: changing to one in which profits do not exist and human welfare is the primary concern. Not only would the spectrum of disease change, away from the many illnesses caused by toxins and preventable infections, but research would be collaborative, driven by need alone, and done for use rather than for profit. None of this is possible under capitalism.