It is disgraceful that for most of the 20th century, Alzheimer’s disease – first named and described in 1910 – was underfunded by the government, neglected by most of the medical community and ignored by Big Pharma. It was not until 1994, more than 80 years after the disease was discovered, that the Food and Drug Administration approved the first drug for Alzheimer’s. That medicine was not especially effective and was ultimately pulled from the market over safety concerns. Were it any other disease, such neglect would be unconscionable.

Now things are changing. Every day, about 10,000 baby boomers – people born between 1946 and 1964 – turn 65, the age at which the risk of the most common form of Alzheimer’s begins to rise. In 2020, Americans older than 65 numbered about 56 million (nearly 17% of the total population); by 2030, every baby boomer will be older than 65, swelling the over-65 population to an estimated 73 million.

This has caused a rethinking of Alzheimer’s treatment, and in the last few years, several new medications have received FDA approval. The drugs have different actions and different effects; none is a cure. Some work to slow the relentless progression of the disease, and others moderate symptoms such as memory loss and confusion. Unfortunately, these treatments do not restore brain function lost through prior neurologic deterioration.

Because they are not a cure, these medications present a new ethical question for patients, physicians and surrogate decision-makers: When during the course of the disease should the drugs be prescribed? Diagnosis of Alzheimer’s currently involves brain imaging, behavioral and cognitive testing, and often genetic evaluation. Obviously, those who are diagnosed with early, new-onset Alzheimer’s are good candidates, but what about those with advanced disease? Is it fair to subject them to the side effects, not to mention the likely expense the drugs entail, simply to prolong their cognitive decline and potentially extend their suffering?

Put simply, is there a stage in the disease process when it becomes too late to institute therapeutic intervention? And who bears the responsibility of making such a decision?

Patients? Certainly when they are mentally competent and have the capacity to understand the risks and benefits of Alzheimer’s treatment. But what happens when they lose that ability? And by restricting the medications to those patients who can give informed consent, are we consigning those further down the road to no treatment unless someone speaks for them?

Families and surrogates? This would be the next logical step, and in most cases, the hope is that they would decide as the patient would. But it doesn’t always happen that way. Moreover, there is often a conflict of interest: a patient whose course of Alzheimer’s is extended by these drugs will require more care and more hospitalizations, at greater expense to caregivers.

Philadelphia Daily News reporter Jim Nicholson, a longtime caregiver for his spouse with Alzheimer’s, once observed, “The ‘moral imperative’ for families to take care of even a healthy, unafflicted ‘grandmom’ or ‘granddad’ began to fast dissipate in the last half of the 20th century.”

Physicians? Responsibility for treating Alzheimer’s ultimately devolves to physicians, but to date, there have been few conversations in the clinical and research communities about the ethics of treating neurodegenerative conditions with nonrestorative drugs. Such discussions are urgently needed to ensure that patients’ rights and well-being are protected.

Ironically, this debate about when to treat is similar to one the medical community faced 50 years ago with new cancer drugs and surgeries. Which patients were candidates for a cure or significant palliation, and in which patients would treatment simply prolong suffering without measurable benefit? That question occasionally still arises in cancer care, but it is far less fraught today because of years of experience and a long track record of outcomes with cancer therapy. This is where decades of a dilatory approach to Alzheimer’s have left us: We still know little about the causes or trajectory of the disease. It could take many years to determine which patients will benefit most from treatment, and an entire generation of patients may suffer through that uncertainty.

To most people who are not familiar with Alzheimer’s disease, the condition is essentially invisible; as symptoms progress, patients who were once active and engaged gradually disappear from the stage. They no longer contribute to the economy, cannot interact socially and are kept out of sight. Nursing homes will predictably expand to become comfort villages for the afflicted, and those villages will not be so comfortable if, as a society, we cannot muster the financial, political and moral will to help these invisible victims and pay caregivers reasonable wages.

Nicholson has said, “People who are of no further immediate and practical use to our society are being replaced every day. The unproductive are residue, and the afflicted are an inconvenience.”