When Cancer Was Conquerable

The first attempt to treat cancer in humans with chemotherapy happened within days of doctors realizing that it reduced the size of tumors in mice.

The year was 1942, and we were at war. Yale pharmacologist Alfred Gilman was serving as chief of the pharmacology section in the Army Medical Division at Edgewood Arsenal, Maryland, working on developing antidotes to nerve gases and other chemical weapons the Army feared would be used against American troops.

After a few months of researching mustard gas in mice, Gilman and his collaborator, Louis S. Goodman, noticed that the poison also caused a regression of cancer in the rodents. Just a few days later, they persuaded a professor of surgery at Yale to run a clinical trial on a patient with terminal cancer; within 48 hours, the patient's tumors had receded.

In 1971, three decades after Gilman's discovery, the U.S. government declared a "war on cancer." Since then, we have spent nearly $200 billion in federal money on research to defeat the disease. But we haven't gotten much bang for our buck: Cancer deaths have fallen by a total of just 5 percent since 1950. (In comparison, heart disease deaths are a third of what they were then, thanks to innovations like statins, stents, and bypass surgery.) The American Cancer Society estimates that more than 600,000 Americans die of cancer annually; 33 percent of those diagnosed will be dead within five years.

Chemotherapy drugs remain the most common treatments for cancer, and most of them were developed before the federal effort ramped up. Out of 44 such drugs used in the U.S. today, more than half were approved prior to 1980. It currently takes 10–15 years and hundreds of millions of dollars for a drug to go from basic research to human clinical trials, according to a 2009 report funded by the National Institutes of Health (NIH). It is now nearly impossible to conceive of going from a eureka moment to human testing in a few years, much less a few days.

Beating cancer is not a lost cause. But if we're going to break new ground, we need to recapture the urgency that characterized the work of pioneers like Gilman and Goodman. And in order to do that, we need to understand how we managed to turn the fight against humanity's most pernicious pathology into a lethargic slog.

Fast, Efficient, and Effective

Look at the history of chemotherapy research and you'll find a very different world than the one that characterizes cancer research today: fast bench-to-bedside drug development; courageous, even reckless researchers willing to experiment with deadly drugs on amenable patients; and centralized, interdisciplinary research efforts. Cancer research was much more like a war effort before the feds officially declared war on it.

One reason that's true is that research on chemotherapy started as a top-secret military project. Medical records never mentioned nitrogen mustard by name, for example—it was referred to only by its Army code name, "Substance X." By 1948, close to 150 patients with terminal blood cancers had been treated with a substance most Americans knew of only as a battlefield killer. After World War II, Sloan Kettering Institute Director Cornelius "Dusty" Rhoads recruited "nearly the entire program and staff of the Chemical Warfare Service" into the hospital's cancer drug development program, former National Cancer Institute (NCI) Director Vincent DeVita recalled in 2008 in the pages of the journal Cancer Research.

Researchers turned swords into ploughshares, and they did it quickly. In February 1948, Sidney Farber, a pathologist at Harvard Medical School, began experiments with the antifolate drug aminopterin. This early chemotherapy drug, and its successor methotrexate, had been synthesized by Yellapragada Subbarow, an Indian chemist who led the research program at Lederle Labs, along with his colleague Harriet Kiltie. Using their compounds, Farber and his team produced the first leukemia remissions in children in June 1948.

In a July 1951 paper, Jane C. Wright, an African-American surgeon, reported she had extended the successes of methotrexate from blood to solid cancers, achieving regressions in breast and prostate tumors by using the substance.

Chemist Gertrude Elion, who'd joined Wellcome Labs in 1944 despite being too poor to afford graduate school, quickly developed a new class of chemotherapy drugs—2,6-diaminopurine in 1948 and 6-mercaptopurine in 1951—for which she and George H. Hitchings would later win the Nobel Prize.

In 1952, Sloan Kettering's Rhoads was running clinical trials using Elion's drugs to treat leukemia. After popular columnist Walter Winchell reported on the near-miraculous results, public demand for 6-mercaptopurine forced the Food and Drug Administration (FDA) to expedite its approval. The treatment was on the market by 1953.

Notice how fast these researchers were moving: The whole cycle, from no chemotherapies at all to development, trial, and FDA approval for multiple chemotherapy drugs, took just six years. Modern developments, by contrast, can take decades to get to market. Adoptive cell transfer—the technique of using immune cells to fight cancer—was first found to produce tumor regressions in 1985, yet the first such treatments, marketed as Kymriah and Yescarta, were not approved by the FDA until 2017. That's a 32-year lag, more than five times slower than the early treatments.

Despite the pace of progress in the 1940s, researchers had only scratched the surface. As of the early 1950s, cancer remissions were generally short-lived and chemotherapy was still regarded with skepticism. As DeVita observed in his Cancer Research retrospective, chemotherapists in the 1960s were called the "lunatic fringe." Doctors scoffed at George Washington University cancer researcher Louis Alpert, referring to "Louis the Hawk and his poisons." Paul Calabresi, a distinguished professor at Yale, was fired for doing too much early testing of new anti-cancer drugs.

"It took plain old courage to be a chemotherapist in the 1960s," DeVita said in a 2008 public radio interview, "and certainly the courage of the conviction that cancer would eventually succumb to drugs."

The first real cure due to chemotherapy was of choriocarcinoma, the cancer of the placenta. And yet Min Chiu Li—the Chinese-born NCI oncologist who discovered in 1955 that methotrexate could produce permanent remissions in pregnant women—was fired in 1957. His superiors thought he was inappropriately experimenting on patients and unnecessarily poisoning them.

But early chemotherapists were willing to bet their careers and reputations on the success of the new drugs. They developed a culture that rewarded boldness over credentialism and pedigree—which may be why so many of the founders of the field were women, immigrants, and people of color at a time when the phrase affirmative action had yet to be coined. Gertrude Elion, who didn't even have a Ph.D., synthesized six drugs that would later find a place on the World Health Organization's list of essential medicines.

"Our educational system, as you know, is regimenting, not mind-expanding," said Emil Freireich, another chemotherapy pioneer, in a 2001 interview at the MD Anderson Cancer Center. "So I'd spent all my life being told what I should do next. And I came to the NIH and people said, 'Do what you want.'…What came out of that environment was attracting people who were adventurous, because you don't do what I did if you're not adventurous. I could have gone into the military and been a captain and gone into practice. But this looked like a challenge."

Chemotherapy became a truly viable treatment option in the late 1960s, with the introduction of combination chemotherapy and the discovery of alkaloid chemotherapeutic agents such as vincristine. While the idea of combination therapy was controversial at first—you're giving cancer patients more poisons over longer periods of time?—the research showed that prolonged protocols and cocktails of complementary medications countered cancer's ability to evolve and evade the treatment. The VAMP program (vincristine, amethopterin, 6-mercaptopurine, and prednisone) raised leukemia remission rates to 60 percent by the end of the decade, and at least half the time these remissions were measured in years. Oncologists were also just starting to mitigate the negative effects of chemotherapy with platelet transfusions. Within a decade of its inception, chemotherapy was starting to live up to its promise.
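
A rough way to see why cocktails beat single agents (an illustrative probability model with made-up numbers, not anything from DeVita or the VAMP investigators): if resistance to each drug arises independently and rarely, a cell must stumble into every resistance mutation at once to survive a combination, so the odds that a detectable tumor already contains a fully resistant clone collapse as drugs are added.

    import math

    # Illustrative model only; tumor size and per-drug resistance odds are assumptions.
    N_CELLS = 1e9      # cells in a clinically detectable tumor (assumed)
    P_RESIST = 1e-7    # chance any given cell is resistant to one drug (assumed)

    def p_resistant_clone(n_cells, p_per_cell):
        """Probability that at least one cell in the tumor survives the regimen."""
        # 1 - (1 - p)^n, computed stably for very small p
        return -math.expm1(n_cells * math.log1p(-p_per_cell))

    for n_drugs in (1, 2, 3, 4):
        # With independent mechanisms, a cell must resist every drug at once.
        p = p_resistant_clone(N_CELLS, P_RESIST ** n_drugs)
        print(f"{n_drugs} drug(s): chance a resistant clone already exists ~ {p:.3g}")

Under these assumptions a single agent is essentially guaranteed to leave resistant cells behind, while two or more drugs make that vanishingly unlikely, which is roughly the logic later formalized in the Goldie-Coldman model of combination therapy.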

The late '60s saw the development of two successful protocols for Hodgkin's disease. The complete remission rate went from nearly zero to 80 percent, and about 60 percent of the original patients never relapsed. Hodgkin's lymphoma is now regarded as a curable affliction.

In the 1970s, chemotherapists expanded beyond lymphomas and leukemias and began to treat operable solid tumors with chemotherapy in addition to surgery. Breast cancer could be treated with less invasive surgeries and a lower risk of recurrence if the operation was followed by chemotherapy. The results proved spectacular: Five-year breast cancer survival rates increased by more than 70 percent. In his public radio interview, DeVita credited "at least 50 percent of the decline in mortality" in colorectal cancer and breast cancer to this combined approach.

Chemo was soon being used on a variety of solid tumors, vindicating the work of the "lunatic fringe" and proving that the most common cancers could be treated with drugs after all.

The '70s also saw the advent of the taxane drugs, originally extracted from the Pacific yew. That tree was first identified as cytotoxic, or cell-killing, in 1962 as part of an NCI investigation into medicinal plants. The taxanes were the first cytotoxic drugs to show efficacy against metastatic breast and ovarian cancer.

But something had changed between the development of the first chemotherapies and the creation of this next class of medicines. Taxol, the first taxane, wasn't approved for use in cancer until 1992. While 6-mercaptopurine journeyed from the lab to the doctor's office in just two years, taxol required three decades.

Something clearly has gone wrong.

A Stalemate in the War

In the early 1950s, Harvard's Farber, along with activist and philanthropist Mary Lasker, began to pressure Congress to start funding cancer research. In 1955, federal lawmakers appropriated $5 million for the Cancer Chemotherapy National Service Center (CCNSC), which was set up between May and October of that year.

At $46 million in 2018 dollars, the initial budget of the CCNSC wouldn't be enough to fund the clinical trials of even one average oncology drug today. The National Cancer Institute, meanwhile, now has an annual budget of more than $4 billion.

At Lasker's insistence, CCNSC research was originally funded by contracts, not grants. Importantly, funding was allocated in exchange for a specific deliverable on a specific schedule. (Today, on the other hand, money is generally allocated on the basis of grant applications that are peer-reviewed by other scientists and often don't promise specific results.) The original approach was controversial in the scientific community, because it made it hard to win funding for more open-ended "basic" research. The contracts allowed, however, for focused, goal-oriented, patient-relevant studies with a minimum of bureaucratic interference.

Farber pushed for directed research aimed at finding a cure rather than basic research to understand the "mechanism of action," or how and why a drug works. He also favored centralized oversight rather than open-ended grants.

"We cannot wait for full understanding," he testified in a 1970 congressional hearing. "The 325,000 patients with cancer who are going to die this year cannot wait; nor is it necessary, in order to make great progress in the cure of cancer, for us to have the full solution of all the problems of basic research.…The history of medicine is replete with examples of cures obtained years, decades, and even centuries before the mechanism of action was understood for these cures—from vaccination, to digitalis, to aspirin."

Over time, cancer research drifted from Farber's vision. The NCI and various other national agencies now largely fund research through grants to universities and institutes all over the country.

In 1971, Congress passed the National Cancer Act, which established 69 geographically dispersed, NCI-designated cancer research and treatment centers. James Watson—one of the discoverers of the structure of DNA and at the time a member of the National Cancer Advisory Board—objected strenuously to the move. In fact, DeVita in his memoir remembers the Nobel winner calling the cancer centers program "a pile of shit." Watson was fired that day.

Impolitic? Perhaps. Yet the proliferation of organizations receiving grants means cancer research is no longer primarily funded with specific treatments or cures (and accountability for those outcomes) as a goal.

With their funding streams guaranteed regardless of the pace of progress, researchers have become increasingly risk-averse. "The biggest obstacle today to moving forward effectively towards a true war against cancer may, in fact, come from the inherently conservative nature of today's cancer research establishments," Watson wrote in a 2013 article for the journal Open Biology.

As the complexity of the research ecosystem grew, so did the bureaucratic requirements—grant applications, drug approval applications, research board applications. The price tag on complying with regulations for clinical research ballooned as well. A 2010 paper in the Journal of Clinical Oncology reported that in 1975, R&D for the average drug cost $100 million. By 2005, the figure was $1.3 billion, according to the Manhattan Institute's Avik Roy. Even the rate at which costs are increasing is itself increasing, from an annual (inflation-adjusted) rise of 7.3 percent in 1970–1980 to 12.2 percent in 1980–1990.
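
As a rough sanity check on those figures (a back-of-the-envelope calculation of my own, not something from Stewart's paper or Roy's analysis, and assuming the two cost estimates are in comparable dollars), the jump from $100 million per drug in 1975 to $1.3 billion in 2005 implies roughly 9 percent compound growth per year, consistent with per-decade growth rates that climbed from 7.3 percent to 12.2 percent:

    # Implied compound annual growth rate in per-drug R&D cost, 1975-2005.
    # Assumes the $100M and $1.3B figures are directly comparable.
    cost_1975 = 100e6
    cost_2005 = 1.3e9
    years = 2005 - 1975

    cagr = (cost_2005 / cost_1975) ** (1 / years) - 1
    print(f"Implied compound annual growth: {cagr:.1%}")  # ~8.9% per year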

Running a clinical trial now requires getting "protocols" approved. These plans for how the research will be conducted—on average, 200 pages long—must go through the FDA, grant-making agencies such as the NCI or NIH, and various institutional review boards (IRBs), which are administered in turn by the Department of Health and Human Services' Office of Human Research Protections. On average, "16.8 percent of the total costs of an observational protocol are devoted to IRB interactions, with exchanges of more than 15,000 pages of material, but with minimal or no impact on human subject protection or on study procedures," wrote David J. Stewart, head of the oncology division at the University of Ottawa, in that Journal of Clinical Oncology article.

While protocols used to be guidelines for investigators to follow, they're now considered legally binding documents. If a patient changes the dates of her chemotherapy sessions to accommodate family or work responsibilities, it may be considered a violation of protocol that can void the whole trial.

Adverse events during trials require a time-consuming reporting and re-consent process. Whenever a subject experiences a side effect, a report must be submitted to the IRB, and all the other subjects must be informed and asked if they want to continue participation.

A sizable fraction of the growth in the cost of trials is due to such increasing requirements. But reporting—which often involves making patients fill out questionnaires ranking dozens of subjective symptoms, taking numerous blood draws, and minutely tracking adherence to the protocol—is largely irrelevant to the purpose of the study. "It is just not all that important if it was day 5 versus day 6 that the patient's grade 1 fatigue improved, particularly when the patient then dies on day 40 of uncontrolled cancer," Stewart noted drily.

As R&D gets more expensive and compliance more onerous, only very large organizations—well-funded universities and giant pharmaceutical companies, say—can afford to field clinical trials. Even these are pressured to favor tried-and-true approaches that already have FDA approval and drugs where researchers can massage the data to just barely show an improvement over the placebo. (Since clinical trials are so expensive that organizations can only do a few, there's an incentive to choose drugs that are almost certain to pass with modest results—and not to select for drugs that could result in spectacular success or failure.) Of course, minimal improvement means effectively no lives saved. Oligopoly is bad for patients.

To be sure, cancer research is making progress, even within these constraints. New immunotherapies that enlist white blood cells to attack tumors have shown excellent results. Early screening and the decline in smoking have had a huge impact as well: Cancer mortality rates are finally dropping, after increasing for most of the second half of the 20th century. It's possible that progress slowed in part because we collected most of the low-hanging fruit from chemotherapy early on. Yet we'll never know for sure how many more treatments could have been developed—how much higher up the proverbial tree we might be now—if policy makers hadn't made it so much harder to test drugs in patients and get them approved.

To find cures for cancer, we need novel approaches that produce dramatic results. The only way to get them is by lowering barriers to entry. The type of research that gave us chemotherapy could never receive funding—and would likely get its practitioners thrown in jail—if it were attempted today. Patient safety and research ethics matter, of course, and it's important to maintain high standards for clinical research. But at current margins it would be possible to open up cancer research quite a bit without compromising safety.

Our institutions have been resistant to that openness. As a result, scholars, doctors, and patients have less freedom to experiment than ever before.

Bringing Back the Urgency

The problem is clear: Despite tens of billions of dollars every year spent on research, progress in combating cancer has slowed to a snail's pace. So how can we start to reverse this frustrating trend?

One option is regulatory reform, and much can be done on that front. Streamline the process for getting grant funding and IRB approval. Cut down on reporting requirements for clinical trials, and start programs to accelerate drug authorizations for the deadliest illnesses.

One proposal, developed by American economist Bartley Madden, is "free-to-choose medicine." Once drugs have passed Phase I trials demonstrating safety, doctors would be able to prescribe them while documenting the results in an open-access database. Patients would get access to drugs far earlier, and researchers would get preliminary data about efficacy long before clinical trials are completed.

More radically, it might be possible to repeal the 1962 Kefauver-Harris amendment to the Federal Food, Drug, and Cosmetic Act, a provision that requires drug developers to prove a medication's efficacy (rather than just its safety) before it can receive FDA approval. Since this more stringent authorization process was enacted, the average number of new drugs greenlighted per year has dropped by more than half, while the death rate from drug toxicity stayed constant. The additional regulation has produced stagnation, in other words, with no upside in terms of improved safety.

Years ago, a Cato Institute study estimated the loss of life resulting from FDA-related drug delays from 1962 to 1985 in the hundreds of thousands. And this only included medications that were eventually approved, not the potentially beneficial drugs that were abandoned, rejected, or never developed, so it's probably a vast underestimate.

There have been some moves in the right direction. Between 1992 and 2002, the FDA launched three special programs to allow for faster approval of drugs for certain serious diseases, including cancer. And current FDA Commissioner Scott Gottlieb shows at least some appetite for further reform.

Another avenue worth exploring is private funding of cancer research. There's no shortage of wealthy donors who care about discovering cures and are willing to invest big money to that end. Bill and Melinda Gates are known around the world for their commitment to philanthropy and interest in public health. In 2016, Facebook founder Mark Zuckerberg and his wife Priscilla Chan promised to spend $3 billion on "curing all disease in our children's lifetime." Napster co-founder Sean Parker has donated $250 million for immunotherapy research.

We know from history that cancer research doesn't need to cost billions to be effective. Instead of open-ended grants, donors could pay for results via contracts or prizes. Instead of relying solely on clinical tests, doctors could do more case series in which they use experimental treatments on willing patients to get valuable human data before progressing to the expensive "gold standard" of a randomized controlled trial. And instead of giving huge sums to a handful of insiders pursuing the same old research avenues, cancer funders could imitate tech investors and cast around for cheap, early stage, contrarian projects with the potential for fantastic results.

The original logo on the Memorial Sloan Kettering Cancer Center, designed in 1960, is an arrow pointing upward along with the words Toward the Conquest of Cancer. We used to think cancer was conquerable. Today, that idea is often laughed off as utopian. But there are countless reasons to believe that progress has slowed because of organizational and governmental problems, not because the disease is inherently incurable. If we approach some of the promising new avenues for cancer research with the same zeal and relentlessness that Sidney Farber had, we might beat cancer after all.

Don't Worry About That Diet Soda Habit: Artificial Sweeteners Are Harmless, Say Scientists

Good news for fans of diet drinks and sugar-free sweets: You can safely ignore the hype about zero-calorie sweeteners somehow triggering weight gain and metabolic issues, according to a team of U.S. and European scientists.

The potential paradox of diet soda fueling weight gain had a lot of traction in popular health media. But this idea was based on inconsistent rodent research results, plus human studies that found links between artificial-sweetener consumption and ill effects but not a causal relationship.

Beyond Calories

A new article in the journal Obesity Reviews summarizes last year's "Beyond Calories—Diet and Cardiometabolic Health" conference, sponsored by the CrossFit Foundation. The event convened doctors, obesity researchers, molecular biologists, nutrition scientists, and other academics from the U.S., Denmark, and Germany to consider whether all calories are "equal with regard to effects on cardiometabolic disease and obesity."

"There is no doubt that positive energy balance, due to excessive caloric consumption and/or inadequate physical activity, is the main driver of the obesity and cardiometabolic epidemics," write Janet King and Laura Schmidt in the paper's introduction. But there's also evidence that "certain dietary components increase risk" for heart disease and weight gain in ways that go beyond a simple tradeoff between calories consumed and calories burned.

In the case of diet soda and its ilk, there are all sorts of theories about how these drinks could sneakily imitate the effects of sugary beverages. It was posited that they might trigger our sweet taste receptors to crave more sweet things after consumption, that they might alter our gut bacteria in a negative way, or that they induce a biochemical response as if real sugar had been consumed.

Some speculated that "caloric compensation occurs, negating calories 'saved,'" writes Allison Sylvetsky in a section of the article that deals with non-nutritive sweeteners (NNS). "This compensation could be psychological, whereby one's knowledge of consuming a lower‐calorie NNS‐containing alternative may lead to giving oneself permission for greater calorie ingestion at subsequent meals," or it "could be physiological, in which consumption of lower‐calorie NNS‐containing alternatives promotes heightened hunger and subsequently higher calorie intake."

But that wasn't much more than speculation. "Two separate meta‐analyses consisting of 10 and eight [randomized controlled trials] both indicated that substituting [artificial sweeteners] for sugar resulted in a modest weight loss in adults," notes Sylvetsky. "In 62 of 90 animal studies, NNS did not increase body weight, and a more recent meta‐analysis of 12 prospective cohort studies did not support an association between NNS consumption and BMI."

Embracing Aspartame

The most popular artificial sweetener these days is aspartame, which can be found in most diet soft drinks. Acesulfame potassium, sucralose (sold in the U.S. as Splenda), and substances derived from the stevia plant are also popular. The paper cautions that aspartame has much more safety evidence on its side than the others, as it has been studied much more extensively. (There's no particular reason to think the others will prove any less safe, but they have been studied only "for periods no longer than 16 weeks.")

Aspartame has been controversial for decades, but fears over its alleged links to everything from Alzheimer's disease to brain cancer, diabetes, leukemia, and weight gain have proven unfounded. (Such was also the case with saccharin before it.) And there have been ample randomized controlled trials to study its effects.

"It does not appear that any of these [trials] revealed adverse effects of NNS consumption on risk factors for cardiometabolic disease," writes Sylvetsky, summing up the research. In one six-month study, overweight and obese participants were assigned to drink either sucrose‐sweetened cola, aspartame‐sweetened cola, water, or low-fat milk. Researchers found "no significant differences between the effects of aspartame‐sweetened cola and water on body weight, visceral adiposity, liver fat and metabolic risk factors."

In "the longest intervention study conducted to date," 163 obese women were randomly assigned to have or avoid aspartame‐sweetened foods and drinks during a several-month weight-loss program, a one-year weight-maintenance program, and a two-year follow-up period. "The aspartame group lost significantly more weight overall," reports Sylvetsky, "and regained significantly less weight during the 1‐year maintenance and the 2‐year follow‐up than the no‐aspartame group."

Controlled trials "consistently demonstrate" that consuming aspartame and other artificial sweeteners is associated with decreased calorie consumption, the paper concludes. And "there are no clinical intervention studies involving chronic [sweetener] exposure in which [it] induced a weight increase relative to sugar, water or habitual diet."

The team of researchers suggests that more studies should be done on the effects of artificially sweetened beverages on children and on how consumption of these drinks is related to glucose tolerance and inflammation.

Should Psychiatry Test For Lead More?

Dr. Matthew Dumont treated a 44-year-old woman with depression, body dysmorphia, and psychosis. She failed to respond to most of the ordinary treatments, failed to respond to electroconvulsive therapy, and seemed generally untreatable until she mentioned offhandedly that she spent evenings cleaning up after her husband's half-baked attempts to scrape lead paint off the walls. Blood tests revealed elevated lead levels, the doctor convinced her to be more careful about lead exposure, and even though that didn't make the depression any better, at least it was a moral victory.

The story continues: Dr. Dumont investigated lead more generally, found that a lot of his most severely affected patients had high lead levels, discovered that his town had a giant, poorly-maintained lead bridge that was making everyone sick, and – well, the rest stops being a story about psychiatry and turns into a (barely believable, outrageous) story about politics. Read the whole thing on Siderea's blog.

Siderea continues by asking: why don’t psychiatrists regularly test for lead?

Now, in my case, I’m a talk therapist, and worrying about patients maybe being poisoned is not even supposed to be on my radar. I’m supposed to trust the MDs to handle it.

Dumont, however, is just such an MD. And that this was a clinical possibility was almost entirely ignored by his training.

Dumont’s point here is that while “medical science” knows about the psychiatric effects of lead poisoning and carbon disulfide poisoning and other poisons that have psychiatric effects – as evidenced by his quoting from the scientific literature – psychiatry as practiced in the hospitals and clinics behaves as if it knows no such thing. Dumont is arguing that, in fact, he knew no such thing, because his professional training as a psychiatrist did not include it as a fact, or even as a possibility of a fact.

Dumont’s point is that psychiatry, as a practical, clinical branch of medicine, has acted, collectively, as if poisoning is just not a medical problem that comes up in psychiatry. Psychiatry generally did not consider poisoning, whether by lead or any other noxious substance, as a clinical explanation for psychiatric conditions. By which I mean, that when a patient presented with the sorts of symptoms he described, the question was simply never asked, is the patient being poisoned?

Dumont wants you to be shocked and horrified by what was done to those people, yes. He also wants you to be shocked and horrified by this: psychiatry as a profession – in the 1970s, when (I believe) the incidents he relates were happening, in the 1990s, when he wrote it in his book, or in 2000 when a journal on public health decided to publish it – psychiatry as a profession did not ask the question is the patient being poisoned?

And it didn’t ask the question, because clinical psychiatry had other explanations it liked better, to which it had a priori philosophical commitments.

And that, when you think through what it means for psychiatry, is absolutely chilling.

And:

I can tell you that, standing here in 2018:

• No mental health clinic I've worked at ever had the facilities for even performing blood draws, nor doing urine testing for anything other than commonly abused intoxicants (alcohol, opioids, amphetamines, etc), and then only the clinics that specialized in substance abuse treatment. The clinic I work for now can't even do urine screens. Psychiatrists' offices, hereabouts at least, are not places blood tests are or can be performed, unless they are attached to a general medical practice. Such tests have to be referred out, usually to the patient's PCP's office.

• No psychiatrist has ever asked me to arrange a blood draw test from the PCP for anything other than white blood cell count, thyroid panel, or Lithium blood level.

• Though I’ve seen documentation in patient charts of psychiatrists ordering two of those three tests from PCPs themselves, I’ve never seen documentation of ordering any other tests. I have literally never seen a psychiatrist order a test for any sort of poison.

• I have never seen any sort of toxicology report for poisons in any of the blood test results I have found in my patients’ discharge paperwork from psychiatric hospitalization.

• I have never, in all my case discussions with psychiatrists in-patient and out, or with hospital staff at psychiatric hospitals and hospital departments, ever heard anyone suggest anything about poisoning being a possibility in our mutual cases. Nobody has ever said anything like, "We don't want to prescribe anything until the tox report comes back, in case it's an environmental toxin" or "R/o env tox" or even "We don't think there's much chance of an environmental toxin, so we're not bothering to test for it." It has literally never been mentioned.

• Not even when, due to the suddenness of the onset of psychotic symptoms, psychiatrists were discussing with me the possibility that a patient was intoxicated on some street drug that somehow just wasn’t showing up in his/her urine screens and blood draws.

Maybe it’s not fair for me to generalize from the psychiatrists I’ve worked with. Maybe it’s just that the psychiatrists I’ve worked with – including at MGH and McLean – aren’t representative, being somehow really bad doctors, or poorly educated, and that, contrariwise, normal psychiatrists, basically adequately well-trained psychiatrists, generally do stop to consider poisoning as a cause for severe presenting symptoms, especially when they’ve proved refractory.

I’m not getting that impression though.

I’m not getting that impression from the many interactions I’ve had with psychiatrists and other psychiatric professionals over the last decade, and neither have I been subject to exhortations of what I, as a clinical mental health counselor, should be alert to as evidence of possible poisoning in my patients.

When I was in grad school, it was briefly mentioned that most disorders in the DSM (this was version IV-tr) had a “caused by a General Medical Condition” variety, and then it was never spoken of again.

So as far as I can tell, nothing has changed.

This is not merely an incidental failure of instruction on the part of Dumont's med school professors, nor of mine in grad school. This is, at the most charitable, a massive blind spot, of precisely the sort that a "scientific" field of endeavor should never have, and it seems to afflict the entire profession.

There’s a lot more, and you should read the whole post. Siderea is a great writer and a careful thinker, so when she criticizes my practice I take note. And since I don’t think I’ve ever tested anyone for lead, this is definitely criticizing my practice. What’s my justification?

Take a look at some papers like The Emerging Role For Zinc In Depression And Psychosis and Effect Of Zinc Supplementation In Patients With Major Depression: A Randomized Controlled Trial. Done? Looks like there’s some pretty good evidence that zinc deficiency is involved in depression somehow, right? Do you think zinc is more or less important than lead? By how much?

Or what about toxoplasma? Seems to be twice as common in depressed people as in controls, and increases suicide risk 50%. Pretty suspicious; should we test all depressed people for toxoplasma? If so, is this more or less important than testing all depressed people for lead? By how much?

And when you’ve answered that, what about copper? Omega-3/omega-6 ratio? Vitamin D levels? Cortisol? Magnesium balance? The methylation cycle? Mitochondrial function? Inflammation? Covert viral infections? Covert autoimmune disorders? Paraneoplastic syndromes? Allergies? Light exposure? Circadian rhythm? Selenium? Lithium levels in your local water supply? Insulin resistance? Gut microbiome? PANDAS? FODMAPs? Structural brain abnormalities? And that’s not even getting into the psychosocial stuff!

Every one of these has some evidence of being involved in depression. Some have excellent evidence of being robustly involved. Imagine how dumb you would feel if it turned out only 0.01% of cases of depression were lead related, and you spent so much time testing your patient for lead that you never got around to asking about color temperature of their home lighting, or whether they clean their cats’ litterbox, or how many dental fillings they have.

(Dental fillings? Really? No, not really.)

Why not test all the things? Number one, cost. Number two, sticking your patient with more needles than a Trump voodoo doll owned by the DNC. But number three, everybody is weird in a bunch of ways. Have you ever gone to your doctor for labwork, and gotten a piece of paper back with a lot of words like BASOPHILS and BUN-CREATININE RATIO, and probably three or four of them were highlighted in red to indicate they were abnormal, and your doctor looked at it and shrugged and said not to worry about it? That’s because everybody is weird in a bunch of ways. Your 30-item depression risk factor panel is going to come back saying you don’t have enough selenium and your gut microbiome is off, and your doctor is going to make you spend a month eating nothing but kefir and brazil nuts, and then a month later you’ll leave your abusive partner and your depression will mysteriously disappear.
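
To put rough numbers on "everybody is weird in a bunch of ways" (a back-of-the-envelope illustration with assumed figures, not data from any real panel): reference ranges are conventionally drawn so that around 5% of healthy people fall outside them, so a broad enough panel will flag something in almost everyone.

    # Hypothetical 30-item depression risk-factor panel; 5% of healthy people
    # fall outside each test's reference range. Both numbers are assumptions.
    n_tests = 30
    p_flag = 0.05

    p_at_least_one = 1 - (1 - p_flag) ** n_tests
    expected_flags = n_tests * p_flag

    print(f"Chance a perfectly healthy patient gets >=1 'abnormal' result: {p_at_least_one:.0%}")  # ~79%
    print(f"Expected number of flagged items: {expected_flags:.1f}")  # 1.5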

Consider prostate cancer screening. This is like the best-case scenario for universal testing. Prostate cancer is pretty common – about 10% of men are diagnosed with it sometime during their lives. We know who's most at risk – older men. It's potentially pretty bad – nobody wants cancer. The test is easy – the same simple blood draw that tells you if your cholesterol is too high. And yet governing bodies keep recommending that doctors stop screening for prostate cancer (recent guidelines are nevertheless complicated: ask your doctor if PSA screening is right for you). The bodies cite the possibility of "overdiagnosis and overtreatment", potential false security by missing some cancers, and studies which show no decrease in mortality. Breast cancer screening organizations keep pushing back the age at which they recommend women start getting mammograms, because the costs outweigh the benefits before then.

Doctors never just say “I hear this condition is bad in my field, let’s worry about it with everyone who comes through the door”. They want really good evidence that it’s common enough (and that testing works well enough) that the benefits are worth the risk. Right now for lead, we have no such evidence.
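
The arithmetic behind "common enough" and "testing works well enough" is just Bayes' rule. A minimal sketch, using made-up sensitivity and specificity numbers rather than real figures for PSA or any lead assay: when the condition you're screening for is rare, even a pretty accurate test returns mostly false positives.

    # Positive predictive value of a screening test (made-up accuracy figures).
    def ppv(prevalence, sensitivity, specificity):
        """Fraction of positive screens that are true positives."""
        true_pos = prevalence * sensitivity
        false_pos = (1 - prevalence) * (1 - specificity)
        return true_pos / (true_pos + false_pos)

    # Same hypothetical test (90% sensitive, 95% specific), different base rates.
    for prev in (0.001, 0.10):
        print(f"prevalence {prev:.1%}: PPV = {ppv(prev, 0.90, 0.95):.1%}")
    # ~1.8% of positives are real at 0.1% prevalence; ~66.7% at 10% prevalence.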

The only study I have ever seen even begin to make the slightest attempt to quantify the role of lead in depression is Bouchard et al. It claims that people in the highest quintile (top 20 percentiles) of lead exposure have twice the depression risk as people in the lowest quintile. Big if true. But I’m skeptical, for several reasons:

1. Lead exposure is heavily linked to poverty – poor people tend to live in the decaying houses and polluted neighborhoods where lead is most common. Poor people are also more likely to get depressed. The study attempted to control for poverty, but this never works. So it’s not clear how much of the lead effect they picked up was really a poverty effect.

2. This is an implausibly large effect size. The amount of environmental lead has plummeted over the past thirty years after the removal of leaded gasoline. Since then, the percent of people with elevated lead levels has decreased by a factor of twenty. Some quick calculations suggest that if this study were right, depression rates should have gone down 66% over that period (one way such a calculation might go is sketched after this list). They haven't. Compare this to violent crime, which we have better evidence is lead-related and which did decrease by a factor of 2 or 3 over the past few decades.

3. I don’t entirely understand what they’re doing with statistics here. In Table 2, every lead quintile has about the same depression rate. It’s only after they apply their model that they find higher-lead people have higher depression. This is sort of a red flag for the kind of thing that might not replicate later on. Nobody else has ever tried to repeat this study and as far as I know it remains the only investigation into the epidemiology of lead and depression.
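
The post doesn't show its "quick calculations," so here is one way such an attributable-risk sketch might go. Every number below is an assumption for illustration, not a figure from Bouchard et al.; the second scenario, which extrapolates the dose-response to the much higher typical blood-lead levels of the 1970s, just happens to land near the 66% cited above.

    # Hypothetical reconstruction: if the average lead-related relative risk fell
    # from rr_then to rr_now, the population depression rate should fall by:
    def implied_decline(rr_then, rr_now):
        return 1 - rr_now / rr_then

    scenarios = {
        "everyone moves from top-quintile risk (RR 2) to bottom (RR 1)": (2.0, 1.0),
        "1970s exposures carried ~3x today's risk (extrapolated)": (3.0, 1.0),
    }
    for label, (then, now) in scenarios.items():
        print(f"{label}: rate should drop ~{implied_decline(then, now):.0%}")
    # prints ~50% and ~67% respectively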

Aside from this study, we have nothing to guide us. Does lead contribute to 5% of cases of depression? 0.5%? 0.0005%? I’m not sure anyone knows.

If it’s 0.0005%, then we’re talking about people who work in lead mines, or Dumont’s patient who scrapes lead paint off the wall every day. I agree if your patient works in a lead mine and has any problems at all, you should test them for lead poisoning. This is why your doctor asks you what line of work you’re in when you go in for an evaluation. She’s not trying to make small talk – she’s waiting for the one guy who says “I French kiss cockatiels for a living” so she can diagnose him with psittacosis.

I am definitely not going to make every patient take a blood test just so I can catch 0.0005% of people. But should I be more careful in the history here? Specifically ask patients “Do you have any hobbies that put you in contact with lead?”. I’m not sure – right now I have no intuition for whether that’s more or less important than “Do you do anything that might cause zinc deficiency?” or “Do you clean out cat litterboxes?” Usually what I do is I ask broad open-ended questions to start with, and then as the depression proves itself weirder and more treatment-resistant, we gradually go one-by-one down the list of super-rare causes that never happen in real life so I can be sure I’m not missing anything.

What if lead causes more like 5% of depressions? In this case, we’re talking about people with no specific exposure factors. Maybe they just live in an old house, or a bad neighborhood, or got a bad draw in the genetic lottery for whatever systems affect heavy metal removal. So suppose I grudgingly give every patient who comes through the door a blood lead level test. Lots of them come back with lead levels in the top quintile – just like 20% of the general population would. Now what? I say “Your lead level is within the normal range, I have no proof that it’s contributing to your depression at all, but just to be safe you should move to a classier neighborhood”? “Oh, you mean that will cure my depression?” “No, Dr. Dumont didn’t even cure his patient’s depression when he made her stop scraping lead paint off the walls, I’m just saying it probably couldn’t hurt”. I consider it a good day when I can get my patients to take their Lexapro without missing any doses. My ability to get them to move to a nicer neighborhood – most people aren’t living in bad neighborhoods just because they never thought of moving to a better one – is pretty low, especially when I can’t even honestly say I expect it to help that much.

What about chelation therapy, a series of techniques that suck lead out of the body? From Wikipedia: “Chelation therapy must be administered with care as it has a number of possible side effects, including death…various health organizations have confirmed that medical evidence does not support the effectiveness of chelation therapy for any purpose other than the treatment of heavy metal poisoning.” If you chelate everyone who comes through the door with high-average lead levels, there’s no guarantee you will cure any depressions, but you will definitely kill some people.

There's an old medical maxim: never do any test if you have no plan for acting on the results. What's my plan for acting on the results of an upper-end-of-normal lead level? I really don't have one. If it's an extreme level, the 0.0005% works-in-a-lead-mine type, then I can at least demand the patient find a different line of work. But if I'm just going to be diagnosing 20%+ of the patients who come in the door with upper-end-of-normal lead? Forget it.

I understand Siderea probably isn’t recommending universal blood lead screening. But then I’m having trouble figuring out what she is recommending doctors do differently. Ask everyone some history questions to see if they work in lead mines? We try to do that kind of thing already. Have lead poisoning on the top of our minds and ask a lot of really intense history questions to ferret out any possible exposure? Not clearly privileged above doing that for the thirty other rare causes of depression. Just make sure to talk about lead as a cause of depression in medical school? They already do, it’s right between “Prader-Willi Syndrome as cause of obesity” and “Q fever as cause of pneumonia” in the lecture series entitled Extremely Rare Causes Of Common Symptoms Which Ideally You Can Keep In Mind And Have Dr. House Levels Of Diagnostic Genius, But Let’s Be Honest Here, Realistically Over Half Of You Will Prescribe Antibiotics For Viral Infections.

I’m not saying nobody should worry about lead poisoning and depression. I’m just saying those people shouldn’t be front-line clinicians. Some epidemiologist should absolutely be trying to replicate Bouchard et al’s work on the magnitude of the lead-depression correlation. Some guideline-making body should be coming up with a guideline for when doctors should take lead levels, and what results should prompt what kind of action, even if the guideline is just “never worry about this unless somebody works at a lead mine” (which is plausibly the right answer given the current paucity of data, but I’d feel more comfortable if a guideline-making body said so officially). And public health officials should worry a lot about how to decrease lead on a society-wide level (which they’re already doing, albeit for different reasons), since that’s much higher-yield than some random doctor telling a poor person to move to a different house.

But I am not sure the average clinician needs to think about this too much.

(Maybe Siderea already agrees with me here; I can’t tell.)

There are thirty-plus plausible causes of depression that nobody knows enough about to be sure they’re real, estimate their magnitude, or begin to treat. If you look at any one of them too closely, you will come to the conclusion that every psychiatrist in the country is a quack who’s ignoring the evidence right in front of their eyes and willfully blind to the role of [lead/zinc/toxoplasmosis/inflammation/gut microbiome/etc] in order to keep getting the sweet pharma company cash for prescribing Lexapro. It’s not that we don’t know about these things. It’s that we don’t have an action plan. We don’t have a good feel for when to do the tests, what numbers on the tests mean we should do something, what that thing should be, and whether it should work. So we punt the question to the researchers, who already have a backlog of ten million other things they need to be working on.

This isn’t a great state of affairs. But I only know three ways doctors can deal with it, and none of them are very good.

The first is the one I learned at age 3 from my father teaching me to read out of evidence-based-medicine textbooks. You insist that nothing can be admitted into the medical canon unless it has some guideline-making-body’s stamp of approval, which the guideline-making-body will not give until a bunch of randomized controlled trials have validated every step of the model and shown that the proposed solutions definitely work on a large scale with every demographic of patient. Until then we will keep doing the things that have met that bar – which is basically giving people Lexapro and telling them to diet and exercise. This path assures you a long and prosperous career as a respected member of the medical establishment.

The second is to become Dr. Oz. You fall in love with anything that has an even slightly plausible mechanism and at least one n = 15 study saying it works. I'm not talking about literal homeopathy here. I'm talking about things where if you ask a biologist whether it works, they just sort of shrug and say "well, it should", and there's a bunch of respectable research into it. But this is a really low bar, and if it's the only one you hold yourself to, then you're going to be the guy telling your patients they need heavy metal tests and vitamin levels and SPECT brain scans and screens for twenty latent infections just because they came in saying they're tired all the time. This path assures you a lucrative daytime TV show and a side gig selling supplements with your picture on them.

The third is to be a generally respectable doctor with one Big Idea. Like “why aren’t we testing everybody for lead?” or “why don’t we care more about the gut microbiome?”. These people are often really good at what they do, really passionate, and mostly within the mainstream. Sometimes they are impressive researcher-crusader-prophets, they get their Big Idea universally adopted, and then they become the next generation of medical orthodoxy. Other times they’re just annoying clinicians who love saying “I see you aren’t even testing for cortisol levels, clearly you have no interest in going beyond Textbook 101 Level” but can’t really explain why this is better than the twenty-nine other things you might consider doing. This path assures you a long bibliography of successful articles in The Journal Of Medical Hypotheses.

I love everybody in Group 3, they’re all great people. But the thing is, if I were to believe everybody in Group 3, then I would end up as Group 2 – and I don’t have enough time to star on a TV show, so screw that. I think that makes me Group 1 by default, which is good, because otherwise my family would disown me.

"Shouldn't we be able to use rationality techniques to figure out which of the Group 3 people are right, and move faster than guideline-making bodies?" Well, that's the dream. But take that route, and you notice you're wading through ankle-deep skulls. I occasionally flirt with trying this – like every doctor, my practice has a few idiosyncrasies and places where it deviates from the exact textbook solutions. But I would be nervous putting too much trust in my own gut.

This is all context for how to think about questions like “should we test everybody for lead?” or “should we think more about lead?” or “is the psychiatric establishment incompetent for not testing lead more?” The prior on the psychiatric establishment being incompetent is never that low. But the prior on any given alternative being especially fruitful isn’t great either.

[EDIT: The only general medical conditions I consistently find worth worrying about in depression (absent some specific reason to worry about another) are hypothyroidism and sleep apnea. I test a lot of people for anemia and various vitamin deficiencies, because the guidelines say so, but I’ve never found them too helpful. Curious if anyone else in the field has different experiences. I recently had one patient obtain a miraculous and lasting cure of his chronic fatigue using nasal steroids (ie it was apparently caused by nasal inflammation from allergies) but nobody ever talks about that and I’m not sure if it was just a fluke.]

Weaponized Classical Music

jwz
Bach at the Burger King

At the corner of 8th and Market in San Francisco, by a shuttered subway escalator outside a Burger King, an unusual soundtrack plays. A beige speaker, mounted atop a tall window, blasts Baroque harpsichord at deafening volumes. The music never stops. Night and day, Bach, Mozart, and Vivaldi rain down from Burger King rooftops onto empty streets.

Empty streets, however, are the target audience for this concert. The playlist has been selected to repel sidewalk listeners -- specifically, the mid-Market homeless who once congregated outside the restaurant doors that served as a neighborhood hub for the indigent. [...]

This tactic was suggested by a cryptic organization called the Central Market Community Benefit District, a nonprofit collective of neighborhood property owners whose mission statement strikes an Orwellian note: "The CMCBD makes the Central Market area a safer, more attractive, more desirable place to work, live, shop, locate a business and own property by delivering services beyond those the City of San Francisco can provide." These supra-civic services seem to consist primarily of finding tasteful ways to displace the destitute. [...]

Baroque music seems to make the most potent repellant. "[D]espite a few assertive, late-Romantic exceptions like Mussorgsky and Rachmaninoff," notes critic Scott Timberg, "the music used to scatter hoodlums is pre-Romantic, by Baroque or Classical-era composers such as Vivaldi or Mozart." Public administrators seldom speculate on the underlying reasons why the music is so effective but often tout the results with a certain pugnacious pride. [...]

One London subway observer voiced the punitive mindset behind the strategy in bluntest terms: "These juvenile delinquents are saying 'Well, we can either stand here and listen to what we regard as this absolute rubbish, or our alternative -- we can, you know, take our delinquency elsewhere.'"

Take your delinquency elsewhere could be the subtext under every tune in the classical crime-fighting movement. It is crucial to remember that the tactic does not aim to stop or even necessarily reduce crime -- but to relocate it. [...]

Thus music returns to its oldest evolutionary function: claiming territory.

Saturday Morning Breakfast Cereal - Horatio

Hovertext:
Also why is no one talking about a psychiatrist for Ophelia? Like, what's with your family, man?

Newest NOAA weather satellite suffers critical malfunction

GOES-17 is in space now, where fixing problems is... difficult. (credit: NASA)

The US National Oceanic and Atmospheric Administration released some bad news today: the GOES-17 weather satellite that launched almost two months ago has a cooling problem that could compromise much of the satellite's value.

GOES-17 is the second of a new generation of weather satellites to join NOAA's orbital fleet. Its predecessor is covering the US East Coast, with GOES-17 meant to become "GOES-West." In addition to providing higher-resolution images of atmospheric conditions, it also tracks fires, lightning strikes, and solar behavior. It's important that NOAA stay ahead of dying satellites by launching replacements that ensure no gap in global coverage ever occurs.

The various instruments onboard the satellite have been put through their paces to make sure everything is working properly before it goes into official operation. Several weeks ago, it became clear that the most important instrument—the Advanced Baseline Imager—had a cooling problem. This instrument images the Earth at a number of different wavelengths, including the visible portion of the spectrum as well as infrared wavelengths that help detect clouds and water vapor content.
