  But why, Jerry asks, given the history of medicine, should we be surprised that this is so—and why should we deny its claim upon our priorities?

  Jerry’s primary medical training, along with the study of infectious disease, was in public health, and he points me toward studies that demonstrate how public health measures, and not specifically medical measures, are what have made the great and decisive difference.

  Thus, in one recent instance of the difference preventive measures can make, a study involving more than eighty thousand women aged thirty to fifty-five concluded that “in this population of middle-aged women, those who did not smoke cigarettes, were not overweight, maintained [a healthful diet], exercised moderately or vigorously for half an hour a day, and consumed alcohol moderately had an incidence of coronary events that was more than 80 percent lower than that in the rest of the population” (italics added).* During the course of the study, which began in 1980 and included follow-up studies of each participant every two years for fourteen years, researchers documented 1,128 major coronary events—296 deaths from coronary heart disease and 832 nonfatal heart attacks—and concluded that “eighty-two percent of coronary events in the study cohort could be attributed to lack of adherence to this low-risk pattern.”

  In another recent example of the difference preventive measures can make, this time for individuals suffering from type 2 diabetes, a large clinical study determined that “even modest lifestyle changes”—eating less fat, exercising (walking briskly) two and a half hours a week, and losing a moderate amount of weight—can cut the incidence of type 2 diabetes by more than 50 percent among those most at risk. Type 2 diabetes (known also as adult-onset diabetes) accounts for between 90 and 95 percent of all cases of diabetes, and currently affects approximately nineteen million Americans. It often leads to heart disease, as well as to kidney failure, blindness, and stroke.

  In this study, sponsored by the National Institute of Diabetes and Digestive and Kidney Diseases, which followed more than three thousand individuals at twenty-seven medical centers for three years, on average, and included a large number of people from minority groups at particularly high risk for the disease, participants were randomly divided into three groups: those given a medication (metformin), those given placebos, and those who received no pills but instead received guidance and training to help them modify their eating habits and incorporate exercise into their schedules.

  While all participants in the study were advised to restrict diet, to exercise, and to lose weight, it turned out that only those who attended diet and exercise classes, and received follow-up help, were able to turn the advice into practice. Among those who took placebos, 11 percent a year developed diabetes; among those who took metformin, 7.8 percent a year developed diabetes; and among those who changed their eating and exercise habits, only 4.8 percent a year developed diabetes, a reduction of more than 50 percent when compared to the control group.
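
  (A reader who wants to see where the “more than 50 percent” comes from can reproduce it from the annual incidence figures quoted above. The short sketch below does only that arithmetic; the variable names are mine, not the study’s.)

```python
# Annual incidence of type 2 diabetes, as quoted above, expressed as the
# fraction of at-risk participants who developed the disease per year.
placebo_rate = 0.110    # placebo group: 11 percent a year
metformin_rate = 0.078  # metformin group: 7.8 percent a year
lifestyle_rate = 0.048  # diet-and-exercise group: 4.8 percent a year

def relative_reduction(treated: float, control: float) -> float:
    """Fractional reduction in annual incidence relative to the control group."""
    return (control - treated) / control

print(f"lifestyle vs. placebo: {relative_reduction(lifestyle_rate, placebo_rate):.0%} lower")
print(f"metformin vs. placebo: {relative_reduction(metformin_rate, placebo_rate):.0%} lower")
# lifestyle vs. placebo: 56% lower  -> the "more than 50 percent" in the text
# metformin vs. placebo: 29% lower
```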

  So dramatic were the study’s results—so clear the hope that the alarming rise in diabetes in the United States could be significantly reversed (type 2 diabetes has risen by a third since 1990)—that researchers decided to make the results public and end the study in August 2001, a year early, in order, according to the New York Times, to find ways to begin “translating the results of the study into an effective public health policy.”*

  Most of us are living longer, healthier lives than our parents and grandparents did, and our average life expectancy has increased greatly—by more than 60 percent during the past century—and this is largely because more of us survive our childhoods.* And the fact that we do derives mainly not from practices that are specifically medical, as we ordinarily understand that term, but from measures that come, more generally, under the heading of public health.

  “Life span has always been and still is somewhere between eighty and ninety-five for most people,” Steven Harrell, of the University of Washington Center for Studies in Demography, explains. “What has happened in the twentieth century is that infant and child deaths have been reduced drastically…[so that] the average length of life has begun to approach the life span.” In other words, when you average in all those infants and children who died before the age of five with those who died after, say, sixty-five, the result is a much lower average life expectancy than you get when the vast majority of us no longer die before the age of five.
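
  (To see how the arithmetic of averaging works, consider a toy calculation. The cohort sizes and ages in the sketch below are invented purely for illustration; they are not drawn from any census.)

```python
# Toy illustration of Harrell's point: child mortality drags down the *average*
# length of life even when the typical adult life span stays the same.
# The cohorts and ages below are invented for illustration only.

def average_age_at_death(deaths):
    """deaths: list of (age_at_death, number_of_people) pairs."""
    total_people = sum(count for _, count in deaths)
    total_years = sum(age * count for age, count in deaths)
    return total_years / total_people

# A hypothetical cohort in which three of every ten children die before age five,
# while everyone else lives to about eighty.
historical = [(2, 300), (80, 700)]
# The same cohort with childhood deaths nearly eliminated.
modern = [(2, 10), (80, 990)]

print(average_age_at_death(historical))  # 56.6  -> a low "life expectancy"
print(average_age_at_death(modern))      # 79.22 -> approaching the life span
```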

  Consider the following: In Plymouth Colony three hundred years ago, a male surviving to age twenty-one could expect to live to around age sixty-nine, while the corresponding figure for a woman was around sixty-two (the disparity due in large part to risks that accompanied childbirth).* By age fifty, life expectancy for both men and women was nearly seventy-four (exactly what the census reported for men in 1998). Other New England settlements fared comparably well. In Andover, Massachusetts, in the seventeenth century, the average age at death of first-generation males and females was over seventy, and more than half their children who survived to age twenty lived to seventy or more.

  Unlike southern colonies in the Chesapeake area of Virginia and Maryland, where life expectancy was significantly lower (those who survived to age twenty rarely lived past age fifty), New England colonies were spared from epidemic childhood infectious diseases largely because they had a clean and safe water supply, a varied and sufficient food supply, and relative isolation from the kind of commercial intercourse that spread epidemic diseases such as malaria.

  In the United States during the past century, due mainly to the decline of infant and child mortality, what we see when we look at federal census figures is a consistent pattern: in each succeeding decade, deaths that previously occurred at younger ages are replaced by a larger proportion of deaths occurring at advanced ages.* And according to the evolutionary biologist Paul Ewald, the patterns of life expectancy that prevailed in New England three hundred years ago have also prevailed in other regions of the world, as they do, for example, among the !Kung Bushmen of southwest Africa and the Ache Indians of northern Paraguay.

  But what about the contribution to our lives of the many new medical technologies—drugs, vaccines, diagnostic capabilities, and treatments (surgical techniques, neonatal developments, dialysis, transplants, bypasses, and DNA cloning)? Haven’t they been central to our increasing longevity and well-being? The answer, though hardly uncomplicated, confirms what my friends say: these have been helpful, have made a difference in many lives (mine!), but have not been central or decisive.

  Antibiotic therapy, for example, was introduced during and after World War II, by which time infectious diseases were no longer a significant cause of mortality; as we have seen with respect to infant and child mortality, nearly 90 percent of the decline in infectious disease mortality occurred before 1940.*

  In a series of seminal studies, John and Sonja McKinlay have demonstrated “that the introduction of specific medical measures and/or the expansion of medical services are generally not responsible for most of the modern decline in mortality”; nevertheless, “it is not uncommon today for biotechnological knowledge and specific medical interventions to be invoked as the major reason for most of the modern (twentieth-century) decline in mortality.”*

  The vital statistics are as follows. From 1900 until 1950, the annual rate of decline in overall mortality was 0.22 per thousand, “after which,” the McKinlays write, “it became an almost negligible decline of 0.04 annually.”* And “of the total fall in the standardized death rate between 1900 and 1973, 92.3 percent occurred prior to 1950.” Moreover, the “major part” of the decline, they explain, can be attributed to the virtual disappearance of eleven infectious diseases: typhoid, smallpox, scarlet fever, measles, whooping cough, diphtheria, influenza, tuberculosis, pneumonia, diseases of the digestive system, and poliomyelitis.

  Charting the years in which medical interventions for the major causes of mortality were introduced—sulphonamide for pneumonia, 1935; penicillin for scarlet fever, 1946; and vaccines for whooping cough (1930), influenza (1943), measles (1963), and smallpox (1798)—and correlating these interventions with mortality rates, the McKinlays conclude that “medical measures (both chemotherapeutic and prophylactic) appear to have contributed little to the overall decline in mortality in the United States since about 1900—having in many instances been introduced several decades after a marked decline had already set in and having no detectable influence in most instances” (italics added). With reference to five conditions (influenza, pneumonia, diphtheria, whooping cough, and poliomyelitis) in which the decline in mortality appears substantial after the point of medical intervention, “and on the unlikely assumption that all of this decline is attributable to the intervention,” they estimate that “at most 3.5 percent of the total decline in mortality since 1900 could be ascribed to medical measures introduced for [these diseases].”

  The McKinlays also comment on an irony my friends have noted: that “the beginning of the precipitate and still unrestrained rise in medical care expenditures began when nearly all (92 percent) of the modern decline in mortality this century had already occurred” (italics in original).

  Others, before and after, corroborate the McKinlays’ thesis. Dr. Thomas McKeown, for example, has argued in numerous papers and several books that approximately 75 percent of the decline in mortality in the twentieth century (from 1900 to 1971) is associated with the control of infectious disease.*

  Gerald Grob, surveying the history of disease in America, concludes that the “epidemiological transition in which chronic degenerative disease replaced infectious disease as the major cause of mortality, a process that began sometime in the late nineteenth century…was largely completed by 1940.”* He writes, “Whatever the explanation of the causes of [this] epidemiologic transition, there is general agreement that strictly medical therapies played an insignificant role.”

  Reviewing the history, and decline, of each infectious disease in detail, Grob notes the many and varied elements—differing in each instance—that most probably contributed to the decline in mortality, while also warning that the decline in mortality “from infectious diseases associated with childhood admits of no simple explanation.”* Thus, in the cases of scarlet fever and rheumatic fever, for example, there is strong evidence that diminutions in the virulence of the pathogens themselves were crucial (the complex ways whereby pathogens interact with their human hosts continually give rise to variable virulence levels). And the decline of smallpox turns out to have been due less to vaccination than to isolation of those infected and, especially, to the appearance of a far less lethal strain of the disease in the United States after 1896.

  Although the origins of the decline in mortality from some diseases, such as tuberculosis, remain controversial, there is general agreement among scholars that the overall decline in mortality and poor health—our increasing longevity and well-being—derives most often, as my friends maintain, from changes that have been socially caused and have not been brought into being either by medical interventions (whether biotechnological, prophylactic, or chemical) or by medical design.

  We live healthier, longer lives, that is, largely because of changes arising from public health policies and public health measures: the education of families concerning maternal and child health care, disease, disease prevention, and hygiene (such as an emphasis on hand washing); public education concerning diet, exercise, and smoking; improved sanitation, sewage, waste disposal, and disinfection interventions; the availability of clean water and clean air; the elimination of chronic malnutrition; the general rise in standards of living; the reduced consumption of toxic waste; improvements and decreased population density in housing; quarantines and other means of controlling epidemics; and—as with smallpox—a sometimes fortuitous mix of elements: vaccination, surveillance, isolation of active cases, and diminished virulence.*

  And there is this too—a fact I find as surprising as it is unsettling: despite the abundance of new, highly publicized chemical and surgical interventions for various specific cancers, and despite President Nixon’s “war on cancer,” launched in 1971, along with subsequent campaigns against various specific cancers, if one excepts lung cancer, where substantial reductions in mortality have been achieved in recent years (especially since 1992) because people have learned to smoke less, the incidence and prevalence of all other cancers (increases in some, decreases in others) have, since 1900, remained more or less constant.*

  More surprising still: from 1950 until 1998, decades during which most of the new technologies were introduced, both for screening (early detection) and for treatment (chemotherapies), the mortality rates for cancer remained relatively stable.* Although there have been slight fluctuations during the past half century, the age-adjusted mortality rate in 1998—123.6 deaths per 100,000—is roughly identical to what it was in 1950: 125.2.

  There have, happily, been significant advances in the treatment of various specific forms of cancer in the past fifty years—mortality from cancers of the bone, stomach, uterus, and cervix, along with cancer of the prostate for those under sixty-five, has declined. Mortality from all types of leukemia, and at all ages, has decreased, and deaths from colorectal cancer have also decreased. At the same time, mortality from pancreatic cancer, along with cancers of the urinary organs, kidney, ovaries, and intestine, has generally increased. Small increases have been recorded for malignant brain tumors and malignant melanomas, and mortality from lymphomas—despite reductions in mortality from the cancer I had, Hodgkin’s disease—has also increased.

  The reasons for increases and decreases are variable, controversial, and complex; still, what the data reveal is that progress, as we usually understand it—despite the multitude of new medications, therapies, screenings, and technologies—has been, at best, irregular. Surely, the original objective of the National Cancer Institute—that age-adjusted mortality from cancer be reduced by 50 percent by the year 2000—has come nowhere near being achieved. And again, as with infectious diseases, it turns out in many cases that the major salutary changes from which we do benefit occurred before the introduction of new cancer therapies—and that they came about not because of specifically medical measures, but because of preventive measures.

  In a study published in 1986 by the Department of Health Studies at the University of Chicago, researchers conclude that “some 35 years of intense effort focused largely on improving [cancer] treatment must be judged a qualified failure.”* Choosing as the “single best measure of progress against cancer,” the mortality rate for all forms of cancer combined, age-adjusted to the U.S. 1980 standard (a measure also adopted by the National Cancer Institute), they find that “age-adjusted mortality rates have shown a slow and steady increase over several decades, and [that] there is no evidence of a recent downward trend.” In 1997, the University of Chicago researchers reviewed their 1986 findings. They write that “with 12 more years of data and experience, we see little reason to change [our earlier] conclusion.”
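
  (For readers unfamiliar with the measure: an “age-adjusted” rate weights each age group’s death rate by that group’s share of a fixed standard population, so that comparisons across years reflect the risk of dying of cancer rather than the aging of the population. The sketch below illustrates the method with invented numbers; only the procedure, direct age standardization, is real.)

```python
# Direct age standardization: weight each age group's cancer death rate by that
# group's share of a fixed standard population (the Chicago researchers use the
# U.S. 1980 standard). All of the numbers below are invented for illustration.

# (age group, cancer deaths per 100,000 in that group, share of standard population)
observed = [
    ("under 45",  10, 0.65),
    ("45 to 64", 300, 0.25),
    ("65 plus", 1100, 0.10),
]

age_adjusted_rate = sum(rate * weight for _, rate, weight in observed)
print(f"age-adjusted mortality: {age_adjusted_rate:.1f} per 100,000")
# 10*0.65 + 300*0.25 + 1100*0.10 = 191.5 per 100,000
```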

  “Despite numerous past claims that success was just around the corner,” they write (Dr. Vincent De Vita, of the National Cancer Institute, predicted in 1981 that “fifty percent of all cancers will be curable within ten years”), “hopes for a substantial reduction in mortality by the year 2000 were clearly misplaced.”

  Moreover, they are skeptical about “new therapeutic approaches rooted in molecular medicine” because “the arguments are similar in tone and rhetoric to those of decades past about chemotherapy, tumor virology, immunology, and other approaches.” They continue: “In our view, prudence requires a skeptical view of the tacit assumption that marvelous new treatments for cancer are just waiting to be discovered.” While they “earnestly hope that such discoveries can and will be made,” they suggest a modest reordering of priorities: “The effect of primary prevention (e.g., reductions in the prevalence of smoking) and secondary prevention (e.g., the Papanicolaou smear) on mortality due to cancer indicates a pressing need for reevaluation of the dominant research strategies of the past 40 years, particularly the emphasis on improving treatments, and a redirection of effort toward prevention.”

  But won’t our new understanding of the human genome, along with ongoing developments in genetic engineering, gene therapy, DNA cloning, et cetera, lead to discoveries that will enable us to treat cancers with increasing success?

  “What the human genome project gives us seems to me to be beyond the clinical realm at the present time,” Phil says. “Now someday I think we may all go around with medical cards that have our genetic IDs on them because the genome project will help us to devise treatments specific to your specific genetic make-up. So that, for example, we wouldn’t give Lipitor to everyone with high cholesterol, but only to those whose genetic ID shows a particular disposition to atherosclerosis for which Lipitor will probably be helpful. There may be some rare genetic diseases where what we learn from the genome project can be helpful sooner, but for a long time to come it’s not going to have anything to do with the everyday practice of medicine.”

  “The genome project has been a remarkable achievement, even though the hype about its potential value in the diagnosis and treatment of disease has been overblown and simplistic,” Rich says. “Currently, we know little about the factors that determine how genes cause disease, and we need to know a lot more about issues like gene penetrance, what turns genes on and off, and the importance of gene interactions.

  “In my view, the most common diseases, such as coronary disease and cancer, result from a complex interaction of genes, the environment, and what’s been called ‘the mind-body interaction,’ but to date, what’s been done is simply to catalogue the genes, much like naming cities on a map without knowing much about them. Still, I’m optimistic that over the next five to ten years, we will increasingly understand how genes function and interact, and what the causative genes and combinations of genes are for significant diseases.