  Philia, or friendship, was to the Greeks, in fact, the very basis of the doctor-patient relationship. For the doctor, friendship with the patient consisted of a correct combination of philanthropia (love of man) and philotechnia (love of the art of healing); the Hippocratic Precepts (c. 400 B.C.), in discussing the physician’s approach to the patient, state that “where there is love of man, there is also love of the art of healing.”*

  But what about trust in doctors who, like my friends, have a good deal more than placebos to offer? The Roper Center for Public Opinion Research, having conducted polls on this issue since 1945, reports that confidence in “the leaders of medicine has declined from 73 percent in 1966 to 44 percent in 2000.” (The lowest public confidence rate, 22 percent, occurred in 1993, during the debate over national health-care reform.) In a series of recent essays on trust in medical care, David Mechanic, René Dubos Professor of Behavioral Sciences and director of the Institute for Health, Health Care Policy, and Aging Research at Rutgers University, suggests that patients have become increasingly distrustful mainly because they cannot freely choose their own physicians, or depend upon continuity of care from their health-care providers. In addition, aware that their physicians do not control some decisions concerning their care, they become less certain they will receive the care they need.

  For their part, doctors, too, are disenchanted with recent changes in health-care policies; witness the following, from an editorial in the New England Journal of Medicine:

  Frustrations in their attempts to deliver ideal care, restrictions on their personal time, financial incentives that strain their professional principles, and loss of control over their clinical decisions are a few of the major issues.* Physicians’ time is increasingly consumed by paperwork that they view as intrusive and valueless, by meetings devoted to expanding clinical-reporting requirements, by the need to seek permission to use resources, by telephone calls to patients as formularies change, and by the complex business activities forced on them by the fragmented health care system. To maintain their incomes, many not only work longer hours, but also fit many more patients into their already crowded schedules. These activities often leave little time for their families, for the maintenance of physical fitness, for personal reflection, or for keeping up with the medical literature.

  “The public has a low opinion of insurance companies, and an even lower opinion of managed care,” Mechanic writes, and he lists some of the causes for this general dissatisfaction: negative media coverage, repeated atrocity-type anecdotes, the seemingly arbitrary power of large organizations in managing one’s illness, opposition from physicians and other professionals, the shifting of patients among managed-care organizations and the resulting discontinuities in care, and so on.*

  “But a more fundamental reason for the public perception,” he states, “is that most Americans are discomforted by the idea of having their care rationed and, at some level, they understand that managed care is a mechanism for doing so.” Despite this public perception, a perception not always in accord with reality (Mechanic points out, for example, that medical care was rationed, if in different ways, before managed care; that patient office visits are not getting shorter; that managed-care companies rarely deny hospital admission), “most patients still view their medical care in terms of their relationships with a limited number of physicians,” and “physicians continue to enjoy considerable public respect and credibility.”

  “There’s lots of good information now, from smoking and weight reduction programs, that a doctor’s participation in trying to get people to change behaviors actually works,” Jerry says. “Patients still listen to their doctors. They might not trust the medical profession— but they trust their doctors.”

  Why is it, though, I wonder, that the more effective medicine has become—the more doctors can actually do for virtually every patient and every condition—the more we are dissatisfied with the medical profession, and the more often we cultivate a nostalgia for a medically primitive past?

  One answer, Leon Eisenberg, professor emeritus of social medicine at Harvard, suggests, lies in an understanding of the role doctors, and healers, have always played. “It is not merely that patients ‘get better’ after they consult healers—they would have anyway, most of the time, because the common illnesses usually cure themselves (although that has never kept the doctor from assuming the credit nor the patient from granting it),” Eisenberg writes, and he offers reasons for the doctor’s effectiveness, to the degree that this effectiveness exists apart from the efficacy of any medication the doctor may prescribe.*

  “The arrival of a physician and the expectation that relief will be forthcoming may diminish the severity of the attack even before the medication has had time to reach an effective blood concentration,” he explains. “In a sense, the mere presence of the doctor is the medicine. When relief is produced, faith in the doctor is enhanced, and the power of the medical presence is even greater than before. What is true for respiration is true for any bodily function that is regulated by the brain through neural and hormonal pathways and therefore responds to psychosocial influences.”

  “For the most part, when people come to the office, you have to make a lot of judgments that don’t come from machines,” Phil says. “You have to listen to the patient and see if the pattern sounds familiar and if it could be a neurological illness. Where in the nervous system is the patient being affected? What tests should I do to look for it—and then, if I find any answers, what’s the best way to treat it?

  “Now, one of the major categories of disease we see is malignant tumors, and when it comes to them, the outlook remains bleak. We try to treat them with chemotherapy, radiation, and surgery, but until we understand the biology of these tumors, mostly we watch to see what they do to the patient and try to stop or reverse the growth—and we try to stay alert to the course of the illness so that if it changes, we can alter our thinking.

  “In cases of stroke or brain injury, for example, I have to try to prevent things from deteriorating, or monitor the brain’s swelling, or deal with the side effects of a person not being able to swallow. I try to prescribe therapies that will increase their chances of returning to maximum functioning.

  “But no one has a good way to make the brain heal, and right now most of what we do, day to day, is to try to prevent complications of brain trauma and disease. A large proportion of the illnesses we see are chronic, debilitating conditions, you know—I mean, look at Christopher Reeve, at Michael J. Fox—so that what we want to do above all is to make sure things don’t get worse, and to attend to the quality of life these people can still have. It’s the old Hippocratic Rule—the first thing is to do no harm.

  “I’ll give you a more prosaic example. A mother comes to me with her son. He’s a twitchy kid, she says. He twitches all the time, and she wants me to put him on medications. So I examine her son, and run some tests to make sure there are no neurological problems. There are not, so I ask the mother a few questions: Is your son doing okay in school? Is he getting along with his friends? Yes, yes, she says to both questions. But he twitches a lot, she says.

  “‘Well, then that’s who he is,’ I say.

  “I mean, I’ll keep monitoring the situation, and we’ll see if the twitching gets worse, or goes away, or interferes with the kid’s life, but my general philosophy is to leave things as they are—don’t change anything unless you absolutely have to.

  “These days, though, the premium is on doing more tests, and giving out more meds. I mean, I think it’s always been like this, with the pendulum swinging back and forth between thinking the machines and technologies are where it’s at, and thinking it’s the doctor’s judgment and listening that matters most. Right now, the technology’s in the ascendancy, so you have to try not to let it swamp your judgment, and to remember that no two people are alike, and to do your work case by case, person by person.”

  What Phil describes, Stanley Jackson puts into perspective. Through the centuries, he writes, beliefs and practices have “swung back and forth between the significance of technological advances and the need to retain humane influences in the practice of medicine.”* In our own time, he adds, the training of our young doctors “is overwhelmingly skewed toward our technological advances and seriously deficient with regard to these humane influences.”

  But the high priority that providers (doctors, hospital administrators, insurance companies, managed-care plans) place on technology would also seem to have an equal and opposite reaction: the low priority given to the doctor-patient relationship, and the consequent depreciation of skills, habits, and practices essential to this relationship. Thus, just as we often feel ill-equipped—vulnerable and defenseless—when we find ourselves in a doctor’s office, so the doctor may feel—and be!—equally ill-equipped, by temperament and training, to deal with us as individuals.

  End-of-life care, for example, is poorly taught, when taught at all, in our medical schools, and virtually nothing is included about end-of-life care in standard medical textbooks recommended to students before or after graduation, this despite the fact that between 27 and 31 percent of Medicare expenditures (covering direct medical expenses for 80 percent of those over sixty-five years old) are for the last year of life, and 52 percent for the last sixty days of life.*

  Nor, according to my friends, do medical students receive much training in either the prevention of disease and public health or the history of medicine, and they talk with me about how medical education influences the way a doctor practices medicine, and about changes they would recommend in the training of doctors. Rich says that when it comes to the medical school curriculum, for example, “the very first course [he] would introduce—the first course a medical student would take—is one dealing with the doctor-patient relationship.*

  “At Irvine, I became involved in developing a course on the doctor-patient relationship. On the very first day we would have either a mock patient or a real patient sitting in front of the students, and start them off interacting with each other. That’s day one, after which we use videotapes, and sit around in small groups of six or eight students, and critique the sessions.

  “It was great fun, and there were no grades, or anything like that. What students noticed most—what was most important—was their embarrassment. They saw how they fidgeted, and how they wouldn’t know what to do, or how to do it, since they felt they were essentially imposters who were pretending to be doctors, but without really knowing anything.

  “And that carries over, you see. Because in order to shut down their fear and insecurity, students would assume what they thought of as a more professional manner. And if you ask me, that’s the beginning of the end as far as the doctor-patient relationship is concerned. The trick is to break down that false, authoritarian facade by introducing students to medicine the right way from the get-go.”

  Such a course, and the intent that governs it, is not an anomaly. There are now many such courses being taught in medical schools around the country, but more often than not, David Mechanic reports, “these innovations are neither known nor recognized by other divisions of the parent institution or by the health services field.”* This is a shame, he writes, since effective communication “is essential to the cultivation of patients’ trust in their doctors and their health institutions.” Yet more and more, Mechanic tells us, “as their education increasingly centers on biomedical science, and as they more commonly are guided by randomized, controlled studies, most young physicians are trained to view their [own] interventions with skepticism. Instructed to practice evidence-based medicine, they are probably more detached and less committed to the effectiveness of their treatment strategies.”

  “In the Bronx—at Montefiore, when I ran training programs in medicine there, and at Yale since then—I’ve seen medical students come in idealistic, bright-eyed, and enthusiastic,” Jerry says, “and I’ve seen them go out, at the end of their training, worn down, hardened, somewhat brutalized, in debt, and having endured an enormous struggle to maintain their humanity.

  “Many of them reconstitute themselves, but many don’t. I’ve always felt there’s something wrong with the way we train doctors, and I think there are a number of things we can do—changes we can make—that will have real, effective ripples.

  “The first thing is to work toward integrating public health—let’s call it prevention—and clinical care. We’ve been suffering in this country, I think, because of a historical split between the two—there are schools of public health, and schools of medicine, and I would work toward integrating them so that we can make the disciplines of prevention that we know of from public health inform the skills and expertise we have in clinical care, and vice versa.

  “Most doctors who work in clinical medicine are not trained, expert, or comfortable in prevention, even within the clinical context. Most doctors don’t know how to take a sexual history. They don’t know much about substance abuse. And these are treatable diseases. The same can be said for mental health, and I would work toward integrating these at a clinical level, and not concentrate so heavily, the way we do now, on the acutely ill in the inpatient hospital setting. The fact is, most doctors are not going to spend their careers working in the kind of technology-intensive environments our hospitals have become.

  “I would also work toward changing the structure of how medicine is financed, so that it becomes oriented more toward prevention, and rewards doctors for preventing disease rather than for treating it once it occurs. I’m an infectious disease doctor, and infectious disease doctors don’t have many procedures. We don’t pass tubes into cardiac vessels or down the throat or up the tush. Mostly we look at your eyes, and your throat, and listen to your heart and lungs, and talk with you, and think about your case. Based on our expertise—our clinical experience—we make recommendations.

  “But to think about not doing something because it was not indicated—the system doesn’t support or reward that because it’s a system that’s technology-driven. If I stick a tube in you, I get four times as much money—or the system sucks four times as much out of whoever’s paying for it—and that’s a little ass-backwards. I mean, look at what’s happening in Africa. The basic work we need to do there, for the millions who have AIDS, is low tech: it’s educating people about transmission of AIDS; it’s getting people to adhere to their medications; it’s providing support for the families that care for their loved ones who are afflicted; it’s learning to comfort people in their deaths and their dying; it’s providing support for healthcare workers who live with fear, and who burn out.

  “In the clinical scheme of things, there and here, it’s the patient and his or her family and loved ones, and not the disease itself, that should be the central focus and unit of care. Through the course of their illness, high technology plays a relatively minor role. I can’t say it often enough: We should treat patients, not diseases.”

  Jerry pauses, continues: “Another way of making significant changes in the system is in the financing of medical education,” he says.* “In Europe, medical education is free. In Latin America, most of medical education is free. You get into a medical school and society pays to make you into a doctor. In exchange for that you spend a year or two performing national service—and you do that in a rural clinic or in a place that is under-doctored. We have something like that here, the National Health Service Corps. But it only subsidizes a very small percentage, and it’s highly selective.

  “Doing this would prolong medical education, of course, but it would also free these young people from the extraordinary debt burdens they wind up with that often influence their choice of medical career. Some of them owe 250,000 dollars when they start out in medicine, and they have a family, and they have this and they have that. So how will you make ends meet? You can’t be a pediatrician in a rural clinic or you’ll never dig yourself out.

  “But I’m saying all this based on what I believe, which is that medicine is a fundamental social good that should be part of the social fabric of our society, and that our society should be providing doctors and universal health care.”

  It’s late at night, in August of 2000, and Jerry and I are sitting in his home in Guilford after we’ve spent a day together at the hospital and his clinic, and only three weeks after he has come back from an international AIDS conference in Durban, South Africa—a city to which he will return in a year, during a sabbatical from Yale, in order to collaborate with health-care workers there in setting up AIDS prevention and treatment programs.

  “But everything I’m saying, you know,” he says, “is based on what I’ve come to see as the central medical issue of our time, whether in our country, or in Africa: the cruel disparities in access to prevention and treatment.”

  What had once been a traditional emphasis on the individual patient and the individual practitioner did not begin to decline only when my friends started their medical training in the late 1950s and early 1960s, or with the coming of managed care in more recent years; it began to be modified in the early nineteenth century, at a time when the hospital was assuming new and increasing importance.

  With the emergence of hospital medicine, the attention of doctors started to shift from the diagnosis and treatment of complexes of symptoms in individual sufferers to the diagnosis and classification of “cases.” The focus then shifted further, from the “‘sick-man’ as a small world unto himself” to the patient “as a collection of synchronized organs, each with a specialized function.”*

  Stanley Jackson’s summary of subsequent developments helps place current beliefs and practices in perspective. After laboratory medicine “began to take shape in German universities in the latter half of the nineteenth century,” he writes, “the theories and techniques of the physical sciences were introduced into the study of living organisms, and experimental physiology flourished.* Cell theory entered the scene. The microscope and staining techniques brought to histology a new importance. Bacteriological investigations brought new perspectives on disease and led to new modes of therapeutic investigation. And clinical diagnosis was gradually reorganized around various ‘chemical tests of body substances designed to identify morbid physiological processes.’ Attention gradually shifted away from the sick person to the case and then to the cell. Gradually the ‘distance’ between the sick person and the physician increased, even when they were face to face. Or perhaps more accurately, the patient was gradually depersonalized in the doctor-patient relationship, and, all too often, the physician related to him or her more as an object than a sufferer.”