Sunday, October 22, 2017

Blade Runner 2049

In keeping with the previous two posts - I did get out to see Blade Runner 2049 last Saturday.  It was clearly a first rate science fiction film, and I suppose some viewers not used to the genre might also call it a thriller.  Visually I thought it was less stunning than the first film because it lacks the street-level scenes and hectic street activity of the original.  It has critical acclaim, but because of its high cost it is being described by some critics as a "box office bomb".  In this film replicants (bioengineered androids) have become Blade Runners.  Some reviews of the film refer to them as bioengineered humans, and that is not a trivial difference, since the main plot theme is whether or not the androids can reproduce.  The focus is on K (Ryan Gosling), the main protagonist.  We see him interacting with and dispatching another replicant in the initial scene.  That replicant asks for mercy on the basis that they "are the same kind" and that there is a higher calling based on the miracle that he has witnessed.  When K returns to the station (LAPD) he undergoes a rapid debriefing protocol: test questions with monitoring of various anthropometric and physiological parameters.  The meaning of the test questions is not clear, but the implication is that the protocol determines whether he has stayed at his baseline or his status has been perturbed in some way.  The test is also being administered for a very different reason than the Voight-Kampff protocol, since the test subject is a known replicant.

There are three generations of replicants in the film, starting with K - a Nexus 9 series - to the Nexus 8 replicant he retires in the opening scene, to the Nexus 7 series that dates back to Rachael in the original Blade Runner film.  Over that time frame the replicant population has become less subservient and more interested in equality or autonomy.  There is a rebellious faction.  We learn later in the film, based on a series of events, that the common "miracle" the replicant population refers to is the birth of a child by Rachael in the original film.  In the final scene of that film she was leaving with Deckard (Harrison Ford).  There were implications that Rachael was a specially modified replicant, and in retrospect the question is whether she was modified to reproduce.

The competing forces in the film are threefold.  First, the LAPD is the police force determined to suppress any replicant rebellion.  K is a detective for the LAPD who discovers Rachael's remains buried at the site where he encountered the initial replicant, along with evidence that she gave birth to a child.  Second, the Tyrell Corporation has been replaced by the potentially more evil Wallace Corporation, headed by Niander Wallace.  Wallace is very explicit about the need for replicant reproduction, since he does not believe that manufacturing capacity can ever meet the need for replicants in service of his corporation and its off-world ambitions.  And finally there is the role of K as a free agent in all of this.  Does he do the bidding of his boss at the LAPD or not?  His boss emphasizes the importance of killing any story that replicants have reproduced - she sees it as a game changer for civilization as they know it.  She assigns him to find and kill the child.  He is later assigned to kill Deckard for the same reason.

I will leave the plot specifics to the various reviews and descriptions already out there and concentrate on the main issues that have to do with consciousness in the film.  At one point K is asked about childhood memories and recalls being bullied by a group of boys who wanted a small hand carved horse that he was carrying.  We see him escaping the boys and burying the toy in a pile of ashes in the bottom of an old furnace.  Later he consults with an expert to determine if the memory is real or not.  She confirms that it is a real memory, and that leads him to believe he may be the child of Deckard and Rachael.  I asked myself at that point whether K's interest in the memory was even possible if he was a replicant.  By definition in the Tononi-Koch theory this experience requires consciousness, and even a perfectly engineered system mimicking the human brain could not generate the human experience associated with the memory, much less the integrated emotions associated with this scene.  When K finally finds Deckard he is in a state of emotional turmoil related to information that Deckard provides him about his origins.  In a shootout Deckard is captured by the Wallace Corporation and is being tortured for information about the location of his and Rachael's child.  K both rescues him and reunites him with his child.  In both Blade Runner movies Deckard is rescued in the end by a replicant.

My summary may not match up well with other reviews on specifics.  I did not view the protocol being given to K as the Voight-Kampff protocol, since it did not seem like an updated version of it.  Keeping the Tononi-Koch theory in mind, it would be totally unnecessary even if he were really a highly sophisticated bioengineered replicant.  It would only be necessary to place a transcranial magnetic stimulation (TMS) coil close to his brain and observe the high density electroencephalogram (EEG) pattern.  If consciousness exists, the theory predicts a pattern of widespread activation and deactivation.  It should also be possible to observe the characteristic sleep EEG pattern of transitioning from consciousness to unconscious dreamless sleep and back.  Of course these androids would need to be flawlessly engineered to protect their circuitry from the magnetic and electrical fields that occur with these measurements.

In summary, I thought that Blade Runner 2049 was an excellent film just based on the plot and artistry.  I can always see the distinction between real science and science fiction.  If Tononi Koch theory is accurate, it is hard to imagine that a replicant would not be obvious to conscious humans.  I guess we will need to either wait until that day comes or until the theory has more widespread acceptance and proof.  The other parallel aspect of this film is bioengineered human reproduction.  It is difficult to see how that could ever be done, especially through human sexual contact with machines.  Sexual contact with bioengineered androids is a more frequent science fiction theme these days than in the past.  It is probably easier to see how that might happen from the human side.

There is currently not enough information about human sexual consciousness to imagine how it could be built or programmed into an android.     

George Dawson, MD, DFAPA               

Thursday, October 19, 2017

Tononi Koch Test for Machine Consciousness

In follow up to my previous post, and before I saw Blade Runner 2049, I wanted to post a more modern take on the Turing Test based on a coherent theory of consciousness by Tononi and Koch - both experts in the neuroscience of consciousness.  Their theory is the Integrated Information Theory (IIT) of consciousness.  I have included the reference (1) and a graphic from their public access paper on the theory, and there are also several very useful videos available with verbal descriptions of the theory.  I have been following consciousness research for at least the past 20 years, including the two main listservs on this topic until they were shut down.  When a topic is so specialized, barring any breakthroughs the arguments become repetitive and a lot of time is spent bringing novices up to speed.  The videos fill a gap that these listservs previously addressed, although I must admit that I am always biased toward the written rather than the spoken word because it is a much more efficient information transfer for me.  The videos listed at the bottom of this page also serve another useful purpose.  The viewer is able to see how researchers in this area define consciousness and describe their theories.  I think it is possible to notice that some of the definitions and descriptions are so vague as to have limited utility.

That is one of the reasons that I like the approach by Koch and Tononi.  I will also say from the outset that I am not sure whether they view the theory as a joint venture or not.  As an example of what I mean, a specific literature search on consciousness finds that Tononi has been working in this area for at least 20 years.  A similar search on Koch goes back even further - roughly 8 years earlier.  I don't know either of the authors, but based on reading this paper it seems like a joint effort, and that also comes across in the available videos of their presentations.

In the paper, the authors outline phenomenological definitions that are more exacting than any that I have seen in the past from other authors.  They are also neuroscience based, and that makes a difference to me.  In various venues people often faintly praise psychiatry and then lament its emphasis on biology.  That criticism is not well founded, and it also illustrates the lack of research that people do when it comes to critiquing psychiatry.  Psychiatrists have actively researched practically all forms of social, psychological, and biological etiologies of mental illness since the specialty was founded.  Any cursory review of a general psychiatric text illustrates that point.  So if a psychiatrist is focused on brain biology, it is certainly not without reason.  I previously posted about a breakfast that I had with a mentor who, after a long career as a psychiatrist, summed it up the way a lot of psychiatrists do: "It is all about the biology."  Critics take that to mean some kind of medical intervention.  Medical interventions are certainly studied, but every non-medical intervention has been studied as well.  It is common to read about non-medical interventions (psychotherapy, meditation, etc.) altering the brain in some way.  That has been known within psychiatry for at least 70 years.

There are two levels at which to study the work of Tononi and Koch.  The first is the purely descriptive level.  That is the level you will find in the first reference.  The second is the level of neuroscience and mathematical theory.  The authors have produced this work as well and reference it in this paper, but for the purpose of this post I am going to stay at the descriptive level and possibly post a more technical article on the advanced theory at a later date.  I will add that there are several competing theories of consciousness that I am not going to cover here.  I have studied several of them and think that they have less to offer than the Integrated Information Theory (IIT) of consciousness.  I am admittedly a reductionist seeking to close the explanatory gap between brain biology and how conscious states are generated.  In some of the panel discussions available online it is clear that the proponents of the other theories think that their own theories are correct and IIT is wrong.  I have been down the rabbit hole with a few of those theories and don't want to take time to criticize them.  Feel free to look them up and form your own opinion.  For now I will focus on IIT.

If you have never heard of Tononi, Koch, or IIT, the first task is to read the paper.  I found it to be very clear in terms of definitions, postulates, and a clearly stated theory.  They point out that every experience will have an associated neural correlate of consciousness (NCC).  There is currently an explanatory gap at the level of how conscious experiences are actually produced by the NCC.  They discuss the axioms necessary for a coherent phenomenology of consciousness.  From there they move on to the postulates.  Eventually they discuss how a maximally irreducible conceptual structure occurs in the brain.  Such a structure is also known as a quale.

They give a couple of examples of how conscious states occur within their theory.  They provide an example of how to calculate the quantity and quality of consciousness for a particular system state and its elements (Figure 4).  They provide a clear example of the physical substrate of experience (complex), a set of maximally irreducible cause-effect repertoires (concepts), and a maximally irreducible "cause-effect structure in cause-effect space made of concepts..." or conceptual structure (quale) (p. 12).  The quantity of experience or consciousness is specified as Φmax.  The quality of experience is the form or shape of the conceptual structure.  Distinct shapes occur with different experiences.
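The whole-versus-parts intuition behind Φ can be illustrated with a toy calculation.  This is emphatically not the full IIT calculus from the paper (which works with cause-effect repertoires and a minimum information partition); it is a minimal Python sketch on a hypothetical two-node system, using plain mutual information, just to show how a whole can carry information that its cut parts do not:

```python
from collections import Counter
from itertools import product
from math import log2

def mutual_information(pairs):
    """Mutual information (bits) between the first and second elements of
    equally weighted (past, future) pairs."""
    n = len(pairs)
    joint = Counter(pairs)
    past = Counter(p for p, _ in pairs)
    future = Counter(f for _, f in pairs)
    return sum(
        (c / n) * log2((c / n) / ((past[p] / n) * (future[f] / n)))
        for (p, f), c in joint.items()
    )

# Toy 2-node system: each node copies the other on every tick (A' = B, B' = A).
def update(state):
    a, b = state
    return (b, a)

states = list(product([0, 1], repeat=2))

# Whole system: the past state fully determines the future state -> 2 bits.
mi_whole = mutual_information([(s, update(s)) for s in states])

# Cut into parts {A} and {B}: each part's future is driven only by the *other*
# node, so within a part past and future are independent -> 0 bits each.
mi_a = mutual_information([(s[0], update(s)[0]) for s in states])
mi_b = mutual_information([(s[1], update(s)[1]) for s in states])

phi_toy = mi_whole - (mi_a + mi_b)  # information the whole has beyond its parts
```

In this toy system each node's future depends entirely on the other node, so cutting the system destroys all of the past-future information: the whole carries 2 bits that the parts (0 bits each) lack.  That directionality - whole exceeding the sum of its parts - is the descriptive point of Φ, even though the real measure is computed very differently.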

A more accessible example, discussed on page 9, is seeing Jennifer Aniston in a movie.  In that case, the complexes at the neuronal level affect the probability of past and future states.  Consistent with neuroanatomy, many specialized neurons in the visual system that are associated with Jennifer Aniston as an invariant concept are firing or not firing.  Other neurons are associated with other invariant concepts that allow for a fuller description in terms of appearance, age, etc.  All of the elements of the complex are intrinsic information and do not depend on visual inputs - the same complex can be active when dreaming of or imagining the actress, for example.

The authors also briefly review some of the experimental evidence that is consistent with the theory.  They find that the theory is predictive in a number of experimental paradigms.  Transcranial magnetic stimulation (TMS) can be applied to conscious individuals and to unconscious (dreamless sleep, general anesthesia) individuals.  In the conscious state there is a widespread pattern of activation and deactivation noted with high density EEG.  In the unconscious state the cortical response is either local or global and stereotypical - integration and information are lost.  A metric called the perturbation complexity index (PCI) - a measure of the compressibility of the EEG response to TMS stimulation - can be used to index consciousness, and it decreases in states that lack it.
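For a concrete sense of what "compressibility" means here: the published PCI binarizes the significant TMS-evoked EEG responses into a channels-by-time matrix and normalizes its Lempel-Ziv complexity.  The sketch below is a rough Python approximation of that idea, not the validated clinical algorithm; the simple amplitude threshold and the normalization details are my simplifying assumptions:

```python
import numpy as np

def lz_complexity(bits: str) -> int:
    """Count phrases in a Lempel-Ziv (1976-style) parsing of a binary string.
    More phrases = less compressible = more differentiated data."""
    n, i, phrases = len(bits), 0, 0
    while i < n:
        length = 1
        # Grow the current phrase while it already appears earlier in the string.
        while i + length <= n and bits[i:i + length] in bits[:i + length - 1]:
            length += 1
        phrases += 1
        i += length
    return phrases

def pci_like_index(evoked: np.ndarray, threshold: float) -> float:
    """Crude PCI-flavored index: binarize a channels x time matrix of
    TMS-evoked responses, then normalize its Lempel-Ziv complexity by the
    value expected for maximally random data of the same size and bit rate."""
    binary = (np.abs(evoked) > threshold).astype(int)
    bits = "".join(map(str, binary.flatten()))
    n = len(bits)
    p = binary.mean()  # fraction of significant (1) samples
    if p in (0.0, 1.0):
        return 0.0  # flat response (or saturated) -> no complexity to measure
    source_entropy = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))
    return lz_complexity(bits) / (source_entropy * n / np.log2(n))
```

A flat, unresponsive recording compresses to almost nothing and scores near zero, while a differentiated, spatially varied response resists compression and scores higher - the direction of the conscious/unconscious difference the authors report.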

Tononi has been very explicit about the issue of machine consciousness - it doesn't exist, no matter how sophisticated the machine is.  Any machine recognizing inputs that the human nervous system would recognize and producing identical outputs - even if that machine duplicates the structure and function of the human brain - is not conscious.  Tononi uses the consciousness science term zombie to characterize such machines.  By definition a zombie system is one that lacks consciousness; such systems are described as subsystems in humans (2) when they are active outside the sphere of conscious recognition.

That brings us back to the ability to distinguish machines from humans.  If a machine is a perfect human zombie in terms of its input and output, we would not expect an empathy or Turing test to throw it off.  IIT acknowledges that what appears to be human input and output can be perfectly simulated.  The original Blade Runner protocol seems like more than an empathy test.  Specific questions about past memories illustrate an attempt to determine whether there is continuity between any current and past experiences, even though in the case of Rachael the memories are false and implanted.

That being said, IIT states that there is no Turing test for consciousness.  By now it does seem that fairly basic programs (like self-learning neural nets) can replicate a narrowly defined human skill.  In that case many people speculate that there is an intelligence or even human consciousness behind it.  On the other hand, the perturbation complexity index (PCI) seems like a potentially useful test based on current results.

George Dawson, MD, DFAPA


1: Tononi G, Koch C. Consciousness: here, there and everywhere?  Philos Trans R Soc Lond B Biol Sci. 2015 May 19;370(1668). pii: 20140167. doi: 10.1098/rstb.2014.0167. Review. PubMed PMID: 25823865; PubMed Central PMCID: PMC4387509.

2:  Koch C, Crick F. The zombie within. Nature. 2001 Jun 21;411(6840):893. PubMed PMID: 11418835.


Saturday, October 7, 2017

Blade Runner and Tests of Consciousness

There is an event happening tomorrow that has philosophical, biological, and engineering implications for the respective definitions of the conscious state in humans.  That event is the opening of Blade Runner 2049.  This film is the highly anticipated sequel of Blade Runner - a 1982 science fiction film starring Harrison Ford as Deckard, a former cop and Blade Runner who is recruited back into active service.  Blade Runners are assigned to hunt down and terminate replicants, or bioengineered androids.  These androids are the product of the Tyrell Corporation, and the plot of the first film involves Deckard needing to track down and terminate 6 replicants, including 4 who escaped from a mining colony.  Replicants are engineered to be similar to humans in basic appearance, behavior, and social interaction, but many have superior strength, intelligence, and speed.  That part of the plot would seem to be enough, except that Blade Runners need a specific skill to distinguish replicants from humans.  They need to be able to administer a test with the Voight-Kampff machine to determine if the test subject is human or a replicant.  A series of test questions and visual stimuli is administered, and pupillary response, typical polygraphic measures like respiratory and heart rate, and particles emitted from the skin are measured.  The test questions themselves are designed to detect empathic responses suggesting that the subject is human.  The replicants in question are programmed to die in 4 years to prevent them from acquiring empathy and becoming undetectable.

Early in the original film there are two of these interviews.  In the first, what appears to be a routine interview of an employee goes bad.  He is asked what he would do if he saw a tortoise in the desert lying on its back, struggling in the sun and unable to right itself.  The second interview is more critical because Deckard travels to the Tyrell Corporation, where he meets Dr. Eldon Tyrell and Rachael, who appears to be his assistant.  Dr. Tyrell is interested in both the V-K machine and the Blade Runner protocol for detecting replicants.  Rachael asks if there are any false positives from the procedure: "Have you ever retired a human by mistake?"  Eventually Tyrell asks Deckard to administer the protocol to Rachael.  When he is done, he and Tyrell discuss the results.  Here is their exchange after Tyrell interrupts the questioning (2):

Deckard: One more question. You're watching a stage play. A banquet is in progress. The guests are enjoying an appetizer of raw oysters. The entree consists of boiled dog.
Tyrell: Would you step out for a few moments, Rachael -- Thank you.
Deckard: She's a replicant, isn't she?
Tyrell: I'm impressed. How many questions does it usually take to spot them?
Deckard: I don't get it Tyrell.
Tyrell: How many questions?
Deckard: Twenty, thirty, cross-referenced.
Tyrell: It took more than a hundred for Rachael, didn't it?
Deckard: She doesn't know?!
Tyrell: She's beginning to suspect, I think.
Deckard: Suspect? How can it not know what it is?......

The imagery in the film is outstanding.  The acting is good.  These opening scenes set the stage for most sci-fi fans to recognize that Blade Runners are used in the future (originally November 2019) to terminate replicants and they also detect them through a specific augmented interview protocol.  As the action starts to unfold there are additional philosophical questions that arise but these opening scenes are basic to my post.

The concept that there is some kind of procedure that can detect deception has been with us for some time.  The most recognizable form is the polygraph, which is the basis of the V-K machine and its interview protocol.  Polygraph research illustrates that it has "extremely serious limitations for use in security screening" and that the rationale for such a device is weak (1).  Despite those findings and inadmissibility in court, the polygraph continues to be used for both security screenings and informal screening of criminal suspects as though it works.  Would an emotional, or more correctly psychophysiological, response to questions about empathy yield any better predictive response patterns?  Probably not, but we need to keep in mind that this was the author's idea about screening for machine consciousness, not for screening humans.

Although the viewer is not aware of all of the questions asked in a typical protocol with a compliant replicant, the ones asked are not impressive in terms of empathy.  Empathy as a conscious quality has varying definitions.  Empathy as a technical skill has a more rigorous definition than what is applied in the screenplay.  A more common definition of empathy is the ability to understand another person's subjective state.  The word empathy is used only once, by Dr. Tyrell, who asks if the interview with the V-K machine is an "empathy test".  The initial questions that Deckard asks seem to be focused more on ethical behavior or societal standards.  Later Deckard breaks protocol with Rachael and suggests that her memories are implanted from others - not really her own.

Apart from the action sequences, the important aspect of the original Blade Runner was the whole idea that there may be a unique human conscious state differentiated from machine consciousness - even in the most sophisticated machines.  The test for consciousness was a subjective interview protocol combined with physiological measures that were supposed to be an enhancement.  The main metric was empathy.  The measures were purely qualitative and their ability to distinguish the differences in empathy between different humans was not discussed but should be suspect.  If the V-K protocol was a valid test - it is conceivable that humans with the least empathy could be confused with a replicant. This problem in consciousness has been the subject of debate for decades - including any specific test to distinguish humans from machines trying to emulate humans.

Before I take a look at any specific test - it is always a good idea to look at the problem of defining consciousness.  Practically any paper that is focused on consciousness discusses the problem of definition at the outset.  From a medical perspective the term is probably even less certain because of its application in neurology, in contrast to subjects who have no discernible neurological problem.  Classic texts like Plum and Posner's Diagnosis of Stupor and Coma (4) discuss consciousness as awareness of self and awareness of the environment.  Consciousness is also viewed as being determined by the level of arousal and the content of consciousness, defined as the sum of all cognitive, affective, and experiential products of the brain.  Fractional loss of consciousness is discussed as being possible when specific neuronal functions are lost.  They differentiate acute states (eg. delirium, stupor, coma) from subacute or chronic states (eg. dementia) and discuss how they are determined clinically.  These states are studied primarily by neurologists, along with other clearly altered states of consciousness like sleep and general anesthesia.

Assuming no major disruptions in neuroanatomy or physiology, the definition of a baseline conscious state also lacks precision.  There is general agreement that the person needs a level of awareness and experience.  They are able to experience sensory phenomena, thoughts, and emotions.  Some authors break that down into standard components of the mental status exam, but it is much more than that.  In normal life, we routinely lose consciousness during non-REM sleep and regain it during periods of REM sleep.  Today we know the situation is even more complex because there appears to be a posterior cortical zone that correlates with dreaming experience (DE) or no dreaming experience (NE) in both REM and non-REM sleep (5).  One of the unique aspects of brain complexity and human experience is that it generates a unique conscious state for each individual.  That makes it impossible to fully appreciate the specific conscious experience of another person.  We generally infer that they are conscious by their typical behaviors.

One of the critical factors in the study of consciousness, especially if we are stepping away from real discontinuities like the loss of consciousness is how we define it.  Most clinicians will say that "we know it when we see it" - but rigorous research definitions are lacking.  At the level of thought experiments things get even more confusing.           

Consider one of the original tests, invented by Alan Turing.  His published paper on the subject is available free online (6).  Turing's paper is interesting from a number of perspectives, not the least of which is that he predicted in 1950 that "in 50 years time" there would be digital computers that could fool humans at least 30% of the time in what he called the Imitation Game.  The game is played by asking a machine (computer) a series of questions to determine if it is human or a machine.  He details this test protocol in the original paper.  He suggests the communication be accomplished with a teletype machine.  The goal of the test is to fool a human judge into believing that the machine is human.  In rereading the original paper it is clear that his focus is on machine thinking rather than the machine's conscious state.  In the paper he debunks what he refers to as "The Argument from Consciousness" by saying this about the need for emotion and self awareness - specifically recognizing that the machine knows what it has done:

"This argument appears to be a denial of the validity of our test. According to the most extreme form of this view the only way by which one could be sure that machine thinks is to be the machine and to feel oneself thinking. One could then describe these feelings to the world, but of course no one would be justified in taking any notice. Likewise according to this view the only way to know that a man thinks is to be that particular man."

This is of course a property of conscious states, and he appears to take it to at least as extreme a position as the philosophy professor who proposes the original argument that more is needed than the Imitation Game to show that a machine thinks.  It also sets up the question: "Is there any difference between thinking and consciousness?"

Turing also predicts that at some point in time it may be possible to cover a computer with synthetic human tissue and make the identification even more difficult.  He considers and rejects various counterarguments against eventual machine intelligence in this article.
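Turing's protocol as described above can be sketched as a simple blind exchange.  All of the interfaces below (the judge's `ask` and `identify` functions, the respondent callables) are hypothetical conveniences for illustration, not anything specified in Turing's paper:

```python
import random

def imitation_game(ask, identify, human, machine, rounds=3, seed=0):
    """Minimal sketch of Turing's imitation game.  ask(i) produces the
    judge's i-th typed question, each respondent maps a question to an
    answer, and identify(transcripts) returns the label the judge
    believes is the human."""
    rng = random.Random(seed)
    assignment = {"X": human, "Y": machine}
    if rng.random() < 0.5:  # hide which teletype line is which
        assignment = {"X": machine, "Y": human}
    transcripts = {label: [] for label in assignment}
    for i in range(rounds):
        q = ask(i)
        for label, respond in assignment.items():
            transcripts[label].append((q, respond(q)))
    guess = identify(transcripts)
    return assignment[guess] is human  # True: the machine failed to pass

# Demo: a judge who keys on correct arithmetic catches a parrot machine.
human = lambda q: "4" if "2 + 2" in q else "hmm"
machine = lambda q: "blue"
caught = imitation_game(
    ask=lambda i: "What is 2 + 2?",
    identify=lambda t: next(lab for lab, qa in t.items() if qa[0][1] == "4"),
    human=human,
    machine=machine,
)
```

With a judge who simply checks an arithmetic answer, a canned-response machine is caught immediately; Turing's claim was that a sufficiently good conversational machine would survive this kind of interrogation a substantial fraction of the time.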

The relationship between consciousness and intelligence is complex.  Practically all of the AI that is written about involves restricted task domains: playing games (chess, poker, Go), verbal chats, or some specific form of pattern recognition.  One thought about intelligence is that it is context sensitive and needed for complex tasks that cannot be broken down into smaller single domain tasks.  According to one expert (Tononi - see reference 8), "That kind of intelligence is consciousness".  I think that it is fairly easy to look back at the Turing Test and see what Tononi is referring to.  On the face of it, the task of producing a typed transcript appears to be a single domain task.  But behind it there needs to be an intelligent structure that is able to play a game that involves making a human judge believe the transcript is being produced by a human rather than a machine.  While Turing refers almost exclusively to intelligence or thinking, in this case it can be considered consciousness.

The reality of testing replicants is that it is a lot less complicated than suggested by Deckard's interviewing technique with the V-K device.  All you would have to do is polysomnography.  One of the clear cut conscious states is REM sleep.  You could make the argument that a bioengineered android could be set up to fake a sleep EEG, but my guess is that would be a very costly procedure, and as Dr. Tyrell says in the original film, commerce is the goal of the corporation - spending a lot of money on faking a sleep EEG would hardly be cost effective.  Some AI philosophers might suggest that at some point the machines might learn it on their own if it was advantageous in their competition with humans.  It would take a lot more than learning.  It would take a lot of engineering to produce the necessary electric potentials under standard EEG electrodes inside an artificial skull, even if it was covered with cloned human tissue.  We can say at least that the vision in the film cannot be realized anytime soon.

On the issue of a verbal test for humanness, that is slightly more complicated but not out of the question even at this point in time.  The first time that I heard a computer may have "passed" the Turing Test was when a chess player stated that when he was playing an IBM computer it seemed like he was playing a real human.  That is a very restricted task domain, and outside of it I doubt that same computer could have been mistaken for being conscious.  The official milestone of a computer program passing the Turing Test occurred on Saturday, June 7, 2014 at a competition held at the Royal Society.  Obviously a blinded test of a robot conversing probably does not represent much of a recognizable conscious state.  AI experts are currently working on more rigorous tests of machine consciousness, including interactions in other sensory modalities, in order to improve the level of AI.

As I thought about issues related to this post, testing AI for conscious states may come down to determining the manufacturer.  From a theoretical standpoint, perfecting AI performance on future tests of consciousness looks like a likely trend.  Current AI approaches are bound to leave traces of their algorithms and an imprint of the biases of the companies that build them.  There are just too many degrees of freedom in conscious systems not to leave a mark.

There is a list of associated issues having to do with identity and implanted memories in machines hoping to emulate humans that I hope to consider in the future.  Consider this mention of Blade Runner a jumping off point.

George Dawson, MD, DFAPA


1:  National Research Council (2003). Polygraph and Lie Detection.  Committee to Review the Scientific Evidence on the Polygraph.  Division of Behavioral and Social Sciences and Education. Washington, DC: The National Academies Press.

2:  Hampton Fancher, David Peoples.  Blade Runner. Screenplay.

3:  Philip K. Dick. Do Androids Dream Of Electric Sheep? New York; Ballantine Books: 1968.

4:  Posner JB, Saper CB, Schiff ND, Plum F.  Plum and Posner's Diagnosis of Stupor and Coma.  Fourth Edition.  Oxford University Press. New York, 2007.  p 6-37.

5:  Siclari F, Baird B, Perogamvros L, Bernardi G, LaRocque JJ, Riedner B, Boly M,Postle BR, Tononi G. The neural correlates of dreaming. Nat Neurosci. 2017 Jun;20(6):872-878. doi: 10.1038/nn.4545. Epub 2017 Apr 10. PubMed PMID: 28394322; PubMed Central PMCID: PMC5462120

6:  Turing AM. Computing machinery and intelligence. Mind. 1950;59(236):433-460. Link to full text

7: Lorraine Boissoneault.  Are Blade Runner’s Replicants “Human”? Descartes and Locke Have Some Thoughts.  Smithsonian.  October 3, 2017. Link to full text.

8: Consciousness and Intelligence:

MIT150 Symposium: Brains, Minds and Machines.
Moderator: Shimon Ullman PhD '77, Samy and Ruth Cohn Professor of Computer Science, Department of Computer Science and Applied Mathematics, Weizmann Institute of Science.
Panel: Ned Block, Silver Professor of Philosophy, Psychology, and Neural Science, Department of Philosophy, New York University; Christof Koch, Lois and Victor Troendle Professor of Cognitive and Behavioral Biology, California Institute of Technology, and Chief Scientific Officer, Allen Institute for Brain Science, Seattle, WA; Giulio Tononi, David P. White Chair in Sleep Medicine and Distinguished Chair in Consciousness Science, School of Medicine, Department of Psychiatry, University of Wisconsin, Madison.


Graphic credit: Shutterstock Stock illustration ID: 561497119, "old street in the futuristic city at night with colorful light, sci-fi concept, illustration painting" by Tithi Luadthong.

Tuesday, October 3, 2017

Mass Shootings in America - Why They Are not Terrorism

Infographic: Mass Shootings in America | Statista

American media is so used to mass shootings that many outlets are set up to reflexively release provocative and often poorly thought out theories after the incident.  The fact that there is rarely much more information about the shooter's motive reinforces this process.  The tragic event in Las Vegas is no exception.  It is currently the worst mass shooting incident in the USA and here is a link to the previous two.  There is the usual gun debate and there are public relations maneuvers by wide gun access advocates.  There are the rational responses by citizens calling for some measure of gun control.  I say rational because there is excellent evidence (1) that stricter gun laws enacted after a mass shooting incident prevent further mass shooting incidents.  In the media coverage after this incident and on various social media sites there appears to be some confusion over whether American mass shooters are terrorists or not.

Before I go on, I have noticed that in social media many people are posting state statutes that equate terrorism with acts of violence.  The US Code defines both international and domestic terrorism as intimidation or coercion of a civilian population in order to influence the conduct or policy of the government.  I would take it a step further: there needs to be an ideological message.  All of the news about who takes "credit" for these incidents implies that this is a critical dynamic, along with all of the publicity generated by many of these groups with very explicit messages.

For all of these reasons, typical mass shooters in the United States are not terrorists.  There is no ideology, no message, and no attempt to influence the government.  There certainly may be mental illness, but that alone is insufficient to produce a typical mass shooter.  There are many more mass shooters who are not technically mentally ill than those who are, but I will admit that the methodology for studying the problem is inadequate since many of these perpetrators are dead or unwilling/unable to produce a coherent story.  I will also be the first to admit that this is my impression, because the data on mass shooters is large and I have no access to all of it.  For example, the NY Times came out with a graphic showing that in the past 477 days in the US there were 521 mass shootings (2).  They used the criterion of 4 or more people killed or injured to qualify as a mass shooting.  I have no access to that data.  There have been attempts to look at the data according to specific types of mass shooters like rampage killings.  The most recent FBI study looked at where the events occurred and whether there was any connection between the shooter and the location.  It did not focus on the potential motivations of the shooters despite having access to all of the data:

Though this study did not focus on the motivation of the shooters, the study did identify some shooter characteristics. In all but 2 of the incidents, the shooter chose to act alone. Only 6 female shooters were identified. Shooter ages as a whole showed no pattern. However, some patterns were seen in incident sub-groups. For example, 12 of 14 shooters in high school shootings were students at the schools, and 5 of the 6 shooters at middle schools were students at the schools. (p. 20).
It did look at some specific locations and the relationship of the shooter (employee, family member) to that location.  The critical point of this report was that although mass shootings have occurred in the United States for a long time, they appear to be increasing in rate and lethality, as indicated by the following graphic from that report:

The graphic points out that not only is the general problem of mass shootings being ignored from a policy perspective, but the increasing rate and lethality of these incidents is being ignored as well.  From the FBI report some of the motivations clearly involve enraged employees or former employees.  Mental illness was omitted as a possible motivation.  All of the vignettes of each incident are attached to the end of the report.

My views on mass shootings, violence prevention, and even homicide prevention have not changed from my previous posts in this area.  I will add one more dimension to the issue and that is the cultural meme of the mass shooter in America.  Granted there are various etiologies that can produce a mass shooter, but after terrorism has been eliminated there is a prominent cultural meme present in the USA and that is: if I feel like I have been wronged - I can pick up a gun and make things right (at least in my own mind).  Americans are oblivious to the presence of this thought pattern in our culture and what it implies.  The most significant implication is that reality is suspended if I merely feel like I have been wronged.  The reality of why I was fired, divorced, or arrested is secondary to my thoughts on the matter.  Most adults in this country have had experience dealing with somebody who had this pattern of thinking.  To some extent most people with some level of self awareness can catch themselves in the process of making the same errors - most frequently when angry or emotionally upset.  Varying degrees of road rage are a classic example.  There is an anthropological argument that violence, aggression, and homicide are age old solutions to often minor disagreements.  In many cases the aggression spreads to a larger number of targets than were involved in the original conflict.

There is the issue of violent and homicidal fantasy being common in both normative and violent criminal populations (4).  Various theories about the function of these homicidal fantasies exist.  Some homicidal fantasies seem higher risk than others but the study of fantasy per se is limited by inadequate methodology, including the degree of self disclosure and a lack of long term follow up.  Much of the work is anecdotal.

At the cultural level is there a larger problem in America?  American culture unquestionably has viewed firearms as tools for settling disputes.  That plays out time and time again in various movies and to varying degrees in American subcultures where being capable of violence and aggression is synonymous with being respected.  To be very clear, most people can tell the difference, but cultural influences can have a powerful effect.

No matter what the intrapsychic or cultural ground for gun violence, one thing is obvious: if a firearm is available it is more likely to be used in both incidents of suicide and homicide.  We currently have a Congress and various political factions that are in denial of that basic fact.  Unless there is a radical change in that political approach and/or a concerted effort toward violence and homicide prevention, reversing the trend in the FBI graph is unlikely.

George Dawson, MD, DFAPA


1:  Chapman S, Alpers P, Agho K, Jones M. Australia's 1996 gun law reforms: faster falls in firearm deaths, firearm suicides, and a decade without mass shootings. Inj Prev. 2015 Oct;21(5):355-62. doi: 10.1136/ip.2006.013714rep. PubMed PMID: 26396147.

2:  The Editorial Board.  477 Days. 521 Mass Shootings. Zero Action From Congress. New York Times; October 2, 2017.

3:   Blair, J. Pete, and Schweit, Katherine W. (2014). A Study of Active Shooter Incidents, 2000 - 2013. Texas State University and Federal Bureau of Investigation, U.S. Department of Justice, Washington D.C. 2014.

4: Gellerman DM, Suddath R. Violent fantasy, dangerousness, and the duty to warn and protect. J Am Acad Psychiatry Law. 2005;33(4):484-95. PubMed PMID: 16394225

Saturday, September 30, 2017

Treatment Setting Mismatches - The Implications

Most physicians' first experience with treatment setting mismatches occurs when they are medical students and residents.  The ethos of medical training fosters an attitude of being put upon among the trainees - partly because they are, or at least they were.  There is a history in American medicine of using the trainees in particular as inexpensive labor - doing all of the admissions to training hospitals and staffing them all night long.  In many if not most cases that meant long hours and minimal staff supervision.  The staff typically would hear about late night admissions only if they gave their resident team specific parameters to call them.

That work flow created tension in the system of care.  Depending on the institution, teams could negotiate for admissions but typically the emergency department (ED) physicians had veto power in getting people into the hospital.  They were in the highest risk situation because they were responsible for what happened with discharges from the ED and they were responsible for getting patients out of the ED in a timely manner.  This led medical and surgical teams to view some of the admissions pejoratively as weak or dumps.  Many of these admissions were discharged as soon as possible - partly due to circumstances and partly self-fulfilling prophecy.  The treatment setting mismatches in these cases could occur in both the ED and the hospital if the patient did not need to be there.  These problems have been addressed over the past 15 years with the advent of hospitalists.  Hospitalists have a more enduring relationship with their colleagues in the ED.  There is more consensus on admissions and hospitals are staffed 24/7 by hospitalists rather than trainees.  That does not mean that the treatment setting mismatch has been solved.  You start to notice the issues involved with treatment setting mismatches after you are practicing medicine and you are no longer a trainee.  A few examples will illustrate this point.

Hospital to Home

A 75 year old woman with diabetes mellitus Type 2, hypertension, and new onset atrial fibrillation is discharged home after two days in the hospital. She came in taking 5 medications but is leaving with 8.  She lives alone, and during the nursing review at the time of discharge she demonstrates that she knows how to set up the medications out of the bottles every day and the basics of what she needs to avoid in her diet.  There are some red flags with her medications in terms of potential interactions and symptoms that she needs to quickly report to her physician.  She currently has no primary care physician.  Her physician quit the practice and moved to a different clinic.  She tried making appointments with the other physicians in the clinic and had the feeling that "none of them like old people".  She is discharged with a bundle of medication side effect sheets highlighted by the nursing staff.  She is advised to review the highlights and report those symptoms to the clinic.

Hospital to Facility

An 82 year old man with dementia and agitation is admitted to an acute care psychiatric unit.  He comes in with the message that his current facility will not take him back because he is too aggressive.  The initial assessment shows that he is barely mobile due to osteoarthritis but that he requires intensive nursing care for diabetes mellitus Type 2, wound care for foot ulcers, nebulizer treatments for asthma/COPD, and careful attention to his input and output each day because of moderate renal failure and a tendency to take inadequate amounts of fluids.  After two weeks of working with medical consultants, the attending psychiatrist realizes that there is no Skilled Nursing Facility where the patient will get the level of care he is currently getting.  Without that level of care the patient will be dead in a few months.

ED to Home

Patient X is a 50 year old man with alcoholism, alcoholic liver disease, and mild emphysema.  For the past three months he has been drinking 750 ml of vodka per day.  After an intervention with his friends and family he was referred to a substance use treatment facility.  The family was told at that time that he should be admitted to a detox facility because detox was not available at the treatment facility.  The patient decided to go to the ED.  He was given IV fluids and discharged 3 hours later with a prescription for lorazepam and told to go home and detoxify himself or go directly to the treatment setting.  He took all of the lorazepam on the first day and resumed drinking vodka.  He tried to get in to the original treatment facility and was turned down again because he still needed detox.

ED to Treatment Facility

The patient is at a local drug and alcohol treatment facility when he experiences a sudden acute mental status change.  He is confused and starts to experience auditory hallucinations part way through a detoxification protocol.  He asks to leave the treatment facility.  The facility and the patient's family convince him to go to the ED.  While there the staff treat him with benzodiazepines and IV fluids and tell him to return to treatment.  He tries that but the treatment facility disagrees with the ED and sees his mental status as too compromised to participate in treatment.  He goes home and resumes drinking instead.

Hospital/ED to Jail

Patient Y, a 29 year old man, is detained by the police in a local shopping mall for creating a public disturbance.  He was panhandling. When none of the shoppers responded favorably he got very close to them and made loud threatening noises until the police were called.  When the police asked him to leave the mall, he shouted at them and threatened to kill them.  He was arrested but because the police suspected a mental illness he was taken to the emergency department for evaluation.  The arresting officers were hoping he would be admitted for further observation and treatment.  After the ED evaluation was completed, a social worker came out and asked what would happen if the patient was discharged to the street.  The officers responded that he would be arrested and taken to the local county jail.  At that point the patient was released on the basis that he was not dangerous and transported to county jail.

These scenarios are all hypotheticals based on my experience.  Any physician with similar experience can cite hundreds of these examples and many, many catastrophic endings.  The common biases are that alcohol is not that much of a problem and that most people with chronic mental health and medical problems can continue to plug along with minimal assistance.  The error is to ignore the real dangers and not focus on quality care that by definition addresses and solves clear health problems.

These scenarios all have some common dimensions.  First, the receiving setting is easily exceeded by the patient's medical needs.  In some cases the receiving setting is not medically oriented at all and is ill equipped to address medical problems.  Obvious examples are people who are discharged to jail or to care facilities that are funded on the basis that they provide little to no medical care.  The scenario where the man with chronic (or in some cases acute) mental illness is sent to jail rather than hospitalized for effective treatment is one of the reasons why county jails have become the largest psychiatric hospitals in the USA.  It is one thing to recognize that fact but it is another to think about how that is happening.  In most cases hospitals have little to no bed capacity for psychiatric patients.  If they do - they are inadequately funded to provide complex care, with inadequate staffing, length of stay, and in some cases inadequate medical and psychiatric coverage. At some point the politicians and bureaucrats decided to align the incentives so that this level of care would be best provided in jail.

Second, discharges to inadequate facilities are driven by the rationing of acute care facilities as "expensive and possibly unnecessary facilities".   That determination is complicated by the fact that receiving facilities have also been depleted by the same rationing mechanisms.  The reality of American healthcare at this point is that it is almost all rationed by middlemen who are incentivized to make as much profit as possible by rationing.  A great example is detoxification from drugs and alcohol.  Despite the fact that this process is potentially life threatening, is at a minimum associated with a high degree of distress, has significant psychiatric morbidity including suicide risk, and needs to be properly done in order to facilitate sobriety, very few people in the USA are admitted for appropriate detoxification.  Like people with severe mental illnesses they are mostly sent home or to a facility with minimal to no medical coverage and then sent home.  In cases where a person is incarcerated they often go through acute detoxification with no medical assistance.  In many cases they suddenly stop opioids, benzodiazepines, or opioid agonist treatment (methadone or buprenorphine) and go through severe withdrawal in jail.

Third, leaving a medical facility where there is intensive nursing care is like falling off a cliff for a lot of people.  There is no transition or assurance that many people can manage their own care in their own homes.  There used to be more options.  Public health nursing comes to mind.  Twenty years ago the attending physician could write an order and a public health nurse would see the patient in their own home, make sure that the transition was occurring properly, and if not, stay in contact with the patient and provide ongoing assistance.  That service was eliminated a long time ago in order to reduce costs.

Fourth, an entire system of shadow care has evolved to make it seem like care is being provided when it is not.  Typical examples include health club discounts or a lifestyle coach who calls you up on the phone and encourages you to be more physically active or eat less.  The ultimate advertising these days is a plan where you get a very modest health insurance discount through your employer if you sign up for one of these options and demonstrate compliance.  It makes it seem like both your employer and your health plan care about your health.  In the larger scope of things, it is nothing compared to the lack of care that happens in the above scenarios.

The final point to be made here is the irony of spending more money on health care than any other country in the world and having a large portion of it go up in smoke.  The source of that smoke is the huge administrative costs and profits of rationing health care under the guise that it is more "cost effective" or "efficient".

There is nothing cost effective or efficient about rationing poor quality care to patients.  The best evidence is during care transitions and the resulting treatment setting mismatches.

George Dawson, MD, DFAPA

Sunday, September 24, 2017

Whatever Happened to IPT?

I first read about the Interpersonal Psychotherapy of Depression when the book came out in 1984.  The origins were there for quite a while before the book.  Gerald Klerman, MD and Myrna Weissman, PhD were prominent in developing a model that depended heavily on psychoanalysis and previous interpersonal theorists like Harry Stack Sullivan and John Bowlby.  The theory rests on a fairly basic assumption: that depressions can have an interpersonal etiology as well as social and biological ones.  At the time the book came out, manualized psychotherapies were starting to peak.  A few years earlier I requested a copy of the research manual from Marsha Linehan, PhD and she sent it to me.  That original manual is quite different from the way that dialectical behavior therapy (DBT) is practiced today as a general group behavior therapy.  Beck, Ellis, and Meichenbaum were focused on cognitive-behavioral therapy or CBT at about the same time.  These authors produced texts and manuals on how to perform these therapies.  The driving force for the manuals was psychotherapy research.  A standard research protocol in any therapy was to produce a manualized version, train the research therapists in the therapy, and then monitor them at various points in the therapy to assure that they were performing the therapy according to the manual.

Clinical training at the time was not nearly as standardized. It is fair to say that the predominant training model for psychiatrists was psychoanalytically based psychodynamic psychotherapy.  The main subdivisions were insight oriented psychodynamic psychotherapy and supportive psychotherapy.  Supportive psychotherapy avoided confrontation of the patient's defenses and the therapist used many of the techniques used in CBT.  There were also some brief forms of psychodynamically based psychotherapy.  Viederman wrote about a psychodynamic life narrative model of crisis intervention for college students in crisis.  It was designed to be delivered in just a few sessions.  The approach was interesting because it had interpersonal psychodynamic interpretations rather than transference based interpretations or interpretations based on unconscious mechanisms.

Depression is a very heterogeneous category of disorders.  The interpersonal context remains the same and it is up to the clinician to figure out what might be relevant - what might have personal meaning.   The four areas of focus noted in the above diagram can be historically recorded in just about anyone's life - but are they the cause of depression?  IPT answers the second half of that question - what can be done about it?

A good illustration is the case of the depressed person who has sustained a significant personal loss that they have not recovered from.  In clinical practice it is common to see people who are depressed and date the onset of that depression to a point in time when a significant figure in their life died. Whether that happened 10 or 20 years ago - they have not recovered despite antidepressant maintenance or multiple antidepressant trials.  The goal for the IPT therapist is to discover if the depression is due to the loss or the meaning of the loss and to facilitate completing the grief process.  In today's world, many patients with grief are referred to Eye Movement Desensitization and Reprocessing (EMDR) therapists for presumptive post traumatic stress disorder (PTSD).  I have certainly encountered people who were traumatized by the manner in which their significant other died.  The most common scenario is a surviving spouse or parent.  In the majority of cases, the patient is experiencing grief and has not been able to complete that process.  The IPT therapist is able to recognize and treat that problem.

There is plenty of evidence that IPT is an effective form of psychotherapy if you really need evidence.  Medline searches yield a total of 4590 references for interpersonal psychotherapy and 786 reviews in that category.  For interpersonal psychotherapy depression there are a total of 1548 articles and 327 reviews.  A recent brief and excellent review article was written by Markowitz and Weissman.   It contained this description of Gerald Klerman's orientation during the initial discussions of this psychotherapy:

"Although Klerman, a psychiatrist, saw depression as basically a biological illness, he was impressed by how social and interpersonal stress exacerbated onset and relapse. Noting that ‘one of the great features of the brain is that it responds to its environment’, he felt that the interpersonal context of the onset of a depressive episode might be a target for psychotherapy." 

I would add that at the time there was active conflict between academic psychiatrists who considered themselves to be biological psychiatrists and a group who considered themselves to be psychotherapists.  Eclectic psychiatrists like Klerman existed in every department but they tended to be the silent majority.  Psychiatrists like me were fortunate to be trained by them.

There are several reasons why knowing about IPT - in addition to other psychotherapy paradigms can be useful to any psychiatrist:

1.  It is easy to learn -

There have certainly been other manualized versions of psychodynamically based psychotherapy.  The authors here have really streamlined the process and generally provide a level of analysis based on social roles/behaviors and discuss specific strategies to address problems.

2.  It facilitates thinking about a formulation (if you do that) - 

When it comes to assessment and diagnosis - I have a lot of detail on this blog supporting the basic framework that a psychiatric diagnosis is really not enough when it comes to a psychiatric assessment.  There needs to be an overall formulation of what the patient's problems are and how they came about.  A diagnosis or diagnostic code is a poor substitute.  Consider two 50 year old men with severe depression - it probably matters if one of them got depressed as a result of being fired and the other became spontaneously depressed and could not work because of that disability.  That fact alone creates more relevant information for diagnosis and treatment planning than all of the diagnostic codes and modifiers.

3.  The therapy can be delivered rapidly in the context of psychiatric appointments -

Once the formulation is in your notes, you can pull it up at subsequent visits and discuss what is relevant to the patient.  Many of the interventions are very focused and can be discussed over the span of 15 or 20 minutes.  Instead of just reviewing medication related symptoms and side effects, the discussion can include a therapy that is effective for depression and may either enhance or replace the medication effects.

4.  It provides a formulation that the patient understands and improves empathic communication - 

I have had people ask me at the end of the interview to "Tell me what you think the problem is." They may add other sentences for emphasis like: "I've done all of the talking here - you're the doctor - tell me what the problem is."  Listening for a thread in addition to the usual description of symptoms allows for a formulation based on interpersonal or social contexts and how that relates to diagnosis and treatment.  It should not be too hard to believe that most people find that a DSM theoretical formulation falls flat.

5.  IPT can reveal unaddressed problems - 

If the IPT therapist is talking with a patient who dates their depression back to the loss of someone they were emotionally attached to, and that loss has never been addressed, that provides some diagnostic and therapeutic insight in the same session.  In some cases it can also lead to cost effective therapy for the patient if there are grief counseling clinics or a clergy person who does grief counseling.  One of the glaring errors I have noticed with a lot of current therapy is that it is trauma based.  To me that means that a person has experienced trauma at the level that it could cause post traumatic stress disorder or similar problems.  I see many people with grief diagnosed as having a trauma disorder and treated with exposure therapy for grief.  Grief counseling or an IPT approach is a preferable option.

6.  IPT adds a needed non-medicine dimension to psychiatric treatment - 

The term psychopharmacologist is often mentioned by people who I assess.  I ask myself what a psychopharmacologist does when the patient is experiencing a chronic stressor that is either environmental or interpersonal in nature.  Does the medication just go up to the point that the person is numb to the stress?  As a psychopharmacologist myself, there is an obligation to let people know that at some point the stressors in life will overcome the effects of medicine and that there is no medicine that will overcome chronic stress - at least without sedating them to the point that it will be difficult to function.  At that point the therapeutic alliance needs to focus on resolving the environmental or interpersonal stress.  It is extremely important at that point in time to be able to associate the patient's problem with the therapy models and discuss these paradigms as a way to resolve the problem.  In this case - hopefully all psychiatrists have been trained in the non-medicine dimension before they start seeing patients.

Those are some of my thoughts about IPT.  I have always considered it to be an effective and pragmatic form of psychotherapy.  Back when I was learning about psychotherapy, I had supervisors of every stripe ranging from Rogerian therapy to psychodynamic to existential psychotherapy.  The paradoxical  aspect of my psychotherapy supervision was that they all advocated for picking one style of therapy and sticking to it.

I really don't think that is a good idea.  Strictly in terms of psychodynamic therapy, one of the key aspects of the assessment was to determine if the patient was psychologically minded enough to engage in the constant clarification, confrontation, and interpretation that goes on in that format.  If not they were considered candidates for supportive psychotherapy.  To someone trained in my era, CBT, IPT, and DBT and their equivalents would all be considered supportive psychotherapies.

I think that provides a good rationale for knowing these therapies and being able to apply them to situations where they might be the best approach.    

George Dawson, MD, DFAPA


1: Viederman M. The Psychodynamic Life Narrative. Psychiatry. 1983 Aug;46(3):236-246. PubMed PMID: 27719516.

2:  Klerman GL, Weissman MM, Rounsaville BJ, Chevron ES.  Interpersonal Psychotherapy of Depression.  Basic Books, Inc; New York; 1984: 255 pp.

3: Markowitz JC, Weissman MM. Interpersonal psychotherapy: past, present and future. Clin Psychol Psychother. 2012 Mar-Apr;19(2):99-105. doi: 10.1002/cpp.1774. Epub 2012 Feb 14. PubMed PMID: 22331561.

Wednesday, September 20, 2017

Therapeutic Alliance - A Better Diagram

I posted on the therapeutic alliance about 5 years ago.  The goal of that post was to point out how psychiatric treatment occurs - specifically the idea that the physician and the patient need to collaborate and define a set of diagnoses and/or problems to work on.  They have to agree on the problems and also the plans to resolve (or not resolve) them.  In the case of a chronic illness with no clear resolution, the goals are focused on optimizing function.  This is basically the ideal treatment model for any physician and any treatment - the only difference is that psychiatrists are trained to attend to the relationship between the patient and the physician in very specific ways.  That includes the concept of transference and countertransference, or the emotional reaction and associated thoughts of the patient to the physician and of the physician to the patient based on their past experiences.  By attending to those patterns psychiatrists can develop insights into what is unfolding in the relationship and in some cases use defensive patterns to assist in the diagnosis and treatment process.

I had a few ideas about how I wanted this diagram to differ from the diagram in my previous post.  First, I wanted it to reflect treatment continuity.  Ongoing treatment is a dynamic process of multiple events across time. It can also involve single cross sectional interventions that require a patient to complete a prescribed treatment and contact the physician if the problem is not resolved as expected.   There are several hard stops to a medical treatment process - cure, improved function without cure, increasing disability, care refusal, and death to name a few.  I decided to leave those implicit and not alter the basic diagram.  Second, I thought that triangles demarcating the physician-patient decision space would be a good idea because they are more open structures and were used in a recent example of how graph theory may be useful in neuroscience.  Third, I wanted to avoid jargon.  There are numerous conceptualizations of the conscious state of the patient and the physician and what that implies for the communication - but I distilled it down to the communication and collaboration parameters as noted above.  There is implicit informed consent in this model. There are far too many people who see physicians and adopt a passive role.  In some cases they request that the physician make important decisions for them: "What would you do if you were me?" The role of the physician is to communicate the information that the patient acts on with all of the attendant risks.

The general model is a good one for all medical specialties.  Psychiatrists are trained to attend more to the relationship and to overcome obstacles to treatment.  A basic example would be the person who consults with a physician but who is skeptical of the physician's motivations or intentions.  In many cases this results in a disagreement and the relationship is terminated without the patient receiving treatment.  A psychiatrist should be capable of recognizing what is occurring in the interview and at least pointing out the reality of the situation to the patient.  That reality is depicted in the diagram at the top.  I frequently tell people that I have no interest in telling them what to do or even prescribing a medication that they do not want to take.  My appropriate role in the model is to give them the best possible medical advice about resolving problems that we both agree on and that might benefit from treatment.  It is their role to decide among the options and consent to treatment.  Not consenting to any treatment is always an option.

The model also implies that both parties are competent to interact and make decisions.  In the case of physicians, states have a vetting and licensing process that is focused on public safety and it does a good job of removing most unsafe or incompetent physicians.  In the case of the patient, there are various contexts in which substitute decision-makers are engaged in the process including guardians, conservators, and judges.  The legal process to make that determination varies widely from state-to-state and even county-to-county within the same state.

George Dawson, MD, DFAPA

Monday, September 18, 2017

Medication Reconciliation

The Medication Reconciliation Process
Medication reconciliation has become a term that is much en vogue after the Joint Commission and electronic health record (EHR) manufacturers got a hold of it.  Medication reconciliation (MR) basically means that anytime a patient changes health care settings there needs to be a procedure in place to assure that their medications are not changed as a result of that transition - no medication errors can be made.  The best example is a patient who goes from an outpatient setting into a hospital for surgery, has the procedure done, and is discharged.  These care transitions would require a medication reconciliation at the time of admission and another one at the time of discharge.  Prior to the EHR, a physician would just transcribe the admission orders, rewriting or modifying the patient's outpatient medications as necessary.  With the EHR, the MR can be rapid if the patient and the medications are already in the system, or it can be a very slow process if they have to all be entered from scratch.

Over my 22 years of inpatient work, I gained a lot of experience with this process.  I would be the recipient of shopping bags full of medications and learn that some patients were taking more than 20 medications at a time.  For most of my time on inpatient units, I was in charge of reconciling all of the medications - medical and psychiatric - on the patients I admitted and discharged.  In extreme cases this process alone could take an hour on either end.  It got a lot worse over time because more people were inserting themselves in the process.  Pharmaceutical benefit managers learned to demand an entirely new prior authorization process - even for medications that the patient had been taking for years - at the time of discharge.

The reconciliation process has several modifications based on the care model.  For example, in less acute care settings where physicians are not present in the facility at all hours, on-call staff call in and do remote medication reconciliation.  Before the EHR that would involve a discussion with nursing staff who would review the medication, the patient's status and put in the orders.  The physician would countersign these orders the next day.  In current EHRs, there is a med reconciliation section and there may be an expectation that the reconciliation occurs at the time of admission.  Nursing staff will typically enter the medications from available bottles or pharmacy records.  The physician has to pull up the record remotely, review the patient's status with nursing staff, and sign off on the entered medications.  In some systems, only the basic prescription is ordered and the physician will have to complete numerous fields before the inpatient orders are complete.  Modern EHRs invariably include drug interaction software with very low thresholds and all of those warnings need to be clicked through and dismissed before the patient's usual medications can be resumed.  It is a very slow and inefficient process compared to before the EHR.

One of the rationalizations for the MR in the EHR is patient safety.  Regulatory bodies like the Joint Commission are very big on safety factors and they should be.  The EHR was supposed to greatly reduce errors due to illegible written orders, but in this case the physician was giving verbal orders to nursing staff.  A quick glance at the graphic at the top of the page illustrates some of the thought and decision making that needs to go into this process.  There is really no known way to make it foolproof.  Subjective determinations about medications and medication safety are being made at every step of the way.  Errors in MR still occur, largely because of the assessment required of nursing and, in this case, psychiatry.  The easiest way to conceptualize this is to think about people who take prescriptions and whether they take the medication exactly as it is prescribed on the label.  The commonest problems involve patients taking their medications at the wrong time or all at once.  Often they have been advised by their physician to make a change but the prescription has never changed.  In many cases they have stopped the medication and want to restart it.  The prescribing physician may not know that the patient is using alcohol, other substances, or nutritional supplements with the prescription.  Continuing medications, stopping them, or modifying them requires significant clinical judgment and there may be a lot of uncertainty about the history obtained.

Medication reconciliation is a complex and potentially lengthy task.  It works best with experienced nursing staff who can get the best information to the prescribing physicians and then physicians who have good clinical judgment and flexibility to adapt to changing histories.  It is a potential area for artificial intelligence applications.  AI could assist nursing staff without replacing them.  We need an optimal algorithm and a full description of the decision space associated with this process.  Most importantly we need computer applications that support staff rather than getting in their way and requiring staff support of their own.
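An AI application would not need to be exotic to be useful here.  A minimal sketch of the decision space - using entirely hypothetical data structures and drug names, not any real EHR module - would diff the patient-reported medication list against pharmacy records and flag every discrepancy for clinician review rather than resolving anything automatically:

```python
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen makes Med hashable so it can live in a set
class Med:
    name: str
    dose: str
    schedule: str

def reconcile(reported: set[Med], pharmacy: set[Med]) -> dict:
    """Compare the patient-reported list against pharmacy records and
    flag every discrepancy for human review - never auto-resolve."""
    reported_names = {m.name for m in reported}
    pharmacy_names = {m.name for m in pharmacy}
    return {
        # taken but not on record: possible OTC, sample, or outside prescriber
        "review_not_on_record": reported_names - pharmacy_names,
        # on record but not reported: possibly stopped, or nonadherence
        "review_not_taken": pharmacy_names - reported_names,
        # same drug, different dose or schedule: the commonest real-world problem
        "review_mismatch": {
            m.name for m in reported for p in pharmacy
            if m.name == p.name and (m.dose, m.schedule) != (p.dose, p.schedule)
        },
    }
```

The point of the sketch is that software like this would only surface questions - continuing, stopping, or modifying a medication would remain a clinical judgment made by nursing staff and the physician.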

One of the most interesting aspects of the current conceptualization of medication reconciliation is how it is perceived by administrators and regulators.  The idea that medications can be entered into a piece of software that is essentially a word processor and that this makes things right is almost magical thinking to me.  All of the hard work leading to that conclusion (see graphic and beyond) is not only ignored - but nobody seems to get credit for it.  It is the old rationalization: "If it isn't documented it didn't happen."  This is clear proof that most of what happened isn't documented.  It would be impossible to function if it was.  At some level it appears that all of this hard work was produced by the EHR and not physicians and nurses.  Credit to the geniuses who came up with the software and the administrators who decided to put it in.  How did we ever practice medicine without them?

We end up documenting what we are told to document and it is a poor substitute for what actually happened.  Some of the underlying reasons for that documentation are almost always political.    

George Dawson, MD, DFAPA

Thursday, September 14, 2017

CPAP Follow-up - Reinforcing Daily Use

I posted on obstructive sleep apnea (OSA) and continuous positive airway pressure (CPAP) last year and it was well received.  Since then I have given out a lot of advice on CPAP based on that post and in general to people I have consulted with.  I continue to encounter all of the problems that I mentioned in the original post.  The message that I am continuing to give people is that they cannot view CPAP as an option.  It may not seem like it but it is a critical intervention to prevent the cardiac and metabolic complications of obstructive sleep apnea.  There are several of them and they are severe and potentially life shortening.  Anyone with this diagnosis owes it to themselves and their family to make CPAP work to avoid the morbidity and mortality associated with OSA.

It is very common in my practice to do my standard sleep assessment and hear that a person was diagnosed in a sleep study and that CPAP was recommended but for various reasons they are not using the machine.  I frequently hear about how the patient just "throws it off in the middle of the night" and how they "can't stand to have anything on my face" - even in cases of severe sleep apnea.  Comments like those seem to understate the seriousness of the problem.  In many cases, insurance companies have asked for the machine back because the record on the SD card in the machine shows that it is not being used.  A person sitting in front of me with untreated OSA is complicated because their physical health is compromised and the immediate complications of untreated apnea and hypertension also compromise their psychiatric care.  The OSA and daytime somnolence become insomnia and that person may expect medical treatment for insomnia.  The prescription of sedating drugs is actually not a good idea for people with sleep disordered breathing.  The same thing is true for hypertension.  There are several medications that can make hypertension worse and that I would not prescribe to people with uncontrolled hypertension.  Despite those qualifiers - I see medications and doses that I would not prescribe being given to people with untreated OSA.  It is untreated largely because the person does not give CPAP a chance.

Here are a few tips that I give people that they have found to work.  I am not working in a sleep lab or clinic, so I am seeing them after the study has been done and after they have seen a wide range of technicians who were supposed to help them with mask fit and instructions on how to use the machine.

1.  Try various masks and types of CPAP - 

A lot of people try the full face mask and throw it off repeatedly at night and decide that's it.  If feeling confined by a mask is a problem there are smaller modified masks and nasal CPAP.  Try several until you find the one that works the best.

2.  Use humidification - 

It is surprising how many people think that they will save time by not using the humidification system with the machine.  Not using the humidification is another sure way to not tolerate CPAP.  Maintain and adjust the humidification for maximum comfort as you are adjusting to CPAP.  

3.  Make sure there are no air leaks -

In order for CPAP to work there has to be air pressure transmitted into the upper airway to maintain a splinting effect and prevent obstruction.  Air leaks put that pressure at risk and can prevent the effective use of CPAP.  Trying to find air leaks can be frustrating because, even after the fitting by the technician or respiratory therapist, problems occur at home associated with sleep positions.  With the wide array of equipment available it is very unlikely that you will not be able to find a device that works, but in some cases it may take a while.  An APAP device with a readout each morning (see graphic) will tell you if there have been any significant air leaks (100% mask fit = no air leaks).

4.  Get a modern APAP machine with feedback -

APAP is an abbreviation for Automatic Positive Airway Pressure.  This machine is able to sense increasing obstruction and adjust the pressure.  One of the main advantages is that a lower baseline pressure can be used and then, as any obstruction occurs, the device increases the pressure to overcome it.  Standard CPAP devices have the pressure set based on the original sleep study.  In the case of significant obstruction that could mean a constant high pressure.  Constant high pressures can lead to side effects such as ear pain from pressure effects.  The really strong point of APAP devices is that they are generally much more sophisticated pieces of equipment.  They can make the data available over the Internet to a sleep medicine physician who can remotely adjust the settings based on downloaded data.  They also allow the patient to download their data each morning via a smartphone app (see the above graphic) so they know the hours that they wore the device each night, what the pressure settings were, and how many apneic/hypopneic episodes occurred per hour (the AHI or Apnea/Hypopnea Index).
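The manufacturers' titration algorithms are proprietary, but the basic control loop can be caricatured in a few lines of Python.  The thresholds and step sizes below are made up for illustration - they are not the values any real device uses:

```python
def titrate(pressure: float, events_per_hour: float,
            p_min: float = 5.0, p_max: float = 15.0) -> float:
    """One step of a cartoon APAP control loop: raise pressure while
    obstructive events are detected, drift back toward baseline when
    the airway is clear.  Pressures are in cm H2O."""
    if events_per_hour > 5:    # event rate above the treatment target
        pressure += 0.5        # small step up to re-splint the airway
    elif events_per_hour < 1:
        pressure -= 0.2        # ease back toward the comfortable baseline
    return max(p_min, min(p_max, pressure))  # clamp to the prescribed range
```

Run once per sensing interval, a loop like this raises pressure only while events are being detected and otherwise drifts back down - which is exactly why the baseline can be set lower than on a fixed-pressure CPAP.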

5.  Optimize your sleeping position and preparation each night based on the APAP readout - 

The modern APAP allows the individual patient unprecedented control over the treatment of sleep apnea.  With the feedback every morning they can be assured the device is working.  In the previous example, I showed a patient with increasing upper airway obstruction who eventually had some episodes of atrial fibrillation.  He had no idea that his system had air leaks and his AHI was increasing until he developed the atrial fibrillation.  With a new APAP system he would have had immediate feedback on day 1.

Sleep positions can also lead to better APAP/CPAP performance.  With the APAP device, feedback will be there within a few days if side sleeping is better (lower AHI) than back sleeping.  Looking at the readout of an AHI of 1.3 from Monday in the above example, this patient determined that by sleeping on his side he had consistently fewer episodes than if he slept on his back, where his AHIs were all in the 3-5 range.
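That kind of position comparison is trivial to do from the nightly app readouts.  A hypothetical example in Python - the numbers are invented to mirror the case described, not actual patient data:

```python
from statistics import mean

# Hypothetical nightly AHI readouts from the APAP app, grouped by position
nights = {
    "side": [1.3, 1.1, 1.6],
    "back": [3.4, 4.8, 3.9],
}

# Average AHI per position, rounded the way the app displays it
averages = {pos: round(mean(vals), 1) for pos, vals in nights.items()}

# The position with the lowest average AHI
better = min(averages, key=averages.get)
```

A few nights of data per position is enough to see a difference this large; the app does the equivalent bookkeeping automatically.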

The final advantage of knowing that APAP devices exist is that it allows the patient to advocate for themselves.  I don't know if it is widely known, but there are clearly some health plans that only provide CPAP devices to patients diagnosed with OSA.  APAP devices are more expensive and, based on what I have written, it is clear that they are superior devices.

6.  Oral appliances for OSA are inferior to CPAP on measured outcomes like AHI-

I updated this post to include a comment on oral appliances (OA) for OSA based on a question that I received.  I commonly see people who dislike CPAP and use the OA instead.  They claim that it "works better" than CPAP but I doubt it.  It does improve snoring and can reduce the AHI based on that improvement.  The problem is the improvement in AHI is generally not nearly enough to be considered an adequate level of treatment (AHI < 5) (1).  For that reason, expert guidelines recommend the OA for snoring alone or for OSA in the case that the patient is intolerant of CPAP (2).  Advertisements for a dental approach to OSA are commonplace and usually cite the years of experience of the clinician as being the determining factor.  I would recommend considering a sleep study with the OA in place to see just how much the AHI has improved.  In the case of the APAP machine you can read the number off your smartphone app every morning.  Use those numbers to determine the best treatment for your condition.

If you have been newly diagnosed with OSA and prescribed CPAP - be sure that you get a complete discussion of CPAP versus APAP and why your doctor is recommending one over the other.  Ask your sleep medicine physician about the ideal solution rather than what your insurance company covers.  If cost is the only limiting factor - used and resanitized equipment may be an option.

The treatment of OSA with CPAP/APAP has never been better.  Make sure that you get a machine and a system that you are comfortable with and that works.  APAP devices can give you consistent feedback that is easily accessible.  There are some ways that you can hack a CPAP device and read the information on the SD card, but it is much easier to pull up the data with an app.

The immediate daily feedback that you have a working device and the lowest possible AHI is strong reinforcement to keep using it.      

George Dawson, MD, DFAPA


1:  Van Haesendonck G, Dieltjens M, Hamans E, Braem MJ, Vanderveken OM. Treatment efficacy of a titratable oral appliance in obstructive sleep apnea patients: a prospective clinical trial. B-ENT. 2016;12(1):1-8. PubMed PMID: 27097387.

2:  Ramar K, Dort LC, Katz SG, Lettieri CJ, Harrod CG, Thomas SM, Chervin RD. Clinical Practice Guideline for the Treatment of Obstructive Sleep Apnea and Snoring with Oral Appliance Therapy: An Update for 2015. J Clin Sleep Med. 2015 Jul 15;11(7):773-827. doi: 10.5664/jcsm.4858. Review. PubMed PMID: 26094920

 "CPAP is superior to OAs in the measured outcomes and, therefore, should be the first-line option for treating OSA"


I am not a sleep medicine physician and do not prescribe these devices.  The information posted here is based on my experience doing sleep assessments as part of the standard psychiatric evaluation, referring patients for polysomnography, and getting the results of those tests during the treatment of my patients.  In follow up, I have to assist people in the proper use of the equipment and the pitfalls they encounter trying to establish a routine to use CPAP.  I have no competing financial interests of any kind.


The graphic at the top of this post is from the smartphone app that is used to download (via Bluetooth) all of the data on the screen each morning.  It keeps a running bar graph, and rolling over that graphic gives the data for each day.  The data is assembled by a remote server through a wireless connection each day, and the patient's sleep medicine doctor can monitor this data and set the machine remotely without the patient needing to visit the physician's office.

Monday, September 11, 2017

HITECH Editorials in the NEJM...

Since my Labor Day message to colleagues dovetails so well with these editorials, I did not want to miss the opportunity to comment on them.  They appear to be written by people with policy interests in this information technology takeover of clinical medicine.  They are mildly critical but totally miss the mark on what a catastrophe this government rollout has been.  The question any taxpayer should ask is why any other outcome would be expected.  Software and network implementations worldwide and at the level of the US government have led to colossal failures - multibillion dollar investments that were at some point abandoned.  The only difference in this case is that the government is not the actual client.  The federal approach to health care - apart from the brief foray of FBI agents raiding physicians' offices to see if they made any coding violations - is to set up payments for proxies and let them hash things out with providers.  The primitive approach of marginal incentives that are really weighted as penalties is supposed to facilitate the whole mess.  The mess would get implemented either way if you ask me.  There are tens of thousands of senior and mid-level health care executives chomping at the bit for a project like this to mismanage.  And they have mismanaged it well.  Government leverage makes it difficult to refuse.

The initial editorial by Washington and co-authors (1) focuses on the success of getting hospitals and physicians on the electronic health record (EHR).  They present a graphic showing the steep increase in EHR use over the 2004 to 2015 period.  The acute care hospital curve ends at essentially 98-100% for certified EHRs and office-based practices are at 90%.  The article rightly points out that physicians have borne the brunt of the implementation and how physicians are frustrated by the lack of "actionable information generated by these systems".  The article discusses the need for the "seamless flow of electronic information" in a couple of places.  It describes how the EHR could be useful in research.  It ends on a vague note that there is still a lot of work to be done and maybe that will happen some day.

The second piece by Halamka and Tripathi (2) starts out on a more realistic note.  Top-down implementation gave physicians inadequate tools and then blamed them for being reluctant.  Technically, physicians were not reluctant because they did not have a choice.  In most systems, administrators made all of the purchasing decisions, overhyped the software, and let it be known that contrary opinions were never appreciated.  It was up to physicians to learn how to use the stuff no matter how time consuming it was.  They point out that some measures were enacted on top of the clinical workload that made the situation worse.  They include the longest sentence I have recently seen in a journal article, but it does cite a fair number of the problems:

"Soon physicians were expected to provide high-quality and empathic care in a 12-minute visit while weaning themselves from paper-based workflows, entering the numerous structured data elements required for meaningful use, rolling out new HIPAA privacy notices, implementing security protections for new electronic data, learning and incorporating new ICD-10 billing codes, and convincing their patients to use patient portals and secure e-mail, all while avoiding safety and malpractice issues." (p 907).

At one point they make the argument that health care organizations have moved to "value-based purchasing".  Was that applied to the EHR?  Is there anyone today who would suggest that any EHR currently sold in this country is a value-based proposition rather than a product of HITECH legislation?  In their conclusion they suggest that now that all of these systems are installed, the government can afford to pull back, simplify requirements, and let market effects shape some of the metrics like interoperability.  They suggest that returning control to the customers is a path to "recapturing the hearts and minds of our clinicians."

The government heavy aspect of these editorial pieces cannot be denied.  It is more of the same "we are from the government and we are here to help you whether you want us to or not."  Here are a few aspects of this roll out that the HITECH legislation either missed or made a lot worse:

1.  Incredible cost - 

Enterprise-wide systems are incredibly expensive both up front and for the annual licensing and maintenance fees.  That does not include any modification of the system - that will typically cost more.  Once a health system has bought in - it is difficult to shop around and come up with a better deal.  In some areas one company has a monopoly on the enterprise.  In many cases the systems are marketed as being much easier to use than they are.  Support is huge in the implementation phases and drops off in a hurry.  Subsequent modifications - even if they are easy to make - cost large sums of money.  In some cases the vendors demonstrate whiz-bang technology like seamless integration with voice recognition systems.  The customers often find out that those options don't work well with their systems or are available only as a high-priced option.

In many organizations the EHR budget (combined with other federal cost-cutting measures) is a fixed drain on the budget.  If revenues fall, layoffs can occur just to keep the EHR running.  In private practices, the up-front and monthly licensing fees are no less of a burden.  There are some "free" EHRs that are funded by advertising or research, but no standard comparison or guidance for any clinic that needs to implement one.  The total budget of these costs would be interesting to see, but I have never been able to find a good reference.  Health systems typically describe their margins in the low single digits.  If that is true, an EHR system costing tens of millions up front, with tens of millions more in maintenance, is clearly a tremendous drain on the system.

2.  IT implementation is poor -

I don't know what percentage of physicians have seen their EHR rolled out in a way that does not optimize clinical utility.  Working physicians need the most rapid route to incorporate the EHR into their workflow.  That includes software that works, software that is efficient, and ideally software that is smart enough to allow individual physicians to analyze trends in the same patient or groups of patients that will allow better diagnosis and treatment.  The IT implementation is also frequently biased toward administration rather than clinicians.  Many clinicians are surprised to find that someone is counting their mouse clicks as a way to measure productivity and that the EHR charts they access are monitored.  This is another significant cost that nobody ever seems to discuss.  The most egregious implementation error is when a software change is made on the fly and the physicians are given a heads up but no training - they are expected to learn the change as they go.  I have always found the illusion of assistance with the EHR interesting.  For the first few months there are always superusers and the factory reps clamoring to help you out.  They gradually fade into the background and you are left with a very poor piece of software.

3.  Software quality is poor - 

As far as I can tell current EHR programs are designed to deliver lab and imaging data, generate documentation and reports, and perform a billing and coding function.  They do a fair job with the labs and imaging details. Documentation is very labor intensive and poorly done.  It adds hours per day to the physician's work flow and has necessitated the hiring of scribes and retired physicians just to keep up with the documentation tasks.  It is common that EHRs cannot be accessed by outside physicians and when that happens - the printout sent to those physicians is poorly structured and extremely content poor.

On the authorship side - a basic goal should be to produce a document fairly quickly that appears to have been written by an intelligent being.  As anyone who has read EHR entries or reports knows, that is not typically the case.  There are extremes at either end.  You can find notes that are basically a series of check boxes or you can find 18-page notes where the author imported everything that they could into the note because that is one of the few things (in some EHRs) that you can do quickly.  Neither approach is helpful in terms of continuity of care or developing rational treatment for a patient.  Having used EHRs for the past 15 years - I can attest to clunky editing and incompatibility with voice recognition systems as being major drawbacks.  The text fields of some EHRs only work with their own microscopic and very slow editing tools.  It is impossible to set a cursor anywhere in the field to produce the document.  Using this twenty times a day when you are used to working with functional word processors is maddening.  Some systems of care set a font that looks like it is out of the 1950s and that is how the final document appears.

Every physician was appointed (under penalty of law) to be their own billing and coding specialist.  Sure, every hospital and clinic has some billing and coding specialists, but today they are there basically to audit the work of physicians.  In the EHR this translates to a tedious search for the diagnoses, listing them in the right priority, and signing off on the diagnostic and billing codes.  This can take up to an additional 20 mouse clicks per encounter.  Even if you can do that in 2 minutes - times 4000 encounters per year - that equals another 133 hours per year.  That is work added just to maintain the EHR.  Before the EHR, billing and coding could be completed in about 10% of the time.
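The arithmetic is easy to check using the figures in the paragraph above:

```python
clicks_per_encounter = 20     # extra mouse clicks for diagnoses and billing codes
minutes_per_encounter = 2     # optimistic time estimate per encounter
encounters_per_year = 4000

extra_hours = minutes_per_encounter * encounters_per_year / 60   # about 133 hours
extra_clicks = clicks_per_encounter * encounters_per_year        # 80,000 clicks
```

That is more than three 40-hour weeks per year of pure coding overhead - before any documentation time is counted.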

All the time physicians are engaged in these inefficient EHR based practices they are hearing how the EHR is such an advance in efficiency and productivity.              

4.  Hardware infrastructure/software is running 24/7 - 

Before the modern EHR, there were a limited number of workstations per hospital and most of them were shut down at night.  Now there are thousands of workstations and storage arrays in large organizations running 24/7.  They can't be shut off because of frequent software updates.  Nursing and medical staff can easily be observed spending most of their workday at computers rather than talking with patients and families.  Before the current EHR, a physician would typically look at a computer screen to review the labs and possibly the MAR (record of meds given).  Now staring at a computer screen most of the time is the norm.  The EHR-dominant approach has increased the electrical bill and reduced time spent with patients at the same time.

5.  A question of security - 

There have been well publicized leaks of large numbers of patient files and more recent ransomware attacks.  Security in most software systems has historically been an afterthought.  I have not seen any specific problems with EHR software but this tip sheet from CMS points out the potential complexity of the situation.  The security problem is also more urgent for healthcare sites that are under more stringent privacy requirements like 42 CFR Part 2.    

Those are a few of my ideas about the rapid deployment of the EHR.  Unlike the authors I am very skeptical of any drastic improvements on the horizon.  If you can't make an EHR that will produce a coherent report with information content at least equal to an old admission or discharge note that is a major problem.  If you can't produce an EHR that allows for some intelligent analysis of data without going through the entire record and reading every text note that is a major problem.  Sure - access to labs is nice, but we had computer access to labs before the EHR.  Patient access is also nice, but let's be honest - it is limited and doesn't address what patients really want - quality health care.

About the only thing that I agree with the authors on is that the physician needs to be put back into the loop.  But that hides the very basic fact that physicians were intentionally taken out of the loop thirty years ago when politicians decided that they could be replaced by managed care administrators.

When you look at it from that perspective the massive problems with the current EHR - make perfect sense.

George Dawson, MD, DFAPA


1:  Washington V, DeSalvo K, Mostashari F, Blumenthal D. The HITECH Era and the Path Forward. N Engl J Med. 2017 Sep 7;377(10):904-906. doi: 10.1056/NEJMp1703370. PubMed PMID: 28877013.

2:  Halamka JD, Tripathi M. The HITECH Era in Retrospect. N Engl J Med. 2017 Sep 7;377(10):907-909. doi: 10.1056/NEJMp1709851. PubMed PMID: 28877012.