
Sunday, September 20, 2015

Ioannidis - Why His Landmark Paper Will Stand The Test of Time

John P. A. Ioannidis came out with an essay in 2005 that is a landmark of sorts.  In it he discussed the concern that most published research is false and the reasons behind that observation (1).  That led to some responses in the same publication about how false research findings could be minimized or in some cases accepted (2-4).  Anyone trained in medicine should not find these observations to be surprising.  In the nearly 30 years since I was in medical school, findings have come and findings have gone.  Interestingly, I first heard that idea from a biochemistry professor who was charged with organizing all of the medical students into discussion seminars where we critiqued current research from a broad spectrum of journals.  His final advice to every class was to make sure that you kept reading the New England Journal of Medicine for that reason.

Many people have an inaccurate view of science, especially as it applies to medicine.  They think that science is supposed to be true and that it is a belief system.  In fact science is a process, and initial theories are supposed to be the subject of debate and replication.  If you look closely in the discussion of any paper that looks at correlative research, you will invariably find the researchers saying that their research is suggestive and that it needs further replication.  In the short time I have been writing this blog, asthma treatments, the Swan-Ganz catheter, and the diagnosis and treatment of acute bronchitis and acute exacerbations of chronic obstructive pulmonary disease have all provided clear examples of how theories and research about the old standard of care necessarily change over time.  It is becoming increasingly obvious that reproducible research is in short supply.

Ioannidis provided six corollaries with his original paper.  The first four, concerning statistical power, effect size, the number of relationships tested, and the degree of design flexibility, are all relatively straightforward.  The last two corollaries are more focused on subjectivity and are less accessible.  I think it is common when reading research to look at the technical aspects of the paper and all of the statistics involved and forget about the human side of the equation.  From the paper, his fifth corollary follows:

"Corollary 5: The greater the financial and other interests and prejudices in a scientific field, the less likely the research findings are to be true. Conflicts of interest and prejudice may increase bias, u.  Conflicts of interest are very common in biomedical research [26], and typically they are inadequately and sparsely reported [26,27].  Prejudice may not necessarily have financial roots.  Scientists in a given field may be prejudiced purely because of their belief in a scientific theory or commitment to their own findings.  Many otherwise seemingly independent, university-based studies may be conducted for no other reason than to give physicians and researchers qualifications for promotion or tenure.  Such non-financial conflicts may also lead to distorted reported results and interpretations.  Prestigious investigators may suppress via the peer review process the appearance and dissemination of findings that refute their findings, thus condemning their field to perpetuate false dogma. Empirical evidence on expert opinion shows that it is extremely unreliable [28]"  all from Reference 1.

The typical conflict of interest arguments seen in medicine have to do with financial conflicts of interest.  If the current reporting database is to be believed, those conflicts may be considerable.  A commentary from Nature earlier this month (5) speaks to the non-financial side of conflicts of interest.  The primary focus is on reproducibility as a marker of quality research.  They cite findings that two-thirds of members of the American Society for Cell Biology were unable to reproduce published results and that pharmaceutical researchers were able to reproduce the results from one-quarter or fewer of high-profile papers.  They cite this as the burden of irreproducible research.  They touch on what scientific journals have done to counter some of these biases, basically checklists of good design and more statisticians on staff.  That may be the case for Science and Nature, but what about the raft of online open access journals that not only have a less rigorous review process but in some cases require the authors to suggest their own reviewers?

A central piece of the Nature article was a survey of 140 trainees at the MD Anderson Cancer Center in Houston, Texas.  Nearly 50% of the trainees reported mentors requiring trainees to have a high-impact paper before moving on.  Another 30% felt pressured to support their mentor's hypothesis even when the data did not support it, and about 18% felt pressured to publish uncertain findings.  The authors suggest that the home institutions are where the problem lies, since that is where the incentive for this behavior originates.  They say that the institutions themselves benefit from the perverse incentives that lead researchers to accumulate markers of scientific achievement rather than high-quality reproducible work.  They want the institutions to take corrective steps toward research that is more highly reproducible.

One area of bias that Ioannidis and the Nature commentators are light on is the political bias that seems to preferentially affect psychiatry.  If reputable scientists are affected by the many factors previously described, how might a pre-existing bias against psychiatry, various personal vendettas, a clear lack of expertise and scholarship, and a strong financial incentive in marshaling and selling to the antipsychiatry throng work out?  Even if there is a legitimate critic in that group, how would you tell?  And even more significantly, why is it that no matter what the underlying factors, conspiracy theories are the inevitable explanations rather than any real scientific dispute?  Apart from journalists, I can think of no group of people more committed to their own findings, or to the theory that monolithic psychiatry is the common evil creating all of these problems, than the morally indignant critics who like to tell us what is wrong with our discipline.  Knowing their positions and, in many cases, their over-the-top public statements, why would we expect them, after sifting through thousands of documents, to produce a result other than the one they would like to see?

I hope that there are medical scientists out there who can move past the suggested checklists and institutional controls for bias.  I know that this is an oversimplification and that many can.  Part of the problem in medicine and psychiatry is that there are very few people who can play in the big leagues.  I freely admit that I am not one of them.  I am, at best, a lower-tier teacher of what the big leaguers do.  But I do know that a basic problem with clinical trials is a lack of precision.  Part of that follows from Ioannidis' explanations, but in medicine and psychiatry a lot has to do with measurement error.  Measuring syndromes by very approximate means, or collapsing some of the measurements into gross categories that may more easily demonstrate an effect, may be a way to get regulatory approval from the FDA, but it is not a way to do good science or produce reproducible results.
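To make the measurement point concrete, here is a toy simulation, entirely my own construction with made-up effect sizes, of what collapsing a continuous rating scale change into responder categories can do to a comparison:

```python
import numpy as np
from scipy import stats

# Toy simulation (my own numbers): compare a continuous analysis of
# rating scale changes with the same data collapsed into categories.
rng = np.random.default_rng(42)
drug = rng.normal(8.0, 7.0, 150)      # simulated HAM-D point decreases
placebo = rng.normal(5.5, 7.0, 150)

# Continuous comparison of the score changes:
_, p_continuous = stats.ttest_ind(drug, placebo)

# The same data collapsed to "responder" (>= 8 point decrease) or not:
cut = 8.0
table = [[(drug >= cut).sum(), (drug < cut).sum()],
         [(placebo >= cut).sum(), (placebo < cut).sum()]]
_, p_categorical, _, _ = stats.chi2_contingency(table)

print(f"continuous p = {p_continuous:.4f}")
print(f"categorical p = {p_categorical:.4f}")
```

Across repeated runs the categorical comparison generally loses power relative to the continuous one, because the cut point discards information about how far each patient actually moved.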


George Dawson, MD, DFAPA




References:  

1:  Ioannidis JPA (2005) Why Most Published Research Findings Are False. PLoS Med 2(8): e124. doi:10.1371/journal.pmed.0020124

2:  Moonesinghe R, Khoury MJ, Janssens ACJW (2007)  Most Published Research Findings Are False—But a Little Replication Goes a Long Way. PLoS Med 4(2): e28. doi:10.1371/journal.pmed.0040028

3:  Djulbegovic B, Hozo I (2007)  When Should Potentially False Research Findings Be Considered Acceptable? PLoS Med 4(2): e26. doi:10.1371/journal.pmed.0040026

4:  The PLoS Medicine Editors (2005) Minimizing Mistakes and Embracing Uncertainty. PLoS Med 2(8): e272. doi:10.1371/journal.pmed.0020272

5:  Begley CG, Buchan AM, Dirnagl U. Robust research: Institutions must do their part for reproducibility. Nature. 2015 Sep 3;525(7567):25-7. doi: 10.1038/525025a. PubMed PMID: 26333454.


Friday, July 24, 2015

Depression and the Genetics Of Large Combinations

Extended Data Figure 1, from: CONVERGE consortium. Nature. 2015 Jul 15. doi: 10.1038/nature14659. [Epub ahead of print] - see complete reference 1 below.



This is an interesting effort from a large number of researchers looking at candidate genes in major depression. The authors studied major depressive disorder (MDD) in 5,303 Han Chinese women selected for recurrent major depression, compared with 5,337 Han Chinese women screened to rule out MDD. The depressed subjects were all recruited from provincial mental health centers and psychiatric departments of general hospitals in China. The controls were recruited from patients undergoing minor surgical procedures in general hospitals or from local community centers. All of the subjects were Han Chinese women between the ages of 30 and 60 with four Han Chinese grandparents. The MDD sample had two or more episodes of MDD by DSM-IV criteria. The diagnoses were established by computerized assessments conducted by postgraduate medical students, junior psychiatrists, or senior nurses trained by the CONVERGE team. The interview was translated into Mandarin. Exclusion criteria included other serious medical or psychiatric morbidity (see details in reference 1).

Whole genome sequences were acquired from the subjects and 32,781,340 SNPs were identified, of which 6,242,619 were included in the genome-wide association study (GWAS). Figure 1 above is the quantile-quantile plot for the GWAS analysis resulting from "a linear mixed model with genetic relatedness matrix (GRM) as a random effect and principal components from eigen-decomposition of the GRM as fixed effect covariates." I won't pretend to know that methodology in detail, even after reading the Methods and Supplementary Notes sections. I expect that it would take a more detailed explanation, and in an era of essentially unlimited online storage capacity, I would like to see somebody post one with examples. Without it, unless you are an expert in this type of analysis, you are forced to accept it at face value. I am skeptical of manipulations of data points that provide a hoped-for result and can cite any number of problems related to that approach. On the other hand, information of this magnitude probably requires a specialized approach.
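I cannot reproduce the consortium's mixed model here, but the building blocks they name, a genetic relatedness matrix and principal components from its eigen-decomposition, are easy to illustrate. A toy sketch with simulated genotypes (my own construction, not their pipeline):

```python
import numpy as np

# Toy illustration: build a genetic relatedness matrix (GRM) from a
# genotype matrix and extract the leading principal components.
rng = np.random.default_rng(0)
n_subjects, n_snps = 100, 1000

# Simulated genotypes coded 0/1/2 (minor allele counts).
freqs = rng.uniform(0.05, 0.5, n_snps)
genotypes = rng.binomial(2, freqs, size=(n_subjects, n_snps)).astype(float)

# Standardize each SNP by its expected mean and variance under
# Hardy-Weinberg equilibrium (the usual GRM construction).
Z = (genotypes - 2 * freqs) / np.sqrt(2 * freqs * (1 - freqs))

# GRM: average relatedness across SNPs.
grm = Z @ Z.T / n_snps

# Eigendecomposition of the GRM; the leading eigenvectors are the
# principal components used to adjust for population stratification.
eigvals, eigvecs = np.linalg.eigh(grm)
top_pcs = eigvecs[:, ::-1][:, :10]   # ten leading PCs
print(top_pcs.shape)                 # (100, 10)
```

The leading eigenvectors capture broad population structure, which is why they can serve as fixed effect covariates while the full GRM models residual relatedness as a random effect.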

In this case the authors found two loci on chromosome 10 that contributed to the risk of MDD. They replicated the findings in an independent sample. 



One of the features that I liked about this paper was the focus on patients with severe depression. I have lost count of the number of papers I have read where the depression rating scores were what I consider low to trivial. Many clinics seem to use these same rating systems for determining who gets an antidepressant and who does not.  Whenever I see that, I am always reminded of the "biological psychiatry versus psychotherapy" debates that existed when I was in training in the 1980s.  One of my favorite authors at the time was Julien Mendlewicz, and I read anything he would publish in the Journal of Clinical Endocrinology and Metabolism (4-6).  There is a table in one of his studies in which the HAM-D scores of the patients with unipolar depression ranged from 30 to 57 with a mean of 41 +/- 10.  For bipolar patients in the same study the range was 30 to 43 with a mean of 36 +/- 5.  One of those patients could not be rated initially because of severe psychomotor retardation.  These are levels of depression that are rarely seen in basic science depression research and probably never in psychopharmacological research.  Much of the research that I am aware of allows for the recruitment of patients with HAM-D scores in the high teens and low 20s.  I don't think that is the best way to run experiments on biologically based depressions or antidepressant medications, but there is rarely any commentary on it.  The CONVERGE consortium in this paper finally comments on this factor as a useful experimental approach, even though Mendlewicz was using it in the 1980s.

The second issue that crops up in the paper is replication.  The authors validate their original work by running a second, independent sample.  That is the approach we would use in analytic chemistry: with a new technique we would run samples in triplicate, or in extreme cases in sets of five, to make sure we could replicate the analysis.  It reminded me of one of the first great genetic marker papers in the field, published in the New England Journal of Medicine by Elliot Gershon's lab in 1984 (2).  It was an exciting proposition to consider that fibroblasts could be grown from a skin biopsy and that the muscarinic cholinergic receptor in those fibroblasts would be a marker for familial affective disorder.   The general observation in this pilot study of 18 patients was that they had an increased muscarinic receptor density in fibroblasts compared to controls, and that relatives with histories of minor depression had receptor densities more similar to the subjects with mood disorders than to normal controls.  The subjects with familial affective disorder were defined as subjects with bipolar I, bipolar II, or major depression according to Research Diagnostic Criteria (RDC).  No rating of depression severity was made acutely or on a historical basis.  These findings could not be replicated, in the end even by the original lab.  That process played out in the pages of the New England Journal of Medicine (3) and the original findings were withdrawn.  It would be interesting to look at how often a similar debate occurs in a prestigious journal these days.  Estimates of non-replicable findings by the pharmaceutical industry suggest that it should happen a lot more often.

In terms of the original paper, the sheer amount of information involved in the genetic code is staggering.  Just looking at the 130 million base pairs on chromosome 10 and thinking about combinations of 2, 3, 4, 5, or 6 base pairs yields the numbers in the table below entitled "Combinations of 130 million base pairs."  The exponential notation ranges from 10^15 to 10^45, or a quadrillion to a quattuordecillion combinations.  Figuring out the best way to determine which combinations are relevant in illnesses with polygenic inheritance will be an interesting process.
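The arithmetic behind that table is easy to check directly, assuming the roughly 130 million base pair figure for chromosome 10:

```python
from math import comb

# Number of ways to choose k base pairs from the ~130 million on
# chromosome 10 (the figures discussed above).
N = 130_000_000
for k in range(2, 7):
    print(f"k = {k}: {comb(N, k):.3e}")
```

The counts run from about 8 x 10^15 for pairs to about 7 x 10^45 for sets of six, which is where the quadrillion to quattuordecillion range comes from.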
  

George Dawson, MD, DFAPA



References:

1:  CONVERGE consortium. Sparse whole-genome sequencing identifies two loci for major depressive disorder. Nature. 2015 Jul 15. doi: 10.1038/nature14659. [Epub ahead of print] PubMed PMID: 26176920.

2:  Nadi NS, Nurnberger JI Jr, Gershon ES. Muscarinic cholinergic receptors on skin fibroblasts in familial affective disorder. N Engl J Med. 1984 Jul 26;311(4):225-30. PubMed PMID: 6738616.

3:  Failure to confirm muscarinic receptors on skin fibroblasts. N Engl J Med. 1985 Mar 28;312:861-862. PubMed PMID: 3974670.

4:  Linkowski P, Mendlewicz J, Kerkhofs M, Leclercq R, Golstein J, Brasseur M, Copinschi G, Van Cauter E. 24-hour profiles of adrenocorticotropin, cortisol, and growth hormone in major depressive illness: effect of antidepressant treatment. J Clin Endocrinol Metab. 1987 Jul;65(1):141-52. PubMed PMID: 3034952.

5:  Linkowski P, Mendlewicz J, Leclercq R, Brasseur M, Hubain P, Golstein J, Copinschi G, Van Cauter E. The 24-hour profile of adrenocorticotropin and cortisol in major depressive illness. J Clin Endocrinol Metab. 1985 Sep;61(3):429-38. PubMed PMID: 2991318.

6:  Mendlewicz J, Linkowski P, Kerkhofs M, Desmedt D, Golstein J, Copinschi G, Van Cauter E. Diurnal hypersecretion of growth hormone in depression. J Clin Endocrinol Metab. 1985 Mar;60(3):505-12. PubMed PMID: 4038712.


Attribution:

Extended Data Figure 1 is from: CONVERGE consortium. Sparse whole-genome sequencing identifies two loci for major depressive disorder. Nature. 2015 Jul 15.  With Permission from Nature Publishing Group  © 2015.  License number 3672900044284.

Supplementary 1:  Combinations of 130 million base pairs (values computed from C(130,000,000, k)):

k = 2:  8.45 x 10^15
k = 3:  3.66 x 10^23
k = 4:  1.19 x 10^31
k = 5:  3.09 x 10^38
k = 6:  6.70 x 10^45





Monday, March 30, 2015

The Luck Of The Ethical Researcher


"My point here is that when discussing an actual case, the ideological wars melt and people from multiple sides of a debate can usually agree. 'Clinician trumps Ideology.'"

From 1BOM March 30, 2015 post.



I am not sure that I follow that line of thinking.  That has not been my experience in psychiatry or any other medical specialty.  There is plenty of ideology and a lack of technology across the board.  There is also the dirty little word that nobody likes to see affiliated with medicine: politics.  As far as I can tell, a lot of the ethical debates in medicine are really politics.  I can point out several on this blog.

There is also the question of uncertainty.  I can recall being a grunt in a new drug protocol that I will not name, but I will say it was in a therapeutic class almost never prescribed by psychiatrists.  My job was to do the medical and psychiatric evaluations and ensure that the patients were medically fit to continue the protocol.  Part of the weekly screening was an ECG.  I looked at this patient's ECG, determined that it had changed, and told the monitor that I was stopping the protocol.  The monitor got very angry with me because the patient was two-thirds of the way through the protocol and would not count as a completed patient.  I referred the patient immediately to a medicine clinic and they agreed the ECG had changed.  The patient was advised to come back for routine follow-up care.  They could not comment on the study drug and they did not recommend any acute care.  The monitor remained angry, but I stood my ground and the patient was taken out of the study and referred back to medicine.

A week later the patient had a major medical complication and ended up in the ICU. The monitor and the chief investigator both thanked me for taking the patient out of the protocol at that time – one week later.  The monitor apologized for getting irate with me.

So the rub is: am I more "ethical" than the monitor (who was not an MD) or am I just lucky?  Uncertainty can make you look like a hero or a zero in a hurry in medicine.  In this case an internist did not have any reason for concern even though the ECG was clearly different.  Was the ECG change causally connected to the ICU incident?  Was it causally connected to the study medication?  Or was the decision to stop the protocol more related to my blue-collar anti-authoritarian roots?  To this day nobody knows (but as I age I am more inclined to credit the roots).

And what if there had been no markers and the person had stayed in the protocol and ended up in the ICU on the study medication?  Certainly the company and the FDA would have investigated the study, me, and my methods.  Would I have been vilified as just another researcher working in the interest of a pharmaceutical company?  Would it have been good press for somebody trying to benefit at my expense?  My only thoughts at the time were in the interests of the patient.  But that difference in course could have been career-changing for me, despite the fact that my only interest then and in the past 30 years has been patient safety.

Situations like this are easily politicized and there is a very porous boundary between politics and ethics.


George Dawson,  MD, DFAPA



Supplementary 1:  For the whole story go to the 1BOM blog and start reading at the link.