Thursday, 26 July 2012

Ecstasy ~ Russian Roulette with your Memory




‘The general agreement that is emerging about ecstasy is that while you are using the drug, you might expect a very subtle memory impairment but it’s probably not significant in the real world ...[and] When you stop using it, as most people do, things go back to the way they were.’
-Professor V Curran, Daily Mail 26/7/2012
To E or not to E - that is the question. I felt compelled to say something, triggered by an article in today's Daily Mail: "Ecstasy tablets are far more harmful than previously thought - and taking just ten can cause brain damage". The article is based on a unique and important paper by Wagner and colleagues entitled "A prospective study of learning, memory, and executive function in new MDMA users" (to be published shortly in Addiction).

In the context of the obvious methodological limitations associated with studying recreational drug use, this research is unique insofar as it is a prospective cohort study: it recruited non-users who said they intended to use E during the next 12 months and followed them up. The users and non-users were well matched, and drug use was confirmed by hair testing in a random sample (one-third) for MDMA, 3,4-methylenedioxyamphetamine (MDA), 3,4-methylenedioxy-N-ethylamphetamine (MDEA), amphetamine, methamphetamine and cannabinoids.

This is a more intriguing study of E than most because the participants were "almost MDMA naive" (a lifetime maximum of 5 Ecstasy tablets) and had no history of
"...ingestion of any other illicit psychotropic substances besides cannabis on more than five occasions before the day of the first examination; a history of alcohol misuse (according to DSM-IV criteria, APA 1994); and regular medication (except for contraceptives)...Main inclusion criterion at baseline was a high probability of future ecstasy use"
- So, a remarkable sample of pretty clean-living individuals until now!

Over the 12 months, 23 participants used 10 or more Ecstasy tablets (mean = 33.6, range 10-62; mean occasions 13.5, range 4-36). The remaining individuals used Ecstasy at least once, but fewer than 10 times, and were excluded from the analyses. The controls consisted of 43 participants who had not indulged in any recreational drug use during the year or previously.

The groups (E users and non-users) did not differ in gender distribution, age, years of education, duration of cannabis use before the 1st assessment, duration of cannabis use between the 1st and 2nd assessments, mean days since last cannabis use at the 2nd assessment, or Raven's Matrices (a measure of general intellectual reasoning).

Neurocognitive testing
Wagner et al gave participants a range of executive and memory tests. The main findings were no significant differences between E users and non-users on a measure of verbal learning and memory (auditory verbal learning task), but moderate to large effects on a measure of visual paired-associate learning and memory (LGT3) for both immediate and delayed testing (effect sizes -.667 and -.746: see Table 1).

Table 1. Memory in Ecstasy users and non-users

Conclusions
Wagner and colleagues conclude there were:
"Significant effects of immediate and delayed recall of a paired associates learning task between subjects who used 10 or more ecstasy pills and subjects who did not use any illicit substance apart from cannabis during the course of the year (non-users) were found. No significant differences were found on any of the other neuropsychological tests."
Now, it doesn't surprise me that Wagner et al found moderate to large effects of E on memory. In 2007, along with Joy Kokkalis, I wrote Ecstasy (MDMA) and memory function: a meta-analytic update. We meta-analysed 28 published studies assessing the effects of E on memory in over 600 users and 400 non-users. We reported large impairments of verbal short-term memory (d = .63) and long-term memory (d = .87) - equivalent to performance in over three-quarters of ecstasy users being worse than that of non-ecstasy-using controls. Crucially, the degree of memory impairment was unrelated to the lifetime number of tablets consumed (the mean ranged from 16 to 902 tablets across studies). By contrast, we found no effect of E on visual memory, but did find that concurrent cannabis use impacted specifically on visual memory.
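As a rough, back-of-the-envelope illustration of what effects of this size mean (my own sketch, not taken from either paper), Cohen's d can be converted into the proportion of ecstasy users expected to score below the average control participant (Cohen's U3), assuming normally distributed scores with equal variances in the two groups:

```python
# Illustrative only: convert Cohen's d into the proportion of the user group
# expected to score below the control-group mean (Cohen's U3), assuming
# normally distributed scores with equal variances in the two groups.
from scipy.stats import norm

for label, d in [("verbal short-term memory", 0.63), ("verbal long-term memory", 0.87)]:
    u3 = norm.cdf(d)  # proportion of users falling below the control-group mean
    print(f"{label}: d = {d:.2f} -> ~{u3:.0%} of users score below the average control")
```

On those assumptions, d = .63 corresponds to roughly 74%, and d = .87 to roughly 81%, of users scoring below the average control participant.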

Robert E Niro: the Deer Hunter


Our finding that lifetime dose did not predict the degree of memory impairment suggests a non-linear relationship - taking E may be akin to playing Russian Roulette with your memory. For some, a large number of tablets may be required before memory is impaired; for others, very few may be necessary. The reasons for these individual differences remain unknown to scientists and, most importantly, unknown to anyone using E.

and Final...E
I started by saying that the Daily Mail article prompted me to blog this - not necessarily because the journalist got anything wrong, but because I couldn't see how Professor Val Curran (from UCL) could justifiably make the assertions quoted at the top of this post. In addition to our meta-analysis, at least five other meta-analyses (Verbaten, 2003; Nulsen et al, 2010; Zakzanis et al, 2007; Kalechstein et al, 2007; and Murphy et al, 2012) have reported moderate to large memory impairment associated with recreational Ecstasy use.

When talking about recreational drugs, it is easy to be misrepresented in the press (as I have discovered). Nevertheless, in our minds, quotes from well-respected scientists are assumed to be reliable... and in the area of recreational drug use, it is all too easy to see how pithy quotes regularly shape public thinking - even when the empirical evidence runs in the contrary direction.

I am beginning to wonder if the recent problems in psychology (false data, retractions and so on) reflect a problem that is all too peculiar to psychologists - too many experts offering too many opinions. As scientists who work with people, we are often asked to comment on surprising findings concerning human behaviour, the brain... life and the universe. Sometimes that is fine, because we know the literature and have read the paper; at other times, experts clearly have not read the paper and do not seem to know the literature, but still offer an opinion. I am convinced that offering what are essentially nothing more than personal or lay opinions would be far less likely in other sciences. Another possibility, of course, is that the expert studiously avoids citing evidence because of a preferred personal conviction - science?

So, is there general agreement that the memory problems associated with E are subtle, do not impact on the real world, and return to normal?
You used to slide down the carpeted stairs
Or down the banister
You stuttered like a kaleidoscope
'Cause you knew too many words
The Magnetic Fields - 'Take Ecstasy with me'
References

Kalechstein AD, De La Garza R 2nd, Mahoney JJ 3rd, Fantegrossi WE, Newton TF (2007). MDMA use and neurocognition: a meta-analytic review. Psychopharmacology, 189, 531–537.

Laws KR, Kokkalis J (2007). Ecstasy (MDMA) and memory function: a meta-analytic update. Human Psychopharmacology: Clinical and Experimental, 22, 381–388.

Murphy PN, Bruno R, Wareing M, Ryland I, Fisk JE, Montgomery C (2012). The effects of ecstasy (MDMA) on visuospatial memory performance: findings from a systematic review with meta-analysis. Human Psychopharmacology: Clinical and Experimental, 27, 113–138.

Nulsen CE, Fox AM, Hammond GR (2010). Differential effects of ecstasy on short-term and working memory: a meta-analysis. Neuropsychology Review, 20, 21–32.

Wagner D, Becker B, Koester P, Gouzoulis-Mayfrank E, Daumann J (in press). A prospective study of learning, memory, and executive function in new MDMA users. Addiction.

Zakzanis KK, Campbell Z, Jovanovski D (2007). The neuropsychology of ecstasy (MDMA) use: a quantitative review. Human Psychopharmacology: Clinical and Experimental, 22, 427–435.

Thursday, 12 July 2012

Negativland: What to do about negative findings?




Elephant in the Room (by Banksy) 


‘This violent bias of classical procedures [against the null hypothesis] is not an unmitigated disaster. Many null hypotheses tested by classical procedures are scientifically preposterous, not worthy of a moment's credence even as approximations. If a hypothesis is preposterous to start with, no amount of bias against it can be too great. On the other hand, if it is preposterous to start with, why test it?'
- Ward Edwards, Psychological Bulletin (1965)


Several interesting blog posts, discussions on Twitter and regular media articles (e.g. Times Higher Education) have recently focused on the role of negative (so-called null) findings and the publish-or-perish culture. Pete Etchells' nice recent blog discusses the pressure to publish in science and how this may lead to quantity being favoured over quality. This discussion was in the context of the recent reports concerning the faking of data by Diederik Stapel and others.

 
Reading Pete's blog, I thought I would write a few lines about a related issue that receives less attention - what can we do to correct for negative findings not being reported, and how do we deal with negative findings when they are reported?



In his blog, Pete gives the following description of how we generally think null findings influence the scientific process:
...if you run a perfectly good, well-designed experiment, but your analysis comes up with a null result, you're much less likely to get it published, or even actually submit it for publication. This is bad, because it means that the total body of research that does get published on a particular topic might be completely unrepresentative of what's actually going on. It can be a particular issue for medical science - say, for example, I run a trial for a new behavioural therapy that's supposed to completely cure anxiety. My design is perfectly robust, but my results suggest that the therapy doesn't work. That's a bit boring, and I don't think it will get published anywhere that's considered prestigious, so I don't bother writing it up; the results just get stashed away in my lab, and maybe I'll come back to it in a few years. But what if labs in other institutions run the same experiment? They don't know I've already done it, so they just carry on with it. Most of them find what I found, and again don't bother to publish their results - it's a waste of time. Except a couple of labs did find that the therapy works. They report their experiments, and now it looks like we have good evidence for a new and effective anxiety therapy, despite the large body of (unpublished) evidence to the contrary.
The hand-wringing about negative or null findings is not new... and worryingly, psychology fares worse than most other disciplines, has done for a long time and (aside from the hand-wringing) does little to change the situation. For example, see Greenwald's 'Consequences of the Prejudice against the Null Hypothesis', published in Psychological Bulletin in 1975. The table below comes from Sterling et al (1995), showing that <0.2% of papers in this sample accepted the null hypothesis (compare with the sample of medical journals below).

 
 Table 1. From Sterling et al 1995




More recently, Fanelli (2010) confirmed the earlier reports that psychology/psychiatry is especially prone to the bias of publishing positive findings. Table 2 below outlines the probability of a paper reporting positive results in various disciplines. It is evident that, compared to every other discipline, psychology fares the worst - being five times more likely than the baseline (space science) to publish positive results! We might ask why psychology, and what effect does it have?





Table 2. Psychology/Psychiatry at the bottom of the league

Issues from Meta-Analysis

It is certainly my experience that negative findings are more commonly published in more medically oriented journals. In this context, the use of meta-analysis becomes very interesting.


The File Drawer effect and Fail-Safes
Obviously, meta-analysis is based on quantitatively summarising the findings that are accessible (which tend, of course, to be those that are published). This raises the so-called file-drawer effect, whereby negative studies may be tucked away in a file drawer because they are viewed as less publishable. It is possible in meta-analysis to statistically estimate the file-drawer effect - the original and still widely used method is the fail-safe statistic devised by Orwin, which essentially estimates how many unpublished negative studies would be needed to overturn a significant effect size in a meta-analysis. A marginal effect size may require just one or two unpublished negative studies to overturn it, while a strong effect may require thousands of unpublished negative studies to eliminate it.
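To make this concrete, here is a minimal sketch of an Orwin-style fail-safe calculation (my own code, with made-up numbers): it asks how many unseen studies averaging a null effect would be needed to drag a pooled effect size down to some negligible criterion value.

```python
# A minimal sketch (my own illustration) of an Orwin-style fail-safe N:
# how many unpublished studies averaging a null effect (d = 0) would be needed
# to drag the pooled effect size down to a negligible criterion value.
def orwin_fail_safe_n(k, mean_d, criterion_d=0.1, missing_d=0.0):
    """k = number of studies in the meta-analysis; mean_d = pooled effect size."""
    if criterion_d <= missing_d:
        raise ValueError("criterion must exceed the assumed effect of the missing studies")
    return k * (mean_d - criterion_d) / (criterion_d - missing_d)

# Hypothetical numbers: a marginal pooled effect is eroded by a handful of
# hidden null studies, whereas a strong effect needs hundreds.
print(orwin_fail_safe_n(k=10, mean_d=0.15))   # 5.0
print(orwin_fail_safe_n(k=28, mean_d=0.87))   # ~216
```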

So, at least we have a method for estimating the potential influence of negative unpublished studies.



Where wild psychologists roam - Negativland by Neu!

 

Funnel Plots: imputing missing negative findings

Related to the first point, we may also estimate the number of missing effect sizes and even how large they might be. Crucially, we can then impute the missing values to see how it changes the overall effect size in a meta-analysis. This was recently spotlighted by Cuijpers et al (2010) in their timely meta-analysis of psychological treatments for depression, which highlighted a strong bias toward positive reporting.

A standard way to look for bias in a meta-analysis is to examine funnel plots of the individual study effect sizes plotted against their sample sizes or standard errors. When no bias exists, studies with smaller error and larger samples cluster around the mean effect size, while smaller samples and greater error variance produce far more variable effect sizes spreading into the tails. Ideally, we should observe a nicely symmetrical inverted-funnel shape.
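For anyone who wants to see what this looks like in practice, here is a minimal, self-contained sketch (simulated data, not any of the datasets discussed in this post) that draws a funnel plot and runs a simple Egger-style regression as an informal check for asymmetry:

```python
# Illustrative simulation (not Cuijpers' data): draw a funnel plot of effect
# sizes against their standard errors, then run a simple Egger-style regression
# of the standardized effect (d / se) on precision (1 / se); an intercept far
# from zero is an informal sign of funnel-plot asymmetry.
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

rng = np.random.default_rng(1)
true_d = 0.4
se = rng.uniform(0.05, 0.5, size=40)        # a mix of large (small se) and small studies
d = rng.normal(true_d, se)                  # observed effect sizes
# Crude publication filter: small studies only get "published" if they look good
published = (se < 0.15) | (d / se > 1.0)
d_pub, se_pub = d[published], se[published]

plt.scatter(d_pub, se_pub)
plt.gca().invert_yaxis()                    # most precise studies at the top of the funnel
plt.xlabel("Effect size (d)")
plt.ylabel("Standard error")
plt.title("Funnel plot of simulated studies after selective publication")
plt.show()

z = d_pub / se_pub
precision = 1.0 / se_pub
slope, intercept, r, p, stderr = stats.linregress(precision, z)
print(f"Egger-style intercept = {intercept:.2f} (well away from 0 suggests asymmetry)")
```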

Turning to Cuijpers' paper, the first funnel plot below is clearly asymmetrical, showing a lack of negative findings (on the left side). Statistical techniques now exist for imputing, or filling in, these assumed missing values (see the lower figure, where this has been done). The lower funnel plot gives a more realistic picture and adjusts the overall effect size downwards as a consequence (Cuijpers imputed 51 missing negative findings - the dark circles), which reduced the effect size considerably, from 0.67 to 0.42.
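To give a feel for why imputing mirrored studies pulls the estimate down, here is a deliberately crude caricature of the 'fill' step with hypothetical numbers (this is not the full trim-and-fill procedure used in practice, which also estimates how many studies are missing and re-centres the funnel before filling):

```python
# A crude caricature of the "fill" idea (hypothetical numbers): mirror the most
# extreme effects about the pooled mean, as stand-ins for the missing negative
# studies, and re-pool to see the estimate shift downwards.
import numpy as np

d  = np.array([0.90, 0.80, 0.75, 0.70, 0.60, 0.55, 0.50, 0.45])  # published effects
se = np.array([0.40, 0.35, 0.35, 0.30, 0.20, 0.15, 0.12, 0.10])  # their standard errors
w  = 1 / se**2                                                    # inverse-variance weights
pooled = np.sum(w * d) / np.sum(w)
print(f"pooled effect before filling: {pooled:.2f}")

idx = np.argsort(d)[-3:]                        # the three most extreme right-hand studies
d_fill, se_fill = 2 * pooled - d[idx], se[idx]  # their mirror images about the pooled mean
d_all = np.concatenate([d, d_fill])
se_all = np.concatenate([se, se_fill])
w_all = 1 / se_all**2
print(f"pooled effect after filling:  {np.sum(w_all * d_all) / np.sum(w_all):.2f}")
```

The adjustment in this toy example is modest because the mirrored studies are small and carry little weight; imputing 51 missing studies, as Cuijpers et al did, moves the estimate much further.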

 


Figure 1. Before and After Science: funnel plots from Cuijpers et al (2010)





No One Receiving (Brian Eno - from Before & After Science)



Question "What do you call a group of negative (null) findings?" Answer: "A positive effect"

As noted already, some more medically oriented disciplines seem happier to publish null findings - but what precisely are the implications of this, especially for meta-analyses? Just publishing negative findings is not the end of the questioning!


Although some may disagree, one area that I think I know a fair bit about is the use of CBT for psychosis. The forest plot in Figure 2 is taken from a meta-analysis of CBT for psychosis by Wykes et al (2008).


Figure 2. Forest plot displaying Effect sizes for CBT as treatment for Psychosis
(from Wykes et al 2008)

In meta-analysis, forest plots are often among the most informative displays, because they reveal much about the individual studies. This example shows 24 published studies. The crucial information here, however, concerns not the magnitude of the individual effect sizes (where each rectangle sits on the x-axis), but
...the confidence intervals - these tell us everything!


When a confidence interval passes through zero, we know the effect size was nonsignificant. So, looking at this forest plot, only 6/24 (25%) studies show clear evidence of being significant trials (Trower 2004; Kuipers 1997; Drury 1997; Milton 1978; Gaudiano 2006; Pinto 1999). Although only one-quarter of all trials were clearly significant, the overall effect is significant (around 0.4, as indicated by the diamond at the foot of the figure).


In other words, it is quite possible for a vast majority of negative (null) findings to produce an overall significant effect size - surprising? Other examples exist (e.g. streptokinase: Lau et al 1992; for a recent example, see Rerkasem & Rothwell 2010), and indeed I referred to one in my recent blog "What's your Poison?" on the meta-analysis assessing LSD as a treatment for alcoholism (where no individual study was significant!).
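The arithmetic behind this is straightforward inverse-variance pooling: precision accumulates across studies, so the pooled confidence interval can exclude zero even when none of the individual intervals do. A minimal sketch with hypothetical numbers (not the Wykes et al data):

```python
# Hypothetical numbers: five trials that are each individually nonsignificant
# (every 95% CI crosses zero), pooled with fixed-effect inverse-variance
# weighting into an overall effect whose CI does not cross zero.
import numpy as np

d  = np.array([0.35, 0.40, 0.30, 0.45, 0.38])   # study effect sizes
se = np.array([0.22, 0.25, 0.20, 0.28, 0.24])   # their standard errors

for di, sei in zip(d, se):
    print(f"single study: d = {di:.2f}, 95% CI [{di - 1.96*sei:.2f}, {di + 1.96*sei:.2f}]")

w = 1 / se**2                                   # inverse-variance weights
pooled = np.sum(w * d) / np.sum(w)
pooled_se = np.sqrt(1 / np.sum(w))
print(f"pooled:       d = {pooled:.2f}, 95% CI "
      f"[{pooled - 1.96*pooled_se:.2f}, {pooled + 1.96*pooled_se:.2f}]")
```

With these numbers, every individual interval crosses zero, yet the pooled estimate comes out around 0.36 with a 95% CI of roughly [0.16, 0.57].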

Some argue that such negative studies are only negative because they are underpowered; however, this only seems plausible when a moderate to large effect size nonetheless produces a nonsignificant result. Some further speculate that a definitive large trial will prove the effectiveness of the treatment; however, when treatments have subsequently been evaluated in such trials, they have often failed to reach significance. Egger and colleagues have written extensively on the unreliability of conclusions in meta-analyses where small numbers of nonsignificant trials are pooled to produce significant effects (Egger & Davey Smith, 1995).
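To put a number on the power argument, a quick back-of-the-envelope calculation (normal approximation; my own illustrative figures, not anyone's published power analysis) shows roughly how large each group needs to be before a true effect of a given size is likely to reach significance:

```python
# Back-of-the-envelope power calculation (normal approximation, two-sided
# alpha = .05, 80% power, two independent groups): roughly how many participants
# per group are needed to detect a true effect of size d.
from scipy.stats import norm

def n_per_group(d, alpha=0.05, power=0.80):
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return 2 * ((z_alpha + z_beta) / d) ** 2

for d in (0.2, 0.4, 0.8):
    print(f"d = {d}: ~{n_per_group(d):.0f} participants per group")
```

On this approximation, a true effect of d = 0.4 needs close to 100 participants per group for 80% power, so smaller trials will frequently return nonsignificant results even when the effect is real.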

So, negative or null findings are perhaps both more and less worrisome than we may think. It's not just an issue of not publishing negative results in psychology; it's also an issue of what to do with them when we have them.



Keith R Laws (2012). Negativland: what to do about negative findings. http://keithsneuroblog.blogspot.co.uk/2012/07/negativland-what-to-do-about-negative.html