Monday 6 August 2012

The Drugs (ratings) Don't Work



The British Medical Journal has just published a paper, "Quantifying the RR of harm to self and others from substance misuse: results from a survey of clinical experts across Scotland", by Taylor et al. (2012).

Abstract

Objective: To produce an expert consensus hierarchy of harm to self and others from legal and illegal substance use.
Design: Structured questionnaire with nine scored categories of harm for 19 different commonly used substances.
Setting/participants: 292 clinical experts from across Scotland.
Results: There was no stepped categorical distinction in harm between the different legal and illegal substances. Heroin was viewed as the most harmful, and cannabis the least harmful of the substances studied. Alcohol was ranked as the fourth most harmful substance, with alcohol, nicotine and volatile solvents being viewed as more harmful than some class A drugs.
Conclusions: The harm rankings of 19 commonly used substances did not match the A, B, C classification under the Misuse of Drugs Act. The legality of a substance of misuse is not correlated with its perceived harm. These results could inform any legal review of drug misuse and help shape public health policy and practice.


Harm to self ratings (ranked from top to bottom)


Sample


The authors state:
"One of the strengths of this study is the large number of experts involved. Two hundred and ninety-two addiction multidisciplinary experts across Scotland were involved making it the largest national panel to be involved in this type of study."

How representative is this 'large' sample of 'experts'? Well, 50% of the respondents worked in Glasgow and 55.5% of the sample were Addiction Community Psychiatric Nurses - I am not sure how representative that makes any opinion regarding the relative ranking of drugs. It would have been useful to see a breakdown by occupation - the data are available. I would also have liked to see a breakdown of responses from face-to-face versus e-mail questionnaires, as responses were collected using both methods and demand characteristics are well known to change between these two approaches (especially with contentious material).


What constitutes expertise here?

Does each respondent have experience with all 19 of the drugs (personal or professional)? Is there a relationship between the experience of the raters and their ratings?

Obviously the raters work in addiction services and it is perhaps no surprise that the top ranked drugs are the most addictive and the bottom ranked drugs are all regarded as non-addictive - in this respect are addiction workers simply rating the addictive character of different drugs?



Ratings

Adapting a rating scale developed by Prof Nutt and colleagues (2007), Taylor et al used nine parameters:
"(a) physical harm caused by acute, chronic and parenteral use; (b) psychological harm; physical harm and intensity of pleasure linked to dependence and (c) social harm from intoxication; other social harms and associated healthcare costs"
The method section of the paper states "Participants were asked to score each substance for each of the nine parameters, using a 4-point scale, with 0 being no risk, 1 some risk, 2 moderate risk and 3 extreme risk." This doesn't reflect what is said in the Appendix, i.e. that respondents could also answer NA (Not Applicable) - the authors give no information about the number of NA responses or how they were dealt with.
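To see why this matters, here is a minimal sketch (with invented scores, not data from the paper) of how the mean for a single parameter shifts depending on whether NA responses are excluded or silently coded as zero:

```python
# Hypothetical ratings for one substance on one parameter (0-3 scale);
# None stands in for an NA (Not Applicable) response.
scores = [3, 2, 3, None, 1, None, 2]

# Option 1: drop NA responses before averaging.
valid = [s for s in scores if s is not None]
mean_na_excluded = sum(valid) / len(valid)      # 2.2

# Option 2: silently treat NA as "no risk" (0) - deflates the mean.
coerced = [0 if s is None else s for s in scores]
mean_na_as_zero = sum(coerced) / len(coerced)   # ~1.57

print(mean_na_excluded, mean_na_as_zero)
```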

It is also unclear how the 'total' harm metric was derived - if it was the average of all nine scores, why does it differ from the average of the personal and social harm ratings (which presumably reflect those nine scores)? This leads to anomalies in the paper. For example, Nicotine rates 2.42 and 2.33 for Personal and Social Harm respectively, yet it appears below Inhaled Solvents at 2.38 and 2.18, when both of the solvent ratings are lower!
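The anomaly can be checked from the published figures alone, assuming the ranking is meant to follow the mean of the personal and social harm ratings:

```python
# Mean of the two published harm ratings (personal, social).
nicotine = (2.42 + 2.33) / 2   # 2.375
solvents = (2.38 + 2.18) / 2   # 2.28

# Nicotine averages higher, yet is ranked below inhaled solvents.
print(nicotine > solvents)     # True
```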

What in fact is the purpose of the separate ratings for personal and social harm? The assumption is that they measure different things... but do they? Are the raters using the same information to rate both? I suspect so - the correlation between the two sets of ratings is .964, so they share 93% of their variance. So in the minds of the raters, personal harm is the same as social harm.
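The shared-variance figure follows directly from the correlation: the proportion of variance two measures share is the square of r. A one-line check:

```python
r = 0.964                   # reported correlation, personal vs social harm
shared = r ** 2             # coefficient of determination
print(f"shared variance: {shared:.1%}")        # ~92.9%
print(f"unshared variance: {1 - shared:.1%}")  # ~7.1%
```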

Finally, what does any metric mean without knowledge of the variance? The authors present mean ratings, and without something like the standard deviations for those ratings we cannot know how one drug's rating differs from another's!
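To illustrate the point, here is a sketch using the published means but an invented standard deviation (the paper reports none): with a plausible spread, the 95% confidence intervals for two adjacently ranked drugs overlap almost entirely.

```python
import math

N = 292  # number of raters in the study

def ci95(mean, sd, n=N):
    """95% confidence interval for a mean, given an assumed SD."""
    half_width = 1.96 * sd / math.sqrt(n)
    return (round(mean - half_width, 3), round(mean + half_width, 3))

# Published means; the SD of 0.8 is invented for illustration only.
print(ci95(2.42, sd=0.8))  # nicotine, personal harm: ~(2.328, 2.512)
print(ci95(2.38, sd=0.8))  # inhaled solvents:        ~(2.288, 2.472)
# The intervals overlap heavily: without the real SDs, nothing can be
# said about whether adjacently ranked drugs actually differ.
```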


The Verve - The Drugs Don't Work

What about harm?

Even if we ignore the rating issues, it should be noted that 16/19 drugs were rated as producing moderate to extreme Personal Harm.

Finally, why rate the drugs at all? In a Horse Race, the Horse always wins.

As remarked by @StuartJRitchie on Twitter - "Weird to ask 'experts' rather than, y'know, look at the published evidence..." Stuart makes a good point here. Why not address the empirical data derived from studies of the drugs themselves?

My own bête noire is that the ratings, whatever their outcome, fail to incorporate the consequences for cognition. How is cognitive impairment captured in such ratings, and how could it be captured?

4 comments:

  1. I had a similar thought to @StuartJRitchie. It would be good to see a paper that used quantitative data as far as possible. I believe that's possible for addictive potential, and the 2010 Nutt paper used the "ratio of lethal dose and standard dose".
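    For what it's worth, that metric is just a ratio; a minimal sketch with illustrative doses (the gram figures are placeholders, not values from Nutt et al. 2010):

    ```python
    # Safety ratio = lethal dose / standard (typical) dose.
    # The gram figures are illustrative placeholders, not values
    # from Nutt et al. (2010).
    def safety_ratio(lethal_dose_g, standard_dose_g):
        return lethal_dose_g / standard_dose_g

    print(safety_ratio(lethal_dose_g=330.0, standard_dose_g=33.0))  # 10.0
    ```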

    The problem comes when comparing completely different health effects (including cognitive impairment). Here I wonder if you could get people to put epidemiological data in order, as sketched below. Note down the known dangers of one drug on a card - e.g. the chance of schizophrenia/gout/Parkinson's goes from 1 in 100 to 1.2 in 100. Do the same for other drugs, and for things like soft drinks and horse riding too. Find a collection of people - maybe GPs - who know what these diseases are and get them to order/rate each risky behaviour (blind) by severity of personal harm. Would that address your final question?
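    A minimal sketch of what such a blinded 'risk card' exercise might look like (all figures invented purely to illustrate the format):

    ```python
    import random

    # Each card states only an absolute risk shift, blinded to the
    # behaviour that causes it. All figures are invented.
    cards = [
        {"id": "A", "outcome": "schizophrenia", "baseline": 0.010, "exposed": 0.012},
        {"id": "B", "outcome": "cirrhosis",     "baseline": 0.005, "exposed": 0.015},
        {"id": "C", "outcome": "head injury",   "baseline": 0.002, "exposed": 0.004},
    ]

    random.shuffle(cards)  # present in random order, behaviour names withheld

    for card in cards:
        rr = card["exposed"] / card["baseline"]
        print(f"Card {card['id']}: {card['outcome']}, "
              f"risk {card['baseline']:.3f} -> {card['exposed']:.3f} (RR {rr:.1f})")
    ```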

    Incidentally, these papers have disappointingly failed to make clear whether every measure of harm takes prevalence into account. If they don't take levels of use into account, that should be fixed (with relative ease - see the sketch below). If they do, they should make that very clear for the sake of critics.
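    The prevalence adjustment itself is trivial; a sketch with hypothetical numbers:

    ```python
    # Population-level harm = per-user harm x prevalence of use.
    # Both inputs are invented for illustration.
    per_user_harm = 2.9   # e.g. a mean rating on the 0-3 scale
    prevalence = 0.005    # proportion of the population using the drug

    print(per_user_harm * prevalence)  # 0.0145
    ```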

    I also think these papers would be better off leaving out most social harms other than direct violence. The last Nutt paper considered international crime and waste from unregulated drug factories. These are useful if the scale is intended to warn people of the harm done by using illicit drugs, but it's damaging if the intention is to inform decisions on classification, legality and regulation. Similarly, costs to the NHS from e.g. alcohol can be covered through taxing users and should be excluded.

    It would also be good to separate out other contributors to harm. What score does smoking heroin get compared to injecting? How does very excessive drinking compare to 'responsible' use? Do contamination and unknown purity contribute to the harm score, and can that be separated? Quantifying these things may be tough, but it really would be great to have a graph that can show users and policymakers what large impacts can be made by smallish changes in behaviour. These papers recognise that not all 'drugs' are the same, but they also need to tackle the huge variation in patterns of use of any one drug.

    So those are a few things I'd like an improved drug ranking paper to consider. I hope others have more suggestions.

    Replies
    1. Externalities - I think you make a series of interesting and relevant points. The study certainly could be improved both on methodological and content grounds.

      Nonetheless, I cannot get away from the idea that the best way to measure 'harm' is to look at actual 'harm' rather than the ratings of 'experts'; and to make sure (as you suggest) that we cover the full base of harm.

      I would be surprised if people accepted medication guidance 'purely' on the basis of expert opinion (rather than an evidence base) - although I know 'expertise' plays an explicit role in NICE guidelines in the UK and in guidelines elsewhere... when it shouldn't!

      What is being advocated in this (and similar) papers is developing a drug policy on the basis of a method that does not achieve the aim...and crucially cannot achieve the aim.

  2. One of the questions for me was why they stuck to the 9 criteria used by Nutt 2007 rather than the 16 from Nutt 2010, when the latter study was intended to produce more refined results.

    I was also a bit concerned at the way the lead author tried to wriggle out of the finding on cannabis by saying the 'experts' weren't.

    The conclusion, that the classification system under the MDA is a joke, is perfectly valid, but my worry is that a flawed study gives ammunition to those who want to claim it is OK.

    Replies
    1. thepoisongarden, yes - but we could argue about those criteria as well - e.g. where is the explicit impact on cognition in any of these ratings? (an area where we have much empirical evidence rather than opinion!)

      You are absolutely right regarding the authors' attempt to explain (dismiss?) the cannabis finding - in some respects the authors' response underlines why the rating method is so weak - it cannot tell you why the difference occurs (except to point back to the raters).

      Indeed, I would argue that - in this paper - cannabis appearing at the bottom of the scale is largely uninformative. The ordinal nature of the scale makes it very weak - without a measure of variance for each rating, we have no way of knowing anything other than an order, and nothing about the differences between the drugs - not even whether the Heroin rating differs 'significantly' from the Cannabis rating!
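      To make that concrete: with the raw per-rater scores (which the paper does not publish), one could at least test the Heroin-Cannabis difference; a sketch with scores invented purely for illustration, using a nonparametric test suited to ordinal data:

      ```python
      from scipy.stats import mannwhitneyu

      # Hypothetical per-rater scores on the 0-3 ordinal scale -
      # invented to illustrate the test, not taken from the paper.
      heroin_scores   = [3, 3, 2, 3, 3, 2, 3, 3, 3, 2]
      cannabis_scores = [1, 2, 1, 0, 1, 2, 1, 1, 0, 1]

      stat, p = mannwhitneyu(heroin_scores, cannabis_scores,
                             alternative="two-sided")
      print(f"U = {stat}, p = {p:.4f}")  # only computable with the raw data
      ```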
