13.4.15

Stanley Kutcher's Science


I recently exchanged some tweets with Stan Kutcher and the “Knowledge Exchange” manager of the Mental Health Commission of Canada (MHCC), Christopher Canning.  The heart of the matter was the Hot Idea or Hot Air paper released by Kutcher, whose purported “results” were re-tweeted by Canning and by journalists Tom Blackwell and André Picard:
Purchasers of [either the SOS or Yellow Ribbon suicide prevention] programs should be aware that there is no evidence that their use prevents suicide. (Kutcher et al, 2015)
Now, I have no knowledge of either the SOS or Yellow Ribbon suicide prevention programs beyond what can be gleaned from a few minutes on their websites. I would be open to their thoughts on how they measure their own success (and what they think of Kutcher’s pronouncements in the name of “science”) but am at the moment in no position to defend them. I do not know how effective these programs are. 

What I do know is that a reading of Kutcher’s paper raises at least as many questions as it answers.

Methodological tinkering or critical thinking?

In a characteristic section on the methodology of a database search, Kutcher drones on in detail:
The results from the two search terms were combined using the Boolean AND, which were further filtered by key terms “SOS suicide prevention” and “Yellow Ribbon suicide prevention”. Meanwhile, we repeated the first two steps but filtered the results with key words systematic review and meta-analysis, to capture existing reviews/meta-analyses… (Kutcher et al., 2015)
But at the end of it, we have no actual idea what resources might have been filtered. In another section, Kutcher announces, “Systematic review or meta-analyses dealing with suicides linked to physical illness were excluded,” but we have no idea why such an exclusion was considered necessary or desirable, or whether the results would be different had “suicides linked to physical illness” been included. We do not even know how the suicide-physical illness link is defined or how it was applied.

What we do know is that two “team members” ran the searches, found more than 13,000 abstracts after removing duplicates, and then narrowed those down to 5, which were examined critically, resulting in the “no evidence” claim.  How did they do this? If I understand it correctly, the “team” filled out a “data extraction form, developed a priori” on each abstract and then applied standards to select a much smaller group of 32 (7 intervention studies + 25 systematic reviews). After examining those 32, they applied further standards and selected 5 of the studies. 

It’s tedious to recapitulate tweet battles, but I have to mention that I made a mistake when I said that Kutcher had performed a meta-analysis in the Hot Idea or Hot Air paper. In fact, Kutcher never got to a meta-analysis. Instead he reviewed 5 papers after eliminating, first, thousands he deemed irrelevant, and then 27 he deemed unqualified according to certain standards.

One of the curious things about Kutcher’s methodology is that it utilized standards adopted from a team assembled by George W. Bush shortly after the passage of the Patriot Act. Bush’s White House Task Force for Disadvantaged Youth developed the OJP What Works Repository used by Kutcher’s team. The Obama administration has certainly embraced similar approaches. It’s too tempting for any politician to pass up: Get up and make a stern speech with a “we-are-not-going-to-take-it-anymore-attitude” then conclude, “When spending tax money, measure results and get rid of bad performers.” It’s hard to argue the opposite.

But it is important to note that “evidence-based” evaluations are primarily bureaucratic cost-cutting tools. They can be directed against enemies and skewed to help friends. Minimal critical thinking dictates that we ask whether evidence-based standards are being applied evenly.  Are “evidence-based results” demanded of every government-funded sector, or just a choice few?  Does the MHCC, for example, a government-funded organization that supports the work of Canning, Kutcher and Picard, have any double-blind experiments establishing the effectiveness of spending millions of dollars on studies and marketing campaigns, or justifying its claim to be “a catalyst for improving the mental health system”? 

Putting aside these general observations, let’s look at a particular case.  

Evidence debased practice

Let’s say a medical researcher accepts money from pharmaceutical companies and then seems zealous to dispose of evidence for the positive effects of non-pharmacological interventions.  Let’s say he was also found sitting on evidence for the ineffectiveness of pharmacological interventions. Perhaps we should question his categorical assertions that “there is no evidence” for the effectiveness of what turns out to be the product of a competitor.

Stanley Kutcher and his team attempt to dispose of evidence for the positive effect of one non-pharmacological intervention here:
Further, although all the SOS studies failed to find reduced suicidal ideations, they did report reduced attempts. The reported discrepancy between suicide attempts and ideations is in contrast with previous reports (Bridge, Goldstein, & Brent, 2006; Perez, 2005) and the authors’ clinical experience suggest that the two measures should be significantly associated. (Kutcher et al., 2015)
Apparently, in this version of “evidence-based” theory, the “authors’ clinical experience” trumps data. Kutcher’s team can simply announce “there is no evidence” because a reported decrease in suicide attempts (which is data, or “evidence,” regardless of the weight or interpretation one gives to it) does not accord with expected results based on personal experience.
Kutcher’s team enlists scholarly citations as support, but one of the papers referenced as being “in contrast” to the SOS studies is quite a bit more nuanced in its treatment of suicidal ideation than the Hot Idea or Hot Air paper:
Suicidal ideation: The point prevalence of suicidal ideation in adolescence is approximately 15–25%, ranging in severity from thoughts of death and passive ideation to specific suicidal ideation with intent or plan (Grunbaum et al., 2004). The latter is much less frequent, with annual incidence rates of 6.0% and 2.3% in adolescent girls and boys, respectively.

…Longitudinal studies have shown that the more severe (high intent or planning) and pervasive (high frequency or duration) the suicidal ideation, the more likely such ideation is to eventuate in an attempt (Lewinsohn et al., 1996). Attempters who show persistent suicidal ideation, particularly with a plan or high intent to commit suicide or both, are at increased risk to reattempt (Goldston et al., 1999; Lewinsohn et al., 1996)
Kutcher’s paper makes no mention of these two types of ideation, one of which (passive ideation) is non-predictive, or of the fact that Bridge, Goldstein & Brent consider the best predictor of suicide to be a previous suicide attempt:
Previous suicidal behavior: A prior suicide attempt is the single most potent risk factor for youth suicide in both case–control and prospective studies, elevating the risk of a subsequent completion 10–60 fold. (Brent et al., 1999; Kotila & Lonnqvist, 1989; Marttunen, Aro, & Lonnqvist, 1992)
Yet Kutcher et al seem to be saying that a decrease in reported attempts in the absence of a decrease in suicidal ideation should be ignored because it is not in accord with the authors’ personal experience.

None of this is to say that the paper does not raise legitimate questions about “self-reported” suicide attempts as opposed to documented interventions. Again, I am in no position to argue the effectiveness of the programs or the reliability of the data supporting them.  But Kutcher and team do have a knack for turning the lack of a measurement into evidence for the absence of effectiveness.

So, it would seem that Kutcher et al might be a bit zealous to dispose of evidence for the positive effects of non-pharmacological programs.  But is there any indication that Kutcher ever failed to publish evidence for the ineffectiveness of pharmacological interventions? As a matter of fact, there is.

Paxil Studies 329 and 377


Back in 2011 Stanley Kutcher was running as the federal Liberal Party candidate in Halifax when he got a lucky break: someone said he “essentially lied.” 

Why was it lucky? Because if Thomas Bousquet, in the Halifax weekly The Coast, within days of an election, had not quoted Alison Bass, award-winning author of the book Side Effects, when she said that Study 329, and implicitly Kutcher as co-author, had “essentially lied,” it would have been much harder for Kutcher’s legal team to force The Coast to apologize and remove Bousquet’s entire article, which it did, using legal threats referencing special libel conditions before an election.

This too was lucky, because it has allowed Kutcher to appear vindicated ever since when the main point of Bousquet’s article is unassailable: Study 329, co-authored by Kutcher, was funded by GlaxoSmithKline, and when raw data tended to show that GSK’s anti-depressants were no more effective than placebo, and that teens who took the drug became more suicidal than the placebo group, a way was figured out to manipulate the data and write the experiment up as a success.

The evidence emerged back in 2004 when NY AG Eliot Spitzer filed a consumer fraud suit against GSK, the brainchild of NY AAG Rose Firestein. As the WSJ put it:
The New York lawsuit describes five Glaxo studies of use among children and adolescents. Two of the studies failed to show that Paxil was more effective than a placebo for treating depression in children and adolescents, according to the suit. In one, the placebo actually outperformed Paxil on a primary efficacy measure, the suit says. And three of the studies showed that certain possibly suicide-related behaviors were about two times more likely among the Paxil users than the others. In one study that wasn't published, 7.7% of the youth on Paxil had behavior that included "suicidal thinking and acts," compared with 3% of the placebo group, according to the suit.
An internal Glaxo document said the company would have to "effectively manage the dissemination of these data in order to minimize any potential negative commercial impact," according to the suit. That document recommended that Glaxo publish a full article only on the one study out of the five that had some favorable conclusions. Soon after, that study was published in the Journal of the American Academy of Child and Adolescent Psychiatry.
I think that “full article” would be the report on Study 329, co-authored by Stan Kutcher and the very subject of Bousquet’s retracted article in the Coast. 

The suit was settled for a pittance, but Spitzer forced GSK to release all the buried Paxil studies.  Some embarrassing GSK internal memos also emerged:
According to an internal email obtained by the BBC, GSK executives were well aware that Paxil wasn't performing… "Essentially the study did not really show it was effective in treating adolescent depression, which is not something we want to publicize," reads the memo. (Halifax Media Coop, 2011)
One of the un-publicized findings was Stan Kutcher’s Study 377:
Kutcher was also involved in study 377. The results were dismal, showing that Paxil was no better than sugar for treating depression in youth. While 329 was subject to a market-friendly makeover, 377 was suppressed by the drug company. It didn't see the light of day until the New York district attorney's office forced GSK to release it.
377 discloses each author's involvement in the pharmaceutical industry. Of Kutcher, it lists that he had been a paid consultant for GlaxoSmithKline. It also says that he had "received research grants from, has been a consultant for, or participated on advisory boards of" pharmaceutical heavy-weights GSK, Pfizer, Eli Lilly. He disclosed nine drug companies in total. (Halifax Media Coop, 2011)
Bousquet’s article didn’t even mention Study 377.  In 2012 the U.S. Justice Department announced that GSK had agreed to plead guilty and pay a $3 billion fine, in part for promoting the use of Paxil for children. The suppression of Kutcher’s Study 377 was a major part of that case.

Tweeting knowledge or tweaking truth?

Now to answer my own tweet, “Is Stan Kutcher doing bad science?” 

First, I would say that Kutcher has certainly appeared capable of “bad science” in the past, given his involvement in a notorious case of withheld evidence that resulted in a guilty plea by a major drug company and, arguably, in the promotion of suicide.

Second, I don’t know yet whether Kutcher’s current attempt to de-fund two popular suicide prevention programs is “bad science” because I am not sure it is science at all. It does appear somewhat “nasty and brutish” to this critic. It is certainly questionable and does not deserve the unquestioning re-tweeting it gets from Blackwell, Canning and Picard. 




22.8.14

bon côté mauvais côté


Good news: Diane Côté recently informed me that the referral service of the Ordre des Psychologues du Québec will be corrected to include psychotherapists who accept Health Canada mandates for Indigenous clients.  

Bad news: the change will not take place before March 2015.  

Mme Côté was responding to an email informing her that a Health Canada representative had confirmed that the services of social workers, psychologists, and any psychotherapist holding a permit issued by the OPQ are all reimbursed by Health Canada. 

Mme Côté also wrote that she did not consider it ‘necessary’ to discuss the matter with the Health Canada representative in question.

In reply to Mme Côté, I wrote the following:
If I understand correctly, Indigenous clients who consult the OPQ site will not be referred to all the providers listed in your referral service who accept Health Canada mandates until March 2015?
We shall see whether Mme Côté answers that question. 
 
I also let her know that, in my opinion, it was indeed necessary to keep the Health Canada representative informed of her decision. 

15.8.14

Newsjacking the death of an actor


The initial purpose of this blog was to spread the word on my petition and to document an encounter with various people at the OPQ.

As I heard more stories about the deleterious effects of Bill 21, I researched the various publicity campaigns that introduced it to the public.  I saw a connection to my own experience serving on the Mental Health Commission of Canada (MHCC) where I heard all the same buzzwords.

“Stigma” and “Anti-stigma”

There is a class of “professional journalists” who depend on corporate and governmental sources for their livelihood, and who publicize to some degree the “official line” on a story in exchange for journalistic access to the powerful.  When I see the same keywords disseminated by these journalists whenever and wherever possible, I see a publicity campaign, which is to say, I see the tail end of a coordinated marketing campaign.

Now one of the key techniques of modern marketers is to hijack a story, also called “newsjacking.”  And one of the favorite types of stories to newsjack is the death of a celebrity.  And sure enough, right on time, André Picard, public health reporter for the Globe and Mail, has newsjacked the sad story of Robin Williams' death by retweeting the newsjack of James Kirkbride in order to direct us to the “sage words” of Ian Colman, quoted in another newsjack by Lynn Desjardins of Radio Canada.

James Kirkbride and Ian Colman both just happen to be in the business of getting funding for psychiatric research of the type that the “anti-stigma” campaign is hoping to fund.  (The MHCC and those behind Bill 21 have a very pronounced bias towards funding research, and seem peculiarly unable to grasp that they are restricting public access to helpful mental health resources right now, the subject of this blog.)

I do not wish to discuss the particulars here except to say that an argument, based on Mr. Williams’ death, for funding the type of research called for by the “anti-stigma” campaign seems questionable, especially because Mr. Williams cannot speak for himself.  But no matter.  Picard, Kirkbride, Colman, and Desjardins had the hijack angle all figured out before the facts of Mr. Williams’ death were even established.  There really is no argument.  It is propaganda.

Mr. Williams, like most great comics, sometimes had a keen sense of how things work.  An interviewer described one of his recent films as “a devastatingly funny indictment of the modern grief industry.”  When she asked him if things were getting worse, Williams replied: "Well, I think people want it. In a weird way, it's trying to keep hope alive… you just try and keep it in perspective; you have to remember the best and the worst."

It sounds as if Robin Williams would have forgiven Picard, Kirkbride, Colman and Desjardins for using the grief over his death to gain market share, but I for one wish they would listen to that critical marketing expert linked above, who notes, “we stopped counting how many PR people broadly distribute a pitch for their client when someone in the public eye dies."