The following message was posted to: PharmPK
Dear all,
I'd like to hear your ideas on how to deal with the placebo effect in various therapeutic areas. I have worked with a couple of clients who had failed Phase 3 studies on their hands due to "unexpectedly" high placebo effects. In planning repeat studies, they want to come up with designs that minimize the placebo effect, although I am not sure that is the correct way to treat the issue. However, I'd like to know how others deal with it.
Thanks!
Toufigh
Toufigh Gordi, PhD
Clinical Pharmacology, PK/PD analysis consultant
www.tgordi.com
E-mail: tg.at.tgordi.com
The following message was posted to: PharmPK
Hello Toufigh,
We have seen this problem in the schizophrenia franchise and you may find the following article interesting: Kemp et al. What Is Causing the Reduced Drug-Placebo Difference in Recent Schizophrenia Clinical Trials and What Can be Done About It? Schizophr Bull. 2008 Aug 22. [Epub ahead of print: http://www.ncbi.nlm.nih.gov/pubmed/18723840]
The authors show in Figure 1 of the paper that "Placebo response in Clinical Trials Has Increased in the Direction of Greater Improvements, Which Is Correlated With the Year That the Studies Were Conducted" (i.e., placebo responses have increased in the last decade or so...)
Knowing your expertise, I am sure you would suggest that one way to approach this problem is to think about the probability of trial success using tools such as clinical trial simulation (I like this idea as well). There is a nice published example where the Uppsala group shows through "simulations that the post hoc probability of success of the performed trials was low to moderate" in Fig. 4 of this paper: Friberg et al. Modeling and simulation of the time course of asenapine exposure response and dropout patterns in acute schizophrenia. Clin Pharmacol Ther. 2009;86(1):84-91.
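[Editor's note] The trial-simulation idea can be made concrete with a toy Monte Carlo sketch. This is not the Friberg asenapine model; the placebo mean, drug effect sizes, standard deviation, and sample size below are all made-up numbers for illustration only.

```python
import math
import random

def simulate_prob_of_success(n_per_arm=100, placebo_mean=8.0, drug_effect=2.0,
                             sd=10.0, n_trials=2000, alpha_z=1.96, seed=1):
    """Monte Carlo estimate of the probability that a two-arm parallel
    trial detects the drug-placebo difference (two-sided z-test on means)."""
    rng = random.Random(seed)
    successes = 0
    for _ in range(n_trials):
        # Simulate one trial: a placebo arm and an active arm.
        placebo = [rng.gauss(placebo_mean, sd) for _ in range(n_per_arm)]
        active = [rng.gauss(placebo_mean + drug_effect, sd) for _ in range(n_per_arm)]
        diff = sum(active) / n_per_arm - sum(placebo) / n_per_arm
        se = sd * math.sqrt(2.0 / n_per_arm)  # known-sd z-test for simplicity
        if abs(diff / se) > alpha_z:
            successes += 1
    return successes / n_trials

# A larger placebo response that erodes the drug-placebo separation
# lowers the probability of success:
print(simulate_prob_of_success(drug_effect=4.0))
print(simulate_prob_of_success(drug_effect=2.0))
```

Running variations of this before committing to a Phase 3 design is exactly the "post hoc probability of success" exercise the Uppsala paper describes, only prospectively.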
Good luck,
Mahesh
The following message was posted to: PharmPK
I have come across some literature on triple blinding that could be
useful.
Nav Coelho
Manager, Biostatistics - Biovail Contract Research
--
Toufigh,
Designs to 'minimize' the placebo effect have been tried in anti-depressant trials (where 80-90% of the response is due to placebo). They don't work.
On the other hand, one can model the placebo effect directly and then understand both the placebo and the drug effect.
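[Editor's note] A common way to do this in a PK/PD setting is an additive model in which the placebo response follows its own time course and the drug effect sits on top of it. A minimal sketch, with hypothetical parameters and functional forms (exponential-onset placebo, Emax drug effect), not any specific published model:

```python
import math

def placebo_effect(t, p_max=10.0, k_onset=0.3):
    """Exponential-onset placebo time course (hypothetical parameters):
    rises from 0 toward p_max as time t (e.g., weeks) increases."""
    return p_max * (1.0 - math.exp(-k_onset * t))

def drug_effect(conc, e_max=15.0, ec50=50.0):
    """Emax drug effect as a function of drug concentration."""
    return e_max * conc / (ec50 + conc)

def total_response(t, conc):
    """Observed response = placebo component + drug component."""
    return placebo_effect(t) + drug_effect(conc)

# On placebo (conc = 0) the whole response is the placebo component;
# the active arm adds the Emax term on top of the same time course.
print(total_response(t=4.0, conc=0.0))
print(total_response(t=4.0, conc=100.0))
```

Fitting both components jointly to the placebo and active arms is what lets you separate the drug signal from the placebo signal instead of trying to suppress the latter by design.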
Nick
The following message was posted to: PharmPK
Dear Nick,
As I wrote in my original post, "I am not sure if (designs that minimize the placebo effect) are the correct way to treat the issue". I agree with you and Mahesh that a model-based approach, in which one can incorporate both the drug effect and the placebo effect in the same model, would be the best way to go. However, the pharmaceutical industry, especially smaller companies, is far from implementing such an approach, and designs that supposedly decrease the placebo effect are widely employed.
A general idea I am getting interested in is to study the placebo effect more extensively than has been done so far. In most studies prior to the Phase 3 (confirmatory) studies, the placebo arm is normally one among several study arms, and hence much less information is gathered on the magnitude, significance, and variability of the placebo effect. In a typical Phase 2 study, one may test 4-6 active arms and 1 placebo arm. This means that the design of the Phase 3 studies relies on a probably incomplete understanding of the placebo effect, and that is why people may be surprised by an "unexpectedly" large placebo effect. Well, if one hasn't studied the effect thoroughly, one should not be surprised by the finding.
In my mind, for indications where significant placebo effects are known to be present, more attention should be paid to studying the placebo effect, e.g., by allocating more subjects to the placebo arms. Why not have a placebo:active ratio of 1:2 or even higher instead of the common 1:4 or 1:6? Why not (in the Phase 2 study) investigate the groups for a longer period of time than required by the regulatory authorities (in a Phase 3 setting) to see how the placebo effect and the effect of the active compound change with time? Such information allows for a better understanding of the variability and time course of the placebo effect and, once utilized in a PK/PD model, should offer a more rational approach to the design of the confirmatory studies. I guess it will be a tough sell to ask the clinical development people to "waste" more patients on the placebo arm, but it may be worthwhile testing.
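[Editor's note] The precision argument behind the allocation-ratio question can be made quantitative: for a fixed total sample size, the standard error of the placebo-arm mean shrinks as the placebo fraction grows. A toy calculation, where the total N and the response standard deviation are arbitrary assumptions:

```python
import math

def placebo_se(total_n, placebo_fraction, sd=10.0):
    """Standard error of the placebo-arm mean for a given allocation
    fraction, assuming a common response standard deviation."""
    n_placebo = round(total_n * placebo_fraction)
    return sd / math.sqrt(n_placebo)

# Placebo:active ratios of 1:6, 1:4, and 1:2 correspond to placebo
# fractions of 1/7, 1/5, and 1/3 of the total sample.
total_n = 300
for ratio, frac in [("1:6", 1 / 7), ("1:4", 1 / 5), ("1:2", 1 / 3)]:
    print(ratio, round(placebo_se(total_n, frac), 2))
```

The placebo estimate at 1:2 is markedly more precise than at 1:6, which is the whole case for "wasting" more patients on the placebo arm.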
I look forward to comments from others involved in studies with large placebo effects.
Toufigh
Toufigh Gordi, PhD
Clinical Pharmacology, PK/PD analysis consultant
www.tgordi.com
E-mail: tg.-at-.tgordi.com
--
Hello Toufigh,
Maybe including too many arms in the Phase II study is not a good idea. There is literature suggesting that a higher number of treatment arms is associated with a greater magnitude of response to placebo. Here are a couple of references:
* Khan A, Kolts RL, Thase ME, Krishnan KR, Brown W. Research design features and patient characteristics associated with the outcome of antidepressant clinical trials. Am J Psychiatry. 2004 Nov;161:2045.
* Papakostas GI, Fava M. Does the probability of receiving placebo influence clinical trial outcome? A meta-regression of double-blind, randomized clinical trials in MDD. Eur Neuropsychopharmacol. 2009;19:34.
The authors interpret these findings as follows: "Higher number of treatment arms may increase the 'perceived' likelihood of receiving active treatment. Higher number of treatment arms is thought to be a proxy for the degree of expectation of improvement and is predicted to show lesser antidepressant placebo separation"
Good luck,
Mahesh
The following message was posted to: PharmPK
Toufigh:
I used to work at a clinical site that specialized in conducting depression, anxiety, Alzheimer's, and ADHD trials. As Nick pointed out, the placebo effect can be really high in these trials. The site where I worked almost always performed well when analyzed for the placebo effect. Here are my tongue-in-cheek and totally anecdotal recommendations to sponsors for minimizing the placebo effect:
1. Don't write a stupid protocol
It was obvious that some of the protocols awarded to our site were written by people who had no clinical experience, and some of those protocols just didn't work well in terms of recruitment and conduct. Every protocol needs to be critically reviewed by someone who is familiar with the disease in question and the issues that people with that disease face.
One of the worst protocols? A federally funded study that compared a dietary supplement to an SSRI. This study received a lot of publicity because the SSRI arm did not perform better than the placebo arm. But none of the newswires mentioned just how difficult that protocol was. Among the many issues was that all encounters with the subjects were scripted. We had to read a script to subjects and make sure we didn't deviate from it... every encounter was recorded. More subjects spontaneously mentioned that they thought they were on placebo because they weren't "feeling anything" than in any other trial. I suspect the script tipped them off.
2. Double blind trials usually aren't
Placebo run-ins and other trial designs can help reduce the placebo effect to a certain extent, but they will never eliminate it. After a couple of years, I could almost always figure out which subjects were on placebo without trying. Almost all drugs produce mild side effects, even if they resolve. The thing is that many subjects could figure this out too, and for all practical purposes the trials weren't even single blind (see above example).
3. Save your bargain hunting skills for eBay.
Yes, clinical trials are really expensive but there are lots of low budget trials that fall apart. When you select a clinical site, you generally get what you pay for. Bargain sites may not follow Good Clinical Practice Guidelines, and they may not have sufficient experience to deal with issues like placebo effect.
Big pharmaceutical companies would inspect our site with the kind of detail you would expect if you were trying to get security clearance to work at Los Alamos. They would demand to see the placebo effect for our site in previous trials. Smaller companies didn't ask; they were most concerned with price.
4. Design your payment schedule so that the clinical trial sites have the incentive to be really picky. Many times we would reject subjects with concurrent DSM Axis II disorders, such as borderline personality disorder and antisocial personality disorder, even if the protocol did not specifically exclude them, because we knew these subjects would be trouble. (And I eventually realized that the subjects I observed responding to the placebo during a run-in exhibited signs of borderline personality disorder.)
5. Look for a site that does not have a lot of turnover in personnel. Everyone at the site needs to be trained to prevent the placebo effect, from the receptionist to the lab technician. We were instructed not to be too nice to people. It is tempting to be really friendly... you see these subjects a lot more than you would see patients, and you are grateful to them for volunteering (and even though it is research, you want things to go well for them). But the site personnel need to be very business-like, even almost aloof.
Cheers,
Carol Collins
The following message was posted to: PharmPK
Dear Toufigh,
How would you attempt to model the placebo effect anyway?
I believe 'wasting' more subjects on an inactive control arm would raise an ethical concern. A longer follow-up for the placebo group would again be unethical, because Phase II is usually run in a small patient sample, possibly in non-responsive patient types, compared with an established active standard treatment or uncontrolled, depending on your therapeutic indication. Any significant treatment effect, and therefore placebo effect, would possibly be disproportionately large and imprecise, because the Phase II trial is small and short-lived; these are feeder trials and hypothesis generators for Phase III, with unconfirmed toxicity and adverse event rates. So in your landmark confirmatory trial they can randomise appropriately in a controlled manner to minimise any bias and monitor safety appropriately. If the trial were a well-designed, double-blind, parallel placebo vs. new drug comparison randomised 1:1, then minimising any bias in the treatment difference due to the placebo effect should not be a problem.
--
Folks
On the placebo effect, I agree with Toufigh's comments in that there are occasions when it is especially important to know how big the placebo effect is, and I think that is best done by having more than one placebo group. It would apply particularly when using a self-reported outcome measure, as in a study of an antidepressant or an analgesic, for example. Rather than one placebo group larger than the actives under test, as Toufigh mentioned, it is logical to have 2 or even more placebo groups of the same size, so as to see the variation in the variance at the selected group size, because that gives a measure of the sensitivity of the study design and a better assessment of the validity of the active drug effect. This can be useful when you cannot make one placebo for 2 or more actives.
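[Editor's note] The logic of two identical placebo groups can be sketched numerically: since the true difference between them is zero, whatever difference the trial observes calibrates the noise of the design. A toy simulation with arbitrary group size, mean, and standard deviation:

```python
import math
import random
import statistics

def placebo_arm_difference(n_per_arm=50, mean=8.0, sd=10.0, seed=7):
    """Simulate two identically treated placebo arms and return the
    observed difference in means plus its theoretical noise scale.
    The true difference is zero, so the observed difference measures
    the sensitivity of the design at this group size."""
    rng = random.Random(seed)
    arm1 = [rng.gauss(mean, sd) for _ in range(n_per_arm)]
    arm2 = [rng.gauss(mean, sd) for _ in range(n_per_arm)]
    diff = statistics.mean(arm1) - statistics.mean(arm2)
    se = sd * math.sqrt(2.0 / n_per_arm)
    return diff, se

diff, se = placebo_arm_difference()
print(f"placebo-placebo difference {diff:.2f} (noise scale ~ {se:.2f})")
# Any drug-placebo difference then has to be judged against this yardstick.
```

If the two placebo arms differ by nearly as much as the drug and placebo do, the apparent drug effect is within the noise of the design.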
The worst thing that a company can do is to market a new medicine where the evidence was in fact mostly a placebo effect. The huge investment and efforts made to motivate the sales force will have an initial impact but after the honeymoon period it will meet increasing resistance from the clinicians as their patients start reporting that it doesn't actually work as well as other treatments. The end result is a demoralised sales force and of course a lost opportunity to develop a more worthwhile product.
So I would question Mahesh's remark that a bigger placebo effect is not a good idea. The investment in a Phase 3 development programme and beyond is so great that it is better to be cautious and reject a (mildly) active compound than to risk launching it, only to find that the competition is more effective. In Phase 2 and 3 we need to know how big the placebo effect can be under the conditions of the study, rather than try to minimise it for a short-term gain.
Andrew Sutton
--
Hello Simon,
I hope that my previous reply to Toufigh and Mahesh goes some way to answering your points. My rationale is that in Phase 2 you have to accept that you are going to "sacrifice" some patients' rights to a potentially new active treatment, because that is essential to avoid treating hundreds of future patients with a medication that is ineffective.
To redress the "sacrifice" problem it is usually quite possible to design the study so that the patients who took the placebo get crossed over to an active, even if that has to be under less tightly controlled conditions or they only get a standard treatment instead of the new potentially active compound. There is no ethical problem with this provided that the situation is carefully explained to the patients and they have plenty of time to consider their response.
I agree that such a procedure has an effect on patient selection, i.e., losing those who fear going on a placebo, but it is better to do the study under that restraint than to try to manage with only another active group in the design, because you cannot really measure the effect of the active control without knowing the extent of the placebo effect under the study conditions. Moreover, I have found that one effect of including the placebo is to restrain the natural enthusiasm of the investigator... a major component of the placebo effect in some cases.
Cheers
Andrew
--
Hi again Toufigh,
I too have been exercised by the placebo effect over many studies (I have been PI in about 200 trials), and all I can say is that the following factors help to define it (not minimise it) during a study: careful selection of the rating scale to avoid ceiling and floor effects in particular, larger groups so as to measure variance more accurately, use of more objective outcome measures, and longer duration of treatment, because placebo effects tend to fade with time. The trouble is that all of these increase costs.
If I may illustrate another point with an anecdote from my days at Guildford Clinical Pharmacology: we invented a mood rating scale for sufferers of chronic pain (Shaun Kilminster's Short Pain Inventory), and we found that some patients reported high mood changes (anger, fatigue, and sadness being the main ones) while rating their pain low. In other words, they were stoics. At the opposite end of the scale, the more hysterical type of patient reported high mood changes but also high pain scores. The first group has a floor effect when it comes to rating pain, so your new compound has a problem showing any efficacy, while the second group has high variances... which is another aspect of the placebo effect. So one answer to your question is to remove the outliers, thereby reducing group variances quite dramatically. That has a very useful effect in reducing group sizes... which means your sponsor can afford the other measures that I recommended above.
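[Editor's note] The link between reduced variance and smaller group sizes follows from the standard two-arm sample-size approximation, n per arm = 2(z_alpha + z_beta)^2 * sd^2 / delta^2. A quick sketch with made-up numbers (an sd reduction from 12 to 9 after trimming outliers, and a target treatment difference of 5):

```python
import math

def n_per_arm(sd, delta, alpha_z=1.96, power_z=0.84):
    """Approximate per-arm sample size for a two-arm parallel design
    (two-sided alpha = 0.05, 80% power): 2 * (z_a + z_b)^2 * sd^2 / delta^2."""
    return math.ceil(2 * (alpha_z + power_z) ** 2 * sd ** 2 / delta ** 2)

# Trimming high-variance responders (hypothetical sd values) cuts the
# required group size roughly in proportion to sd squared:
print(n_per_arm(sd=12, delta=5))  # before outlier removal
print(n_per_arm(sd=9, delta=5))   # after outlier removal
```

Since n scales with sd squared, even a modest reduction in variance buys a substantial reduction in group size, which is the trade-off Andrew describes.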
I hope that goes some way towards an answer.
Best regards
Andrew
Copyright 1995-2011 David W. A. Bourne (david@boomer.org)