
Clinical Psychological Science exam 2

Terms

There are two approaches to interpreting diagnostic data: clinical and actuarial decision making. How do these approaches differ?
1. Clinical decisions:
a) The decision-maker arrives at a judgment intuitively, based on assumptions that typically remain implicit rather than explicit.
2. Actuarial (or statistical) decisions:
a) The decision is made through an automated, prespecified, or routinized application of a formula that is based on empirically established relations.
(1) Note, this approach does not necessarily eliminate the role of clinicians because their judgments about the patient can be quantified and, to the extent that they improve accuracy, added to the formula.
Is a combined approach (of clinical and actuarial decision making) workable?
a) Not in the case of dichotomous decisions.
(1) If the methods agree, there is no need to combine them.
(2) If they disagree, you must choose the result of one or the other
Meehl (1954) specified conditions for a fair comparison of the empirical methods. What are they?
a) Both methods should base judgments on the same data.
b) Conditions that artificially inflate the accuracy of the actuarial approach should be avoided.
(1) Regression analysis capitalizes on sample-specific chance relationships. Therefore, you must cross-validate any formula on a new sample.
Does training improve the accuracy of clinical judgment?
*In virtually every study, the accuracy of the actuarial approach exceeds that of clinician judgments*

(a) Goldberg trained judges with 300 MMPI profiles with the criterion diagnosis on the back.
(i) Thus, judges got immediate feedback.
(b)After 4000 training trials, the judges had improved, but none of them equalled the accuracy of the formula.
(c) They then gave them the outcome of the rule and told them how accurate it was.
(i) Judges were free to use the rule or not.
(d) They improved, but still none equalled or exceeded the 70% accuracy of the formula.
(i) THEY WOULD ALWAYS HAVE DONE BETTER TO FOLLOW THE FORMULA.
(4) Goldberg also modelled the decision rules being used by these clinicians.

(a) The rule derived from a clinician was more accurate than the clinician on whom the rule was modelled.
(i) In other words, people make mistakes and don’t always follow their own implicit rule.
In virtually every study, the accuracy of the actuarial approach exceeds that of clinician judgments. What problems do people have with these studies and why are these problems unfounded?
1. Many have argued that such studies misrepresent clinical judgment accuracy because they deny clinicians access to preferred information sources.
a) But, even when allowed to have more information than the formula uses (e.g., clinical interview results), the formula still does better.
(1) Goldberg's formula based on the MMPI alone outperforms clinicians using both the MMPI and a clinical interview (Sawyer, 1966).
2. Many have argued that such studies use inexperienced clinicians.
a) But empirical comparison of inexperienced and experienced clinicians fails to show any benefit of experience.
(1)An aside: more experienced clinicians are not more accurate in general, but they are more confident in the accuracy of their judgments.
3. Perhaps clinicians would do better than the formula when they notice rare events that are too rare to be included in the formula.
a) This is known as the “broken leg” problem:
(1) A formula might be very accurate at predicting a man's weekly attendance at a movie, but it will become invalid if the man breaks his leg.

b) But available empirical evidence shows that clinicians' overall accuracy is always better when they never overrule the formula.
(1) In other words, when clinicians overrule the formula, they are wrong more often than they are right.
(2) That is, clinicians identify too many exceptions to the rule.
why are actuarial approaches superior to clinical ones?
1. Completely reliable
a) They never get tired or make a mistake.
2. They preserve only the predictive variables and ignore those that are not valid predictors.
a) Clinicians have trouble distinguishing those variables that are truly predictive from those that are not.
3. Clinicians tend to be more confident than they should be in their judgments:
a) This may be due to lack of outcome data in many cases so clinicians have trouble learning when they are wrong or right.
b) It may also stem from the skewed sample that clinicians see.
Be able to give a brief history of the scientific-practitioner model and explain the rationale behind it, which was offered by the Boulder Conference
The first national training conference on clinical psychology, the Boulder conference (Raimy, 1950), was a milestone for several reasons. First, it established the PhD as the required degree, as in other academic research fields. Second, the conference reinforced the idea that the appropriate location for training was within university departments, not separate schools or institutes as in medicine. And third, clinical psychologists were to be trained as scientist-practitioners for simultaneous existence in two worlds: academic/scientific and clinical/professional.
B. Be able to describe the differences between Psy.D. and Ph.D. programs (or, more accurately) professional schools and university-based programs
~Clinical PsyD programs accept more of their applicant pool (41% vs. 17%)
~Fewer PsyD students get financial aid
~Fewer PsyD students are able to secure an APA-approved internship
~PsyD graduates do not perform as well as PhD graduates on the national licensing examination for psychologists.
Why are the differences between Psy. D. and PhD. programs problematic?
Rising acceptance rates and shorter training periods will probably translate into less qualified students (at least on conventional academic criteria), larger incoming classes, less financial aid, greater student debt, shortages in APA-accredited internship positions, and lower scores on the national licensing examination. If these trends continue, the proportion of clinical psychologists graduating from PsyD programs will soon surpass those from university-based PhD programs. Such a shift will raise important questions regarding the identity of psychology as a doctoral-level profession.
What are some of the problems with the professional psych programs?
The professional-applied programs:
• Are rated significantly lower in faculty quality, and therefore fall predominantly in the fourth faculty-quality quarter
• Have a significantly lower average publication record (average of 1.5 publications per faculty member in the period 1988-1992)
• Depend significantly more on the service of part-time faculty members
• Have significantly more students per faculty member
• Admit candidates with median Graduate Record Examination (GRE; Verbal plus Quantitative) scores significantly lower than those of students in the research programs (Note that 8 of the 25 professional programs did not report median GREs.)
• Have significantly increased their output of Ph.D.s since 1982
• Have a significantly higher output of Ph.D.s than the psychological research-science programs in all faculty-quality quarters (Note that the Ph.D. output of the applied programs is higher in the fourth quarter than in the other three quarters.)
This increase in the output of professional psychology degrees is occurring in institutions that have few or no research doctoral programs in disciplines other than psychology. In short, these programs are not seated in a research context.
Be able to define an empirically-supported treatment
CRITERIA FOR EMPIRICALLY-VALIDATED TREATMENTS
Well-Established Treatments

I. At least two good between-group design experiments demonstrating efficacy in one or more of the following ways:
A. Superior (statistically significantly so) to pill or psychological placebo or to another treatment.

B. Equivalent to an already established treatment in experiments with adequate sample sizes.

OR

II. A large series of single case design experiments (n > 9) demonstrating efficacy. These experiments must have:

A. Used good experimental designs and
B. Compared the intervention to another treatment as in IA.

FURTHER CRITERIA FOR BOTH I AND II:
III. Experiments must be conducted with treatment manuals.
IV. Characteristics of the client samples must be clearly specified.
V. Effects must have been demonstrated by at least two different investigators or investigating teams.

Probably Efficacious Treatments
I. Two experiments showing the treatment is superior (statistically significantly so) to a waiting-list control group.

OR II. One or more experiments meeting the Well-Established Treatment Criteria IA or IB, III, and IV, but not V.
OR
III. A small series of single case design experiments otherwise meeting Well-Established Treatment criteria.
What are McFall's principles?
~ Cardinal Principle: Scientific Clinical Psychology Is the Only Legitimate and Acceptable Form of Clinical Psychology
~First Corollary: Psychological services should not be administered to the public (except under strict experimental control) until they have satisfied these four minimal criteria:

1. The exact nature of the service must be described clearly.
2. The claimed benefits of the service must be stated explicitly.
3. These claimed benefits must be validated scientifically.
4. Possible negative side effects that might outweigh any benefits must be ruled out empirically
~Second Corollary: The primary and overriding objective of doctoral training programs in clinical psychology must be to produce the most competent clinical scientists possible.
Be able to list, explain, and defend McFall’s cardinal principle
~ Cardinal Principle: Scientific Clinical Psychology Is the Only Legitimate and Acceptable Form of Clinical Psychology

~All competent clinical psychologists must be scientists first and foremost and ensure that their practice is scientifically valid.
~Most of us have become accustomed to giving dispassionate, objective, critical evaluations of journal articles; now we must apply the same kind of critical evaluation to the full spectrum of activities in clinical psychology.
Be able to list, explain, and defend McFall’s four criteria associated with the first corollary
~First Corollary: Psychological services should not be administered to the public (except under strict experimental control) until they have satisfied these four minimal criteria:

1. The exact nature of the service must be described clearly.
2. The claimed benefits of the service must be stated explicitly.
3. These claimed benefits must be validated scientifically.
4. Possible negative side effects that might outweigh any benefits must be ruled out empirically
~To the extent that clinical psychologists offer services to the public that research has shown to be invalid, or for which there is no clear empirical support, we have failed as a discipline to exercise appropriate quality control.
Be able to list, explain, and defend McFall’s second corollary.
~Second Corollary: The primary and overriding objective of doctoral training programs in clinical psychology must be to produce the most competent clinical scientists possible.
~First, the Boulder Model, with its stated goal of training, "scientist-practitioners," is confusing and misleading
~Second, scientific training should not be concerned with preparing students for any particular job placements
~Third, some hallmarks of good scientific training are rigor, independence, scholarship, flexibility in critical thinking, and success in problem solving. It is unlikely that these attributes will be assured by a checklist approach to required content areas within the curriculum.
Be sure you understand the take home message of the Schulte et al. (1992) study described in class concerning treatment tailoring.
Standard CBT was most effective (not tailored).
What were the conditions of the Schulte et al. study?
Recall that this study compared 3 groups of agoraphobic clients:
-Standard treatment (manualized)
-Individually tailored treatment
-Yoked Control (each member received the treatment tailored to another client in the second condition)
*the yoked controls got the treatment that was tailored to someone else (not them).
What does the Schulte et al. (1992) study suggest about the wisdom of trying to tailor a standardized empirically supported treatment like Barlow’s Panic Control Therapy?
Results showed that the standardized (manualized) treatment was much more effective than either the tailored or the yoked control groups. This suggests that clinicians should be cautious in second guessing the standard treatment for the same reasons they should be cautious in second guessing an actuarial formula in clinical assessment.
In the article by Ron Levant, the author makes several arguments that Larry Beutler refutes in his article– specifically, you should be able to describe the following:
1. Levant argues that we are not even close to having ESTs for most client presenting problems
But as Beutler points out, actually we currently have one or more EST for most Axis I presenting problems
2.Levant argues that the participants in randomized clinical trials are not like the clients seen in real life practice (less complex and less severe).
Beutler makes it clear that Levant is wrong.
ESTs generalize to the real world far better than Levant suggests
3. Levant argues that clinician judgment should be given equal weight to empirical evidence.
Be able to critique that viewpoint.
Levant argues that clinician judgment should be given equal weight to empirical evidence.
Be able to critique that viewpoint.
~The parceling of variance in this way ignores the possibility that techniques and relationship factors interact and coexist. Can't we have both relationship and technique? How, for example, does Levant suppose that the relationship develops if not through the procedures and actions (techniques?) of the therapist, the things the therapist tries to do?
~This latter study also found very similar outcomes between highly structured research treatments and the usual treatments, and high similarity among patient groups, with research treatments being applied to somewhat more complex and serious problems than the usual ones seen in outpatient clinics. Such data lead Barlow, Levitt, and Bufka (1999) to conclude that "efficacy studies may be generalizable not only to individuals who do not meet inclusion criteria, but also to typical clinical populations."
~Levant would give to clinical judgment and patient values the same degree and level of credibility and assumed validity as findings from controlled research. Asserting that "clinical judgment" is equivalent to controlled research findings when we are treating a "unique person" is a good sound bite, but it is also a recipe for trouble. The simple fact is that clinical judgment is not uniform across clinicians and is fraught with errors, most of which are made because we are simply human (Garb, 1998). Our perceptions are colored by the most recent experiences that we have had, we tend to remember our successes better than our failures, we tend to attribute some of our faults to others, and we misperceive the causes of proximal events. All of these errors have been built into humans by Mother Nature to help us maintain a stable picture of our world. But they make our judgments fallible and, unfortunately, we believe even in our incorrect judgments. Ascertaining truth independently of the influence of these human errors of judgment and belief is precisely why the scientific method was developed in the first place.
Distinguish between the contexts of discovery and verification
The context of discovery is where hypotheses and hunches are generated (intuition, case observation); the context of verification is where they are tested against controlled evidence.

Contexts of discovery can't be used as contexts of verification: the fact that an idea seems true is not evidence that it is true.
Know the various types of evidence discussed in class and be able to discuss their limitations where possible
1. expectancy
2. demand characteristics- participants want to be good; you can cause them to behave in a way that is consistent with your expectations.
3. confounds- an alternative explanation for an observation that can't be ruled out
4. fraud- it might simply not be true
5. cognitive dissonance- we have motivation not to see ourselves as inconsistent with our beliefs
6.selective perception/confirmatory bias/self deception - we look for evidence consistent with our expectations
7. natural course/spontaneous remission/maturation
8. regression to the mean- a group of extreme people will become less extreme at another point in time; occurs because of measurement error and instability.
9. misdiagnosis- spontaneous remission for cancer can happen in people who didn't have it to begin with.
10. placebo-expectancy is powerful, so giving them a sense of hope and control can lead to meaningful change
11. hedged bets- you can't rule out other interventions (i.e., a person who drinks St. John's wort tea and takes antidepressants could attribute their improvement entirely to the tea, if that's what they want to believe).
define a confound
a) A confound is a factor that has not been controlled for in a study, which offers a competing explanation for the findings that therefore cannot be ruled out.
Be able to list, define, recognize, and discuss the major potential confounds that prevent us from drawing clear conclusions from testimonials, anecdotal evidence, and uncontrolled case studies. What are some examples?
D. Examples include:
1. Fraud
2. Demand
3. Cognitive dissonance
4. Self deception, selective perception, confirmation bias
5. Spontaneous remission / Natural course of the problem
6. Regression to the mean
7. Misdiagnosis
8. Placebo effect
9. Hedged Bets
describe “post hoc ergo propter hoc”
The fallacy of concluding that a given factor (A) caused some change in another factor (B) simply because A came before B ("after this, therefore because of this").
Know the third variable problem in interpreting correlational evidence
-The groups differ in more ways than the single variable of interest.
-For example, the coronary disease groups are self-selecting, so the change may be correlated with motivation or some other third variable rather than the treatment.
What are the two issues in interpreting correlational evidence?
~The 3rd variable problem
~problem of direction
What is the problem of direction?
-Poverty is significantly correlated with schizophrenia, but we don't know in which direction this operates.
-The problem of direction can lead to a spurious interpretation: the correlation by itself is not causally meaningful.
Be able to explain the concept of a control group as providing a hypothetical counterfactual
E. What is a hypothetical counterfactual meant to permit you to do?
1. It provides a means of saying what would have happened if we hadn’t done what we did – for example, if we hadn’t treated this group, how would they have done?
F. What characteristics must a control group have to provide one?
1. It must be identical in all important ways to the experimental group.
What is a hypothetical counterfactual meant to permit you to do?
It provides a means of saying what would have happened if we hadn’t done what we did – for example, if we hadn’t treated this group, how would they have done?
What characteristics must a control group have to provide a hypothetical counterfactual?
It must be identical in all important ways to the experimental group.
⬢ Know the two ways of trying to make your control and experimental groups equivalent:
G. Matching vs. random assignment
Random assignment is superior to matching when you have a large sample. Why?
Random assigment is not random selection/sampling.

Random assignment maximizes the chances that the groups will be similar in all meaningful ways, but it works best when you have large numbers of subjects. Think of it this way: the more chances you have, the more likely the outcome will follow the odds. The odds work best over many trials.

This all means that if you have a small number of subjects, you are probably better off just matching your groups for 1 or a few characteristics instead of randomly assigning them. If you randomly assign them, chances are you will not get equivalent groups.
2. Know why random assignment does not guarantee that the groups are truly equivalent.
Random assignment is not random selection/sampling

~random assignment maximizes the chances that the groups will be similar in all meaningful ways. But that works best when you have large numbers of subjects. Think of it this way: Imagine that you have 1000 checkers of which 500 are red and 500 are black. Now put them all in the same bag and begin randomly drawing out checkers - one at a time - and place them in two piles. The first checker you draw goes in one pile and the second goes in the other and so on until you have two piles of 500 checkers each. The odds are very high that your two piles of checkers will each hold the same or nearly the same number of red and black checkers. That is, you would end up with approximately 250 red and 250 black checkers in each pile. The reason is that the more chances you have, the more likely the outcome will follow the odds. Thus, imagine instead that you have 8 checkers - 4 red and 4 black. Although the odds are still the same, because you have so few chances, it is much more likely that you will have two piles of checkers that are not the same in terms of red and black. However, if you repeated the process 100 times and took an average of the number of red and black checkers in each pile, you would come up with something that is very close to 2 red and 2 black in each pile. Again, this is because the odds work best over many trials.

This all means that if you have a small number of subjects, you are probably better off just matching your groups for 1 or a few characteristics instead of randomly assigning them. If you randomly assign them, chances are you will not get equivalent groups.
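The checkers analogy above can be simulated directly. This is an illustrative sketch (the function name and trial counts are my own, not from the course materials): it deals a shuffled bag into two piles and counts how balanced the piles are, for a large bag (1000 checkers) versus a small one (8).

```python
import random

def reds_in_first_pile(n_red, n_black, seed):
    """Shuffle a bag of checkers, deal them alternately into two piles,
    and return how many red checkers landed in pile A."""
    rng = random.Random(seed)
    bag = ["red"] * n_red + ["black"] * n_black
    rng.shuffle(bag)
    pile_a = bag[0::2]  # every other checker goes to pile A
    return pile_a.count("red")

# Large bag: 500 red + 500 black. Pile A should hold close to 250 reds.
large = [reds_in_first_pile(500, 500, seed) for seed in range(200)]
print("large bags: average reds in pile A =", sum(large) / len(large))

# Small bag: 4 red + 4 black. Individual splits are often unbalanced,
# even though the long-run average still converges toward 2 per pile.
small = [reds_in_first_pile(4, 4, seed) for seed in range(200)]
print("small bags: average reds in pile A =", sum(small) / len(small))
print("small bags perfectly balanced:", small.count(2), "of 200")
```

Running this shows the point in the notes: with 1000 checkers nearly every single split is close to 250/250, while with 8 checkers a large share of individual splits are lopsided, and only the average over many repetitions comes out near 2 and 2.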
In broad conceptual terms, what two things does a measure of effect size do that allow you to draw general conclusions about the effectiveness of psychotherapy?
1.it puts everything into equivalent units of measurement – that is, differences between the average person in the treated group and the average person in the control group is expressed in standard deviation units.
2. it preserves all the information regarding the size of the difference – in this regard, it is useful to contrast this approach with the “Wins-Losses” approach which discards information about the size of a win or loss so that all wins are the same and all losses are the same.
definition of effect size
(As represented by Cohen's d.)
It expresses the difference between treatment and control groups in standard deviation units.
What are some problems with the box score (wins/losses) approach?
1. you've thrown away information about the magnitude of the difference
2. Statistical significance of a difference is more accurately considered a function of both the magnitude of the difference and the sample size (a bigger sample increases power).
Why is a comparison based on effect sizes likely to be better than a comparison based on the record of wins versus losses, where a win is defined as a study in which the treated group did better than the control group and this difference was statistically significant?
- Specifically, you should know that statistical significance is a function of sample size as well as the size of the difference between the treatment and control groups.
o Therefore, you could easily have 10 studies that all found the same size difference between the treatment group and the control group but that difference was a “win” in only 2 cases (both very large sample size studies) and a “loss” in the other 8 because they had very small sample size. That is, the same size difference can sometimes be statistically significant and sometimes not depending on sample size.
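The scenario above can be sketched numerically. For two equal groups of n subjects each, the two-sample t statistic for a standardized difference d is approximately t = d * sqrt(n/2); holding d fixed and varying n is enough to flip a "loss" into a "win." (The 1.96 cutoff and the function name are illustrative assumptions, not from the lecture.)

```python
import math

def t_from_effect_size(d, n_per_group):
    """Approximate two-sample t statistic for a standardized difference d
    with two equal groups of n_per_group subjects: t = d * sqrt(n/2)."""
    return d * math.sqrt(n_per_group / 2)

CRITICAL_T = 1.96  # approximate two-tailed .05 cutoff for large samples

d = 0.3  # the *same* true difference in every study
for n in (10, 50, 200):
    t = t_from_effect_size(d, n)
    verdict = "win (significant)" if t > CRITICAL_T else "loss (not significant)"
    print(f"n = {n:>3} per group: t = {t:.2f} -> {verdict}")
```

With d fixed at 0.3, the small-n studies come out as "losses" and only the large-n study crosses the significance threshold, so a box-score tally would misleadingly count identical effects as disagreement.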
In conceptual and percentile terms, what does an effect size of 1.0 mean?
It means that the average person in the treated group did better than 84% of the people in the control group.
- NOTE: It does NOT mean that the average treated person did 84% better than controls. Nor does it mean that 84% of the treated group did better than the control group.
know what effect sizes of 2.0 (98th percentile) and .5 (69th percentile) mean.
~An effect size of 2 means that the average person in the treated group did better than 98% of the people in the control group.
~An effect size of 0.5 means that the average person in the treated group did better than 69% of the people in the control group.
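Assuming normally distributed outcomes with equal spread in both groups, these percentile statements follow directly from the standard normal CDF. This short sketch (the helper name is my own) reproduces the 69/84/98 figures:

```python
from statistics import NormalDist

def percentile_of_average_treated(d):
    """Percent of the control group that the *average* treated person
    exceeds, given an effect size of d standard deviation units."""
    return NormalDist().cdf(d) * 100

for d in (0.5, 1.0, 2.0):
    pct = percentile_of_average_treated(d)
    print(f"d = {d}: average treated person beats {pct:.0f}% of controls")
```

Note that this is exactly the interpretation warned about in the notes: it is the percentile standing of the average treated person, not "84% improvement" and not "84% of treated people did better."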
Recall that the study by Lipsey and Wilson described in class (the meta-analysis of meta-analysis studies) produced a grand average effect size of .5.
As discussed in class, Lipsey and Wilson thought this value was too large and thus likely inflated by two factors: file drawer bias and the placebo effect.
What is the file drawer bias?
studies finding a significant effect for treatment may be more likely to get published. Therefore, if we just use published studies, the effect size may be inflated. However, Lipsey and Wilson found that the average effect size was still about .4 even for unpublished studies.
What is the placebo effect?
- Placebo effect: You should be able to explain why studies comparing treatment to a no treatment control group should generally produce larger effects than studies comparing treatment to placebo. But when Lipsey and Wilson compared studies using a no-treatment control versus those using a placebo control, they found that even the effect in placebo controlled studies was nearly .5 (i.e., .48). The average effect in no-treatment controlled studies was about .68. You should further understand that this implies that the placebo effect is about .2 (that is, the difference between these two effect sizes).
You should be able to explain why studies comparing treatment to a no treatment control group should generally produce larger effects than studies comparing treatment to placebo
The difference between treatment and a no-treatment (wait-list) control reflects both the active treatment effect and the placebo effect, whereas the difference between treatment and placebo reflects only the active treatment effect. The no-treatment comparison should therefore produce larger effects.
Know what the Dodo bird verdict is and be able to critique it. That is, be able to describe in general terms the logical and empirical grounds on which this verdict can be questioned.
Logical grounds: Even if all treatments that have been tested were equivalent (they aren’t), that would not logically support the conclusion that an untested treatment must therefore also be as effective as those treatments.
Empirical grounds: As discussed in class, evidence suggests that treatments are not equally effective – cognitive, behavioral, and cognitive-behavioral treatments have been shown to be superior to humanistic and psychodynamic alternatives.
Specifically, you should be able to describe what was wrong with the Smith, Glass, and Miller (1980) meta-analysis discussed in class
(1) Smith, Glass, & Miller (1980)
(a) Found equivalence only because they grouped the different types oddly. Here are the raw numbers:
(i) Cognitive therapy = 1.31
(ii) Cognitive-Behavioral therapy (CBT) = 1.24
(iii) Behavior therapy = .91
(iv) Psychodynamic = .78
(v) Humanistic = .63
(vi) General counseling = .42
(b) They found a significant difference between these subclasses in general and for specific types of problems (e.g., depression).
(i) Nevertheless, they sorted them into two odd groups and drew conclusions based on those groups:
(a) "Behavioral" = .98
(i) Cognitive-Behavioral = 1.24
(ii) Behavioral = .91
(b) "Verbal" = .85
(i) Cognitive therapy = 1.31
(ii) Psychodynamic = .78
(iii) Humanistic = .63
(ii) But it is unclear why these authors did not include General Counseling in their "Verbal" category.
(a) If they had, they would have gotten something like the following (assuming equal weighting of effect sizes for each type of therapy):
(i) "Behavioral" = .98
(ii) "Verbal" = .78
(b) But even more puzzling is their decision to include Cognitive Therapy in the "Verbal" category rather than in what should be the "Cognitive-Behavioral" category.
(c) If they had, they would have gotten something like:
(i) "Cognitive-Behavioral" = 1.15
(ii) "Psychodynamic/Humanistic" = .71 (or .61 if general counseling were included)
(iii) Thus, the effect size for CBT appears to be .44-.54 SDs larger than for non-CBT approaches.
(a) Consistent with that conclusion, in an earlier study Smith & Glass (1977) found

that the effect sizes for CBT were generally from .39 - .68 SDs larger than those for psychodynamic and humanistic therapies.
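As a check on the arithmetic above, the hypothetical equal-weight regroupings can be recomputed from the raw effect sizes. (The published .98 "Behavioral" and .85 "Verbal" figures came from Smith et al.'s own weighting rather than simple means, so they are not reproduced here; the dictionary labels are my own.)

```python
# Raw effect sizes from Smith, Glass, & Miller (1980), as listed above.
raw = {
    "cognitive": 1.31,
    "cognitive-behavioral": 1.24,
    "behavioral": 0.91,
    "psychodynamic": 0.78,
    "humanistic": 0.63,
    "general counseling": 0.42,
}

def equal_weight_mean(*names):
    """Unweighted average of the effect sizes for the named therapy types."""
    return sum(raw[n] for n in names) / len(names)

# "Verbal" with General Counseling included (approx .78):
verbal_all = equal_weight_mean("cognitive", "psychodynamic",
                               "humanistic", "general counseling")
# Cognitive Therapy regrouped with the cognitive-behavioral approaches (approx 1.15):
cbt = equal_weight_mean("cognitive", "cognitive-behavioral", "behavioral")
# Psychodynamic/Humanistic, with and without general counseling (approx .71 / .61):
psy_hum = equal_weight_mean("psychodynamic", "humanistic")
psy_hum_gc = equal_weight_mean("psychodynamic", "humanistic", "general counseling")

print("'Verbal' incl. counseling:", round(verbal_all, 2))
print("'Cognitive-Behavioral':", round(cbt, 2))
print("'Psychodynamic/Humanistic':", round(psy_hum, 2),
      "or", round(psy_hum_gc, 2), "with counseling")
print("CBT advantage: about", round(cbt - psy_hum, 2),
      "to", round(cbt - psy_hum_gc, 2), "SDs")
```

The recomputed means land on the values quoted in the outline, including the roughly .44-.54 SD advantage for the cognitive-behavioral grouping.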
In Chapter 6, the authors discuss common factors in psychotherapy (see pp. 146-148). These factors occur in nearly all psychotherapy approaches and thus may account at least partly for the fact that psychotherapy is effective on average. You should be able to describe them.
First, the psychotherapist’s office is a setting designated by our culture as a place to receive help for emotional distress. It is a contemporary sanctuary of a sort that is safe, private, and confidential. The office has the accoutrements of expertise (e.g., diplomas, licenses, certificates, thick books) and exudes what is referred to as the “edifice complex” (Torrey, 1972). According to Frank, the office strengthens the client’s expectations of help.
The second shared feature of psychotherapies is the therapeutic relationship, within which the client and therapist possess well-defined roles.
According to Frank, the client is demoralized and highly motivated to be helped, regardless of the specific problem, symptoms, or diagnosis with which he or she might present. The client expects the therapist to be an empathic expert, and the therapist responds in kind with support, warmth, skillfulness, and hope.
The setting and the relationship provide the therapist with significant leverage to bring about psychological change. This basis of therapeutic influence is enhanced by the third component common to all psychotherapies: a conceptual scheme or theoretical system to explain the client’s suffering. Although psychotherapies are often dramatically different in terms of their underlying concepts and principles, they all address the causes of abnormal symptoms, the pathways and goals of change, and procedures and techniques for the realization of symptom reduction and positive behavior change.
The fourth therapeutic feature derives from the therapist’s conceptual scheme and constitutes the treatment procedure, in which both the therapist and client actively participate. These procedures have sometimes been known as therapeutic rituals. The effectiveness of a therapeutic ritual is not because of a specific theory but is based upon how closely client expectations match up with the theory.
What are ESTs for PTSD and what are the components of the EST?
Several cognitive-behavioral therapy packages are the most effective ESTs for PTSD
a) Elements:
(1) Exposure
(2) Cognitive Therapy
b) Such treatment programs are consistently found to be more effective than no treatment and placebo control groups.
What is EMDR?
Eye movement desensitization and reprocessing.
While a client maintains an image, the therapist induces side-to-side movements that the client is supposed to visually track. The nature of trauma pathology and its effective treatment is predicated on a model called accelerated information processing, which is akin to a psychological immune system.
Be able to describe and critique the various claims Francine Shapiro and others have made about EMDR and the various maneuvers that have been used to “sell” this pseudoscience to professionals and the public and to avoid critical scrutiny.
Shapiro: trauma info hasn't been processed properly; you need to process this info with eye movement.
Actually: this indicates a lack of a coherent underlying theory
Shapiro: EMDR has been well researched.
Actually: Research indicates that eye movement patients do just as well with the version that has NO eye movement. EMDR might just be a different version of exposure therapy.
Further warning signs: it is being promoted for all kinds of problems; extreme claims of efficacy are made (e.g., a 97% cure rate); shifting standards; slippery use of the concept of treatment fidelity; suppression of debate; inappropriate citations.
Be able to describe what the data actually say regarding the efficacy of EMDR for treating PTSD and related problems.
Especially know how EMDR stacks up to already established cognitive behavioral interventions for PTSD.
Know what Van Etten and Taylor’s (1998) meta-analysis (discussed in class) shows regarding the efficacy of EMDR and CBT based on client self reports versus objective observer ratings:
For example, behaviour therapy was more effective than EMDR and SSRIs on observer-rated total PTSD symptoms at posttreatment. However, by follow-up, the differences between behaviour therapy and EMDR were nonsignificant.
4. Do the eye movements matter? How do we know?
a) So what is the most likely source of any efficacy that EMDR does have?
No, the eye movements don't matter. Eleven well-designed studies show that patients do just as well with a version of the treatment that has no eye movements.
The most likely source of any improvement is that EMDR may simply be a different version of exposure therapy.
5. What hallmarks of pseudoscience are illustrated by EMDR
~It's being promoted for all kinds of things
~Extreme claims of efficacy
~Shifting standards
~Slippery use of the concept of treatment fidelity (if you don't find good results, you haven't been trained properly)
~Suppression of debate
~Inappropriate citations
What is CISD? What does it involve?
~CISD (Critical Incident Stress Debriefing) is conducted within 24-72 hours of the event. It is done in a group context, lasting 2-4 hours.

Premise: disclosure of reactions in a group setting prevents psych problems through the use of 7 steps:
1. intro
2. statement of facts
3. disclosure of thoughts
4. disclosure of emotional reactions (especially strong negative ones)
5. specification of possible symptoms
6. education regarding the consequences of trauma exposure (meant to normalize responses)
7. Planned re-entry to the social context
a) Know the difference between Mitchell’s claims regarding CISD's efficacy and what the data from randomized clinical trials say.
* The methodologically sound studies overwhelmingly indicate that CISD is not effective relative to control conditions.

~Rose showed no effect
~Bisson showed no significant difference immediately afterward, but at 13 months people who had received CISD had HIGHER scores on measures of PTSD.
What are the empirically supported treatments for depression?
Cognitive behavioral therapy and interpersonal therapy.
Know the major classes of antidepressant medications
~MAOIs
~TCAs: affect NE and serotonin (e.g., imipramine)
~SSRIs: fluoxetine, paroxetine
~Others: mixed reuptake inhibitors such as Wellbutrin (bupropion)
Know what the research shows regarding antidepressant efficacy compared to that of pill placebo
About 50% of depressed patients respond to antidepressants, while pill placebo response rates are between 30-40%
Be able to summarize the arguments made by Wallach and Kirsch in their chapter as to why they believe that most of the effect of antidepressants are attributable to expectancy
~Wallach and Kirsch believe that most of the antidepressant effect is due to expectancy.

~They ask- Is most antidepressant effect due to placebo?

~However, the placebo's mode of action vs. the drug's mode of action must be considered.

~The placebo effect is much larger for mildly to moderately depressed patients than for the severely depressed.

~Placebo response occurs sooner than the response to actual drug, which suggests that the placebo and the drug have different modes of action

~The placebo response is less durable (it is not maintained as well over time).

~Relapse rates on placebo are 35-50%, while on the drug they are only 12-25%.
Be able to describe and critique the active placebo notion
Wallach & Kirsch's active placebo notion is that you need an active placebo to determine whether most of the antidepressant effect is due to placebo.

active placebo = one that creates side effects but is not an antidepressant.

They suggest:
~amylobarbitone (a barbiturate sedative)
~lithium: a mood stabilizer that is primarily antimanic but can have some antidepressant properties
~liothyronine: a synthetic thyroid hormone that can help some depressed patients, because a significant subset of depressed people have thyroid problems
~adinazolam: an atypical benzodiazepine that can have antidepressant effects
C. Acupuncture as a treatment for depression
1. Be able to describe and critique the acupuncture study you read and we discussed in class
Allen and colleagues conclude that acupuncture is an effective and specific treatment for depression; they claim its effects are specific relative to placebo.

HOWEVER, Allen used change scores because the groups were not equally depressed to begin with. Because of this, the control group (wait list) cannot serve as an adequate counterfactual.

~The expected ordering on the outcome graph was wait list (worst), followed by placebo and then active treatment; the graph actually showed placebo, then wait list, then active treatment.
a) Know why the study is invalid as a test of acupuncture’s efficacy compared to placebo
Allen's control group did not contain equally depressed individuals and thus did not provide an adequate counterfactual; Allen used "change scores."
1. Is there a plausible mechanism by which Hypericum could impact depression?
Yes. Hypericum may induce the expression of 5-HT receptors centrally and peripherally. Immunocompetent cells have 5-HT receptors, and serotonin mediates immune responses through them. Cytokines from the immune response can induce higher availability of serotonin in the brain.
2. What does the empirical evidence say about Hypericum’s efficacy?
a) For mild-moderate depressive symptoms?
b) For moderate-severe depression?
a) It may work for mildly to moderately depressed individuals, but a placebo effect is likely (because this range of depression is the most responsive to placebo).
~It doesn't work for severe depression.
