
Clinical Psychological Science Exam 1

Terms

What are the primary sources of the scientist-practitioner gap?
1. unproven psychotherapeutic methods are proliferating
2. media coverage of fringe treatments is not subjected to critical scrutiny
3. Some clinicians offer their opinion as fact
4. some TV and radio psychology reaches a vulnerable audience
5. clinicians continue to use "intuition" instead of empirically-supported treatments.
6. thriving self-help industry
Does the "burden of proof" rest upon the claimant or the critic?
Claimant, though the authors say that the reverse is happening
What are the 3 major ways that unsubstantiated mental health claims can be problematic?
1. Techniques might be harmful
- suggestive techniques used to uncover repressed memories may lead to further psychopathology
- "Doing something is not a license to do anything," so doing something is NOT always better than doing nothing
2. opportunity cost: therapies that don't do anything deprive people of their time and money
3. unsubstantiated claims undermine the public's faith in the profession.
What are the ten differences between science and pseudoscience?
1. use of ad hoc hypotheses to serve as protection against falsification
2. absence of self-correction
3. evasion of peer review
4. emphasis on confirmation rather than refutation
5. reversed burden of proof
6. absence of connectivity
7. overreliance on testimonial and anecdotal evidence
8. use of obscurantist language
9. absence of boundary conditions
10. mantra of holism (claims cannot be judged in isolation)
What is the problem with the "mantra of holism"?
Mantra of holism: claims can't be judged in isolation
1. assumes that clinicians can integrate massive amounts of complex info in their heads
2. avoids subjecting claims to possible falsification
How do you distinguish between science and pseudoscience?
The distinction does NOT have to do with the content studied or questions asked — rather it has to do with the approach taken to
studying that content.
What are the hallmarks of science? (8)
Open-mindedness, Use of systematic empiricism, Falsifiability, Logic, Comprehensiveness, Sufficiency, Honesty, Replicability
systematic empiricism
Involves collecting data in a fashion designed to eliminate or severely minimize the influence of error or bias.
Falsifiability
Examination of solvable problems (Empirical questions)
Logic
Proposed conclusions must follow logically from one’s premises (assuming those premises are true)
Principle of parsimony (associated with sufficiency) indicates....
Explanations for things should be sought first in terms of known factors before hypothesizing new factors to explain things.
definition of the principle of parsimony
When several theories account equally well for the existing data, the simplest of those theories should be preferred.
Science is a set of principles for seeking answers to solvable questions, not....
a body of answers to solvable questions.
Limited resources dictate the need to limit which topics get serious attention to those that have passed an initial threshold. Merely suggesting a possibility is insufficient...
the ticket of admission is to provide at least preliminary empirical evidence that the possibility should be taken seriously.
What are the hallmarks of pseudoscience? (14)
1. An overuse of ad hoc hypotheses designed to immunize a claim from falsification
2. Absence of self-correction (& subsequent stagnation)
3. Evasion of critical scrutiny (e.g., peer review); lack of public verifiability, aka replication
4. Emphasis on confirmation rather than refutation
5. Reversed burden of proof
6. Absence of connectivity
7. Overreliance on testimonial or anecdotal evidence
8. Use of obscurantist language
9. Absence of boundary conditions
10. The mantra of holism
11. Double standards
12. Closed-mindedness (aka ideological thinking)
13. Conspiracy mindedness
14. Promising the impossible
Failure to use systematic empirical methods is a problem with pseudoscience. What does this mean?
Lack of adequate controls to rule out alternative explanations.
Pseudosci is characterized by the Absence of connectivity. What does this mean?
Failure to consider how the theory fits with what is already known
Pseudosci involves the absence of boundary conditions. Why is this a problem?
Most well-supported theories specify conditions under which they should and should not apply
What features are characteristic of pseudosci's "mantra of holism"?
-Claimants often argue that their ideas cannot be tested in isolation
-Insistence that complex individual interactions of a vast array of factors must be considered to truly understand a phenomenon
Pseudoscience's closed-mindedness can be characterized by ideological thinking. What is this?
The rigid defense of a position no matter what evidence is offered to the contrary
What is pseudosci's Conspiracy mindedness?
The fact that the scientific community rejects a theory or claimed finding is attributed to conspiracies to suppress such theories and evidence.
What is Cronbach's definition of a psychological test?
"A test is a systematic procedure for comparing the behavior of two or more persons'' (p. 21). Note that the "behavior'' may be oral ( e.g., an individual telling what he or she sees when looking at a Rorschach card) or written (e.g., marking down "true'' or "false'' responses on the MMPI).
For many tests, the system used to measure and compare behavior is standardized. What does this mean?
A standardized test presents a standardized set of questions or other stimuli (such as inkblots) under generally standardized conditions; responses from the individual are collected in a standardized format and scored and interpreted according to certain standardized norms or criteria. The basic idea is that all individuals who take the test take the same test under the same conditions.
When selecting a standardized psychological assessment instrument, what aspects of validity do you consider? (4)
Predictive validity,Concurrent validity, Content validity, Construct validity
Predictive validity
indicates the degree to which test results are accurate in forecasting some future outcome.
Concurrent validity
the degree to which test results provide a basis for accurately assessing some other current performance or condition.
Content validity
indicates the degree to which a test, as a subset of a wider category of performance, adequately or accurately represents the wider category of which it is a subset.
Construct validity
indicates the degree to which a test accurately indicates the presence of a presumed characteristic that is described (or hypothesized) by some theoretical or conceptual framework.
When selecting psychological assessment instruments, what aspects of reliability do you consider?
reliability coefficients, stability, equivalence, split-half reliability, internal consistency, test-retest reliability
definition of reliability (2)
- refers to the degree to which a test produces results that are free of measuring errors.
-is another way of describing the consistency of the results of a test.
coefficient of internal consistency
a coefficient derived from the Spearman-Brown formula that estimates split-half reliability
split-half reliability
test items will be divided independently into two halves as a way to estimate the reliability of the test
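The two cards above can be illustrated together: split the items into two halves, correlate examinees' half-test scores, then step the half-test correlation up with the Spearman-Brown formula, r_full = 2r/(1 + r), to estimate full-test reliability. A minimal sketch with hypothetical item data:

```python
def pearson_r(x, y):
    """Pearson correlation between two lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman_brown(r_half):
    """Step a half-test correlation up to an estimate of full-test reliability."""
    return 2 * r_half / (1 + r_half)

# Hypothetical data: each row is one examinee's 8 item scores (0/1).
items = [
    [1, 1, 1, 0, 1, 1, 0, 1],
    [0, 1, 0, 0, 1, 0, 0, 0],
    [1, 1, 1, 1, 1, 1, 1, 1],
    [0, 0, 1, 0, 0, 1, 0, 0],
    [1, 0, 1, 1, 1, 1, 1, 0],
    [0, 0, 0, 0, 1, 0, 0, 1],
]

# Odd-even split: total each half per examinee, then correlate the halves.
odd = [sum(row[0::2]) for row in items]
even = [sum(row[1::2]) for row in items]

r_half = pearson_r(odd, even)
r_full = spearman_brown(r_half)
print(f"half-test r = {r_half:.3f}, Spearman-Brown full-test estimate = {r_full:.3f}")
```

The stepped-up estimate is always at least as large as the half-test correlation, because a longer test (with comparable items) is more reliable than a shorter one.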
equivalence
reliability between different forms of the same test
stability
Reliability between subsequent administrations (perhaps under different conditions) of the same test
test-retest reliability (or the coefficient of stability)
The coefficient may indicate the reliability between subsequent administrations of the same test
reliability coefficients
Statistical techniques have been developed that indicate the degree to which a test is reliable. The coefficient will be a number that falls in the range of zero (for no reliability) to one (indicating perfect reliability).
What does it mean to say a test is a systematic procedure?
1. It must be standardized
2. A test cannot yield consistent results unless it is given in a consistent manner across time, by different administrators, etc.
Reliability of a test
Are test scores free from measurement error?
What are the types of reliability? (4)
1. Test-retest reliability
2. Equivalent forms reliability
3. Internal consistency reliability
4. Inter-rater agreement
Test-retest reliability
Consistency across time
equivalent forms reliability
Consistency across different item sets meant to tap the same construct
examples of equivalent forms reliability
(1) Example 1: Split-half reliability
(2) Example 2: different versions of the test - MMPI compared to MMPI-2
Internal consistency reliability
Do the items of a test tend to correlate with one another?
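One common index of this (the card does not name a specific coefficient) is Cronbach's alpha, which increases as the items covary relative to the total-score variance. A sketch with hypothetical 1-5 rating data:

```python
def variance(xs):
    """Population variance of a list of scores."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(rows):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    k = len(rows[0])
    item_vars = [variance([row[i] for row in rows]) for i in range(k)]
    total_var = variance([sum(row) for row in rows])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical data: 5 examinees x 4 items on a 1-5 scale.
scores = [
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [5, 4, 5, 4],
    [1, 2, 1, 2],
    [3, 3, 4, 3],
]
print(f"Cronbach's alpha = {cronbach_alpha(scores):.3f}")
```

When items all track the same construct, the total-score variance swamps the summed item variances and alpha approaches 1; uncorrelated items push it toward 0.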
Inter-rater agreement
Do different raters or interviewers tend to agree?
Validity of a test
Definition:
- a test measures what you're trying to measure
- you have to know the purpose of a test to determine validity
validity
Does the test measure what it is supposed to measure?
Is reliability a sufficient condition for validity?
Reliability is a necessary but NOT sufficient condition for validity
If a test is reliable, what can you say about its validity?
It may be valid.
If a test is not reliable, what can you say about its validity?
It cannot be valid
If a test is valid, what can you say about its reliability?
If a test has adequate validity, it must possess adequate reliability.
That is, it must be relatively free of error if it measures the construct of interest
Types of validity
1. Construct validity
- All validity is construct validity
2. Sub-types of validity
a) Face validity
b) Content validity
c) Concurrent validity
d) Predictive validity
face validity
Does the content of the test appear to be relevant
to the construct of interest?
content validity
Does the Beck Depression Inventory cover all of the symptoms of depression?
What are norms and why do we need them?
Test scores are meaningless without a normative basis for comparison.
APA's policy on the use of assessment
Psychologists who perform interventions or administer, score,
interpret, or use assessment techniques are familiar with the
reliability, validation, and related standardization or outcome studies of, and proper applications and uses of, the techniques they use.
APA's stance on competence and the proper use of assessment
Psychologists who develop, administer, score, interpret, or use psychological assessment techniques, interviews, tests, or
instruments do so in a manner and for purposes that are appropriate in light of the research on or evidence of the usefulness and proper application of the techniques.
Three main foci of Garb et al
Adequacy of CS norms, Validity of CS scores,
Roots of the Controversy
What are the problems with Exner's norms?
- convenience sampling instead of random sampling
-Skewed toward super-normality leading to overpathologizing of typical responses
What are the 5 arguments that Rorschach proponents use?
Argument 1: The CS sample is “above average” in psychological functioning — thus general population samples should look more disturbed.
Argument 2: So-called "Non-Patient" samples included psychiatric patients; the claim is that 5 of the samples that differ from the CS norms are really full of people with problems
Argument 3: Normative discrepancies are due to culture or ethnicity
Argument 4: Psychopathology is becoming more prevalent
Argument 5: Discrepancies are due to improper test technique
What reasons do Garb et al. give for refuting the following argument? Argument 1: The CS sample is "above average" in psychological functioning, thus general population samples should look more disturbed.
- it's a post hoc explanation
- no claim of the CS sample's super-normality was made until AFTER criticisms emerged
- Exner's supposedly normal sample included people who had sought psychological help
Which of the CS indices on the Rorschach are valid?
the thought disorder index and the schizophrenia index
Which of the CS indices on the Rorschach is clearly invalid?
the depression index; it fails to correlate with any other measures of depression
Informal validation
Klopfer held that informal observations by individual interpreters were sufficient to demonstrate the validity of the Rorschach
Intuitive Information Integration
Klopfer held that individual Rorschach scores do not usually bear a straightforward relationship to personality characteristics, but a skilled interpreter can intuitively integrate the scores into a complete picture
The question of Incremental Validity
Does the Rorschach add anything that can’t be obtained easier and more cheaply?
the retrospective probability of a test tells us almost nothing about....
the predictive probability of that test in a given population of interest.
To determine the predictive probability of a test...
We must also know the base rate prevalence of the characteristic or outcome that we wish to predict, in the population in which we wish to predict it.
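The card above can be made concrete with Bayes' theorem: given sensitivity, specificity, and the base rate, the predictive powers follow directly. The numbers below are illustrative, not from the deck:

```python
def predictive_power(sensitivity, specificity, base_rate):
    """Return (positive predictive power, negative predictive power)."""
    tp = sensitivity * base_rate              # true positives
    fp = (1 - specificity) * (1 - base_rate)  # false positives
    fn = (1 - sensitivity) * base_rate        # false negatives
    tn = specificity * (1 - base_rate)        # true negatives
    ppp = tp / (tp + fp)  # P(condition present | test positive)
    npp = tn / (tn + fn)  # P(condition absent | test negative)
    return ppp, npp

# A test with 90% sensitivity and 90% specificity sounds impressive, but at
# a hypothetical 2% base rate most positive results are false positives:
ppp, npp = predictive_power(0.90, 0.90, 0.02)
print(f"PPP = {ppp:.3f}, NPP = {npp:.3f}")
```

Here the retrospective accuracies (sensitivity, specificity) never change, yet the positive predictive power collapses to about 16% simply because the condition is rare.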
retrospective probability
~we already know the outcome we are ultimately interested in predicting — that a person is or is not a child molester.
~That is, once we know someone is a sexual abuser of children, we can now say with a high degree of confidence that he (or more rarely, she) is likely to have been abused himself as a child.
predictive probability
given that we only know a person's history of abuse, what is the likelihood that he will be an abuser?
Detecting or predicting low base rate phenomena can generally only be done if you are willing to tolerate high rates of....
false positives.
Screening tests for low base rates phenomena are generally set to minimize ___ ____, and to assume that...
false negatives, and to assume that there is some more precise additional test to give to those who are identified by the screening test.
The sensitivity of a test
P(positive test given autism present)
The specificity of a test
P(negative test given autism absent)
base rate
rate of a condition in the general population
Positive predictive power
P(suicide attempt given positive test)
Negative predictive power
P(no suicide attempt given negative test)
sensitivity
P(test is positive | condition is present)
specificity
P(test is negative | condition is absent)
if a test lacks positive predictive power, a positive result is likely a...
false positive
if a test lacks negative predictive power, a negative result is likely a...
false negative
Retrospective accuracy (2 kinds)
sensitivity and specificity
Predictive pwr (2 kinds)
pos predict pwr, neg predict pwr.
definition of sensitivity
The probability that the index in question will be positive given that the person really has the condition or characteristic in question.
definition of specificity
The probability that the index will be negative given that the person really does NOT have the condition or characteristic in question.
Are the two types of retrospective accuracy dependent upon the base rate of the population?
No. In other words, it doesn't matter what percentage of the population has the problem; the sensitivity and specificity of the index will always be the same.
definition of pos predic pwr
The probability that a person for whom the index is positive really does have the condition or characteristic in question.
definition of neg predic pwr
The probability that a person for whom the index is negative really does NOT have the condition or characteristic in question.
pos predic pwr
P(condition is present | test is positive)
neg predic pwr
P(condition is absent | test is negative)
Are the 2 forms of predict accuracy dependent upon base rates?
yes - In other words, you must know what percentage of the population has the condition or characteristic in question before you can know the predictive value of the index in question.
Detecting or predicting rare conditions or characteristics can generally only be done if you are willing to tolerate high rates of...
false positives.
False positive
When an index is positive but the person really does NOT have the condition or characteristic in question
False negative
When an index is negative but the person really does have the condition or characteristic in question
Even when psychological tests do well in retrospective tests of validity, their predictive validity is ______ when base rates of the thing being predicted are ______.
very low, low
Even when psychological tests do well in retrospective tests of validity, their predictive validity is _____ when base rates of the thing being predicted are _____.
relatively high, high
When determining the predictive validity in the presence of a low base rate, how will the test fare?
not well at all
When will the predictive validity of a test be highest?
They will do best when the base rate of presence is the same as absence, i.e., 50% for each.
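A quick sweep makes the point: holding sensitivity and specificity fixed at an assumed 0.90, positive and negative predictive power are jointly best at a 50% base rate and badly lopsided at the extremes:

```python
def ppp_npp(sens, spec, p):
    """Positive and negative predictive power at base rate p (Bayes' theorem)."""
    ppp = sens * p / (sens * p + (1 - spec) * (1 - p))
    npp = spec * (1 - p) / (spec * (1 - p) + (1 - sens) * p)
    return ppp, npp

# Sweep the base rate with retrospective accuracy held constant at 0.90:
for p in (0.01, 0.10, 0.50, 0.90, 0.99):
    ppp, npp = ppp_npp(0.90, 0.90, p)
    print(f"base rate {p:5.0%}: PPP = {ppp:.3f}, NPP = {npp:.3f}")
```

At a 1% base rate the PPP is only about 8%; at 50% both predictive powers equal the test's 90% accuracy; at 99% the NPP collapses instead.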
7 common fallacies and pitfalls that plague psychological testing and assessment
mismatched validity, confirmation bias, confusing retrospective and prospective accuracy (switching conditional probabilities), unstandardizing standardized tests, ignoring the effects of low base rates, misinterpreting dual high base rates, and uncertain gatekeeping.
Mismatched Validity
Some tests are useful in diverse situations, but no test works well for all tasks with all people in all situations
Confirmation Bias
Often we tend to seek, recognize, and value information that is consistent with our attitudes, beliefs, and expectations. If we form an initial impression, we may favor findings that support that impression, and discount, ignore, or misconstrue data that don't fit.
Confusing Retrospective & Predictive Accuracy (Switching Conditional Probabilities)
Predictive accuracy begins with the individual's test results and asks: What is the likelihood, expressed as a conditional probability, that a person with these results has condition (or ability, aptitude, quality, etc.) X? Retrospective accuracy begins with the condition (or ability, aptitude, quality) X and asks: What is the likelihood, expressed as a conditional probability, that a person who has X will show these test results? Confusing the "directionality" of the inference (e.g., the likelihood that those who score positive on a hypothetical predictor variable will fall into a specific group versus the likelihood that those in a specific group will score positive on the predictor variable) causes numerous assessment errors.
Unstandardizing Standardized Tests
Standardized tests gain their power from their standardization. Norms, validity, reliability, specificity, sensitivity, and similar measures emerge from an actuarial base: a well-selected sample of people providing data (through answering questions, performing tasks, etc.) in response to a uniform procedure in (reasonably) uniform conditions. When we change the instructions, or the test items themselves, or the way items are administered or scored, we depart from that standardization and our attempts to draw on the actuarial base become questionable.
Ignoring the Effects of Low Base Rates
Example: 5,000 candidates are screened for crookedness with a "90% accurate" test, and only 10 of them are actually crooked. The test classifies 508 candidates as crooked (9 who actually were crooked and 499 who were honest). Every 508 times the screening method indicates crookedness, it tends to be right only 9 times, and it has falsely branded 499 honest people as crooked.
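The arithmetic on this card can be checked directly, assuming (consistent with 499 false positives among the honest candidates) that 10 of the 5,000 are actually crooked and "90% accurate" means 90% sensitivity and 90% specificity:

```python
# Reproducing the screening arithmetic from the card above.
# Assumed setup: 5,000 candidates, 10 truly crooked (a 0.2% base rate),
# sensitivity = specificity = 0.90.
candidates = 5000
crooked = 10
honest = candidates - crooked  # 4,990 honest candidates

true_positives = round(0.90 * crooked)   # crooked candidates correctly flagged
false_positives = round(0.10 * honest)   # honest candidates wrongly flagged
flagged = true_positives + false_positives

print(f"{flagged} flagged, of whom only {true_positives} are actually crooked")
print(f"positive predictive power = {true_positives / flagged:.1%}")
```

Despite the test's high retrospective accuracy, fewer than 2% of the people it flags are actually crooked, because the honest group is 499 times larger than the crooked group.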
Misinterpreting Dual High Base Rates
The 2 factors appear to be associated because both have high base rates, but they are statistically unrelated.
~It seems almost self-evident that there is a strong association between that particular religious faith and developing PTSD related to the earthquake: 81% of the people who came for services were of that religious faith and had developed PTSD. Perhaps this faith makes people vulnerable to PTSD. Or perhaps it is a more subtle association: this faith might make it easier for people with PTSD to seek mental health services.
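The pitfall on this card can be reconstructed with simple probability: if, hypothetically, about 90% of those seeking services hold that faith and about 90% have PTSD, then roughly 81% show both even when the two traits are completely independent (their correlation is zero):

```python
import math

# Hypothetical base rates chosen to match the card's 81% figure.
p_faith = 0.90  # base rate of the religious faith among those seeking services
p_ptsd = 0.90   # base rate of PTSD among those seeking services

# Under statistical independence, P(faith AND PTSD) = P(faith) * P(PTSD):
p_both = p_faith * p_ptsd
print(f"expected co-occurrence under independence: {p_both:.0%}")

def phi(p_a, p_b, p_ab):
    """Phi coefficient for two binary variables from marginal and joint rates."""
    return (p_ab - p_a * p_b) / math.sqrt(p_a * (1 - p_a) * p_b * (1 - p_b))

# High co-occurrence, yet zero correlation:
print(f"phi = {phi(p_faith, p_ptsd, p_both):.2f}")
```

The 81% co-occurrence is exactly what independence predicts, so it carries no evidence of an association between the faith and PTSD.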
Uncertain Gatekeeping
Psychologists who conduct assessments are gatekeepers of sensitive information that may have profound and lasting effects on the life of the person who was assessed. The gatekeeping responsibilities exist within a complex framework of federal ( e.g., HIPAA) and state legislation and case law as well as other relevant regulations, codes, and contexts.
predictive accuracy
refers to the degree (expressed as a probability) that a test is accurate in classifying individuals or in predicting whether or not they have a specific condition, characteristic, and so on.
Retrospective accuracy
begins not with the test but with the specific condition or characteristic that the test is purported to measure. In the example above, the retrospective accuracy of this hypothetical MMPI-2 shoplifting test denotes the degree (expressed as a probability) that an employee who is a shoplifter will be correctly identified ( i.e., caught) by the test.
affirming the consequent
In this fallacy, the fact that x implies y is erroneously used as a basis for inferring that y implies x. Logically, the fact that all versions of the MMPI are standardized psychological tests does not imply that all standardized psychological tests are versions of the MMPI.
Is there evidence for clinical expertise based on experience?
Generally, No. Experience does not improve the accuracy of clinical judgement.
caveat: experienced clinicians are better at deciding what info to use.
Given the same info, experts are __ ____ at using it than similarly trained novices.
Experts are no better at using info than novices, given that they've been given same info.
illusory correlation
a perceived correlation that does not exist
availability bias
easily recalled examples inflate a person's estimation of an event's likelihood.
representativeness
thinking in stereotypes
ex. librarian example
overpathologizing bias
someone who sees maladjusted people all of the time is likely to see pathology where there is none.
hindsight bias
because we have seen the outcome, we become confident that the event was inevitable
overconfidence
experts aren't more correct than novices but they are more confident
