
Psychology 395 Research Psychology Exam Four


Meta Analysis
Statistical summary of accumulated, scientific knowledge on a specific hypothesis from a combination of findings from published studies on the topic. Example: Rosenthal's analysis of 346 studies of self-fulfilling teacher expectancies.
File Drawer Phenomenon
When empirical studies fail to show statistically significant results, researchers' tendency to put the research into their file drawers instead of submitting reports to journals for publication, resulting in a bias of published research toward significant results, not failures. Example: Researchers fail to find their leadership training program related to subsequent leadership effectiveness, so they file the study away and don't try to publish the results.
Skewed Distribution
Data set for one variable in which values near one end of the range are more frequent than values near the other end. Has a long tail trailing off in one direction and a short tail extending in the other. A distribution is positively skewed if the long tail goes off to the right, or negatively skewed if the long tail goes off to the left. Example: If 100 students take a test, and 40 score over 90% while 3 score below 10%, the distribution is skewed.
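A quick way to check the direction of skew is to compare the mean and the median; a minimal Python sketch with invented test scores (negative skew: most students score high, a few score very low):

```python
from statistics import mean, median

# Hypothetical scores: a long tail to the left (negative skew).
scores = [95, 92, 90, 94, 91, 93, 96, 40, 8, 5]

# In a negatively skewed distribution the mean is pulled below the median.
print(mean(scores))    # 70.4
print(median(scores))  # 91.5
```

The low outliers drag the mean well below the median, the signature of a negatively skewed distribution.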
Standard Deviation
Descriptive statistic used to indicate variability among the values of one variable measured on an interval or ratio scale: the square root of the variance (the average squared deviation from the mean). Example: If 7 students take a 5-question quiz, and their scores (number correct) are 1, 2, 2, 3, 4, 4, and 5, the sample variance equals 2, so the standard deviation equals √2 ≈ 1.41.
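The arithmetic for that example can be sketched in Python, using the quiz scores from the card:

```python
from math import sqrt

scores = [1, 2, 2, 3, 4, 4, 5]
mean = sum(scores) / len(scores)                  # 21 / 7 = 3.0
squared_devs = [(x - mean) ** 2 for x in scores]  # sum = 12

# Sample variance divides by n - 1; population variance divides by n.
sample_variance = sum(squared_devs) / (len(scores) - 1)  # 12 / 6 = 2.0
standard_deviation = sqrt(sample_variance)               # ~1.41
```

Note that dividing by n instead of n − 1 (the population formula) would give a variance of 12/7 ≈ 1.71 and a standard deviation of about 1.31.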
Alpha Level
Probability of a Type I error, or incorrectly identifying as statistically significant an observed difference between groups or relationship between variables that occurred by chance alone. Examples: a) p < .05, the standard used in Psychology. b) p < .01. (See also: false positive; inferential statistics; statistical power.)
Analysis of Variance
(ANOVA) = Data analysis using a kind of inferential statistic (the F-test) to detect differences among ≥2 comparison groups defined by ≥1 factor(s), and for ≥2 factors, their interaction(s). Example: in analyzing results of an experiment on the effects of 3 audience sizes on bystanders' response time, using an F-test to compare the averages for the 3 experimental groups.
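A one-way F-test for the audience-size example can be computed by hand from sums of squares; the response times below are invented for illustration:

```python
# Hypothetical response times (seconds) for three audience sizes.
groups = [
    [4.1, 5.0, 4.6, 5.3],  # audience of 1
    [5.9, 6.4, 5.7, 6.8],  # audience of 3
    [7.2, 8.0, 7.5, 8.3],  # audience of 6
]

all_scores = [x for g in groups for x in g]
grand_mean = sum(all_scores) / len(all_scores)

# Between-groups SS: group size times squared deviation of each group mean.
ss_between = sum(len(g) * ((sum(g) / len(g)) - grand_mean) ** 2 for g in groups)
# Within-groups SS: deviations of each score from its own group's mean.
ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)

df_between = len(groups) - 1               # k - 1 = 2
df_within = len(all_scores) - len(groups)  # N - k = 9

F = (ss_between / df_between) / (ss_within / df_within)
```

A large F (here about 35.5) indicates the between-group variability far exceeds the within-group variability, so the audience-size effect is unlikely to be chance.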
Inferential Statistics
Numerical indexes computed to indicate the likelihood that observed differences among groups, or relationships between variables, did not occur by chance. Examples: a) z-test; b) Pearson r. (Inferential statistics test the likelihood that ≥2 samples came from the same population, the basis for statistical significance. See also descriptive statistics.)
Multivariate Test
In an empirical study with ≥2 measured variables, a preliminary test of statistical significance that assesses the likelihood of chance differences on all of the measured variables simultaneously, conducted before assessing the significance of differences on the measured variables one at a time. Example: In a study that compares a treatment group and control group on 20 different measured variables, a Hotelling's T² test compares the two groups on all 20 variables at once. (See also univariate test. A multivariate test avoids inflating the probability of a Type I error in testing differences on multiple measures. For example, testing differences on 20 measures can be expected to yield about 1 difference "significant" at p < .05 by chance alone.)
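The inflation mentioned in the parenthetical is simple arithmetic: with 20 independent tests at α = .05, the expected number of chance "significant" results is 1, and the probability of at least one false positive is about .64:

```python
alpha = 0.05
k = 20  # number of separate univariate tests

# Expected count of chance "significant" results across k tests.
expected_false_positives = k * alpha       # 20 * 0.05 = 1.0
# Probability of at least one Type I error (assuming independent tests).
familywise_error = 1 - (1 - alpha) ** k    # ~0.64
```

This is why a single multivariate test (or an alpha correction) is run before the 20 univariate comparisons.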
Parametric Experiment
Type of between-subjects experiment in which the independent variable represents a continuum and the comparison groups include ≥3 different levels of that factor. Example: Study of 5 different glucose dosages on short-term recall. (This design allows characterization of the relationship between independent and dependent variables even if non-linear.)
Type I Error
One of 2 kinds of incorrect decisions in statistical inference (see also Type II error): Incorrectly identifying as statistically significant a difference among groups or relationship among variables that has occurred by chance alone (also called false positive or alpha error). Example: In an experiment on an anti-cancer drug, researchers conclude that the drug treatment group had significantly reduced tumor size compared with the placebo group, when in reality the drug had no effect – and, by chance, the patients in the treatment group had a higher proportion of patients in remission.
Type II Error
Failure to identify as statistically significant a difference among groups or relationship among variables that actually exists in the population, also called a false negative or beta error. Example: In an experiment on an anti-cancer drug, researchers falsely conclude that the drug has no effect, because they use a sample too small to detect a difference. (One of 2 kinds of incorrect decisions in applying inferential statistics. See also Type I error.)
Univariate Test
Test of statistical significance concerning one measured variable, either for differences among groups, or association with another variable. Example: In an experiment comparing a treatment group and control group on keying speed, a test of differences between the groups on average keying speed (the measured variable). (See also multivariate test.)
Focus Group
Interview conducted face-to-face with a group of about 6 to 12 selected individuals, using a prepared series of questions, follow-ups, and probes, which the interviewer poses aloud, then records the range of responses and selected quotations. Example: Group interview of selected TV viewers about their opinions about 3 candidates for TV news anchor.
Non-Response Bias
Distortion in survey research that occurs when respondents differ from the target population or sample, because individuals in certain sub-groups decline to participate (non-response). (Differential response rates by segments of the target population or sample can leave a biased final sample even after researchers have selected a representative initial sample to invite.) Example: In a literacy survey sent to a random sample of city residents, those with less than a 5th grade education comprise 15% of the sample, but less than 1% of them return questionnaires, so the final sample of respondents is biased: it has a much higher average education than either the initial sample or the larger population.
Social Desirability Bias
Tendency of respondents to a questionnaire to give answers they perceive as likely to gain approval or avoid disapproval. Examples: a) In a post-election survey, falsely claiming to have voted for the winner. b) In a survey of sexual behavior, denying unconventional practices.
Survey Research
Empirical investigation based primarily on interviews and/or questionnaires administered to a population relevant to the purpose of the study. Example: Public opinion poll administered to a representative sample of registered voters in Tennessee. (See also sample survey; self report.)
Anchor
(in a rating scale) = Descriptive label for one of several alternative responses for an item in a questionnaire. Example: Labels for ratings, "Excellent, Very Good, Good, Fair, Poor."
Likert Scale
A multi-item questionnaire intended to measure one variable, consisting of a series of statements with instructions for respondents to express agreement or disagreement by selecting from 5 to 7 alternatives (such as: "agree"; "slightly agree"; "neither agree nor disagree"; "slightly disagree"; and "disagree"), with responses combined into a single index. Example: 10-item job-satisfaction scale consisting of statements like, "I am satisfied with my supervisor," with instructions to indicate agreement/disagreement from 5 choices, with job satisfaction scored by summing the responses to the 10 items.
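Scoring such a scale is just summing the coded responses, after flipping any negatively worded (reverse-keyed) items; a sketch with invented responses for the hypothetical 10-item job-satisfaction scale:

```python
# Hypothetical responses to a 10-item scale, coded 1 (disagree) ... 5 (agree).
responses = [4, 5, 3, 4, 2, 5, 4, 3, 4, 5]
reverse_keyed = {4}  # assume the item at index 4 is negatively worded

def likert_score(responses, reverse_keyed, scale_max=5):
    """Sum responses into one index, flipping reverse-keyed items."""
    total = 0
    for i, r in enumerate(responses):
        total += (scale_max + 1 - r) if i in reverse_keyed else r
    return total

print(likert_score(responses, reverse_keyed))  # 41
```

Reverse-keying maps a response r to (scale_max + 1 − r), so a "2" on a negatively worded item counts as a "4" toward satisfaction.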
Line Graph
Presents data as a series of points connected by a line and is appropriate when the independent variable is quantitatively manipulated. Most widely used method to illustrate functional relationships among variables.
Pearson r
Correlation coefficient used when both variables are scaled on an interval or a ratio scale. The Pearson correlation coefficient provides an index of the direction and magnitude of a relationship between two sets of scores. A positive correlation indicates a direct relationship; a negative correlation indicates an inverse relationship. As the strength of the relationship increases, the value of the correlation coefficient moves toward either negative or positive one. A value of −1 or +1 indicates a perfect linear relationship.
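The coefficient can be computed directly from deviation scores (covariance scaled by both standard deviations); a minimal sketch with made-up data:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation: covariance divided by both standard deviations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

print(pearson_r([1, 2, 3, 4, 5], [2, 4, 6, 8, 10]))  # ~1.0  (perfect direct)
print(pearson_r([1, 2, 3, 4, 5], [10, 8, 6, 4, 2]))  # ~-1.0 (perfect inverse)
```

The two calls illustrate the sign convention on the card: a perfectly direct relationship yields +1, a perfectly inverse one yields −1.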