## Glossary of EPPP Test Construction


- What is the item difficulty index (p)?
- indicates the percentage of examinees in the sample who answered the item correctly

in most situations p = .50 is optimal, except on true/false tests, where the optimal p = .75

the closer p is to .50, the better the item differentiates among examinees
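
As a minimal sketch (hypothetical data, illustrative function name), the index is just the proportion of correct responses to an item:

```python
def item_difficulty(responses):
    """Item difficulty index p: proportion of examinees who answered correctly.
    responses: list of 1 (correct) / 0 (incorrect) scores for one item."""
    return sum(responses) / len(responses)

# 7 of 10 hypothetical examinees answered correctly -> p = 0.7
print(item_difficulty([1, 1, 1, 0, 1, 0, 1, 1, 0, 1]))
```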

- What is item discrimination?
- extent to which a test item discriminates between examinees who obtain high versus low scores on a test

- What is the basis of classical test theory?
- views an obtained test score as reflecting a combination of truth and error

- What is the problem with classical test theory?
- item statistics are dependent upon the original sample

inability to compare scores obtained on different tests

- What is the basis of item response theory?
- involves the use of an item characteristic curve that describes the relationship between an examinee's level on the trait measured by the test and the probability that he or she will respond correctly to the item

- What are the 3 advantages of item response theory?
- sample invariant

possible to equate test scores

easier to develop computer-adapted tests

- According to classical test theory, what are the components of an examinee's obtained test score?
- a true score (T) plus an error component (E)

obtained score (X) = true score (T) + error (E)

- What does the error component represent in classical test theory?
- represents measurement error which is due to factors that are irrelevant to what is being measured and have an unsystematic effect on the score

- What is norm-referenced interpretation?
- comparing an examinee's test score to scores obtained by people included in a normative (standardization) sample

helps identify individual differences

percentile ranks, standard scores, age and grade equivalent scores

- What is criterion referenced interpretation?
- score interpreted in terms of the total amount of the test mastered (% correct) or in terms of some external criterion

- What is reliability?
- extent to which test performance is immune to the effects of measurement error

- What is a reliability coefficient?
- indicates whether the attribute measured by the test is being assessed in a consistent, precise way

- How do you interpret a reliability coefficient?
- the proportion of variability in obtained test scores that reflects true score variability

reliability coefficient is never squared

r(xx) = proportion of true score variability

1 - r(xx) = proportion due to measurement error

- What are the different forms of reliability?
- test-retest (coefficient of stability)

alternate forms (coefficient of equivalence)

split-half (coefficient of internal consistency)

coefficient alpha (coefficient of internal consistency)

inter-rater reliability (coefficient of concordance)

- What type of reliability is appropriate to measure time sampling error?
- test-retest (coefficient of stability)

measure attributes that are relatively stable over time

- What type of reliability is appropriate to measure time sampling and content sampling errors?
- alternate forms (coefficient of equivalence)

not appropriate when attribute measured is expected to fluctuate over time

most rigorous and best method for estimating reliability

- Why is alternate forms reliability often not assessed?
- difficulty in developing forms that are truly equivalent

- What are 2 methods for evaluating internal consistency?
- split-half

coefficient alpha

- What is the problem with using split-half reliability?
- the reliability coefficient is based on test scores from only one half of the entire test

reliability tends to decrease as test length decreases, so split-half usually underestimates a test's true reliability

- How can you correct for the problems with split-half reliability?
- use the Spearman-Brown prophecy formula, which provides an estimate of what the reliability coefficient would have been if it had been based on the full length of the test
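
The prophecy formula can be sketched in Python (illustrative values; n is the factor by which the test is lengthened, 2 for the split-half case):

```python
def spearman_brown(r, n=2.0):
    """Spearman-Brown prophecy formula.
    r: reliability of the shortened test (e.g., the half-test correlation)
    n: factor by which the test is lengthened (2 for split-half)."""
    return n * r / (1 + (n - 1) * r)

# A half-test correlation of .60 projects to a full-length reliability of .75
print(spearman_brown(0.60))
```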

- When do you use the Kuder-Richardson Formula 20 (KR-20)?
- when test items are measured dichotomously

variation of coefficient alpha

not appropriate for speeded tests
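
A minimal KR-20 sketch with hypothetical 0/1 item data (helper name and data are illustrative, not from the deck):

```python
def kr20(item_matrix):
    """KR-20 for dichotomously scored items.
    item_matrix: list of examinees, each a list of 0/1 item scores."""
    k = len(item_matrix[0])                 # number of items
    n = len(item_matrix)                    # number of examinees
    totals = [sum(person) for person in item_matrix]
    mean_t = sum(totals) / n
    var_t = sum((t - mean_t) ** 2 for t in totals) / n  # population variance
    pq = 0.0
    for i in range(k):
        p = sum(person[i] for person in item_matrix) / n  # item difficulty
        pq += p * (1 - p)
    return (k / (k - 1)) * (1 - pq / var_t)

# 4 hypothetical examinees on a 3-item test
print(kr20([[1, 1, 1], [1, 1, 0], [1, 0, 0], [0, 0, 0]]))
```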

- What is a drawback of using coefficient alpha?
- it provides only a lower boundary of a test's reliability

- What is the purpose of using coefficient alpha?
- measure inter-item consistency

- When is it appropriate to use inter-rater reliability?
- whenever test scores depend on a rater's judgement

- When is a kappa coefficient used?
- it is the reliability coefficient for inter-rater reliability (used when ratings are categorical)

- What are the factors that affect the reliability coefficient?
- test length

range of test scores

guessing

- What is the acceptable level of a reliability coefficient?
- .80 or larger

- What is the standard error of measurement?
- an index of the amount of error that can be expected in obtained scores due to the unreliability of the test

used to calculate a confidence interval around an obtained score

- What is the formula for the standard error of measurement?
- standard deviation of the test scores multiplied by the square root of 1 - r(xx) (the reliability coefficient): SEM = SD * sqrt(1 - r(xx))
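
The formula above can be sketched in Python with hypothetical IQ-style values (SD = 15, reliability .91):

```python
import math

def sem(sd, rxx):
    """Standard error of measurement: SEM = SD * sqrt(1 - rxx)."""
    return sd * math.sqrt(1 - rxx)

s = sem(15, 0.91)
print(round(s, 2))                            # SEM of about 4.5
# a roughly 95% confidence interval around an obtained score of 100
print(round(100 - 2 * s, 1), round(100 + 2 * s, 1))
```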

- What affects the magnitude of the standard error?
- standard deviation of test scores and test's reliability coefficient

the lower the test's standard deviation and the higher the reliability coefficient, the smaller the standard error of measurement

- How can you interpret the standard error of measurement?
- it is a type of standard deviation

interpret in terms of areas under the normal curve

68%, 95%, and 99% confidence intervals correspond to roughly 1, 2, and 3 standard errors

- What is validity?
- test's accuracy in providing information it was designed to provide

- What are the 3 categories of validity?
- content validity

construct validity

criterion-related validity

- What type of validity is important when scores on a test provide information on how much each examinee knows about a domain?
- content validity

- What type of validity is important when scores on a test provide information on each examinee's status with regard to the trait being measured?
- construct validity

- What type of validity is important when scores will be used to predict scores on some other measure and you are interested in the predicted scores?
- criterion-related validity

- What is content validity?
- test items sample content or behavior test was designed to measure

- How do you establish content validity?
- through the judgement of experts

- What type of tests consider content validity to be important?
- achievement-type tests

work samples

- What additional evidence supports good content validity?
- large coefficient of internal consistency

high correlations with other tests that measure the same domain

pre/post-test evaluations: scores should change after a program designed to increase familiarity with the material

- What is construct validity?
- the extent to which the test measures the theoretical trait or construct it was designed to measure

- What are some methods to establish construct validity?
- assess internal consistency

study group differences (do groups expected to differ actually differ?)

hypothesis testing - do the scores change as predicted following an experimental manipulation?

assess convergent validity (high correlations with measures of the same trait) and divergent validity (low correlations with measures of different traits)

assess factorial validity

- What are monotrait-monomethod coefficients?
- same trait-same method

correlation between measure and itself

reliability coefficients

should be large

- What are monotrait-heteromethod coefficients?
- same trait-different method

correlation between different measures of the same trait

convergent validity

- What are heterotrait-monomethod coefficients?
- different trait-same method

correlations between different traits measured by the same method

discriminant (divergent) validity

- What are heterotrait-heteromethod coefficients?
- different trait-different method

correlation between different traits measured by different methods

discriminant validity when small

- What do factor loadings in factor analysis measure?
- the correlation between a test and a factor; square it to determine the amount of variability in test scores explained by the factor

- What is communality in factor analysis?
- common variance

amount of variability in test scores that is due to the factors that the test shares in common to some degree with the other tests included in the analysis

- From the perspective of factor analysis, what are the components of a test's reliability?
- communality

specificity

error

- What is the relationship between reliability and communality?
- communality is a lower-limit estimate of a test's reliability coefficient

- What are the two types of rotation of a factor matrix?
- orthogonal

oblique

- What type of rotation has uncorrelated factors?
- orthogonal

- What type of rotation has correlated factors?
- oblique

attributes measured by the factor are not independent

- When can you calculate a test's communality from its factor loadings?
- when factors are orthogonal

communality is equal to the sum of the squared factor loadings
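
With orthogonal factors, this is a one-line computation (hypothetical loadings for illustration):

```python
def communality(loadings):
    """Communality under orthogonal rotation: sum of squared factor loadings."""
    return sum(l ** 2 for l in loadings)

# A test loading .60 and .50 on two orthogonal factors: .36 + .25 = .61
print(round(communality([0.60, 0.50]), 2))
```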

- What is a measure of shared variability?
- squared factor loading

- What is criterion-related validity?
- strong correlation between test and a criterion

- How is criterion-related validity assessed?
- correlating the scores of a sample of individuals on the predictor with their scores on the criterion

- What are the 2 types of criterion-related validity?
- concurrent & predictive validity

- What is the difference between concurrent and predictive validity?
- the time when the predictor and the criterion are administered

predict future status vs. estimating current status

- What is an acceptable level for a validity coefficient?
- .20 to .30

rarely exceed .60

- How do you interpret validity coefficient?
- since it is a correlation between 2 measures, square the coefficient and interpret it in terms of shared variability

- How do you provide a measure of shared variability?
- square the correlation between 2 measures (tests or variables)

how much variability in Y is explained by X

- What is the standard error of estimate?
- used to construct a confidence interval around a predicted criterion score

- What is the formula for standard error of estimate?
- standard deviation of the criterion scores multiplied by the square root of (1 - the validity coefficient squared)
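
A minimal Python sketch of the formula with hypothetical values (criterion SD = 10, validity .60):

```python
import math

def std_error_estimate(sd_y, rxy):
    """Standard error of estimate: SD of criterion * sqrt(1 - rxy**2)."""
    return sd_y * math.sqrt(1 - rxy ** 2)

# validity of .60 with a criterion SD of 10 -> estimation error of 8.0
print(round(std_error_estimate(10, 0.60), 2))
# perfect validity -> no estimation error
print(std_error_estimate(10, 1.0))
```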

- When does the standard error of estimate = 0?
- when the validity coefficient is equal to +/-1

- What is incremental validity?
- the increase in correct decisions that can be expected if the predictor is used as a decision-making tool

involves using a scatterplot

- In a scatterplot of criterion and predictor scores, if the goal is to maximize the proportion of true positives, how do you do this?
- set a high predictor cutoff score, which will reduce the number of false positives

- What is the formula for incremental validity?
- positive hit rate - base rate

- What is the base rate?
- proportion of people who would be successful on the criterion without the use of the predictor

dividing the number of successful people (true positives + false negatives) by the total number of people

- What is the positive hit rate?
- proportion of people who would have been selected on the basis of their predictor scores and who are successful on the criterion

true positives/ total positives
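
Putting the three cards above together (hypothetical counts; positive vs. negative is set by the predictor cutoff, true vs. false by the criterion):

```python
def incremental_validity(tp, fp, tn, fn):
    """Incremental validity = positive hit rate - base rate.
    tp/fp/tn/fn: counts from a predictor-criterion scatterplot."""
    base_rate = (tp + fn) / (tp + fp + tn + fn)  # successful / everyone
    positive_hit_rate = tp / (tp + fp)           # successful among selected
    return positive_hit_rate - base_rate

# hypothetical counts: PHR = 30/40 = .75, base rate = 50/100 = .50
print(incremental_validity(30, 10, 40, 20))
```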

- What determines if a person is positive or negative?
- predictor

- What determines if a person is true or false?
- criterion

- What is the correction for attenuation formula used for?
- to estimate what a predictor's validity coefficient would be if the predictor and/or criterion were perfectly reliable

tends to overestimate the actual validity coefficient that can be achieved

- What information is needed to apply the correction for attenuation formula?
- predictor's current reliability coefficient

criterion's current reliability coefficient

criterion-related validity coefficient
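
The three quantities above plug into the standard formula, sketched here with hypothetical reliabilities and an observed validity of .42:

```python
import math

def correct_for_attenuation(rxy, rxx, ryy):
    """Estimated validity coefficient if predictor (rxx) and
    criterion (ryy) were perfectly reliable."""
    return rxy / math.sqrt(rxx * ryy)

# observed validity .42, predictor reliability .70, criterion reliability .90
print(round(correct_for_attenuation(0.42, 0.70, 0.90), 3))
```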

- What happens to the validity coefficient when it is cross-validated?
- tends to shrink because all of the chance factors operating in the original sample will not be present in the new sample

- What is a nonlinear transformation?
- a score transformation in which the distribution of transformed scores differs in shape from the distribution of raw scores

e.g., percentile ranks, because their distribution is always flat (rectangular) in shape

- What is a standard score?
- indicates the examinee's position in the normative sample in terms of standard deviations from the mean

permit comparisons of scores from different tests

z-scores

T-scores, deviation IQs, and SAT scores

- What is the formula for calculating a z-score?
- raw score - mean of distribution

divided by the distribution's standard deviation
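
The z-score formula, plus the linear T-score conversion (mean 50, SD 10) mentioned in the standard score card, sketched with hypothetical IQ-style values:

```python
def z_score(raw, mean, sd):
    """z = (raw score - mean of distribution) / standard deviation."""
    return (raw - mean) / sd

def t_score(z):
    """Linear conversion of z to a T-score (mean 50, SD 10)."""
    return 50 + 10 * z

z = z_score(115, 100, 15)
print(z, t_score(z))   # 1.0 60.0
```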

- What is a linear transformation?
- transformation of raw scores to z-scores

- What is the purpose of criterion-referenced (mastery) testing?
- to make sure that all examinees eventually reach the same performance level

- What is a type of criterion-referenced testing?
- percentage score

or interpreting test scores in terms of their likely status on an external criterion

- When do you use a regression equation and expectancy table when interpreting test scores?
- criterion-referenced interpretation

- What is banding?
- a score adjustment method that involves treating people within a specific score range (band) as having identical scores

- What is exploratory factor analysis?
- identify the minimum number of underlying "factors" (dimensions) needed to explain the intercorrelations among a set of tests, subtests, or test items

- What is principal components analysis?
- used to identify a set of variables that explains all (or nearly all) of the total variance in a set of test scores

- What eigenvalue is used to retain components in a principal components analysis?
- 1.0 or higher