
Psyc 305 Exam III

Terms

I. Normal Curve Areas
A. Z scores and the Percent of the Normal Curve
B. Using the Normal Curve Table
1. Having a Z score, determining a percentage
About 34% of scores fall between the mean and one SD above it; doubled, 68% lie within ±1 SD, and 68 + 28 = 96% within ±2 SD (using the rounded 34-14-2 figures). To go from a percentage to a Z score, choose the percentage (e.g., 95%): 50% (the lower half) + 45% (mean to Z) = 95%, so look up the Z whose mean-to-Z area is 45%.
For Z = 1.64, the % mean to Z is 44.95 (for Z = 1.65 it is 45.05), so a Z of about 1.64 cuts off the top 5%.
2. Having a percentage, determining a Z score
II. What is Hypothesis Testing?
If Z = 1, the % mean to Z = 34.13. Percent LOWER than the Z score: 50 + 34.13 = 84.13. Percent HIGHER than the Z score: 50 − 34.13 = 15.87.
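The table lookups above can be checked in code. A minimal sketch using only Python's standard library (`math.erf` gives the normal CDF, so no printed table is needed); the function name `pct_mean_to_z` is ours, mirroring the "% mean to Z" column:

```python
import math

def pct_mean_to_z(z):
    """Percent of the normal curve between the mean and z (the '% mean to Z' column)."""
    phi = 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0)))  # area below |z|
    return (phi - 0.5) * 100.0

z = 1.0
mean_to_z = pct_mean_to_z(z)       # ~34.13
pct_lower = 50.0 + mean_to_z       # ~84.13: percent scoring LOWER than z = 1
pct_higher = 50.0 - mean_to_z      # ~15.87: percent scoring HIGHER than z = 1
print(round(mean_to_z, 2), round(pct_lower, 2), round(pct_higher, 2))
```

The same function reproduces the other table values used below, e.g. `pct_mean_to_z(1.96)` gives 47.5.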
II. Hypothesis testing is a procedure for testing whether the results of a study provide evidence, exceeding a specified level of probability, for a particular theory or supposition about the population.
II. What is Hypothesis Testing? cont.
Hypothesis testing is not proving; it asks whether the sample came from the comparison population. We assume the hypothesis will not be supported and ask how unlikely it is that the groups come from the same population. Assuming no difference means the 'experiment' had no effect; we try to reject that null hypothesis, reasoning in degrees of likelihood.
A. The Null Hypothesis: H0
B. The Research (or Alternative) Hypothesis: H1
A. μ1 = μ2, μ1 ≤ μ2, μ1 ≥ μ2
The null hypothesis: the sample was drawn from the same population as the comparison group.
B. μ1 ≠ μ2, μ1 > μ2, μ1 < μ2
The research hypothesis: there is a difference; the populations diverged due to the experiment (or differ in differential research). The means are not the same, usually specified as greater or less than.
μ1 (Mu1): the population of interest; μ2 (Mu2): the comparison (control) population, about which something is known (mean, SD, etc.). These known parameters are necessary to conduct the tests, and the distributions must be normal.
If the sample mean falls between the two population curves, which population distribution does it really belong to? How far is it from each mean?
If 98% of population 2 falls lower than the sample mean, that mean is too unlikely to have come from population 2. The populations are not always mutually exclusive; they overlap. We never hypothesize that there will not be a difference. With no evidence, we 'fail to reject' the null hypothesis rather than accept it; rejecting it gains evidence to support the research hypothesis. Failing to reject is inconclusive.
III. Five Steps of Hypothesis Testing
A. Step 1: Restate the question as a hypothesis test
B. Step 2: Determine qualities of the comparison distribution
C. Step 3: Determine cutoff score (critical value)
D. Step 4: Determine sample score
E. Step 5: Decide whether or not to reject the null hypothesis
Step 1
Step 2
Step 3
1) Non-directional hypothesis: 'not equal', two-tailed. Directional: 'higher' or 'lower', one-tailed. 2) Determine the comparison distribution's mean, SD, and normality, at the population level (estimated), as if the sample were compared to another sample of the same size from the general population. 3) How far off must the sample be to count as unlikely? Use pre-set criteria, a percentage of people (as for significance): e.g., a 5% chance, or a more extreme p < .01. Decide on the p level, find the corresponding Z score (the point below which 95% fall), i.e., determine Zcrit, then find the sample's Z score.
A. One-tailed tests
A directional hypothesis uses only one end of the distribution, so there is one cutoff. You would not know if an extreme result in the opposite direction from the one expected occurred; usually used for theoretically driven tests.
H0: μ1 ≤ μ2
H1: μ1 > μ2
or
H0: μ1 ≥ μ2
H1: μ1 < μ2
B. Two-tailed tests
No set direction: the groups are simply 'different', used when you are not sure what to expect. This is the general default (usually assumed without being mentioned) and makes it harder to reject the null hypothesis and find significant results. The p < .05 significance standard is the same, but a one-tailed Zcrit of 1.64 would make the rejection area too big: split the allotment (rejection area) in two, 2.5% per tail. Look up a % mean to Z of 47.5, giving Zcrit = 1.96.
H0: μ1 = μ2
H1: μ1 ≠ μ2
III. Sampling Distributions
A. Comparing a population and a sample
B. Sampling distribution of means
A) Asks whether the sample is comparable to other samples drawn from the population, not whether it is comparable to the population itself. If it is not different enough: fail to reject the null hypothesis. B) The collection of means of all possible samples; we estimate the characteristics of this sampling distribution, which is made up of the means of individual samples. Is our mean close enough to the population mean?
Sample vs Population vs Distrib of Means
Mean: M (sample), μ (population), μM = μ (distribution of means).
Variance: SD² (sample), σ² (population), σ²M = σ²/n.
Standard deviation: SD (sample), σ (population), σM = √(σ²M) = σ/√n.
Standard Error
σM = σ/√n. The distribution of means is normal even if the population was not (for n > 30). This describes the shape of the distribution of means; its SD is actually the SE.
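The standard-error formula can be verified with a small simulation: draw many samples, take each sample's mean, and the SD of those means should land near σ/√n. A sketch with hypothetical numbers (population mean 100, SD 25, n = 25):

```python
import math
import random

sigma, n = 25.0, 25                 # hypothetical population SD and sample size
se = sigma / math.sqrt(n)           # standard error: sigma_M = sigma / sqrt(n) = 5.0

# simulation check: the SD of many sample means should approach the SE
random.seed(0)
means = [sum(random.gauss(100.0, sigma) for _ in range(n)) / n
         for _ in range(5000)]
grand = sum(means) / len(means)
sim_se = math.sqrt(sum((m - grand) ** 2 for m in means) / len(means))
print(se, round(sim_se, 1))         # simulated value lands near 5
```

Notice that quadrupling n would only halve the SE, which is why large samples tighten confidence intervals at a diminishing rate.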
Z tests
Confidence Intervals
- Z tests need a known population with data available; comparing two samples usually requires a different statistic. The Z test is mainly a theoretical foundation. - There is a push to use confidence intervals more often: they give more information than hypothesis testing, making 'different' more specific.
Confidence Intervals
Definition: A confidence interval is a range of scores constructed around a sample mean, within which we are likely to find the true mean of the population from which that sample was drawn.
A. Estimating the population mean
1. point estimations
Predict a particular value, ex. yhat is a point estimate in regression. Useful when possible.
2. interval estimations
B. The theory behind confidence intervals
Reasonably sure of range, as narrow as possible. Use normal curves, 95% or 99% confident that mean is w/in interval. B) Where is the mean of the other pop?
C. Computing confidence intervals, Step 1
Step 1: Estimate the mean and standard error of the distribution of means for the hypothetical population from which you drew your sample.
If you know nothing else, the sample mean is the best estimate of the mean of its distribution of means. For the SE of population 1, look to population 2 (same construct): statistically similar variability, so use population 2's SE.
Step 2:
Use the normal curve table to find the Z scores that will give you the desired interval (95% or 99%) surrounding the mean of the distribution of means you have from step 1.
⬢ for a 95% confidence interval it is ±1.96, because the % mean to Z is 47.5
⬢ for a 99% confidence interval it is ±2.57, because the % mean to Z is 49.5.
Step 3:
Convert these Z scores to raw scores using the mean and standard error of your distribution of means that you determined in step 1.
(Sample Mean) + (Zcrit × σM) = Upper Limit
(Sample Mean) − (Zcrit × σM) = Lower Limit
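The three steps collapse into a short function. A minimal sketch (the numbers are hypothetical: sample mean 105, population SD 25, n = 25, so SE = 5):

```python
import math

def confidence_interval(sample_mean, sigma, n, z_crit=1.96):
    """Steps 1-3: estimate the SE, pick Zcrit (1.96 for 95%, 2.57 for 99%),
    and convert the Z limits back to raw scores around the sample mean."""
    se = sigma / math.sqrt(n)             # step 1: SE of the distribution of means
    lower = sample_mean - z_crit * se     # step 3: lower limit
    upper = sample_mean + z_crit * se     #         upper limit
    return lower, upper

print(confidence_interval(105.0, 25.0, 25))   # 95% CI: (95.2, 114.8)
```

Passing `z_crit=2.57` instead widens the interval to the 99% level, the narrow-vs-sure trade-off described above.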
Relation between confidence intervals and hypothesis tests
If the mean of the comparison distribution were 210, there is a chance that 210 is the true mean, and a two-tailed hypothesis test would have failed to reject. In general, if the mean of the comparison distribution falls within the confidence interval, the test would have failed to reject: the sample could have been drawn from that population.
More about the Standard Error
Used to compute confidence intervals; journals report the mean and SE, from which you can find the confidence interval (the 99% interval is about ±2.5 × SE). The size of the SE determines the size of the confidence interval: a smaller SE gives a narrower interval, and increasing the sample size reduces the SE. Larger research sample.
1) Central limit theorem
2) Sampling Distribution
3) Mean
4) Var and SD
1) Few sample means fall on the extremes. 2) Narrower than the population; tends toward normality if n ≥ 30. 3) Its mean is estimated to be μ. 4) Variance and SD are proportionately smaller (divided by n, your research sample size). *5) The SE of the distribution of means is used in hypothesis testing and in confidence intervals (vs. the SD!).
Five Steps of Hypothesis Testing
1) Identify pop 1 (research) and pop 2 (comparison)
2) The comparison distribution is the sampling distribution of means: not the population itself, but based on it; its SD (the SE) is usually smaller than the SD of the population.
Five Steps of Hypothesis Testing (3)
p < .05, one-tailed: 45% between the mean and the Z cutoff (Zcrit = 1.64). p < .05, two-tailed: 47.5% between the mean and each cutoff, 2.5% between each cutoff and the tail (Zcrit = ±1.96).
Five Steps of Hypothesis Testing (3), cont.
p < .01, one-tailed: 49% (Zcrit = 2.33); two-tailed: 49.5% (Zcrit = ±2.57).
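These critical values can be looked up in code instead of the table. A sketch using the standard library's `statistics.NormalDist` (the helper name `z_crit` is ours):

```python
from statistics import NormalDist

std_normal = NormalDist()  # mean 0, SD 1

def z_crit(p, two_tailed=False):
    """Critical Z for a given significance level (upper tail)."""
    tail = p / 2 if two_tailed else p
    return std_normal.inv_cdf(1 - tail)

print(round(z_crit(0.05), 2))                   # one-tailed .05 -> 1.64
print(round(z_crit(0.05, two_tailed=True), 2))  # two-tailed .05 -> 1.96
print(round(z_crit(0.01), 2))                   # one-tailed .01 -> 2.33
print(round(z_crit(0.01, two_tailed=True), 2))  # two-tailed .01 -> 2.58 (table: 2.57)
```

The small 2.57 vs 2.58 discrepancy is rounding in the printed table (the exact value is 2.576).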
HYPOTH TESTING EX
1)
1) SE: use the SD of population 2 (the control): SE = SD₂/√n. With SD = 25 and n = 25, SE = 25/√25 = 25/5 = 5. SE₁ = SE₂.
HYPOTH TESTING EX
2)
2) For p < .05 (one-tailed), Zcrit = 1.64.
Raw cutoff: μ2 (or μ1, depending on direction) + (Zcrit)(SE) = cutoff score.
HYPOTH TESTING EX
3)
4)
3) Z = (raw cutoff − (μ2 + the increase/decrease needed for the effect)) / SE; e.g., Z = −0.95. On the % mean to Z column, add 50 because the Z is negative. This answer is the chance of correctly rejecting the null hypothesis if the desired improvement were really achieved.
Effect Size Definition:
In a hypothesis test, this is a standardized statistic that describes the size of the difference between two population means.
Effect Size Formula:
Effect Size Conventions:
d = (μ1 − μ2) / σ

small: d = .20
medium: d = .50
large: d = .80
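The formula and conventions in one place; a minimal sketch with hypothetical means (population 1 at 108, comparison population at 100, σ = 16):

```python
def cohens_d(mu1, mu2, sigma):
    """Effect size d = (mu1 - mu2) / sigma, using the population SD."""
    return (mu1 - mu2) / sigma

d = cohens_d(108.0, 100.0, 16.0)
print(d)  # 0.5 -> a 'medium' effect by the conventions above
```

Because d is standardized by σ, the same 8-point raw difference would be a 'large' effect if σ were 10 and a 'small' one if σ were 40.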
Statistical Power:
the probability that the results of our analysis will be significant, assuming that the research hypothesis is, in reality, true
Type I Error (alpha):
the probability that you will reject the null hypothesis when, in fact, you should not have

"false positive"
Type II Error (beta):
the probability that you will fail to reject the null hypothesis when, in fact, the null hypothesis is false

"false negative"
Relation Between Power, Type I Error,
and Type II Error
Research hypothesis true → Reject null: correct decision (power); Fail to reject: Type II error (beta).
Null hypothesis true → Reject null: Type I error (alpha); Fail to reject: correct decision.
Calculating Power:
Step 1:
Step 2:
Gather information about the two distributions

From the comparison distribution you need the mean and the standard error.
2) Determine the raw score cutoff point on the comparison distribution
You find the Zcrit score (for example for a p<.05, Zcrit=1.64), and convert that to a raw score on the comparison distribution.
*** Raw score cutoff (like a CI limit): μ2 ± (Zcrit)(SE)
Calculating Power:
Step 3:
Take the raw score from step 2 and convert it to a Z score on the research distribution of means

Using the mean and standard error for your research distribution that you estimated in step 1, calculate the Z score on the research distribution that corresponds to the raw score cutoff from step 2.
*** Z = (raw cutoff from step 2 − μ1) / SE
Calculating Power:
Step 4:
Using the normal curve table, determine the probability of getting a score more extreme than that Z score on the research distribution

Find the “Mean to Z” value, and calculate the distance between your value and the relevant end of the distribution.
**** Find the % mean to Z; add 50 if the Z is negative.
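The four steps above can be sketched in code. Assumptions are hypothetical: a one-tailed test with comparison mean μ2 = 100, σ = 25, n = 25 (SE = 5), and a predicted research mean μ1 = 110; the function name `power_one_tailed` is ours:

```python
import math
from statistics import NormalDist

std_normal = NormalDist()

def power_one_tailed(mu1, mu2, sigma, n, p=0.05):
    """Four-step power sketch for a predicted mean mu1 above the comparison mean mu2."""
    se = sigma / math.sqrt(n)                  # step 1: both distributions share this SE
    z_crit = std_normal.inv_cdf(1 - p)         # e.g., ~1.64 for p < .05 one-tailed
    raw_cutoff = mu2 + z_crit * se             # step 2: cutoff as a raw score
    z_on_research = (raw_cutoff - mu1) / se    # step 3: cutoff on the research distribution
    return 1 - std_normal.cdf(z_on_research)   # step 4: area beyond the cutoff = power

print(round(power_one_tailed(110.0, 100.0, 25.0, 25), 2))
```

A negative Z in step 3 means the cutoff sits below the research mean, so power exceeds 50%, the "add 50 if negative" rule from the table lookup.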
Research article power and effect
Usually not reported, but should be. Seen in META-ANALYSES (large numbers of participants, much power).
Effect sizes:
The actual magnitude of the difference between means: its importance, standardized. (Correlation has effect size built in: R squared is an effect size, and so is multiple R; others exist.)
Power
More complex than effect size: the probability that the results will be significant if the research hypothesis is true (correct detection). False POSITIVE: rejected the null when you should NOT have. False NEGATIVE: a difference did exist but was missed. **In the diagram, power is the area at Zcrit or more extreme.
If the means were far enough apart that the distributions did not overlap:
Ideally:
Power:
A Type II error means incorrectly failing to reject the null. Ideally a large share (80%) should fall in the power region, versus only a small chance of identifying significance. Power is not calculated by SPSS; we are only learning the ONE-TAILED case.
Power vs Effect size
1) Power detects real differences like a magnifying glass: strength means detecting even small differences, despite a small effect size. No overlap between the distributions means 100% power. Even a huge effect size could still involve a Type I error, but it is unlikely (e.g., p = .00001).
Five Steps of Hypothesis Testing: Step 1: Step 2:
Restate the question as a hypothesis test.
2) Determine qualities of the comparison distribution (sampling distribution of means) ⬢ mean ⬢ standard error ⬢ normality (n > 30?)
Five Steps of Hypothesis Testing:
Step 3:
Determine cutoff score (critical value Z score)
⬢ use your predetermined p-value and the normal curve table to do this
Step 4:
Determine sample mean Z score
⬢ convert raw to Z using mean and SE determined in step 2
Step 5:
Decide whether or not to reject the null hypothesis
⬢ if Zsample is closer to the mean than the Zcrit, you fail to reject the null hypothesis
⬢ if Zsample is further away from the mean than the Zcrit, you reject the null hypothesis
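Steps 2, 4, and 5 above can be collected into one small function. A sketch with hypothetical numbers (comparison mean 100, SD 25, n = 25, observed sample mean 112, two-tailed at p < .05):

```python
import math

def z_test(sample_mean, mu2, sigma, n, z_crit=1.96):
    """Steps 2-5: build the comparison distribution of means, convert the
    sample mean to a Z score, and decide on the null (two-tailed default)."""
    se = sigma / math.sqrt(n)               # step 2: mean mu2, SD = standard error
    z_sample = (sample_mean - mu2) / se     # step 4: sample mean as a Z score
    reject = abs(z_sample) > z_crit         # step 5: further out than Zcrit -> reject
    return z_sample, reject

z, reject = z_test(112.0, 100.0, 25.0, 25)
print(round(z, 2), reject)  # Z = 2.4, beyond 1.96 -> reject the null
```

Dropping the sample mean to 108 gives Z = 1.6, inside ±1.96, so the same test would fail to reject.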
Three Steps for Computing Confidence Intervals: Step 1:
Estimate the mean and standard error of the distribution of means for the hypothetical population from which you drew your sample.
• Use the mean of your research sample as a proxy for the mean of its distribution of means.
• for variance, it’s okay statistically to assume that the variance of the comparison distribution of means is the same as the variance of your hypothetical distribution.
