Sampling
The sampling and estimation reading focuses on two things. The first is how to obtain quality samples and use them to build good estimators of population parameters. The second focus of the reading is to build on what we know about confidence intervals in order to prepare us for hypothesis testing.
Sample quality depends on many things, from the randomness of selected population members to controlling for a variety of biases that can occur during collection, such as data mining for outcomes that support a personal expectation or falling for survivorship bias by selecting only from a limited set of well-performing outcomes. Sampling can be random or systematic, and sample data can be stratified into sub-classifications. As we mentioned earlier, data can be cross-sectional or time-series.
The goal of controlling for these attributes of the data is to reduce sampling error, which is the difference between an observed sample statistic and the population parameter it is meant to estimate. We can estimate the standard error of the sample mean using the following equations.
- Standard error of the sample mean (population σ unknown): s/√(n), where s is the sample standard deviation and (n) is the sample size
- Standard error of the sample mean (population σ known): σ/√(n)
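As a quick illustration, here is a minimal Python sketch (the return data are invented) computing the standard error of a sample mean with the first formula:

```python
import numpy as np

# Hypothetical monthly returns (%) forming our sample.
sample = np.array([1.2, -0.4, 2.1, 0.8, -1.5, 0.9, 1.7, 0.3])

n = len(sample)
s = sample.std(ddof=1)       # sample standard deviation (n - 1 divisor)
se_mean = s / np.sqrt(n)     # standard error of the sample mean

print(f"sample mean: {sample.mean():.3f}")
print(f"standard error: {se_mean:.3f}")
```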
Estimation
By applying the central limit theorem and standard error equations, we can use sample means to draw conclusions about the population mean, regardless of the distribution of the population.
- Central Limit Theorem: Given a population with mean µ and finite variance σ2, and a large enough sample size (n), the distribution of the sample mean X̄ across repeated samples approaches a normal distribution with mean equal to the population µ. The variance of that sampling distribution will be σ2/n, the population variance over the sample size.
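A quick simulation makes this concrete. The sketch below (a toy example of my own, not from the curriculum) draws repeated samples from a deliberately skewed exponential population and checks that the sample means behave as the CLT predicts:

```python
import numpy as np

rng = np.random.default_rng(0)

# A non-normal population: exponential with mean 1 and variance 1.
mu, n, trials = 1.0, 50, 10_000

# Draw many samples of size n and record each sample mean.
sample_means = rng.exponential(scale=1.0, size=(trials, n)).mean(axis=1)

# Per the CLT, the mean of X̄ should approach µ and its variance σ²/n.
print(f"mean of sample means: {sample_means.mean():.4f}  (µ = {mu})")
print(f"variance of sample means: {sample_means.var():.4f}  (σ²/n = {1.0 / n:.4f})")
```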
Using the CLT we can take a point estimator like the sample mean and make a broader statement about the population mean, even if the population is not normally distributed. While on the subject, the CFA wants you to know that unbiasedness, efficiency and consistency are properties of good point estimators.
- Unbiased: The expected value of the estimator equals the population parameter it estimates.
- Efficient: No other unbiased estimator of the parameter has a smaller variance around the mean.
- Consistent: As sample size increases, the sample estimator more accurately reflects the population.
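To make unbiasedness concrete, here is a small sketch (my own illustration, not CFA material) comparing the biased variance estimator, which divides by n, with the unbiased one, which divides by n − 1. Note the biased version is still consistent, since the gap vanishes as n grows:

```python
import numpy as np

rng = np.random.default_rng(1)
true_var = 4.0  # population variance of a N(0, 2²) population

for n in (5, 50, 500):
    samples = rng.normal(0.0, 2.0, size=(10_000, n))
    biased = samples.var(axis=1, ddof=0).mean()    # divides by n
    unbiased = samples.var(axis=1, ddof=1).mean()  # divides by n - 1
    print(f"n={n:>4}: biased={biased:.3f}  unbiased={unbiased:.3f}  (true σ² = {true_var})")
```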
With confidence intervals, instead of taking a point estimator of a sample parameter and claiming that it is the exact value of the population parameter, we instead construct a range that represents the location of the population mean within a chosen degree of confidence, represented by 1 – α.
- Confidence Intervals: X̄ ± zα/2(σ/√(n))
Essentially, we add and subtract from the sample mean its standard error multiplied by a reliability factor for the chosen degree of confidence, creating a range of possible population means with a certain degree of confidence that the true population mean will be inside that range.
For instance, if we used a 90% confidence interval, where α/2 = .05, the reliability factor zα/2 would be 1.645. With a calculated range of [-13, 9], then, we could say that we are 90% confident that the population mean lies between -13 and 9, given our sample mean X̄ and known population variance σ2.
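In Python, this computation might look like the following sketch (the sample mean and σ are invented to roughly reproduce the [-13, 9] interval above, and scipy is assumed available):

```python
import numpy as np
from scipy import stats

x_bar, sigma, n = -2.0, 21.1, 10   # hypothetical sample mean, known σ, sample size
confidence = 0.90

z = stats.norm.ppf(1 - (1 - confidence) / 2)  # reliability factor z_{α/2} ≈ 1.645
half_width = z * sigma / np.sqrt(n)

print(f"{confidence:.0%} CI: [{x_bar - half_width:.1f}, {x_bar + half_width:.1f}]")
```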
If the population variance is not known, we use the sample variance, and in that case we are depending on the CLT. With a small sample size in particular, we would use a t-distribution's reliability factors. This introduces the concept of degrees of freedom, which determines which t-distribution we are using; the higher the df, the closer the t-distribution is to the normal.
- Degrees of freedom: n – 1.
- Confidence Intervals, Student's t: X̄ ± tα/2(s/√(n)), where tα/2 is determined by the df and desired confidence level.
The tα/2 and zα/2 values are found in tables. Note that even with an unknown population variance, if we have a large enough sample size, it is fine to use the normal distribution instead of the Student's t. The following chart from Investopedia summarizes when to use which test statistic.
| Distribution | Population Variance | Sample Size | Appropriate Statistic |
| --- | --- | --- | --- |
| Normal | Known | Small | z |
| Normal | Known | Large | z |
| Normal | Unknown | Small | t |
| Normal | Unknown | Large | t or z |
| Non-normal | Known | Small | unavailable |
| Non-normal | Known | Large | z |
| Non-normal | Unknown | Small | unavailable |
| Non-normal | Unknown | Large | t or z |
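Following the chart, here is a minimal sketch (invented data, scipy assumed) of the small-sample, unknown-variance case, with the z-interval shown for comparison:

```python
import numpy as np
from scipy import stats

sample = np.array([2.1, -0.3, 1.4, 0.8, 2.7, -1.1, 0.5, 1.9])  # hypothetical small sample
n, x_bar, s = len(sample), sample.mean(), sample.std(ddof=1)
alpha = 0.05

# Small sample, unknown population variance -> Student's t with n - 1 df.
t_crit = stats.t.ppf(1 - alpha / 2, df=n - 1)
half_width = t_crit * s / np.sqrt(n)
print(f"95% t-interval: [{x_bar - half_width:.3f}, {x_bar + half_width:.3f}]")

# For comparison: the narrower z-interval a large sample would justify.
z_crit = stats.norm.ppf(1 - alpha / 2)
half_width_z = z_crit * s / np.sqrt(n)
print(f"95% z-interval: [{x_bar - half_width_z:.3f}, {x_bar + half_width_z:.3f}]")
```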
Hypothesis Testing
We can now take our understanding of confidence intervals and use it to evaluate claims about population parameters through hypothesis testing. We ask a question about a parameter such as a mean value; this is the null hypothesis. It also implies an alternative hypothesis, which is automatically accepted if the null hypothesis is rejected. This illustrates the binary nature of the hypotheses being tested.
There are seven steps to hypothesis testing:
- State the null and alternative hypotheses
- Identify the appropriate test-statistic and probability distribution
- Specify significance level (similar to reliability factor from before)
- State the decision rule (determine the critical value/rejection point)
- Calculate test-stat using data collected
- Compare test-stat to critical value
- Make economic/investment decision
Hypothesis testing is related to the estimation with confidence intervals we just covered. In estimation, we use a sample statistic to make a specific statement about the location of the population parameter. In hypothesis testing, we instead ask whether a sample statistic is likely to be consistent with a hypothesized value of the population parameter.
There are two types of binary hypotheses we can form for testing, and we use either a one-tailed test or a two-tailed test accordingly. If our hypothesis makes a greater-than or less-than statement, we use a one-tailed test, and our rejection point will be determined by the α significance level. If our hypothesis tests an equal-to statement, we use a two-tailed test and the rejection points will be determined by an α/2 significance level. A rejection point is the critical value beyond which, if our test statistic falls, we reject the null hypothesis.
- Test-statistic, single mean (z): (X̄ – µ0)/(s/√(n)), where µ0 is the hypothesized population mean
For a hypothesis test with an equality statement, H0: Θ = Θ0, we have two rejection points at ±zα/2. For α = .05, where ±zα/2 = ±1.96, we reject if z < -1.96 or z > 1.96.
For a hypothesis test with a less-than-or-equal-to statement, H0: Θ ≤ Θ0, we reject if z > zα.
For a hypothesis test with a greater-than-or-equal-to statement, H0: Θ ≥ Θ0, we reject if z < -zα.
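Putting the rejection rules together, a minimal sketch (the numbers are made up) of a single-mean z-test:

```python
import numpy as np
from scipy import stats

x_bar, mu_0, s, n = 0.7, 0.0, 2.0, 64   # hypothetical sample stats and H0 value
alpha = 0.05

z = (x_bar - mu_0) / (s / np.sqrt(n))   # test statistic for a single mean

# Two-tailed: H0: µ = µ0, reject if |z| > z_{α/2}.
z_two = stats.norm.ppf(1 - alpha / 2)
print(f"z = {z:.2f}; two-tailed: reject H0? {abs(z) > z_two}")

# One-tailed: H0: µ <= µ0, reject if z > z_α.
z_one = stats.norm.ppf(1 - alpha)
print(f"one-tailed: reject H0? {z > z_one}")
```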
There are two types of mistakes we can make: a type I error, where we reject a true null hypothesis, and a type II error, where we fail to reject a false null hypothesis. The significance level α is the probability of a type I error, so a smaller α lowers the chance of a type I error, but at the cost of a higher chance of a type II error. The power of a test is the probability of correctly rejecting a false null; the higher the power, the lower the chance of a type II error.
Another measure of significance is the p-value, the smallest level of significance at which the null can be rejected. The smaller the p-value, the more confident we can be in our test-statistic result. If the p-value is smaller than our level of significance, we reject our null hypothesis.
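Continuing the z-test sketch above, the two-tailed p-value could be computed like so:

```python
from scipy import stats

z, alpha = 2.8, 0.05  # test statistic from the sketch above

# Two-tailed p-value: probability of a statistic at least this extreme under H0.
p_value = 2 * stats.norm.sf(abs(z))
print(f"p-value = {p_value:.4f}; reject H0? {p_value < alpha}")
```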
There are four distributions we can use for our hypothesis tests. So far we have used the normal and Student's t distributions, which we use with test statistics concerning single means with known or unknown population variances. We can also use the t-distribution with test statistics concerning differences between two means and concerning mean differences (paired observations).
For tests concerning a single variance, we can use the χ2 (Chi – squared) test statistic and distribution. For tests concerning comparisons between two variances, we use the F-test statistic and distribution.
- Test-statistic, difference between two means: [(X̄₁ – X̄₂) – (µ₁ – µ₂)] / √(s²p/n₁ + s²p/n₂)
- s²p (pooled variance): [(n₁ – 1)s₁² + (n₂ – 1)s₂²]/(n₁ + n₂ – 2)
- Test-statistic, mean differences: (d̄ – µd0)/sd̄
- d̄: (1/n)Σdᵢ, the average of the paired sample differences
- sd̄: sd/√(n), the standard error of the mean difference
- Test statistic, χ² (single variance): (n – 1)s²/σ₀², with n – 1 degrees of freedom, where σ₀² is the hypothesized variance
- Test statistic, F (two variances): s₁²/s₂², with n₁ – 1 and n₂ – 1 degrees of freedom
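Each of these tests is easy to run in Python. The sketch below (simulated data for illustration) uses scipy's built-in t-tests and computes the χ² and F statistics by hand from the formulas above:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical independent return samples for the two-mean and variance tests.
a = rng.normal(0.5, 1.0, size=30)
b = rng.normal(0.1, 1.2, size=25)

# Difference between two means: pooled-variance t-test on independent samples.
t_ind, p_ind = stats.ttest_ind(a, b, equal_var=True)
print(f"two-sample t: t = {t_ind:.2f}, p = {p_ind:.4f}")

# Mean differences: paired t-test, e.g. the same assets in two periods.
before = rng.normal(0.5, 1.0, size=25)
after = before + rng.normal(0.1, 0.3, size=25)
t_rel, p_rel = stats.ttest_rel(before, after)
print(f"paired t: t = {t_rel:.2f}, p = {p_rel:.4f}")

# Single variance: χ² statistic (n - 1)s²/σ0² against a hypothesized σ0².
sigma0_sq, n = 1.0, len(a)
chi2 = (n - 1) * a.var(ddof=1) / sigma0_sq
print(f"chi-squared: stat = {chi2:.2f}, one-tailed p = {stats.chi2.sf(chi2, df=n - 1):.4f}")

# Two variances: F statistic s1²/s2² with (n1 - 1, n2 - 1) degrees of freedom.
f = a.var(ddof=1) / b.var(ddof=1)
print(f"F-test: stat = {f:.2f}, one-tailed p = {stats.f.sf(f, dfn=len(a) - 1, dfd=len(b) - 1):.4f}")
```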
When we are not confident in the assumptions required for one of the parametric tests above, or the data do not meet those requirements, we can still use non-parametric tests for hypothesis testing.
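For example, the Wilcoxon signed-rank test is a non-parametric alternative to the paired t-test that makes no normality assumption; a quick sketch with simulated skewed data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
before = rng.exponential(1.0, size=20)           # skewed, non-normal data
after = before + rng.normal(0.2, 0.5, size=20)   # hypothetical second observation

# Wilcoxon signed-rank test on the paired differences.
stat, p = stats.wilcoxon(before, after)
print(f"Wilcoxon: stat = {stat:.1f}, p = {p:.4f}")
```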