F distribution and F-test

Basic Definitions:


F Distribution: The ratio of two independent chi-square variables, each divided by its degrees of freedom. When the population variances are equal, this simplifies to the ratio of the sample variances.

Analysis of Variance (ANOVA): A technique for testing hypotheses about the means of three or more populations.

One-Way Analysis of Variance: An analysis of variance with a single independent variable. The null hypothesis is that all population means are equal; the alternative hypothesis is that at least one mean differs.
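A one-way ANOVA can be carried out directly with SciPy's f_oneway. This is a minimal sketch, assuming SciPy is available; the three groups of data values below are made-up illustration numbers.

```python
# One-way ANOVA: test whether three group means are equal.
# Assumes SciPy is installed; the data are invented for illustration.
from scipy.stats import f_oneway

group_a = [18, 21, 19, 22, 20]
group_b = [24, 26, 23, 25, 27]
group_c = [19, 20, 22, 21, 18]

stat, p_value = f_oneway(group_a, group_b, group_c)
print(f"F = {stat:.3f}, p = {p_value:.4f}")

# A small p-value (e.g. below alpha = 0.05) means we reject the null
# hypothesis that all population means are equal.
```

Here group_b is clearly shifted relative to the others, so the test rejects the null hypothesis.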

Between Group Variation: The variation due to the interaction among the samples, denoted SS(B) for the Sum of Squares between groups. If the sample means are all close to one another (and thus to the grand mean), this will be small. There are k samples, each contributing one data value (its sample mean), so there are k - 1 degrees of freedom.

Between Group Variance: The variance due to the interaction between samples, denoted MS(B) for Mean Square between groups. It is the between-group variation divided by its degrees of freedom.

Within Group Variation: The variation due to differences within individual samples, denoted SS(W) for the Sum of Squares within groups. Each sample is considered independently; no interaction between samples is involved. The degrees of freedom equal the sum of the individual degrees of freedom for each sample. Since each sample has degrees of freedom equal to one less than its sample size, and there are k samples, the total degrees of freedom is k less than the total sample size: df = N - k.

Within Group Variance: The variance due to differences within individual samples, denoted MS(W) for Mean Square within groups. It is computed by dividing the within-group variation by its degrees of freedom.
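The four quantities just defined can be computed by hand. The sketch below does so in pure Python under the definitions above; the three samples are made-up illustration values.

```python
# Between/within decomposition for a one-way ANOVA, in pure Python.
# The samples are invented illustration values.
samples = [
    [18, 21, 19, 22, 20],
    [24, 26, 23, 25, 27],
    [19, 20, 22, 21, 18],
]

k = len(samples)                          # number of groups
N = sum(len(s) for s in samples)          # total number of observations
grand_mean = sum(sum(s) for s in samples) / N
group_means = [sum(s) / len(s) for s in samples]

# SS(B): each group contributes its size times the squared distance of
# its mean from the grand mean; k - 1 degrees of freedom.
ss_between = sum(len(s) * (m - grand_mean) ** 2
                 for s, m in zip(samples, group_means))
ms_between = ss_between / (k - 1)         # MS(B) = SS(B) / (k - 1)

# SS(W): squared deviations of each value from its own group mean,
# summed over all groups; N - k degrees of freedom.
ss_within = sum((x - m) ** 2
                for s, m in zip(samples, group_means) for x in s)
ms_within = ss_within / (N - k)           # MS(W) = SS(W) / (N - k)

f_stat = ms_between / ms_within           # the ANOVA F statistic
print(ss_between, ss_within, round(f_stat, 3))
```

The F statistic is simply MS(B) / MS(W), which ties these definitions to the F-test discussed below.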

Scheffé Test: A test used to identify where the differences between means lie when the analysis of variance indicates that the means are not all equal. The Scheffé test is usually used when the sample sizes are different.

Tukey Test: A test used to identify where the differences between means lie when the analysis of variance indicates that the means are not all equal. The Tukey test is usually used when the sample sizes are all equal.

Two-Way Analysis of Variance: An extension of one-way analysis of variance in which there are two independent variables. A two-way ANOVA has three sets of hypotheses. The first null hypothesis is that there is no interaction between the two factors; the second is that the population means of the first factor are equal; the third is that the population means of the second factor are equal.
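The three hypotheses are tested by partitioning the total variation into factor, interaction, and error components. A minimal pure-Python sketch of this decomposition for a balanced design follows; the factor levels and all data values are invented.

```python
# Balanced two-way ANOVA decomposition in pure Python (a sketch).
# data[i][j] holds the replicate observations for level i of factor A
# and level j of factor B; all numbers are invented for illustration.
data = [
    [[4, 5], [6, 7], [8, 9]],     # factor A, level 1
    [[5, 6], [7, 8], [10, 11]],   # factor A, level 2
]
a, b, n = len(data), len(data[0]), len(data[0][0])

all_vals = [x for row in data for cell in row for x in cell]
gm = sum(all_vals) / len(all_vals)                       # grand mean

a_means = [sum(x for cell in row for x in cell) / (b * n) for row in data]
b_means = [sum(x for row in data for x in row[j]) / (a * n) for j in range(b)]
cell_means = [[sum(cell) / n for cell in row] for row in data]

ss_a = b * n * sum((m - gm) ** 2 for m in a_means)       # factor A main effect
ss_b = a * n * sum((m - gm) ** 2 for m in b_means)       # factor B main effect
ss_ab = n * sum((cell_means[i][j] - a_means[i] - b_means[j] + gm) ** 2
                for i in range(a) for j in range(b))     # interaction
ss_error = sum((x - cell_means[i][j]) ** 2
               for i in range(a) for j in range(b) for x in data[i][j])
ss_total = sum((x - gm) ** 2 for x in all_vals)

# For balanced designs, SS(total) = SS(A) + SS(B) + SS(AB) + SS(error)
# exactly; each F test divides the corresponding mean square by MS(error).
print(ss_a, ss_b, ss_ab, ss_error)
```

Each of the three null hypotheses gets its own F ratio: MS(A)/MS(error), MS(B)/MS(error), and MS(AB)/MS(error), with degrees of freedom a-1, b-1, and (a-1)(b-1) respectively.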

Factors: The two independent variables in a two-way ANOVA.

Treatment Groups: The groups formed by all possible combinations of the two factors. For example, if the first factor has 3 levels and the second has 2 levels, there are 3 x 2 = 6 distinct treatment groups.
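The treatment groups are just the Cartesian product of the two factors' levels, which itertools.product enumerates directly; the level names below are made-up placeholders.

```python
# Enumerate the 3 x 2 = 6 treatment groups from the example above.
# The level names are invented placeholders.
from itertools import product

factor_1 = ["low", "medium", "high"]   # first factor: 3 levels
factor_2 = ["drug", "placebo"]         # second factor: 2 levels

treatment_groups = list(product(factor_1, factor_2))
print(len(treatment_groups))   # prints 6
```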

Interaction Effect: The effect of one factor on the other; when an interaction is present, the effect of one factor depends on the level of the other factor.

Main Effect: The effect of each independent variable considered on its own.


The ratio of two independent chi-square variables, each divided by its degrees of freedom, follows the F distribution.


Since F is built from chi-square variables, many of the chi-square properties carry over to the F distribution:

a) F-values are all non-negative.
b) The distribution is non-symmetric (skewed to the right).
c) The mean is approximately 1.
d) There are two independent degrees of freedom, one for the numerator and one for the denominator.
e) There are many different F distributions, one for each pair of degrees of freedom.
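Properties (a) and (c) can be illustrated by building F values the way the definition says: as ratios of independent chi-square variables over their degrees of freedom. This is a pure-Python simulation sketch; the degrees of freedom are arbitrary choices.

```python
# Simulate F values as ratios of chi-square variables divided by their
# degrees of freedom; df values (10, 30) are arbitrary choices.
import random

random.seed(0)

def chi_square(df):
    """A chi-square draw: the sum of df squared standard normals."""
    return sum(random.gauss(0, 1) ** 2 for _ in range(df))

d1, d2 = 10, 30
f_values = [(chi_square(d1) / d1) / (chi_square(d2) / d2)
            for _ in range(20_000)]

print(min(f_values) >= 0)             # (a) all values non-negative
print(sum(f_values) / len(f_values))  # (c) mean is close to 1
```

The exact mean of an F distribution is d2 / (d2 - 2) for d2 > 2, which for these degrees of freedom is about 1.07 -- close to, but slightly above, 1.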

F-Test: A test designed to determine whether two population variances are equal, by comparing the ratio of the two sample variances. If the population variances are equal, the ratio of the sample variances will be close to 1.

When the null hypothesis is true, the F-test statistic simplifies: the ratio of the sample variances is the test statistic. If that ratio is far from 1, we reject the null hypothesis that the population variances are equal.

F = s1^2 / s2^2

There are several different F-tables, one for each level of significance. First find the correct level of significance, then look up the numerator and denominator degrees of freedom to find the critical value.

Notice that the tables only give critical values for right-tail tests. Because the F distribution is not symmetric and has no negative values, we cannot simply take the negative of the right critical value to obtain the left critical value. To find a left critical value: swap the degrees of freedom, look up the right critical value, and take its reciprocal. For example, the critical value with 0.05 in the left tail with 12 numerator and 15 denominator degrees of freedom is the reciprocal of the critical value with 0.05 in the right tail with 15 numerator and 12 denominator degrees of freedom.
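The reciprocal trick can be checked numerically with SciPy's inverse F CDF; this is a sketch assuming SciPy is available, using the same degrees of freedom as the example above.

```python
# Verify the left-critical-value trick numerically, assuming SciPy is
# installed. f.ppf is the inverse CDF of the F distribution.
from scipy.stats import f

# Right critical value: 0.05 in the right tail, 15 numerator / 12 denominator.
right_crit = f.ppf(0.95, dfn=15, dfd=12)

# Left critical value: 0.05 in the left tail, 12 numerator / 15 denominator,
# obtained by swapping the degrees of freedom and taking the reciprocal.
left_crit = 1.0 / right_crit

# The same value computed directly from the left tail.
direct = f.ppf(0.05, dfn=12, dfd=15)
print(round(left_crit, 4), round(direct, 4))
```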

Avoiding Left Critical Values:

Since left critical values are awkward to compute, they are frequently avoided altogether, and many textbooks follow this approach. We can force the F-test to be a right-tail test by placing the sample with the larger variance in the numerator and the sample with the smaller variance in the denominator. It does not matter which sample has the larger sample size, only which has the larger variance.

The numerator degrees of freedom are those of the sample with the larger variance (since it goes in the numerator), and the denominator degrees of freedom are those of the sample with the smaller variance (since it goes in the denominator).

If a two-tail test is being performed, you still divide alpha by 2, but you only look up and compare against the right critical value.

Assumptions / Notes:

A) The larger variance must always be placed in the numerator.
B) The test statistic is F = s1^2 / s2^2, where s1^2 > s2^2.
C) For a two-tail test, divide alpha by 2 and find the right critical value.
D) If standard deviations are given rather than variances, they must be squared.
E) If the degrees of freedom are not listed in the table, use the entry with the larger critical value (this happens to be the one with the smaller degrees of freedom), so you are less likely to reject in error (type I error).
F) The populations from which the samples are drawn must be normal.
G) The samples must be independent.
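The notes above can be put together into an end-to-end two-tail F-test. This is a sketch assuming SciPy is available; the two samples are made-up illustration values.

```python
# Two-tail F-test for equal variances, following the rules above.
# Assumes SciPy is installed; the samples are invented for illustration.
from scipy.stats import f

sample_1 = [12.1, 14.3, 13.8, 11.9, 15.2, 13.0, 14.8, 12.5]
sample_2 = [13.2, 13.5, 13.1, 13.8, 13.4, 13.6, 13.3]

def sample_variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

v1, v2 = sample_variance(sample_1), sample_variance(sample_2)

# Rule A: put the larger variance in the numerator to force a right-tail test.
if v1 >= v2:
    f_stat, dfn, dfd = v1 / v2, len(sample_1) - 1, len(sample_2) - 1
else:
    f_stat, dfn, dfd = v2 / v1, len(sample_2) - 1, len(sample_1) - 1

# Rule C: two-tail test, so alpha is halved before finding the critical value.
alpha = 0.05
crit = f.ppf(1 - alpha / 2, dfn, dfd)

print(f"F = {f_stat:.3f}, critical value = {crit:.3f}")
print("reject H0 (equal variances)" if f_stat > crit else "fail to reject H0")
```

Here sample_1 is visibly more spread out than sample_2, so its variance goes in the numerator and the test rejects the hypothesis of equal variances.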
