A test of whether data fit a particular theoretical distribution (goodness of fit), or else of whether two categorical variables are independent (association). In both cases, this is based on the result that the sum of the squared differences between the observed values and those expected under H0, divided by the expected values, has an approximate chi-squared distribution if H0 is true.
A general procedure for determining the probability that two different distributions are actually samples of the same population. In nuclear counting measurements, this test is frequently used to compare the observed variations in repeat counts of a radioactive sample to the variation predicted by statistical theory.
The name of a statistical test and of a probability distribution that has a particular shape. The chi-square test is used to determine whether the distribution of counts or occurrences is random across categories. EX: If a die is fair (not loaded), each of the numbers 1 through 6 should occur with the same frequency. If three handpieces are equally safe, the number of accidents with each should be independent of which handpiece is used and how often it is used. The result of a chi-square test is "looked up" in a table of chi-square distributions to determine the probability of such a test value occurring by chance, the p-value. Usually performed as a hand calculation. {See templates – Choosing a statistical test, chi-square.} [See degrees of freedom, independence.]
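The die example above can be sketched as a hand-style goodness-of-fit calculation in Python. The counts are hypothetical, and the critical value 11.070 is the standard table entry for df = 5 at alpha = 0.05:

```python
# Goodness-of-fit chi-square test for a fair die (hand calculation).
# Hypothetical counts from 60 rolls; each face is expected 10 times under H0.
observed = [8, 12, 9, 11, 10, 10]
expected = [sum(observed) / len(observed)] * len(observed)  # 10.0 per face

# Chi-square statistic: sum of (O - E)^2 / E over all categories.
chi2_stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Table value for df = 6 - 1 = 5 at alpha = 0.05 is 11.070.
CRITICAL_5DF_05 = 11.070
is_significant = chi2_stat > CRITICAL_5DF_05

print(f"chi-square = {chi2_stat:.3f}, reject H0: {is_significant}")
```

Here the statistic (1.0) is far below the critical value, so the hypothetical counts give no evidence that the die is loaded.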
A statistical test applied to nominal or categorical data.
A test used with classification tables to determine the influence (if any) of one factor (rows) on a second factor (columns) by assessing whether there is a difference in the proportion of an outcome in two or more groups. Examples of factors might be smoking (yes/no) against lung cancer (yes/no): in other words, does smoking status lead to a larger risk of lung cancer, or is it irrelevant? The chi-square test is not for use on continuous data, but specifically for counts. See also Classification table.
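The smoking-by-lung-cancer classification table described above can be sketched as follows; the counts are illustrative, not real data, and the critical value 3.841 is the table entry for df = 1 at alpha = 0.05:

```python
# 2x2 chi-square test of association: smoking (rows) vs. lung cancer (columns).
# Counts are illustrative only.
table = [[30, 70],   # smokers:     [cancer, no cancer]
         [10, 90]]   # non-smokers: [cancer, no cancer]

row_totals = [sum(row) for row in table]
col_totals = [sum(col) for col in zip(*table)]
grand_total = sum(row_totals)

# Expected cell count under independence: row total * column total / grand total.
chi2_stat = 0.0
for i, row in enumerate(table):
    for j, obs in enumerate(row):
        exp = row_totals[i] * col_totals[j] / grand_total
        chi2_stat += (obs - exp) ** 2 / exp

# Critical value for df = (2 - 1) * (2 - 1) = 1 at alpha = 0.05 is 3.841.
reject_independence = chi2_stat > 3.841
print(f"chi-square = {chi2_stat:.3f}, association detected: {reject_independence}")
```

With these made-up counts the statistic is 12.5, well above 3.841, so the test would reject independence of the two factors.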
A statistical test used to compare the difference between the relative frequency of observed events and the frequency expected under the assumption that is to be tested.
Coefficient of Determination (R^2): the square of the correlation coefficient, which estimates the percent of the total variation in the response that can be attributed to the variation of the input variables, given a regression equation or model. It is also used to evaluate the adequacy of a regression model.
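The R^2 definition above can be illustrated with a minimal least-squares fit in pure Python; the data points are invented for the example:

```python
# Coefficient of determination R^2 for a simple least-squares line (illustrative data).
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 8.0, 9.8]

n = len(x)
mean_x = sum(x) / n
mean_y = sum(y) / n

# Slope and intercept of the least-squares regression line.
slope = (sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
         / sum((xi - mean_x) ** 2 for xi in x))
intercept = mean_y - slope * mean_x

# R^2 = 1 - SS_residual / SS_total: share of the variation in y
# explained by the fitted model.
ss_res = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
ss_tot = sum((yi - mean_y) ** 2 for yi in y)
r_squared = 1 - ss_res / ss_tot
print(f"R^2 = {r_squared:.4f}")
```

An R^2 near 1 means the regression model accounts for almost all of the variation in the response, which is the sense in which it evaluates model adequacy.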
Used to test whether the standard deviation of a population is equal to a specified value.
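This variance form of the test can be sketched as follows; the sample and hypothesized sigma are invented, and the critical values 2.700 and 19.023 are the table entries for df = 9 at a two-sided alpha = 0.05:

```python
# Chi-square test for a population standard deviation (illustrative sample).
# H0: sigma = sigma0.  Test statistic: (n - 1) * s^2 / sigma0^2.
sample = [10.2, 9.8, 10.5, 9.9, 10.1, 10.4, 9.7, 10.0, 10.3, 9.6]
sigma0 = 0.5  # hypothesized population standard deviation

n = len(sample)
mean = sum(sample) / n
s_squared = sum((x - mean) ** 2 for x in sample) / (n - 1)  # sample variance

chi2_stat = (n - 1) * s_squared / sigma0 ** 2

# Two-sided test at alpha = 0.05, df = 9: table values are 2.700 and 19.023.
reject = chi2_stat < 2.700 or chi2_stat > 19.023
print(f"chi-square = {chi2_stat:.3f}, reject H0: {reject}")
```

For this sample the statistic (about 3.3) falls between the two critical values, so the hypothesized standard deviation would not be rejected.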
An inferential test that compares the spread of scores that would be expected to occur by chance with the actual spread that has been observed. If the observed scores vary sufficiently from the scores expected by chance, the result is statistically significant; whether they vary sufficiently is calculated by the test. This test is used on data that are categorical or nominal (when you are looking at the proportions of participants who fall into different categories).
The statistical test used to test the null hypothesis that proportions are equal or, equivalently, that factors or characteristics are independent or not associated.
A statistical significance test based on frequency of occurrence; it is applicable both to qualitative attributes and quantitative variables. Among its many uses, the most common are tests of hypothesized probabilities or probability distributions (goodness of fit), statistical dependence or independence (association), and common population (homogeneity). The formula for chi square (χ2) depends upon intended use, but is often expressible as a sum of terms of the type (O − E)²/E, where O is an observed frequency and E its hypothetical (expected) value.
A chi-square test is any statistical hypothesis test in which the test statistic has a chi-square distribution when the null hypothesis is true, or any in which the probability distribution of the test statistic (assuming the null hypothesis is true) can be made to approximate a chi-square distribution as closely as desired by making the sample size large enough.