Definitions for **"Statistical significance"**

**Related Terms:** Statistically significant, P-value, Significance, Significant, P value, Likelihood, Level of significance, Chi-square, Bayes' theorem, Power, Significance test, Probability theory, Prior probability, Probability distribution, Test statistic, Posterior probability, Significance level, Confidence level, Null hypothesis, Uncertainty, Theoretical probability, Chi-square test, Statistical test, Alpha, Binomial distribution, Probability, Random, Statistical power, Likelihood function, Likelihood ratio, Critical value, Bernoulli trial, Conditional probability, Confidence interval, Law of large numbers, Stochastic model, Deterministic, Hypothesis test, Significant difference, Bias, Probability density function, Experimental probability, Variability, Hypothesis testing, Likely, Poisson distribution, Sampling error, Event

a result is said to be significant when there is no more than a 5% chance that the same result could have been produced by random fluctuations. This is expressed as p ≤ .05.

Statistical significance refers to the scientific legitimacy of a research finding. Research findings are typically considered statistically significant only if the results would occur by chance less than five times out of one hundred. Statistical significance is based on a mathematical cut-off. It communicates very little about whether or not the finding is useful in real life (21). See practical significance for more information. In a study, a researcher finds a small relationship between happiness and time spent outdoors. In real life, however, the actual influence of time outdoors on happiness proves so small that being out all day has little effect. In this example, the research findings are statistically significant, but not practically significant.

a difference found among groups after a comparative randomized investigation that is not likely to be caused by chance alone. The probability of it occurring by chance alone is often reported as P < 0.05.

a low probability (usually less than 5 percent) that the results of a research study are due to chance factors rather than to the independent variable. (56, 653)

A method that tests whether the result given is so rare that it is unlikely to be due to chance alone. Examples include a p-value (for probability) or a t-test. The most common cut-off is 5%; that is, if this result would occur by chance only one in twenty times, it would be considered significant.
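As an illustrative sketch of the kind of test these definitions describe, the following Python snippet computes a two-sample t statistic and p-value by hand. The sample data are invented, and a normal approximation stands in for the exact t distribution, so this is a sketch rather than a full implementation:

```python
import math

def welch_t_p_value(a, b):
    """Two-sample Welch t-test: t statistic and a two-sided p-value
    using a normal approximation (adequate for larger samples)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    se = math.sqrt(va / na + vb / nb)
    t = (ma - mb) / se
    # Two-sided tail probability from the standard normal distribution.
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))
    return t, p

# Hypothetical measurements for two groups.
group_a = [5.1, 4.9, 5.3, 5.0, 5.2, 4.8, 5.1, 5.0]
group_b = [5.6, 5.8, 5.5, 5.9, 5.7, 5.6, 5.8, 5.7]
t, p = welch_t_p_value(group_a, group_b)
print(f"t = {t:.2f}, p = {p:.4f}, significant at 5%: {p < 0.05}")
```

With these made-up numbers the p-value falls well below the conventional 5% cut-off, so the difference would be called statistically significant.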

The degree to which a value is greater or smaller than would be expected by chance. Typically, a relationship is considered statistically significant when the probability of obtaining that result by chance is less than 5% if there were, in fact, no relationship in the population.

The probability that an event did not happen by chance alone. A result is deemed statistically significant if statistical methods have been used to prove that a certain event is highly unlikely.

A result that has a low probability of having occurred by chance alone and is by convention regarded as important.

The probability that and degree to which the results of an experimental study describe an actual relationship between two factors beyond that which might be expected by pure coincidence.

Probably true (not due to chance). A measure of the likelihood that a relationship would occur purely by chance. Thus, a statistical estimate of the effects of a program may be said to be significantly different from zero at the 5 percent level if there is less than a 1 in 20 chance that the effects could have occurred purely by chance.

the likelihood that an observed relationship among variables is actually present rather than a fluke of sampling or measurement. Researchers use significance tests to calculate the chance that we observe a relationship that does not actually exist - a "Type I error." Significance is usually referred to in terms of "confidence" or "confidence levels." In media polling, it is often represented as "plus or minus n%."

A term based on statistical tests that is used to denote the probability that the observed association could have occurred by chance alone. Does not refer to medical or biological significance of an association. For example, a statistical significance at the 1-percent level indicates a 1-in-100 chance that a result can be ascribed to chance.

the probability of obtaining a given result by chance. High statistical significance does not necessarily imply importance.

A determination of the probability of obtaining the particular distribution of the data on the assumption that the null hypothesis is true. Or, more simply put, the probability of coming to a false positive conclusion [See pg. 2 of McLarty, J. W., "How Many Subjects are Required for a Study?" IRB 9 (No. 5, September/October 1987): 1-3.]. If the probability is less than or equal to a predetermined value (e.g., 0.05 or 0.01), then the null hypothesis is rejected at that significance level (0.05 or 0.01).

The likelihood that an association between exposure and disease risk could have occurred by chance alone.

Statistical term indicating that a result has ninety-five percent certainty of being due to a factor (such as the Fast ForWord products) other than chance.

An estimate whose size or magnitude is not due merely to chance. Usually expressed with a specified probability and confidence level (for example 95%).

A point at which statistics indicate that a set of measurements or observations does not just actually differ from normal (i.e. it's abnormal) or from a control group, but that the observed difference is unlikely (typically less than a 1-in-20 risk) to have come about just by the effects of chance (we then say that there is a P-value -- for Probability -- of less than 0.05 ... which is what 1-in-20 is when expressed as a proportion). Footnote: One logical follow-on from this is that if you study 20 different variables for such statistical significance the odds are that one will have a P-value less than 0.05, just by the expectations of chance! In practice, when multiple observations like this are being made statisticians will put a tougher criterion on what is 'statistically significant', such as P less than 0.02 or 0.01. For academic legitimacy, P values should be set before looking at the data, not afterward.
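The footnote's point about testing 20 variables can be checked with a short calculation; `familywise_error` is a hypothetical helper name for the standard formula 1 − (1 − α)^k:

```python
def familywise_error(k, alpha=0.05):
    """Probability that at least one of k independent tests crosses
    p < alpha purely by chance, when every null hypothesis is true."""
    return 1 - (1 - alpha) ** k

for k in (1, 5, 20):
    print(f"{k:2d} tests: P(at least one false positive) "
          f"= {familywise_error(k):.2f}")
```

With 20 independent tests at α = 0.05, the chance of at least one spurious "significant" result is about 64%, which is why stricter criteria (or corrections such as Bonferroni's, dividing α by the number of tests) are used for multiple comparisons.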

A conclusion made about the results of statistical tests. If results are statistically significant, it is unlikely they happened by chance or by errors in sampling. Statistical significance does not mean that the results automatically have practical significance or importance. Also see p value.

A measure of how confidently an observed difference between two or more groups can be attributed to the study interventions. The p value is the most commonly encountered way of reporting statistical significance. The methods assume that the study is free of bias. Clinical significance is entirely independent of statistical significance.

The probability that an observed difference (for example, between two arms of a vaccine trial) is due to the vaccine rather than to chance alone. This probability is determined by using statistical tests to evaluate collected data.

A result is said to be statistically significant when it is highly unlikely that chance produced the result. To be considered significant, the chance probability must be less than 1 in 20 (5%, or 0.05); see also Mean Chance Expectation.

A probability estimate in which the outcome of a measurement has less than a 5 percent (5 in 100) chance of occurring purely by chance. (McGuinness, 1997)

The level of confidence with which one can conclude that a difference between two or more groups (generally a treatment and control group) is the result of the treatment delivered rather than the selection process or chance. A probability value of .05 is widely accepted as the threshold for statistical significance in the social and behavioral sciences; a probability value below this threshold (.05) indicates that a difference of this magnitude could happen by chance less than 5 percent of the time.

Refers to the probability or likelihood that an event occurred by chance alone. The results of a study are said to be statistically significant if the probability that the results occurred by chance is less than 0.05 (that is fewer than 1 chance in 20). In medical research, the level of statistical significance can depend on factors such as the number of study participants and the magnitude of the differences in outcomes observed between study participants.

Interpreted as the probability of a Type I error. Test statistics that meet or exceed a critical value are interpreted as evidence that the differences exhibited in the sample statistics are not due to random sampling error and therefore are evidence supporting the conclusion there is a real difference in the populations from which the sample data were obtained.

When quantitative differences found between populations are labeled as statistically significant, it means the differences are considered highly likely to be real and are not due to mere coincidence (random error). For example, if the diabetes rate for Hispanics is higher than the rate for other racial/ethnic groups and those differences are statistically significant, it means the rates probably reflect true disparities between groups.

Level at which an investigator can conclude that observed differences are not due to chance alone; for example, a p value of .05 (also called significance at the .05 level) indicates that there is about 1 chance in 20 that the differences observed occurred by chance alone.

How likely a given result (in, e.g., an epidemiological study) was to have come about just by random chance. Conventionally, if the likelihood of it coming about by chance, in the absence of any actual causal risk, is 5% or less, the result is said to be statistically significant.

An inference that the probability is low that the observed difference in quantities being measured could be due to variability in the data rather than an actual difference in the quantities themselves. The inference that an observed difference is statistically significant is typically based on a test to reject one hypothesis and accept another.

a conclusion that an intervention has a true effect, based upon observed differences in outcomes between the treatment and control groups that are sufficiently large so that these differences are unlikely to have occurred due to chance, as determined by a statistical test. Statistical significance indicates the probability that the observed difference was due to chance if the null hypothesis is true; it does not provide information about the magnitude of a treatment effect. (Statistical significance is necessary but not sufficient for clinical significance.)

Statistical significance tells you whether two scores are really different from each other. One way to determine statistical significance is to check whether the confidence intervals around the scores overlap.
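A minimal sketch of the overlap check mentioned above, assuming normal-approximation 95% intervals and invented data. (Non-overlapping intervals do imply a significant difference, but overlapping intervals do not always imply the absence of one, so this check is conservative):

```python
import math
import statistics

def ci95(sample):
    """Approximate 95% confidence interval for the mean
    (normal approximation; a sketch for moderately large samples)."""
    m = statistics.mean(sample)
    se = statistics.stdev(sample) / math.sqrt(len(sample))
    return (m - 1.96 * se, m + 1.96 * se)

def intervals_overlap(ci_a, ci_b):
    """True if the two (low, high) intervals share any values."""
    return ci_a[0] <= ci_b[1] and ci_b[0] <= ci_a[1]

# Hypothetical scores for two groups.
scores_a = [72, 75, 71, 74, 73, 76, 72, 74]
scores_b = [81, 79, 83, 80, 82, 78, 81, 80]
print(ci95(scores_a), ci95(scores_b),
      "overlap:", intervals_overlap(ci95(scores_a), ci95(scores_b)))
```

Here the two intervals do not overlap, so the two scores would be called really different in the sense of this definition.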

Degree to which the observed study results could have occurred by chance alone. This can be determined by the application of a statistical test to a set of data to generate a P value. Statistical significance means the study results are unlikely to be due to chance alone.

The probability that an event or difference occurred by chance alone. In clinical trials, the level of statistical significance depends on the number of participants studied and the observations made, as well as the magnitude of differences observed.

The mathematical measure of the probability that the results of a study are attributable to chance rather than to the effect of the therapy or agent being evaluated. If this probability is low enough, given the size of the study and strength of the results, the results are considered to be "statistically significant."

The degree to which an observed result, such as a difference between two measurements, can be relied upon and not attributed to random error in sampling and measurement.

the probability that an event or difference occurred as the result of the intervention (drug or vaccine) rather than by chance alone. This probability is determined by using statistical tests to evaluate collected data. Guidelines for defining significance are chosen before data collection begins.

the probability that an observed outcome of an experiment or trial is due to chance alone. In general, a result of a clinical trial is considered statistically significant if there is a less than 5% probability that the difference observed would occur by chance alone if the treatments being compared were equally effective (e.g., a p-value of less than .05).

The probability of obtaining an effect or association in a study sample as or more extreme than the effect or association observed if there was actually no effect.

The findings of a study may be just an unusual fluke. A statistical test can determine whether or not the results of the study are likely to be a fluke. That test calculates the probability of the result being caused by chance: it provides a p value (probability). If the p value is less than 0.05, the result is unlikely to be due to chance. A result with a p value of less than 0.05 is statistically significant. The 0.05 level is equal to odds of 19 to 1 (or a 1 in 20 chance). (See also p value, confidence interval, power, and probability).

The probability of obtaining an effect or association in a study sample as or more extreme than the one observed if there was actually no effect in the population. Based on the reasoning that if there truly is no effect, results like those observed are unlikely to occur. A P value of less than five percent (P < 0.05) means the result would occur less than five percent of the time if there were no effect, and is generally considered evidence of a true treatment effect or a true relationship.

An explicit assumption by the analyst that a relationship revealed in the sample data also exists in the population as a whole, based on the relatively small probability that it would result only from sampling error if it did not exist in the population.

The probability that the difference between the outcomes of the control and experimental groups is great enough that it is unlikely to be due solely to chance. The probability that the null hypothesis can be rejected at a predetermined significance level (0.05 or 0.01).

This is the confidence we have in confirming or rejecting a hypothesis. For example, with a correlation coefficient, the significance relates to the confidence we have that the coefficient is not equal to 0.

Used to evaluate the likelihood that chance variability may be considered an explanation for observed results. An appropriate mathematical test of statistical significance is calculated to determine the p value, which is the probability that the observed results may be due to chance alone. If the p value is less than an arbitrarily chosen value, commonly selected as 0.05, the findings are accepted as statistically significant at the 5 percent level. This indicates there is less than 5 percent probability that the observed results are due to chance alone.
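As a concrete sketch of comparing a computed p value against a pre-chosen 0.05 cut-off, here is an exact binomial test for 15 heads in 20 coin flips; the function name and data are illustrative:

```python
from math import comb

def binomial_two_sided_p(k, n, ):
    """Exact two-sided binomial p-value for k successes in n fair trials:
    one-sided tail probability of an outcome at least as extreme, doubled."""
    tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return min(1.0, 2 * tail)

alpha = 0.05                      # chosen before looking at the data
p = binomial_two_sided_p(15, 20)  # 15 heads in 20 flips of a fair coin
print(f"p = {p:.4f}; reject null at alpha = {alpha}: {p < alpha}")
```

For 15 heads in 20 flips the exact two-sided p value is about 0.041, which falls under the 0.05 cut-off, so the result would be accepted as statistically significant at the 5 percent level.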

The probability that the null hypothesis is true, calculated using inferential statistics. For example, if an inferential test of differences yields the result P = 0.02, then this means that the calculated probability that the difference between two sets of data is due to chance (and not due to the independent variable) is 0.02 (which is the same thing as 1 in 50 or 2%).

is the level of recurrence or probability the tested signal or hypothesis would not be masked by noise in another identical but statistically independent sample [pg 15, 3

Generally interpreted as a result that would occur by chance no more than 1 time in 20, i.e., with a P-value less than or equal to .05. It occurs when the null hypothesis is rejected.

is a measure of how likely it is that the reported result or difference was obtained by chance. For example, for a result significant at the .05 level, the likelihood that the result was obtained by chance is less than 5 times out of 100. If the result or difference was significant at the .01 level, it was likely to have occurred by chance less than once out of one hundred times.

The probability that a result is not likely to be due to chance alone. By convention, a difference between two groups is usually considered statistically significant if chance could explain it only 5% of the time or less. Study design considerations may influence the a priori choice of a different level of statistical significance.

The trustworthiness of an obtained statistical measure as a statement about reality; for example, the probability that the population mean falls within the limits determined from a sample. The expression refers to the reliability of the statistical finding and not to its importance.

In statistics, a result is called significant if it is unlikely to have occurred by chance. "A statistically significant difference" simply means there is statistical evidence that there is a difference; it does not mean the difference is necessarily large, important or significant in the usual sense of the word.

A mathematical test that indicates that groups being compared are different.

Results of a test to find out if a trend really is rising or falling, or whether any apparent rise or fall can be explained by random variation in the measurement.