failing to reject a null hypothesis that is, in fact, false.

Failing to reject the null hypothesis when it is false.

In the statistical test of a hypothesis, the error incurred by accepting the null hypothesis when the null hypothesis is false and some alternative to the null hypothesis is true.

Failing to find a true and existing difference or association based on a statistical test. This can be caused by violating the assumptions of a statistical test, using the wrong test, or by random error. EX: An experiment is performed on two obturation techniques, but it is inconclusive. The possibility that no difference will be found when one really exists is called the Type II error. A priori, Type I error is set by picking an alpha level for the test. The probability of a Type II error is abbreviated β. [See alpha, operating characteristic curve, power, power curve, Type I error.]

a Type II error occurs when the decision maker fails to reject the null hypothesis when it is actually false. See false negative decision error.

Failure to reject a null hypothesis that is actually false. See Beta.

a false negative - that is, there is a real difference, but the statistical test fails to show the difference to be statistically significant

a statistical concept described in introductory textbooks

The event that a false null hypothesis is not rejected

An error of statistical inference when the null hypothesis is retained when it is false. This is an error of "not seeing enough in the data."

Retaining a false null hypothesis. Also referred to as Beta.

same as false-negative error.

in survey research, the occurrence whereby the survey reveals a result that is not statistically significant when, in fact, a real difference exists in the population; a situation that grows out of the fact that a survey does not include all individuals or objects in the population of interest to the researcher. See Type I error.

In a hypothesis test, a type II error occurs when the null hypothesis H0 is not rejected when it is, in fact, false. A type II error is frequently due to sample sizes being too small.
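The link between small samples and Type II errors in the entry above can be demonstrated by simulation. This is an illustrative sketch, not part of any of the quoted definitions: it runs a one-sample z-test of H0: mu = 0 (known sigma = 1, alpha = 0.05, two-sided) on samples drawn from a population whose true mean is 0.3, and counts how often the test fails to reject — each such failure is a Type II error.

```python
import random
import statistics

def estimated_beta(n, true_mean=0.3, sigma=1.0, trials=5000, seed=42):
    """Estimate the Type II error rate (beta) of a two-sided z-test
    of H0: mu = 0 at alpha = 0.05, given the true population mean."""
    rng = random.Random(seed)
    misses = 0
    for _ in range(trials):
        sample = [rng.gauss(true_mean, sigma) for _ in range(n)]
        z = statistics.mean(sample) / (sigma / n ** 0.5)
        if abs(z) <= 1.96:   # fail to reject H0 although it is false
            misses += 1
    return misses / trials

# Larger samples make a Type II error less likely.
beta_small = estimated_beta(10)    # roughly 0.8 for this effect size
beta_large = estimated_beta(100)   # roughly 0.15
print(beta_small, beta_large)
```

The names `estimated_beta` and the chosen effect size of 0.3 are assumptions for illustration; the point is only that beta falls as n grows.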

The error of accepting the null hypothesis when it is false.

(beta): The acceptance of the null hypothesis (H0) when it is not true (i.e., determining that the effluent is not toxic when the effluent is toxic) (EPA, 2000).

Not rejecting a false null hypothesis. The probability of such an error is called the beta risk. For more details, see hypothesis tests.
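For a simple test with known sigma, the beta risk mentioned above can also be computed in closed form rather than simulated. The following sketch (the function names are hypothetical, not from any quoted source) computes beta for a one-sided z-test of H0: mu <= mu0 at a given true mean, using only the standard normal CDF:

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def beta_risk(mu0, true_mu, sigma, n):
    """Beta risk of a one-sided z-test of H0: mu <= mu0 at alpha = 0.05,
    when the true mean is true_mu and sigma is known."""
    z_crit = 1.6449  # 95th percentile of the standard normal (alpha = 0.05)
    # Standardized shift of the sampling distribution under the true mean.
    shift = (true_mu - mu0) / (sigma / math.sqrt(n))
    # Probability the test statistic stays below the cutoff even though
    # H0 is false -> Type II error.
    return normal_cdf(z_crit - shift)

beta = beta_risk(mu0=0.0, true_mu=0.5, sigma=1.0, n=25)
power = 1.0 - beta   # power is the complement of the beta risk
print(beta, power)
```

This also makes the relationship between beta and power explicit: the power of the test is 1 minus the beta risk.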

When a test wrongly shows an effect or condition to be absent (e.g., that a woman is not pregnant when, in fact, she is). When a researcher fails to reject a false null hypothesis (see False Negative).

in statistical process control, incorrectly inferring the process is in control when the process is actually out of control. In hypothesis testing, incorrectly failing to reject the null hypothesis.

Failing to find a difference when one actually exists.

A type II error is a MISS - the error of inferring that the experimental hypothesis is false when in "reality" it is true.

The error that results if a false null hypothesis is not rejected or if a difference is not detected when there is a difference.

Accepting something that should have been rejected. Also known as beta error.