See ALPHA RISK.
The researcher's data-based decision that the null hypothesis is false when it is really true. This incorrect conclusion is not the result of a mistake in the analysis. By chance, a "large" dispersion among the means (which should happen with probability alpha) has actually occurred this time.
False positive - wrongly concluding that there is a significant difference. See also Multiple testing (multiplicity).
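The multiple-testing note above can be made concrete: if each test is run at significance level alpha, the chance of at least one Type I error across m independent tests is 1 - (1 - alpha)^m. A quick sketch (illustrative, not taken from any of the quoted glossaries):

```python
# Familywise Type I error rate for m independent tests at level alpha.
ALPHA = 0.05

for m in (1, 5, 20):
    # Probability of at least one false positive among m independent tests.
    fwer = 1 - (1 - ALPHA) ** m
    print(f"{m:>2} independent tests: P(at least one Type I error) = {fwer:.3f}")
```

With 20 independent tests at alpha = 0.05, the familywise error rate already exceeds 64%, which is why multiplicity corrections (e.g. Bonferroni) exist.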
If the null hypothesis is true but is rejected, this results in a false positive result.
rejecting a null hypothesis that is, in fact, true.
In the statistical test of a hypothesis, the error incurred by rejecting the null hypothesis when it is true.
An incorrect decision to reject something (such as a statistical hypothesis or a lot of products) when it is acceptable.
Claiming a statistically significant difference or association exists when it does not. This is caused by violating the assumptions of a statistical test, using the wrong test, or by random error. EX: An experiment is performed showing that one method of obturating canals is better than another. If the claim is made that the method is superior, there is still a possibility of being wrong, making a Type I error. A priori, the Type I error rate is set by picking an alpha-level for the test. [See Alpha, Operating characteristic curve, Power, Type II error]
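The point that the alpha-level fixes the Type I error rate a priori can be checked by simulation: generate data under a true null many times and count how often the test rejects. A minimal sketch using a two-sided z-test with known variance (the specific test and constants are illustrative assumptions, not from the quoted glossaries):

```python
import random

random.seed(0)

ALPHA = 0.05        # chosen Type I error rate
Z_CRIT = 1.96       # two-sided normal critical value for alpha = 0.05
N = 30              # sample size per simulated experiment
TRIALS = 20_000     # number of simulated experiments

false_positives = 0
for _ in range(TRIALS):
    # The null hypothesis is TRUE by construction: mean = 0, sd = 1.
    sample = [random.gauss(0.0, 1.0) for _ in range(N)]
    # z = sample_mean / (sigma / sqrt(n)), with sigma = 1 known.
    z = (sum(sample) / N) / (1.0 / N ** 0.5)
    if abs(z) > Z_CRIT:
        false_positives += 1  # rejected a true null: a Type I error

rate = false_positives / TRIALS
print(f"Observed Type I error rate: {rate:.3f} (nominal alpha = {ALPHA})")
```

The observed rejection rate should hover near the nominal 0.05, confirming that alpha is exactly the long-run frequency of Type I errors when the null is true.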
a Type I error occurs when a decision maker rejects the null hypothesis when it is actually true. See false positive decision error.
Rejection of a null hypothesis that is actually true. See Alpha.
a false positive - deciding that there is a real difference when in fact there is no difference
The event that a true null hypothesis is rejected
An error of statistical inference when the null hypothesis is rejected when it is true. This is an error of "seeing too much in the data."
Rejecting a true null hypothesis. Commonly interpreted as the probability of being wrong when concluding there is statistical significance. Also referred to as Alpha, p-value, or significance.
same as false-positive error.
in survey research, the occurrence whereby the survey reveals a statistically significant result when in fact there is none; a situation that arises because a survey does not include all individuals or objects in the population of interest to the researcher. See Type II error.
In a hypothesis test, a type I error occurs when the null hypothesis is rejected when it is, in fact, true, i.e. H0 is wrongly rejected. A type I error is usually considered to be more serious, and therefore more important to avoid, than a type II error.
See 'False Rejection'.
The error of rejecting the null hypothesis when it is true.
(alpha): The rejection of the null hypothesis (Ho) when it is, in fact, true (i.e., determining that the effluent is toxic when the effluent is not toxic) (EPA, 2000).
Incorrectly rejecting the null hypothesis when it is true.
When a test wrongly shows an effect or condition to be present ( e.g. that a woman is pregnant when, in fact, she is not). When a researcher falsely rejects the null hypothesis (see False Positive).
in statistical process control, incorrectly inferring the process is out of control when the process is actually in control. In hypothesis testing, incorrectly rejecting the null hypothesis.
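The process-control reading above has the same structure: a standard Shewhart chart with ±3-sigma limits has a theoretical false-alarm (Type I error) rate of about 0.27% per point when the process is in control. A simulation sketch (the chart parameters are the usual textbook defaults, assumed here for illustration):

```python
import random

random.seed(1)

SIGMA_LIMIT = 3.0   # standard Shewhart control limits at +/- 3 sigma
POINTS = 200_000    # simulated in-control observations

# Process is in control by construction: mean 0, sd 1.
alarms = sum(
    abs(random.gauss(0.0, 1.0)) > SIGMA_LIMIT for _ in range(POINTS)
)

rate = alarms / POINTS
print(f"False-alarm rate: {rate:.4f} (theory: about 0.0027)")
```

Each alarm here is a Type I error in the SPC sense: the chart signals "out of control" although the process never changed.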
A type I error is a FALSE HIT - the error of inferring that the experimental hypothesis is true when in "reality" it is false.
The error that results if a true null hypothesis is rejected or if a difference is concluded when there is no difference.
Rejecting something that is acceptable. Also known as an alpha error.
Deciding to reject the null hypothesis when the null hypothesis is in fact true. The investigator determines that there is something going on when in fact there is not.