A method of comparing or confirming the performance of predictive models by estimating the error a model would produce on unseen data. Cross validation is also used to compare one method of inducing a predictive model (called an inducer) with another inducer. Cross validation can also be used to refine the parameters of a particular inducer; in this case the inducer's parameters are tuned to minimise the estimated error of the model produced. A k-fold cross validation builds k models, each trained on a different subset of the data and tested on the held-out remainder.
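A minimal sketch of the idea, not drawn from the source: a k-fold procedure that estimates mean squared error for any inducer, then uses that estimate to compare two toy inducers. The function names (`k_fold_cv`, `mean_inducer`, `line_inducer`) and the squared-error metric are illustrative choices.

```python
import random

def k_fold_cv(xs, ys, k, induce):
    """Estimate prediction error by k-fold cross validation: the inducer
    runs k times, each time on the data minus one held-out fold, and the
    model is scored only on the fold it never saw."""
    idx = list(range(len(xs)))
    random.Random(0).shuffle(idx)          # fixed seed for a repeatable split
    folds = [idx[i::k] for i in range(k)]  # k roughly equal folds
    total, count = 0.0, 0
    for held_out in folds:
        held = set(held_out)
        train = [j for j in idx if j not in held]
        model = induce([xs[j] for j in train], [ys[j] for j in train])
        for j in held_out:
            total += (model(xs[j]) - ys[j]) ** 2  # squared error on unseen data
            count += 1
    return total / count                   # estimated mean squared error

# a deliberately weak inducer: always predict the mean of the training targets
def mean_inducer(xs, ys):
    m = sum(ys) / len(ys)
    return lambda x: m

# a least-squares straight-line inducer
def line_inducer(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return lambda x: a + b * x

# comparing two inducers on the same linear data
xs = list(range(20))
ys = [2 * x for x in xs]
mse_mean = k_fold_cv(xs, ys, 5, mean_inducer)
mse_line = k_fold_cv(xs, ys, 5, line_inducer)
```

On this linear data the line inducer should show a much lower cross-validated error than the mean inducer, which is exactly the inducer-versus-inducer comparison the definition describes.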
The application of a scoring system or set of weights, empirically derived in one sample, to a different sample drawn from the same population, in order to investigate the stability of the relationships on which the original weights were based.
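This sense of cross validation can be sketched as follows; the two-predictor "population", the sample size, and the correlation check are all illustrative assumptions, not from the source. Weights are derived by ordinary least squares in one sample and then applied, unchanged, to a second sample.

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical population: two predictors linearly related to the outcome
def draw_sample(n):
    X = rng.normal(size=(n, 2))
    y = 0.6 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(scale=0.5, size=n)
    return X, y

def score_validity(scores, y):
    # correlation between composite scores and the criterion
    return np.corrcoef(scores, y)[0, 1]

# derive weights empirically in the first (derivation) sample
X_a, y_a = draw_sample(200)
w = np.linalg.lstsq(X_a, y_a, rcond=None)[0]

# apply the SAME weights to a second sample from the same population
X_b, y_b = draw_sample(200)
r_original = score_validity(X_a @ w, y_a)   # fit in the derivation sample
r_crossval = score_validity(X_b @ w, y_b)   # cross-validated fit
```

If the relationships are stable, `r_crossval` stays close to `r_original`; substantial shrinkage would suggest the original weights capitalised on chance.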
A technique for testing the validity of a variogram model by kriging each sampled location with all of the other samples in the search neighborhood, and comparing the estimates with the true sample values. Interpretation of results, however, can often be difficult. Unusually large differences between estimated and true values may indicate the presence of “spatial outliers”, or points which do not seem to belong with their surroundings.
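The geostatistical procedure above can be sketched with leave-one-out ordinary kriging; everything here is an illustrative assumption (an exponential variogram with no nugget, arbitrary sill and range, and a search neighborhood of all other samples), not the source's model.

```python
import numpy as np

def exp_variogram(h, sill=1.0, rng=10.0):
    # exponential variogram model, no nugget (illustrative choice)
    return sill * (1.0 - np.exp(-3.0 * h / rng))

def loo_kriging(coords, values, sill=1.0, rng=10.0):
    """Krige each sampled location from all the other samples (leave-one-out),
    returning the estimates to compare against the true sample values."""
    n = len(values)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    cov = sill - exp_variogram(d, sill, rng)   # covariance from the variogram
    estimates = np.empty(n)
    for i in range(n):
        idx = np.delete(np.arange(n), i)       # every sample except the i-th
        k = len(idx)
        # ordinary kriging system with a Lagrange multiplier for unbiasedness
        A = np.ones((k + 1, k + 1))
        A[:k, :k] = cov[np.ix_(idx, idx)]
        A[-1, -1] = 0.0
        b = np.append(cov[idx, i], 1.0)
        w = np.linalg.solve(A, b)[:k]          # kriging weights
        estimates[i] = w @ values[idx]
    return estimates

# small synthetic example: a smooth field sampled on a 5x5 grid
gx, gy = np.meshgrid(np.arange(5.0), np.arange(5.0))
coords = np.column_stack([gx.ravel(), gy.ravel()])
values = coords[:, 0] + coords[:, 1]
estimates = loo_kriging(coords, values)
residuals = estimates - values
```

Unusually large entries in `residuals` are the candidate "spatial outliers" the definition mentions: points poorly predicted by their surroundings under the fitted variogram model.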