The number of digits to which floating-point numbers are represented. Double-precision numbers can have greater precision than single-precision numbers.
The number of bits or digits in which a value can be represented. The precision of a value indicates how close a floating-point approximation can be to the exact numeric value being represented.
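To make the single/double distinction concrete, here is a minimal Python sketch (an illustration, not part of any source definition) that forces a value through a 32-bit single-precision encoding and compares the result with the native 64-bit double:

```python
import struct

x = 0.1  # not exactly representable in binary floating point

# Round-trip through a 32-bit (single-precision) encoding; the result
# comes back as a Python float, so it can be printed at full width.
single = struct.unpack('f', struct.pack('f', x))[0]

print(f"double: {x:.20f}")       # 0.10000000000000000555
print(f"single: {single:.20f}")  # 0.10000000149011611938
```

The double-precision approximation agrees with 0.1 to about 16 significant digits, while the single-precision one diverges after about 8.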
The number of digits past the decimal that are used to express a quantity.
decimal (simple type): Defines the granularity (e.g. thousandths would be 0.001) of the related property.
decimal (simple type): Defines the minimum granularity of the related property.
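As an illustration of granularity in this sense (the code and values are mine, not from the source), Python's decimal module can quantize a quantity to a fixed step such as thousandths:

```python
from decimal import Decimal

# Snap a value to a granularity of 0.001 (thousandths).
value = Decimal("3.14159")
print(value.quantize(Decimal("0.001")))  # 3.142
```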
The number of digits required to accurately represent a number. For example, the value 3.2 requires two decimal digits of precision, and the value 3.002 requires four decimal digits. In numeric data formats, the precision is equal to the number of bits (both implicit and explicit) in the significand.
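These bit counts can be checked directly. A small Python example (assuming CPython, whose float is an IEEE 754 double) reports the significand width, which includes the implicit leading bit, along with the number of decimal digits that width supports:

```python
import sys

# IEEE 754 double: 53 significand bits (52 stored + 1 implicit).
print(sys.float_info.mant_dig)  # 53

# Decimal digits that can be round-tripped through a double.
print(sys.float_info.dig)       # 15
```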
Refers to the number of significant digits used to store numbers and, in particular, coordinate values. Precision is important for accurate feature representation, analysis, and mapping. ArcInfo supports single precision and double precision.
The number of significant digits in a real number. See also double-precision constant, kind type parameter, and single-precision constant.