1. Pertaining to the use of two computer words to represent a number. 2. In floating point arithmetic, the use of additional bytes or words to represent the number, in order to double the number of bits in the mantissa.

A floating point value having more bits for the mantissa than a single precision value.

Specifies the data type approximate numeric, with implementation-defined precision that is greater than the implementation-defined precision of REAL.

A real (floating-point) value that occupies 8 bytes of memory (MASM type REAL8). Double-precision values are accurate to 15 or 16 digits.

Synonym for long-precision.

In the Java programming language specification, describes a floating point number that holds 64 bits of data. See also single precision.

A two-word storage representation for floating-point numbers.

The degree of accuracy that requires two computer words to represent a number. Numbers are stored with 17 digits of accuracy and printed with up to 16 digits.

An internal representation of numbers that can have fractional parts. Double precision numbers keep track of more digits than do single precision numbers, but operations on them are more expensive. This is the way awk stores numeric values. It is the C type double.

In computing, double precision is a computer numbering format that occupies two storage locations in computer memory, at address and address+1. A double precision number, sometimes called simply a double, may be defined to be an integer, fixed point, or floating point value.