Definitions for "Floating Point"
programming: Arithmetic on numbers with a fractional part, in which the position of the decimal point is allowed to "float" so that a fixed number of significant digits can cover a very wide range of magnitudes. Mathematically speaking these approximate "real" numbers (as opposed to integers); you will want to have a coprocessor if you need to do a lot of floating point calculations for applications such as 3-D modeling.
a digital representation of a number with a specified number of decimal places, or fractional part, used to represent real numbers; contrast with integer
How real numbers are stored on a computer. Numbers are stored as a sign s (+1 or -1), a mantissa m, and an exponent e, in a form similar to scientific notation, s × m × base^e; e.g. 512.43 in base 10 would be +5.1243 × 10^2. The number of bytes used for each part is larger for doubles than for single-precision floats, meaning operations are more accurate. Floating point works well, but you can get big problems if you aren't careful (e.g. don't subtract nearly equal numbers from each other, and don't compare floating-point numbers for exact equality). There are long documents on floating-point numbers (see the IEEE 754 standard). See http://docs.sun.com/db?p=/doc/800-7895 for very useful information if you're doing numerical work.
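As a quick illustration of the mantissa/exponent storage and the exact-comparison pitfall mentioned in the definition above, here is a minimal C sketch (not drawn from any of the quoted sources; the 1e-9 tolerance is an arbitrary choice for the example). It uses the standard library function frexp() to split a double into its binary mantissa and exponent, then shows why 0.1 + 0.2 should be compared against 0.3 with a tolerance rather than with ==.

    /* Sketch: floating-point storage and the exact-equality pitfall. */
    #include <stdio.h>
    #include <math.h>

    int main(void) {
        /* Decompose 512.43 into mantissa * 2^exponent (the binary form
           actually stored), using the standard C function frexp(). */
        double x = 512.43;
        int exponent;
        double mantissa = frexp(x, &exponent);  /* x == mantissa * 2^exponent */
        printf("%.2f = %+.17f * 2^%d\n", x, mantissa, exponent);

        /* Pitfall: 0.1 + 0.2 is not stored exactly, so an exact
           comparison against 0.3 fails. Compare within a tolerance. */
        double a = 0.1 + 0.2;
        if (a == 0.3)
            printf("exact comparison: equal\n");
        else
            printf("exact comparison: NOT equal (a = %.17f)\n", a);

        if (fabs(a - 0.3) < 1e-9)
            printf("tolerance comparison: equal within 1e-9\n");

        return 0;
    }

On most Unix-like systems this compiles with something like cc example.c -lm; it prints the mantissa/exponent decomposition and then shows that only the tolerance comparison reports the two values as equal.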
A floating sector of Mainframe - it's mainly made up of a giant park and golf course. RB: 1