To allow for a larger range without sacrificing precision, computer systems use a technique known as floating point. This representation is based on the familiar ``scientific notation'' for expressing both very large and very small numbers in a concise format as the product of a small real number and a power of 10, e.g. $6.022 \times 10^{23}$. This notation has three components: a base (10 in this example); an exponent (in this case 23); and a mantissa (6.022). In computer systems, the base is either 2 or 16. Since it never changes for any given computer system, it does not have to be part of the representation, and we need only two fields to specify a value, one for the mantissa and one for the exponent.
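To see the two-field idea concretely, here is a small sketch (in Python, as an illustration not tied to any particular machine) that splits a number into its base-10 mantissa and exponent:

```python
import math

# Decompose a positive number into a base-10 mantissa and exponent,
# so that x == mantissa * 10**exponent with 1 <= mantissa < 10.
x = 6.022e23
exponent = math.floor(math.log10(x))
mantissa = x / 10**exponent
print(mantissa, exponent)
```

Only the two printed fields, mantissa and exponent, would need to be stored; the base is fixed by convention.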
As an example of how a number is represented in floating point, consider again the number 6.625. In binary, it is $110.101$, which can be rewritten as $1.10101 \times 2^{2}$.
If a 16-bit system has a 10-bit mantissa and 6-bit exponent, the number would be represented by the string 1101010000 000010. The mantissa bits 110101 are stored in the first ten bits (padded on the right with trailing 0's), and the exponent, 2, is stored in the last six bits.
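The encoding described above can be sketched in Python. The format here (10 mantissa bits including the leading 1, right-padded with 0's, followed by a 6-bit exponent) is the hypothetical 16-bit layout from the text, not a real hardware format:

```python
def encode(x):
    # Hypothetical 16-bit format from the text:
    # 10-bit mantissa followed by a 6-bit exponent.
    # First normalize so the mantissa m satisfies 1 <= m < 2.
    e = 0
    m = x
    while m >= 2:
        m /= 2
        e += 1
    while m < 1:
        m *= 2
        e -= 1
    # Extract 10 mantissa bits, most significant first;
    # trailing positions naturally fill with 0's.
    bits = ''
    for _ in range(10):
        bits += '1' if m >= 1 else '0'
        m = (m - int(m)) * 2
    return bits, format(e, '06b')

print(encode(6.625))  # ('1101010000', '000010')
```

Running this on 6.625 reproduces the two fields given above: mantissa 1101010000 and exponent 000010.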
As the above example illustrates, computers transform the numbers so the mantissa is a manageable number. Just as $6.022 \times 10^{23}$ is preferred to $60.22 \times 10^{22}$ or $0.6022 \times 10^{24}$ in scientific notation, in binary the mantissa should be between $1$ and $2$. When the mantissa is in this range it is said to be normalized. The definition of the normal form varies from system to system, e.g. in some systems a normalized mantissa is between $\frac{1}{2}$ and $1$.
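The second convention mentioned above, a mantissa between $\frac{1}{2}$ and $1$, is the one used by Python's standard-library function math.frexp, which makes for a quick demonstration:

```python
import math

# math.frexp returns (m, e) with x == m * 2**e and 0.5 <= m < 1,
# i.e. the alternate normalization convention.
m, e = math.frexp(6.625)
print(m, e)  # 0.828125 3

# The same value, renormalized to the 1 <= m < 2 convention:
print(m * 2, e - 1)  # 1.65625 2
```

Note that the two conventions describe the same value; only the placement of the binary point (and hence the stored exponent) differs by one.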