2.6 Data Representations (continued)

Systems usually deal with fixed-length strings of binary digits. The smallest unit of memory is a single bit, which holds a single binary digit. The next largest unit is a byte, now universally recognized to be eight bits (early systems used anywhere from six to eight bits per byte). A word is 32 bits long in most workstations and personal computers, and 64 bits in supercomputers. A double word is twice as long as a single word, and operations that use double words are said to be double precision operations.
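As a concrete illustration, the fixed-width integer types of C99's <stdint.h> correspond directly to these units. A minimal sketch; the mapping of "word" to 32 bits follows the convention above and is machine-dependent in practice:

#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* Fixed-width types mirror the storage units described above. */
    printf("byte        : %zu bits\n", 8 * sizeof(uint8_t));   /* 8  */
    printf("word        : %zu bits\n", 8 * sizeof(uint32_t));  /* 32 */
    printf("double word : %zu bits\n", 8 * sizeof(uint64_t));  /* 64 */
    return 0;
}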

Storing a positive integer in a system is trivial: simply write the integer in binary and use the resulting string as the pattern to store in memory. Since numbers are usually stored one per word, the number is padded with leading 0's first. For example, the number 52 is represented in a 16-bit word by the pattern 0000000000110100.
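A minimal C sketch of this scheme, printing the padded 16-bit pattern for a small non-negative integer (print_pattern16 is an illustrative helper name, not a standard function):

#include <stdio.h>

/* Print the 16-bit pattern for a small non-negative integer,
   most significant bit first, padding with leading 0's. */
static void print_pattern16(unsigned int n) {
    for (int i = 15; i >= 0; i--)
        putchar(((n >> i) & 1) ? '1' : '0');
    putchar('\n');
}

int main(void) {
    print_pattern16(52);  /* prints 0000000000110100 */
    return 0;
}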

The meaning of an n-bit string s = s_{n-1} s_{n-2} ... s_1 s_0, when it is interpreted as an unsigned binary number, is defined by the formula

    value(s) = \sum_{i=0}^{n-1} s_i \cdot 2^i

i.e. bit number i has weight 2^i.
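The formula translates directly into code. The following C sketch (value_of is an illustrative name) sums the weighted bits of a string of '0' and '1' characters, assuming the result fits in an unsigned long:

#include <stdio.h>
#include <string.h>

/* Interpret a string of '0'/'1' characters as an unsigned binary
   number: the rightmost character is bit 0, with weight 2^0. */
static unsigned long value_of(const char *s) {
    size_t n = strlen(s);
    unsigned long value = 0;
    for (size_t i = 0; i < n; i++)
        if (s[n - 1 - i] == '1')    /* s[n-1-i] is bit number i */
            value += 1UL << i;      /* add its weight, 2^i */
    return value;
}

int main(void) {
    printf("%lu\n", value_of("0000000000110100"));  /* prints 52 */
    return 0;
}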

Compiler writers and assembly language programmers often take advantage of the binary number system when implementing arithmetic operations. For example, if the pattern of bits is "shifted left" by one, the corresponding number is multiplied by two. A left shift is performed by moving every bit left and inserting 0's on the right side. In an 8-bit system, for example, the pattern 00000110 represents the number 6; if this pattern is shifted left, the resulting pattern is 00001100, which is the representation of the number 12. In general, shifting left by n bits is equivalent to multiplying by 2^n.
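A short C sketch of this shift arithmetic, using the example values from the text:

#include <stdio.h>

int main(void) {
    unsigned char x = 6;       /* pattern 00000110 */
    unsigned char y = x << 1;  /* pattern 00001100, i.e. 6 * 2 = 12 */
    unsigned char z = x << 3;  /* shifting by n multiplies by 2^n: 6 * 8 = 48 */
    printf("%d %d %d\n", x, y, z);  /* prints 6 12 48 */
    return 0;
}

Note that bits shifted past the top of the word are discarded, so the equivalence with multiplication holds only as long as the product still fits in the word.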