2.7 Performance Models

As a historical note, microprocessor and microcomputer designers in the 1970s tended to build systems with instruction sets like those of machine A above. The goal was to include instructions with a large "semantic content," e.g. multiplication is relatively more complex than loading a value from memory or shifting a bit pattern. The payoff was in reducing the overhead of fetching instructions, since fewer instructions could accomplish the same job. By the 1980s, however, it became widely accepted that instruction sets such as those of machine B were in fact a better match for VLSI chip technology. The move toward simpler instructions became known as RISC, for Reduced Instruction Set Computer. A RISC has fewer instructions in its repertoire, but more importantly each instruction is very simple. The fact that operations are so simple and so uniform leads to some very powerful implementation techniques, such as pipelining, and opens up room on the processor chip for items such as on-chip caches or multiple functional units, e.g. a CPU that has two or more arithmetic units. We will discuss these types of systems in more detail later, in the section on superscalar designs (Section 3.5.2). Another benefit of simple instructions is that cycle times can be much shorter: instead of being only moderately faster, e.g. 10 MHz vs. 9 MHz as in the example above, cycle times on RISC machines are often much shorter, so even though RISC machines fetch and execute more instructions, they typically outperform complex instruction set computer (CISC) machines designed at the same time.

In order to compare the performance of two machines with different instruction sets, and even different styles of instruction sets (e.g. RISC vs. CISC), we can break the total execution time into constituent parts [11]. The total time to execute any given program is the product of the number of machine cycles required to execute the program and the processor cycle time:
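As a sketch in symbols (the notation $T$, $C$, and $\tau$ is an illustrative choice here, not necessarily the one the text goes on to use):

\[
  T = C \times \tau
\]

where $T$ is the total execution time, $C$ is the number of machine cycles the program requires, and $\tau$ is the processor cycle time. For a worked example with assumed numbers: a program that needs $C = 5 \times 10^{8}$ cycles on a 10 MHz machine ($\tau = 100\,\mathrm{ns} = 10^{-7}\,\mathrm{s}$) takes $T = 5 \times 10^{8} \times 10^{-7}\,\mathrm{s} = 50\,\mathrm{s}$.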