The first computer to be termed a ``supercomputer'' was the CDC 6600, introduced in 1964. Later models of the CDC 6600 had a peak performance rate of 3 million floating point operations per second, or 3 Megaflops (Mflops). Computers of the 1990s are capable of peak performance rates of one Gigaflop (one thousand Megaflops), and Teraflop (one million Megaflops) performance rates are predicted by the turn of the century. Table 3 shows the peak performance rates for some representative machines. With this rapid increase in performance, it has long been recognized that the definition of the term ``supercomputer'' must be dynamic: more than just peak performance rate must be considered when designating a computer a supercomputer. Other factors include memory size and memory bandwidth.
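The unit relationships quoted above can be checked with a little arithmetic. The sketch below (an illustration only; the 3 Mflops CDC 6600 figure is the one quoted in the text, and the growth factors are derived, not taken from Table 3) expresses each unit in floating point operations per second:

```python
# Peak-performance units from the text, expressed in flops (ops/sec).
MFLOP = 1e6          # 1 Megaflop = 10^6 floating point operations per second
GFLOP = 1e3 * MFLOP  # 1 Gigaflop = one thousand Megaflops
TFLOP = 1e6 * MFLOP  # 1 Teraflop = one million Megaflops

cdc_6600_peak = 3 * MFLOP  # late-model CDC 6600 peak rate quoted in the text

# Growth factor from the CDC 6600 to a 1-Gigaflop machine of the 1990s,
# and to a predicted 1-Teraflop machine.
gigaflop_factor = GFLOP / cdc_6600_peak  # about 333x
teraflop_factor = TFLOP / cdc_6600_peak  # about 333,000x

print(f"Gigaflop machine: {gigaflop_factor:,.0f}x a CDC 6600")
print(f"Teraflop machine: {teraflop_factor:,.0f}x a CDC 6600")
```

A Teraflop machine is thus roughly a third of a million times faster, in peak rate, than the machine for which the term ``supercomputer'' was coined.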
Recently, the term ``supercomputer'' has been displaced by the terms ``high performance computer'' and ``high performance computing environment''. This shift in terminology reflects the recognition that, in a computational science setting, where real problems are being tackled rather than just CPU benchmarks, it is the entire computing environment that must offer high performance, not just the CPU. In addition to a computer with a high computational rate and a large, fast memory, a high performance computing environment must include high-speed network access, reliable and robust software and compilers, documentation and training, and scientific visualization support.