3.5.4 Multiprocessor Vector Machines




Most of the vector supercomputer manufacturers produce multiprocessor systems based on their vector processors. Because a single node is so expensive and so finely tuned to memory bandwidth and other architectural parameters, the multiprocessor configurations have only a few processors. The largest at present is the Cray C-90, which has up to 16 processors.

An 8-processor Cray Y-MP is a shared memory MIMD system in the style of the BBN Butterfly: processors are connected to a set of memories through a multistage switching network, in this case a 3-level crossbar. A major difference between the switching networks of the Butterfly and the Y-MP is that the Y-MP has many more memory modules than processors, since its individual processors were designed to connect to an interleaved memory. There are enough memory modules - 32 per processor - and enough flexibility in the switch that each processor can connect to several banks at once, transferring vectors into and out of vector registers at a rate of one item per clock cycle. The assignment of virtual addresses to memory modules also differs from the Butterfly arrangement, since in an interleaved organization consecutive addresses must fall in different modules.

An exception to the rule that multiprocessor vector machines use few processors is the 222-processor VPP500 recently announced by Fujitsu. The nodes in this machine will be interconnected by a large, single-stage crossbar switch. Each node consists of a local interleaved memory, a scalar processing unit, and a vector processing unit; the network interface implements a single address space over the individual local memories. Each node in the VPP500 will have a peak performance of about 1.5 GFLOPS, so a full 222-processor system will have a theoretical peak performance of over 300 GFLOPS.






verena@csep1.phy.ornl.gov