A bus is used to transfer information between several different modules. Small and mid-range computer systems, such as the Macintosh shown in Figure 1, have a single bus connecting all major components. Supercomputers and other high performance machines have more complex interconnections, but many of their components will have internal buses.
Communication on a bus is broken into discrete transactions. Each transaction has a sender and receiver. In order to initiate a transaction, a module has to gain control of the bus and become (temporarily, at least) the bus master. Often several devices have the ability to become the master; for example, the processor controls transactions that transfer instructions and data between memory and CPU, but a disk controller becomes the bus master to transfer blocks between disk and memory. When two or more devices want to transfer information at the same time, an arbitration protocol is used to decide which will be given control first. A protocol is a set of signals exchanged between devices in order to perform some task, in this case to agree which device will become the bus master.
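One common arbitration scheme is a fixed-priority arbiter, where each device has a numbered request line and the lowest-numbered requester wins. The sketch below is only illustrative (the function name and the device numbering are assumptions, not part of any real bus standard), but it captures the idea of a protocol deciding which device becomes bus master:

```python
def arbitrate(requests):
    """Fixed-priority arbiter: grant the bus to the lowest-numbered
    requesting device (device 0 has the highest priority).

    `requests` is a list of booleans, one per device; returns the
    index of the device granted bus mastership, or None if no
    device is requesting.
    """
    for device, wants_bus in enumerate(requests):
        if wants_bus:
            return device
    return None

# CPU (device 0) and disk controller (device 2) both request the bus:
print(arbitrate([True, False, True]))   # -> 0, the CPU wins
# Only the disk controller requests:
print(arbitrate([False, False, True]))  # -> 2
```

Real arbiters are combinational hardware rather than software, and many use rotating (round-robin) priority so that no device is starved, but the decision they compute is the same kind of function shown here.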
Once a device has control of the bus, it uses a communication protocol to transfer the information. In an asynchronous (unclocked) protocol the transfer can begin at any time, but there is some overhead involved in notifying potential receivers that information needs to be transferred. In a synchronous protocol transfers are controlled by a global clock and begin only at well-known times.
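The overhead of an asynchronous transfer comes from the handshake signals exchanged between sender and receiver. A common form is the four-phase handshake; the trace below is a minimal sketch (the signal names `req` and `ack` and the event ordering are a generic convention, not taken from any particular bus):

```python
def async_transfer(data):
    """Simulate a four-phase asynchronous handshake.

    The sender asserts req with the data valid on the bus; the
    receiver latches the data and asserts ack; then both signals
    return to zero, completing the transaction.  Returns the
    latched value and the list of handshake events.
    """
    events = []
    events.append("sender:   req=1 (data valid on bus)")
    latched = data                         # receiver latches the data
    events.append("receiver: ack=1 (data accepted)")
    events.append("sender:   req=0")
    events.append("receiver: ack=0 (bus idle again)")
    return latched, events

value, trace = async_transfer(0x2A)
print(hex(value))   # -> 0x2a
for event in trace:
    print(event)
```

In a synchronous protocol there is no per-transfer handshake: both sides simply use the data at an agreed clock edge, which is why synchronous buses have less overhead but require all devices to share a clock.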
The performance of a bus is characterized by two parameters: the transfer time and the overall bandwidth (sometimes called throughput). Transfer time is similar to latency in memories: it is the amount of time it takes for data to be delivered in a single transaction. For example, the transfer time determines how long a processor will have to wait when it fetches an instruction from memory. Bandwidth, expressed in units of bits per second (bps), measures the capacity of the bus. It is defined to be the product of the number of bits that can be transferred in parallel in any one transaction and the number of transactions that can occur in one second. For example, if the bus has 32 data lines and can deliver 1,000,000 transfers per second, it has a bandwidth of 32 Mbps.
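The bandwidth definition is just a product, and the 32 Mbps figure above can be checked directly:

```python
def bandwidth_bps(data_lines, transfers_per_second):
    """Bus bandwidth = bits moved in parallel per transaction
    multiplied by transactions per second."""
    return data_lines * transfers_per_second

# 32 data lines, 1,000,000 transfers per second:
print(bandwidth_bps(32, 1_000_000))  # -> 32000000, i.e. 32 Mbps
```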
At first it may seem these two parameters measure the same thing, but there are subtle differences. The transfer time measures the delay until a piece of data arrives. As soon as the data is present it may be used, while other signals are exchanged to complete the communication protocol. Completing the protocol delays the next transaction, and bandwidth takes this extra delay into account. Another factor that distinguishes the two is that in many high performance systems a block of information can be transferred in one transaction; in other words, the communication protocol may say "send n items from location x." There will be some initial overhead in setting up the transaction, so there will be a delay in receiving the first piece of data, but after that the remaining items will arrive more quickly.
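The effect of block transfers on effective bandwidth can be worked out with a small model: pay the setup overhead once, then deliver n words back to back. The timing numbers below are hypothetical, chosen only to illustrate the shape of the tradeoff:

```python
def effective_bandwidth(setup_s, word_s, word_bits, n_words):
    """Effective bandwidth (bps) of a block transfer: one setup
    delay, then n_words delivered at word_s seconds each."""
    total_time = setup_s + n_words * word_s
    return n_words * word_bits / total_time

# Hypothetical bus: 1 microsecond setup, 100 ns per 32-bit word.
single = effective_bandwidth(1e-6, 100e-9, 32, 1)
block = effective_bandwidth(1e-6, 100e-9, 32, 64)
print(f"single word:   {single/1e6:.1f} Mbps")
print(f"64-word block: {block/1e6:.1f} Mbps")
```

A single-word transaction is dominated by the setup time, while a long block amortizes it; as n grows, the effective bandwidth approaches the peak rate of one word every 100 ns (320 Mbps in this example).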
Bandwidth is a very important parameter. It is also used to describe processor performance, where we count the number of instructions that can be executed per unit time, and the performance of networks.