Perhaps the most fundamental performance question that can be asked of an algorithm running on a parallel system is ``does it run faster, and if so by how much?''. Ideally, if one uses $P$ processors to solve a given problem, the execution time would be cut by a factor of $P$. This leads to a definition of speedup, which is the ratio of the execution time on one processor to the execution time on $P$ processors:
\[
S_P = \frac{T_1}{T_P},
\]
where $T_1$ and $T_P$ denote the execution times on one and $P$ processors, respectively.
For example, if a program takes 18 minutes to run on one processor, but only 4.7 minutes on four processors, the speedup is a factor of $18/4.7 \approx 3.8$.
The strength of such a measure is that it uses observed execution time and thus accounts for any overhead in the parallel system, including the cost of breaking a job into parallel tasks and of intertask communication. Comparing the time on one processor with the time on $P$ processors can be misleading, however. One might be tempted to write a program for a $P$-processor machine, time it first on one processor and then on $P$ processors, and call the ratio the speedup. Plotted for different values of $P$, this procedure gives an accurate measure of the scalability of the algorithm used, but it does not answer the question of how much faster a problem may be solved using $P$ processors, since a parallel algorithm usually incurs overheads that are not found in sequential algorithms. Ortega and Voigt defined speedup as the ratio of the solution time for the best serial algorithm to that required by the parallel algorithm:
\[
S_P' = \frac{T_{\text{best serial}}}{T_P}.
\]
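The distinction between the two definitions can be made concrete with a small numeric sketch. The 18- and 4.7-minute timings are taken from the example above; the 15-minute time for the best serial algorithm is a hypothetical figure chosen only to illustrate that the two speedup measures differ.

```python
# Hypothetical timings, in minutes. The parallel-code timings (18.0 on one
# processor, 4.7 on four) follow the example in the text; the best serial
# time of 15.0 is an assumed value for illustration only.
t_serial_best = 15.0
t_parallel = {1: 18.0, 4: 4.7}

def relative_speedup(p):
    """Speedup of the parallel code relative to itself on one processor."""
    return t_parallel[1] / t_parallel[p]

def true_speedup(p):
    """Speedup relative to the best serial algorithm (Ortega and Voigt's definition)."""
    return t_serial_best / t_parallel[p]

print(f"relative speedup on 4 processors: {relative_speedup(4):.2f}")  # ~3.83
print(f"true speedup on 4 processors:     {true_speedup(4):.2f}")      # ~3.19
```

Because the parallel code on one processor is slower than the best serial algorithm, the relative speedup overstates how much faster the problem is actually being solved.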