As with the Simulated Annealing algorithm, the procedures controlling the generation of new solutions are so simple that the computational cost of implementing a GA is usually dominated by the evaluation of the problem functions. It is therefore important that these evaluations be performed efficiently, and essential if the optimization is to be carried out on a serial computer. Advice on how these calculations can be accelerated can be found in the chapter appropriate to the form of the system equations to be solved.
Unlike SA, which is intrinsically a sequential algorithm, GAs are particularly well-suited to implementation on parallel computers. Evaluation of the objective function and constraints can be done simultaneously for a whole population, as can the production of the new population by mutation and crossover. Thus, on a highly parallel machine, a GA can be expected to run nearly N times as fast for many problems, where N is the population size.
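The population-level parallelism described above can be sketched in a few lines. This is a minimal illustration, not the chapter's sample code: the sphere function stands in for the expensive problem functions, and a thread pool stands in for whatever parallel hardware is available (for a CPU-bound objective in pure Python, a process pool or a genuinely parallel runtime would be used instead).

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical objective: the sphere function, a stand-in for the
# expensive problem functions discussed in the text.
def objective(x):
    return sum(v * v for v in x)

# A small population of candidate solutions (real-valued encoding assumed).
population = [[0.0, 0.0], [1.0, 2.0], [3.0, -1.0], [0.5, 0.5]]

# Evaluate the whole population concurrently; each individual's
# fitness is independent of the others, so the map parallelizes cleanly.
with ThreadPoolExecutor() as pool:
    fitness = list(pool.map(objective, population))
```

Because each evaluation is independent, the speed-up on a machine with at least N processors approaches the factor N quoted above, limited only by the (cheap) sequential selection and reproduction steps.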
If it is possible to parallelize the evaluation of individual problem functions effectively, some thought and, perhaps, experimentation will be needed to determine the level at which multitasking should be performed. This will obviously depend on the number of processors available, the intended population size and the potential speed-ups available. If the number of processors exceeds the population size (a highly parallel machine!), multi-level parallelization may be possible.
A significant component of a GA code is the random number generator, which is essential to the processes of selection, crossover and mutation. (See, for example, the sample code supplied with this section.) Random number generators are often provided in standard function libraries or as machine-specific functions. It is important, particularly when tackling large-scale problems requiring thousands of iterations, that the random number generator used have good spectral properties; see the chapter on Random Number Generators for more details.
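To make the generator's role concrete, the following sketch (an illustration under assumed conventions, not the sample code supplied with this section) drives single-point crossover and bitwise mutation from one explicitly seeded generator, so that runs are reproducible while still drawing on a generator of known quality.

```python
import random

# One explicitly seeded generator drives all stochastic operators,
# making experiments reproducible. Python's `random` module uses the
# Mersenne Twister, a generator with well-studied statistical properties.
rng = random.Random(42)

def crossover(parent_a, parent_b):
    """Single-point crossover on two equal-length gene lists."""
    point = rng.randint(1, len(parent_a) - 1)  # cut strictly inside the string
    return parent_a[:point] + parent_b[point:]

def mutate(genes, rate=0.1):
    """Flip each bit independently with probability `rate`."""
    return [1 - g if rng.random() < rate else g for g in genes]

child = crossover([0, 0, 0, 0], [1, 1, 1, 1])
mutant = mutate(child)
```

Because the cut point lies strictly inside the string, the child always begins with genes from the first parent and ends with genes from the second; reseeding `rng` reproduces the entire run, which is invaluable when debugging a GA.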