3 Code Debugging and Performance Analysis

In the traditional modeling exercise, once the modeler begins model execution the situation is largely one of "wait and see": apart from a few diagnostics the modeler may have chosen to print to the screen, the code crunches on to completion. By displaying a color map of some section of the model region, say after each time step, the modeler gains important information about the behavior of the model. If the simulation appears to have gone wrong, the execution can be halted, saving valuable computer and modeler time. It is also possible to steer a calculation and even perform some intermediate recalibration. Moreover, the graphical information displayed to the screen often yields valuable debugging information: by watching an evolving raster map during the simulation, the modeler can trace a problem such as a computational instability or a faulty boundary condition to its cause.
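
As an illustration, the following is a minimal sketch of this kind of run-time display, assuming a hypothetical two-dimensional explicit diffusion model written in Python with matplotlib; the field, grid size, and stability check are illustrative rather than drawn from any particular model.

    # Minimal sketch: watch an evolving raster map during a time-stepping
    # loop and halt when something goes visibly (or numerically) wrong.
    # The diffusion model and all parameters here are hypothetical.
    import numpy as np
    import matplotlib.pyplot as plt

    nx, ny, nsteps = 64, 64, 200
    u = np.zeros((nx, ny))
    u[nx // 2, ny // 2] = 1.0      # initial point source
    alpha = 0.20                   # explicit scheme is stable for alpha <= 0.25

    plt.ion()                      # interactive mode: the window updates live
    fig, ax = plt.subplots()
    img = ax.imshow(u, cmap="viridis", vmin=0.0, vmax=0.05)
    fig.colorbar(img, ax=ax)

    for step in range(nsteps):
        # one explicit diffusion update on the interior of the grid
        u[1:-1, 1:-1] += alpha * (u[2:, 1:-1] + u[:-2, 1:-1] +
                                  u[1:-1, 2:] + u[1:-1, :-2] -
                                  4.0 * u[1:-1, 1:-1])
        # halting here when the run goes bad saves computer and modeler time
        if not np.isfinite(u).all():
            print(f"instability detected at step {step}; halting")
            break
        img.set_data(u)            # redraw the color map after each time step
        ax.set_title(f"step {step}")
        plt.pause(0.01)

    plt.ioff()
    plt.show()

An instability (for instance, from raising alpha above its stable limit) shows up in the display as a spreading checkerboard pattern well before the field overflows, which is exactly the kind of visual cue the text describes.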

Another area in which visualization has great potential is code performance analysis and tuning. A modeler can obtain a good understanding of the relative importance of various subroutines by monitoring a subroutine histogram during the execution of the program. The total area under such a histogram represents the total execution time, and the area under each subroutine's bar gives a measure of the percentage of time spent executing that subroutine. In complicated programs the flow of logic can be displayed by a network diagram that shows the various subroutines as nodes and, during execution, draws arcs from the calling routine to the called routine, with the active node highlighted. This kind of information is especially valuable when working on advanced architectures, such as a massively parallel machine on which the goal is to balance the computational load across a large number of processors.
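
A simple, post-hoc version of such a subroutine histogram can be sketched with Python's standard profiler and matplotlib. The subroutines setup, solve, and output below are hypothetical stand-ins for a real model's call tree, and a true live display of the kind described above would update the bars during execution rather than after it.

    # Sketch: bar per subroutine, height proportional to time spent in it,
    # so the total bar area corresponds to total execution time.
    import cProfile
    import pstats
    import matplotlib.pyplot as plt

    def setup():
        return [float(i) for i in range(200_000)]

    def solve(data):
        total = 0.0
        for x in data:
            total += x * x
        return total

    def output(result):
        return f"result = {result:.3e}"

    def model_run():
        return output(solve(setup()))

    profiler = cProfile.Profile()
    profiler.enable()
    model_run()
    profiler.disable()

    # pstats stores one entry per function:
    # (call counts, self time, cumulative time, callers)
    stats = pstats.Stats(profiler)
    names, times = [], []
    for (filename, lineno, funcname), (cc, nc, tottime, cumtime, callers) in stats.stats.items():
        if funcname in ("setup", "solve", "output"):
            names.append(funcname)
            times.append(tottime)

    plt.bar(names, times)
    plt.ylabel("self time (s)")
    plt.title("time per subroutine")
    plt.show()

The callers information collected by the profiler is also the raw material for the network diagram mentioned above: each caller-to-callee pair defines an arc in the call graph.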