So far, the dynamics-based model has been run without any modifications, and its results viewed and animated. In this section, we will incorporate observational data into the model and examine the difference it makes to model performance.

As we said earlier, ITDASM turns data assimilation
on or off. Getting accurate results from a tidal model (or any
model for that matter) involves an iterative procedure, in which
parameters such as the bottom depth, the bottom friction coefficient
(CD in *pegtdu10*) and the open boundary conditions are ``tuned'' to
arrive at the best possible result. Even then, the results are
limited by how much we know and by the model's limitations.
For example, to do the tides well in the Persian Gulf, we need to
know the tidal distribution at the Strait of Hormuz accurately, we
need the correct values of friction and bottom depth at each grid
point, and we need a model resolution approaching one or two km.
The model resolution in this example is only 18 km,
and this will affect the result, because the bottom depths cannot
be represented accurately at such coarse resolution. It is also
difficult to prescribe bottom friction accurately in a
barotropic model. For these reasons, a model run will never agree
``perfectly'' with observations (see the time series comparison
produced using *pegtdt*). The solution to this problem is data
assimilation: bringing observed tidal data to bear upon the model
results.
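Such a model-versus-observation comparison is often summarized by a single misfit number. A minimal sketch in Python (the M2 period and the amplitude and phase errors below are illustrative assumptions, not values produced by *pegtdt*):

```python
import numpy as np

def rms_misfit(model_ssh, observed_ssh):
    """Root-mean-square difference between model and observed SSH series."""
    model_ssh = np.asarray(model_ssh, dtype=float)
    observed_ssh = np.asarray(observed_ssh, dtype=float)
    return float(np.sqrt(np.mean((model_ssh - observed_ssh) ** 2)))

# Illustrative example: a "tuned" model with small amplitude and phase errors
t = np.linspace(0.0, 48.0, 97)            # hours
omega = 2.0 * np.pi / 12.42               # M2 frequency (radians per hour)
observed = 1.0 * np.cos(omega * t)        # observed tide, 1 m amplitude
model = 0.9 * np.cos(omega * t - 0.2)     # imperfect model result
print(rms_misfit(model, observed))
```

Tuning the model parameters amounts to driving this kind of misfit down; assimilation attacks the residual that tuning cannot remove.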

Data assimilation is an inescapable part of all prediction modeling in geophysics involving real nonlinear systems, because of the ubiquitous nature of chaos [23]. Inevitable errors in the initial conditions, no matter how small, eventually cause the model to depart radically from reality, so occasional or continuous updating of the model with observational data is essential to keep models faithful to nature. Data assimilation is routinely used in weather forecasting: every 12 hours, the NWP centers reinitialize the forecast model with observations from around the globe and rerun it to make forecasts for the next two days. In the oceans, a better method may be to inject observed data into the model as they become available, updating the model continuously. This continuous assimilation method is the one used in this tidal model.

The
tidal elevation at any point in time is known from tide-gauge
observations at coastal stations around the world.
The tidal components derived from these data can be obtained from
a historical archive in Canada. As the model runs, the
model-produced SSH can be replaced by a weighted combination of the
model SSH and the SSH implied by the observations, at
those grid points where data are available. The model is thus
constrained to follow nature at the points where we have
observations; the hope is that, as a result, the
model will do better in the rest of the domain as well. This is what
is done in the TIDASM routine in *tides.f*. The relevant
observational data come from *pegtdstn*. Data assimilation is
turned on by setting ITDASM=1. The frequency of assimilation is
controlled by prescribing DTT, which should be a multiple of DTE,
the external time step. We urge you to make runs with and without
data assimilation to see the power of assimilation, without which
most of our endeavors in the field of geophysical prediction would
be worthless.
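The replacement step described above can be sketched in a few lines. This is a Python illustration of the weighted-combination ("nudging") idea, not the actual Fortran of the TIDASM routine; the function and variable names and the weight value are assumptions made for the example:

```python
import numpy as np

def assimilate_ssh(eta_model, eta_obs, obs_mask, weight=0.5):
    """Nudge model SSH toward observed SSH at grid points with data.

    eta_model : 2-D array of model sea-surface height
    eta_obs   : 2-D array of SSH implied by the tidal observations
    obs_mask  : boolean array, True where observations exist
    weight    : 0 keeps the model value, 1 replaces it with the observation
    """
    eta = eta_model.copy()
    eta[obs_mask] = ((1.0 - weight) * eta_model[obs_mask]
                     + weight * eta_obs[obs_mask])
    return eta

# Toy example on a 3x3 grid with a single observation point
eta_model = np.zeros((3, 3))
eta_obs = np.full((3, 3), 1.0)
mask = np.zeros((3, 3), dtype=bool)
mask[1, 1] = True
eta = assimilate_ssh(eta_model, eta_obs, mask, weight=0.5)
print(eta[1, 1])   # nudged halfway toward the observation
print(eta[0, 0])   # unobserved points are left to the dynamics
```

In the model this step would be applied every DTT seconds, i.e. every DTT/DTE external time steps, so that the dynamics carry the corrected SSH into the rest of the domain between assimilation times.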

(See exercises 7, 7, 7, and 7.)