Review on Mathematical Perspective for Data Assimilation Methods: Least Square Approach


1. Introduction

In general, models could be classified as [1] [2] :

1) Process-specific models (causality-based) rest on the conservation laws of nature, e.g. shallow water models and the Navier-Stokes equations.

2) Data-specific models (correlation-based) are built from experimental data, e.g. time series models, machine learning, neural networks, etc.

Atmospheric and ocean models are process-specific models in which the Navier-Stokes equations are the core of the solver used to predict, simulate, and estimate system states. Process-specific models can be classified from different perspectives: time, space, and model structure, in addition to the classification into deterministic and stochastic models.

It is assumed that the state of the dynamical system evolves according to a first-order nonlinear equation

${X}_{K+1}=M\left({X}_{K}\right)$ (1)

where ${X}_{K}$ is the current state of the system and M(.) is the mapping function from the current state X at time K to the next state at time K + 1.

・ If the mapping function M(.) does not depend on the time index K, then it is called a time-invariant or autonomous system

・ If M(.) varies with the time index K, that is ${X}_{K+1}={M}_{K}\left({X}_{K}\right)$ , then it is called a time-varying system (dynamic system)

・ If ${X}_{K+1}=M{X}_{K}$ for a non-singular matrix M, then it is called a linear time-invariant system.

・ If the matrix M varies with time, that is ${X}_{K+1}={M}_{K}{X}_{K}$ , then it is called a linear time-varying or non-autonomous system.

・ In the special case when M(.) is the identity map, that is $M\left(x\right)=x$ , it is called a static system

The above describes the deterministic case; randomness can enter the model in three ways:

(A) Random initial conditions (B) Random forcing (C) Random Coefficients

So, a random or a stochastic model is given by:

${X}_{K+1}=M\left({X}_{K}\right)+{W}_{K+1}$ (2)

where the random sequence $\left\{{W}_{K}\right\}$ denotes the external forcing; typically $\left\{{W}_{K}\right\}$ captures the uncertainties in the model, including model error.
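To make the distinction concrete, both the deterministic model (1) and its stochastic counterpart (2) can be simulated in a few lines. The following Python sketch uses an arbitrary 2×2 matrix M, initial state, and forcing covariance Q chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative linear time-invariant model X_{K+1} = M X_K (Eq. (1));
# M is an arbitrary stable 2x2 matrix chosen for this sketch.
M = np.array([[0.9, 0.1],
              [0.0, 0.8]])
x0 = np.array([1.0, -1.0])

# Deterministic trajectory
det = [x0]
for _ in range(5):
    det.append(M @ det[-1])

# Stochastic counterpart (Eq. (2)): same dynamics plus white forcing W_K
Q = 0.01 * np.eye(2)        # assumed covariance of the forcing W_K
sto = [x0]
for _ in range(5):
    w = rng.multivariate_normal(np.zeros(2), Q)
    sto.append(M @ sto[-1] + w)
```

Running both trajectories side by side shows the stochastic path scattering around the deterministic one.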

So, the classes of models that are used in the data assimilation could be classified as:

(1) Deterministic-Static

(2) Deterministic-Dynamic

(3) Stochastic-Static

(4) Stochastic-Dynamic

The paper is organized as follows: Section 2 states the mathematical foundation for data assimilation, covering linear and nonlinear least squares as well as weighted and recursive least squares. Section 3 introduces the deterministic-static model (linear and nonlinear cases). Section 4 presents the stochastic-static model (linear and nonlinear cases). Section 5 describes the deterministic-dynamic linear case and its recursive form. Section 6 explains stochastic-dynamic linear, nonlinear, reduced, and hybrid filters.

2. Mathematical Background

Most data assimilation techniques are based on least squares estimation, so estimation problems are classified according to several criteria. An estimation problem is underdetermined when the number of observations (m) is less than the number of states (n) (m < n), and overdetermined when the number of observations (m) is larger than the number of states (n) (m > n) [1] . Estimation problems can also be classified according to the mapping function from state space to observation space, which can be linear or nonlinear; nonlinearity must be handled explicitly.

Another classification is into offline and online problems. In offline problems the observations are known a priori; in other words, we have a historical data set and work with it. Online/sequential problems compute a new estimate of the unknown state X as a function of the latest estimate and the current observation, so the online formulation is most useful in real-time applications. The last classification is into strong and weak constraints: the strong-constraint case arises when estimation is performed under the perfect-model assumption, while allowing for errors in the model dynamics gives the weak-constraint case [1] .

Since data assimilation is based on the least squares approach [1] [2] [3] [4] , we first introduce the linear version of least squares and then move to the nonlinear version. After that, weighted versions of both the linear and nonlinear cases will be introduced.

2.1. Linear Least Square Method

Let $Z=HX$ (3)

Given the observation vector Z and the mapping function/interpolation matrix H (full rank), find the unknown state vector X.

Then the error vector, which represents the difference between the observations Z and the mapped estimated state X, can be written as follows

$e=Z-HX$ (4)

This term is also called the error term or innovation term. We need to find the best estimate that minimizes this error, so the estimation problem is converted into an optimization problem. A basic classification of the problem is into the overdetermined (m > n) and underdetermined (m < n) cases, and we consider the overdetermined case (m > n) first. Measuring the vector e with the Euclidean norm, and using the fact that minimizing a norm is equivalent to minimizing the squared norm, the cost function is

$J\left(x\right)={\Vert Z-HX\Vert}_{2}^{2}={\left(Z-HX\right)}^{\text{T}}\left(Z-HX\right)$ (5)

The minimizing X is obtained from the condition $\nabla J\left(x\right)=0$ , which leads to

$X={\left({H}^{\text{T}}H\right)}^{-1}{H}^{\text{T}}Z$ (6)

This equation is known as the normal equation for the overdetermined case m > n. In the underdetermined case (m < n), the estimate of X is as follows.

$X={H}^{\text{T}}{\left(H{H}^{\text{T}}\right)}^{-1}Z$ (7)

This underdetermined problem, in which the number of observations m is less than the number of unknowns n, is common in geophysics (the physical processes and properties of the earth and its surrounding space environment) because the cost of collecting observations is high. In the uniquely determined case (m = n), the error e is zero and the estimated state vector X is

$X={H}^{-1}Z$ (8)

As seen from the three formulations above, minimizing the cost function reduces to solving a system of linear equations, and solving a linear system $Ax=b$ can be done using:

o Direct methods (Cholesky decomposition, QR decomposition, singular value decomposition, …) [5] .

o Iterative methods (Jacobi Method, Gauss-Seidel Method, Successive Over-Relaxation method, …) [5] .
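The three closed forms above can be checked numerically. A minimal sketch follows, in which the operators H and G and the states are arbitrary toy choices; in practice, QR- or SVD-based solvers such as `lstsq` are preferred over forming the normal equations explicitly:

```python
import numpy as np

rng = np.random.default_rng(1)

# Overdetermined case (m > n): X = (H^T H)^{-1} H^T Z, Eq. (6)
H = rng.standard_normal((5, 2))          # toy full-rank 5x2 operator
x_true = np.array([2.0, -1.0])
Z = H @ x_true                           # noise-free observations
x_over = np.linalg.solve(H.T @ H, H.T @ Z)

# Underdetermined case (m < n): X = H^T (H H^T)^{-1} Z, Eq. (7)
G = rng.standard_normal((2, 5))          # 2 observations, 5 unknowns
z = G @ np.arange(1.0, 6.0)
x_under = G.T @ np.linalg.solve(G @ G.T, z)   # minimum-norm solution

# lstsq (SVD-based) handles both cases without forming H^T H
x_lstsq, *_ = np.linalg.lstsq(H, Z, rcond=None)
```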

2.2. Nonlinear Least Square Method

The problem here is: given a set of observations Z and the known nonlinear function form h, find the state vector X such that

$Z=h\left(X\right)$ (9)

And the Innovation term will be

$e=Z-h\left(X\right)$ (10)

And the cost function will be

$J\left(X\right)={\Vert Z-h\left(X\right)\Vert}_{2}^{2}={\left(Z-h\left(X\right)\right)}^{\text{T}}\left(Z-h\left(X\right)\right)$ (11)

The idea here is an extension of the linear case: replace the nonlinear term h(X) with its linear approximation given by a Taylor series expansion around an operating point ${X}_{c}$ ; in this case it is called the first-order approximation for nonlinear least squares, as follows

$h\left(X\right)=h\left({X}_{c}\right)+{D}_{h}\left({X}_{c}\right)\left(X-{X}_{c}\right)$ (12)

where ${D}_{h}\left({X}_{c}\right)$ is the Jacobian matrix of h, an $m\times n$ matrix given by:

${D}_{h}\left({X}_{c}\right)=\nabla h\left({X}_{c}\right)=\left[\frac{\partial {h}_{i}}{\partial {X}_{j}}\right]$ where $1\le i\le m;\text{\hspace{0.17em}}1\le j\le n$ . (13)

Substituting the first-order Taylor approximation into the cost function and calling it ${Q}_{1}\left(x\right)$ , where the index 1 refers to first order,

$\begin{array}{c}{Q}_{1}\left(x\right)={\Vert Z-h\left({X}_{c}\right)-{D}_{h}\left({X}_{c}\right)\left(X-{X}_{c}\right)\Vert}_{2}^{2}\\ ={\left(Z-h\left({X}_{c}\right)-{D}_{h}\left({X}_{c}\right)\left(X-{X}_{c}\right)\right)}^{\text{T}}\left(Z-h\left({X}_{c}\right)-{D}_{h}\left({X}_{c}\right)\left(X-{X}_{c}\right)\right)\end{array}$ (14)

Simplifying the notation by defining $g\left(X\right)=Z-h\left(X\right)$ ,

${Q}_{1}\left(x\right)={\left(g\left({X}_{c}\right)-{D}_{h}\left({X}_{c}\right)\left(X-{X}_{c}\right)\right)}^{\text{T}}\left(g\left({X}_{c}\right)-{D}_{h}\left({X}_{c}\right)\left(X-{X}_{c}\right)\right)$ (15)

Comparing the previous equation with the linear version,

$J\left(x\right)={\Vert Z-HX\Vert}_{2}^{2}={\left(Z-HX\right)}^{\text{T}}\left(Z-HX\right)$ (16)

you will find that every Z is replaced by $g\left({X}_{c}\right)$ , every H by ${D}_{h}\left({X}_{c}\right)$ , and every X by $\left(X-{X}_{c}\right)$ .

Setting the gradient $\nabla {Q}_{1}=0$ , we obtain

$X-{X}_{c}={\left[{D}_{h}^{\text{T}}\left({X}_{c}\right){D}_{h}\left({X}_{c}\right)\right]}^{-1}\left[{D}_{h}^{\text{T}}\left({X}_{c}\right)g\left({X}_{c}\right)\right]$ (17)

This is an iterative approach: given an initial value for ${X}_{c}$ , solve the above equation using direct or iterative methods, then iterate again until $\Vert X-{X}_{c}\Vert $ falls below a prescribed threshold.
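This first-order iteration is the familiar Gauss-Newton method. A minimal sketch with a toy two-component nonlinear h (the operator, the true state, and the starting point are arbitrary choices for illustration):

```python
import numpy as np

# Toy nonlinear observation operator h and its Jacobian D_h
# (chosen for illustration; any smooth h with full-rank Jacobian works).
def h(x):
    return np.array([x[0]**2 + x[1], np.sin(x[0]) + x[1]**2])

def Dh(x):
    return np.array([[2 * x[0],       1.0],
                     [np.cos(x[0]),   2 * x[1]]])

x_true = np.array([0.5, 1.2])
Z = h(x_true)                              # noise-free observations

# First-order (Gauss-Newton) iteration, Eq. (17):
# X - X_c = [D_h^T D_h]^{-1} D_h^T g(X_c),  with g(X) = Z - h(X)
xc = np.array([1.0, 1.0])                  # initial operating point
for _ in range(50):
    J = Dh(xc)
    g = Z - h(xc)
    step = np.linalg.solve(J.T @ J, J.T @ g)
    xc = xc + step
    if np.linalg.norm(step) < 1e-12:       # prescribed threshold
        break
```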

By analogy, the second-order algorithm for nonlinear least squares follows the same steps except that the Taylor series expansion is the full quadratic approximation

$h\left(X\right)=h\left({X}_{c}\right)+{D}_{h}\left({X}_{c}\right)\left(X-{X}_{c}\right)+\frac{1}{2}{\left(X-{X}_{c}\right)}^{\text{T}}{D}_{h}^{2}\left({X}_{c}\right)\left(X-{X}_{c}\right)$ (18)

where ${D}_{h}^{2}\left({X}_{c}\right)={\nabla}^{2}h\left({X}_{c}\right)$ is the Hessian of $h\left(X\right)$ . Substituting this second-order expansion into the nonlinear cost function, calling it ${Q}_{2}\left(x\right)$ (the index 2 refers to the second-order approximation), and setting the gradient $\nabla {Q}_{2}=0$ , you will get the following

$X-{X}_{c}={\left[{D}_{h}^{\text{T}}\left({X}_{c}\right){D}_{h}\left({X}_{c}\right)+g\left({X}_{c}\right){\nabla}^{2}h\left({X}_{c}\right)\right]}^{-1}\left[{D}_{h}^{\text{T}}\left({X}_{c}\right)g\left({X}_{c}\right)\right]$ (19)

This is again an iterative approach: given an initial value for ${X}_{c}$ , solve the above equation using direct or iterative methods, then iterate until $\Vert X-{X}_{c}\Vert $ falls below a prescribed threshold. All the above methods are for deterministic least squares, where

$Z=HX$ or $Z=h\left(X\right)$ (20)

2.3. The Weighted Least Square Method

If additive random noise V is present, the observations are noisy, as follows

$Z=HX+V$ or $Z=h\left(X\right)+V$ (21)

・ The mean E(V) = 0 means that the instrument is well calibrated (unbiased); if E(V) ≠ 0, there is a bias (for example, under- or over-reading).

・ The covariance $Cov\left(V\right)=R$ is the covariance matrix of the instrument noise, which is a property of the instrumentation.

So, for the linear form of least squares, the cost function will be

$J\left(x\right)={\Vert Z-HX\Vert}_{{R}^{-1}}^{2}={\left(Z-HX\right)}^{\text{T}}{R}^{-1}\left(Z-HX\right)$ (22)

And for the nonlinear form

$J\left(x\right)={\Vert Z-h\left(X\right)\Vert}_{{R}^{-1}}^{2}={\left(Z-h\left(X\right)\right)}^{\text{T}}{R}^{-1}\left(Z-h\left(X\right)\right)$ (23)

Also, by the same methodology as used above, setting the gradient to zero, the best estimate for the stochastic (weighted) linear least squares problem is

$X={\left({H}^{\text{T}}{R}^{-1}H\right)}^{-1}{H}^{\text{T}}{R}^{-1}Z$ (24)

And for the stochastic nonlinear least square first order approximation will be

$X-{X}_{c}={\left[{D}_{h}^{\text{T}}\left({X}_{c}\right){R}^{-1}{D}_{h}\left({X}_{c}\right)\right]}^{-1}\left[{D}_{h}^{\text{T}}\left({X}_{c}\right){R}^{-1}g\left({X}_{c}\right)\right]$ (25)

And for the stochastic nonlinear least square Second order approximation will be

$X-{X}_{c}={\left[{D}_{h}^{\text{T}}\left({X}_{c}\right){R}^{-1}{D}_{h}\left({X}_{c}\right)+b\left({X}_{c}\right){\nabla}^{2}h\left({X}_{c}\right)\right]}^{-1}\left[{D}_{h}^{\text{T}}\left({X}_{c}\right){R}^{-1}g\left({X}_{c}\right)\right]$ (26)

where $b\left({X}_{c}\right)={R}^{-1}g\left({X}_{c}\right)$ to simplify the notation.
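The closed form of Eq. (24) can be sketched directly; here the operator H, the true state, and the diagonal observation covariance R are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(2)

m, n = 6, 2
H = rng.standard_normal((m, n))           # toy linear observation operator
x_true = np.array([1.0, 3.0])
R = np.diag(rng.uniform(0.1, 1.0, m))     # assumed observation-error covariance
Z = H @ x_true + rng.multivariate_normal(np.zeros(m), R)

# Weighted (stochastic) linear least squares, Eq. (24):
# X = (H^T R^{-1} H)^{-1} H^T R^{-1} Z
Rinv = np.linalg.inv(R)
x_wls = np.linalg.solve(H.T @ Rinv @ H, H.T @ Rinv @ Z)
```

Accurate observations (small diagonal entries of R) receive large weights in $R^{-1}$ and therefore pull the estimate more strongly.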

2.4. The Recursive Least Square Estimation Approach (Offline/Online Approach)

All the above analysis assumed that the number m of observations is fixed; in other words, it was the offline version of least squares. If the number of observations m is not known in advance and observations arrive sequentially in time, there are two ways to incorporate the new observations:

・ The first approach is to solve the system of linear equations repeatedly after the arrival of every new observation, but this is very expensive from a computational point of view.

・ The second approach is to formulate the following problem: knowing the optimal estimate ${X}^{*}\left(m\right)$ based on the m observations, compute ${X}^{*}\left(m+1\right)$ based on $m+1$ observations. In clearer words, we seek a formulation that computes the new estimate as a function of the old estimate plus a correction term. This approach is called the sequential or recursive framework.

As mentioned above, the optimal linear least squares estimate for $Z=HX$ is ${X}^{*}\left(m\right)={\left({H}^{\text{T}}H\right)}^{-1}{H}^{\text{T}}Z$ . Let ${Z}_{m+1}$ be the new observation; then $Z=HX$ can be expanded in matrix-vector form as:

$\left[\begin{array}{c}Z\\ {Z}_{m+1}\end{array}\right]=\left[\begin{array}{c}H\\ {h}_{m+1}^{\text{T}}\end{array}\right]X$ (27)

So, the Innovation will be:

${e}_{m+1}\left(X\right)=\left[\begin{array}{c}Z\\ {Z}_{m+1}\end{array}\right]-\left[\begin{array}{c}H\\ {h}_{m+1}^{\text{T}}\end{array}\right]X$ (28)

So, the Cost function will be:

$J\left(X\right)={e}_{m+1}^{\text{T}}{e}_{m+1}$ (29)

$J\left(X\right)={\left(Z-HX\right)}^{\text{T}}\left(Z-HX\right)+{\left({Z}_{m+1}-{h}_{m+1}^{\text{T}}X\right)}^{\text{T}}\left({Z}_{m+1}-{h}_{m+1}^{\text{T}}X\right)$ (30)

So, taking the gradient and setting it equal to zero, $\nabla J\left(x\right)=0$ , we get the following

${X}^{*}\left(m+1\right)={X}^{*}\left(m\right)+{K}_{m+1}^{-1}{h}_{m+1}\left[{Z}_{m+1}-{h}_{m+1}^{\text{T}}{X}^{*}\left(m\right)\right]$ (31)

where

${K}_{m}={H}^{\text{T}}H,\text{\hspace{0.17em}}\text{\hspace{0.17em}}{K}_{m+1}={K}_{m}+{h}_{m+1}{h}_{m+1}^{\text{T}}$ (32)

The computational cost of the second (correction) term of this equation is much less than that of solving the whole system again.
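The recursion above can be sketched with noise-free toy observations, so the recursive estimate stays exactly on the batch solution; each update solves a small linear system with ${K}_{m+1}$ (in practice, the Sherman-Morrison identity updates the inverse directly and avoids even that solve):

```python
import numpy as np

rng = np.random.default_rng(3)

n = 2
x_true = np.array([0.7, -0.3])

# Batch start: m initial observations Z = H X
H = rng.standard_normal((4, n))
Z = H @ x_true                            # noise-free toy observations
K = H.T @ H                               # information matrix K_m = H^T H
x_est = np.linalg.solve(K, H.T @ Z)       # X*(m), the batch estimate

# Sequential updates, Eqs. (31)-(32):
#   K_{m+1} = K_m + h h^T
#   X*(m+1) = X*(m) + correction from the innovation z - h^T X*(m)
for _ in range(5):
    h = rng.standard_normal(n)            # new observation row
    z = h @ x_true                        # new (noise-free) scalar observation
    K = K + np.outer(h, h)
    x_est = x_est + np.linalg.solve(K, h * (z - h @ x_est))
```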

The basic building blocks for understanding data assimilation based on the least squares approach have now been introduced.

3. Deterministic-Static Models

In atmospheric science, when we want to assimilate observation data into the model at a time step, two sources of information are available: one comes from mapping the observations Z to the state X, and the other is the prior information.

The Linear/Non Linear Case

To formulate the problem, we need the best estimate for X given the two sources of information:

・ The first source is the given observations Z and mapping function H, with innovation term $e=Z-h\left(X\right)$

・ The second source is the given prior or background information ${X}_{B}$ , with innovation term $e=X-{X}_{B}$

So the cost function for this case will be

$J\left(X\right)={J}_{B}\left(X\right)+{J}_{o}(X)$ (33)

For the linear case, $h(.)=H$ :

$J\left(X\right)=\frac{1}{2}{\left(X-{X}_{B}\right)}^{\text{T}}\left(X-{X}_{B}\right)+\frac{1}{2}{\left(Z-HX\right)}^{\text{T}}\left(Z-HX\right)$ (34)

For the nonlinear case, $h(.)=h\left(X\right)$ :

$J\left(X\right)=\frac{1}{2}{\left(X-{X}_{B}\right)}^{\text{T}}\left(X-{X}_{B}\right)+\frac{1}{2}{\left(Z-h\left(X\right)\right)}^{\text{T}}\left(Z-h\left(X\right)\right)$ (35)

4. Stochastic-Static Models

The formulation of the problem is the same as in the deterministic-static case: we need the best estimate for X given the two sources of information.

・ The first source is the given observations Z and mapping function H, with innovation term $e=Z-h\left(X\right)$ ; the observations have noise with covariance R

・ The second source is the given prior or background information ${X}_{B}$ , with innovation term $e=X-{X}_{B}$ ; the background has covariance B

So the cost function for this case will be

$J\left(X\right)={J}_{B}\left(X\right)+{J}_{o}\left(X\right)$ (36)

For the linear case: $h(.)=H$

$J\left(X\right)=\frac{1}{2}{\left(X-{X}_{B}\right)}^{\text{T}}{B}^{-1}\left(X-{X}_{B}\right)+\frac{1}{2}{\left(Z-HX\right)}^{\text{T}}{R}^{-1}\left(Z-HX\right)$ (37)

For the nonlinear case: $h(.)=h\left(X\right)$

$J\left(X\right)=\frac{1}{2}{\left(X-{X}_{B}\right)}^{\text{T}}{B}^{-1}\left(X-{X}_{B}\right)+\frac{1}{2}{\left(Z-h\left(X\right)\right)}^{\text{T}}{R}^{-1}\left(Z-h\left(X\right)\right)$ (38)
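Setting the gradient of the linear cost function (37) to zero gives the analysis state in closed form. A minimal Python sketch, in which the background, the covariances B and R, and the observation operator are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(4)

n, m = 3, 2
X_B = np.array([1.0, 0.0, -1.0])           # background state
B = 0.5 * np.eye(n)                        # background-error covariance
H = rng.standard_normal((m, n))            # linear observation operator
R = 0.1 * np.eye(m)                        # observation-error covariance
Z = H @ np.array([1.2, 0.1, -0.8])         # observations of a "true" state

def J(X):
    """Cost function of Eq. (37): background term plus observation term."""
    db = X - X_B
    do = Z - H @ X
    return 0.5 * db @ np.linalg.solve(B, db) + 0.5 * do @ np.linalg.solve(R, do)

# Gradient of Eq. (37) set to zero:
# (B^{-1} + H^T R^{-1} H) X = B^{-1} X_B + H^T R^{-1} Z
Binv, Rinv = np.linalg.inv(B), np.linalg.inv(R)
X_a = np.linalg.solve(Binv + H.T @ Rinv @ H,
                      Binv @ X_B + H.T @ Rinv @ Z)
```

The analysis X_a lies between the background and the observations, weighted by the inverse covariances.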

5. Deterministic-Dynamic

The Dynamical models can be classified as:

where ${X}_{K}$ is the state of the dynamical system; if the initial condition ${X}_{O}$ is known, computing ${X}_{K}$ is a forward problem.

Here it is assumed that ${X}_{O}$ is not known; estimating ${X}_{O}$ from noisy, indirect information is the inverse problem that must be solved.

The observations also can be classified as:

We assume ${V}_{K}$ is white noise with zero mean and known covariance matrix R, which depends on the nature and type of the instruments used.

So the problem statement is: given a set of noisy observations and the model equations, estimate the initial condition ${X}_{O}$ that gives the best fit between the background states and the noisy observations.

To conclude there are four different types of problems:

1) Linear model-linear observation

2) Linear model-nonlinear observation

3) Nonlinear model-linear observation

4) Nonlinear model-nonlinear observation

We consider only one case, the simplest formulation, in which both the model and the observations are linear; the other cases can be found in [1] [2] .

The cost function, a weighted sum of squared errors, is defined as follows.

For linear case

$J\left(X\right)=\frac{1}{2}{\displaystyle {\sum}_{i=1}^{N}{\left({Z}_{Ki}-H{X}_{Ki}\right)}^{\text{T}}{R}^{-1}\left({Z}_{Ki}-H{X}_{Ki}\right)}$ (41)

For nonlinear case

$J\left(X\right)=\frac{1}{2}{\displaystyle {\sum}_{i=1}^{N}{\left({Z}_{Ki}-h\left({X}_{Ki}\right)\right)}^{\text{T}}{R}^{-1}\left({Z}_{Ki}-h\left({X}_{Ki}\right)\right)}$ (42)

depending on whether the observations are linear or nonlinear. The goal is to minimize J(X) w.r.t. ${X}_{o}$ .

If background information is included, then for the linear case

$J\left(X\right)=\frac{1}{2}{\left(X-{X}_{B}\right)}^{\text{T}}{B}^{-1}\left(X-{X}_{B}\right)+\frac{1}{2}{\displaystyle {\sum}_{i=1}^{N}{\left({Z}_{Ki}-H{X}_{Ki}\right)}^{\text{T}}{R}^{-1}\left({Z}_{Ki}-H{X}_{Ki}\right)}$ (43)

And for nonlinear case

$J\left(X\right)=\frac{1}{2}{\left(X-{X}_{B}\right)}^{\text{T}}{B}^{-1}\left(X-{X}_{B}\right)+\frac{1}{2}{\displaystyle {\sum}_{i=1}^{N}{\left({Z}_{Ki}-h\left({X}_{Ki}\right)\right)}^{\text{T}}{R}^{-1}\left({Z}_{Ki}-h\left({X}_{Ki}\right)\right)}$ (44)

There are two approaches to minimizing these cost functions.

5.1. Deterministic-Dynamic Linear Case

5.1.1. Linear Case-Method of Elimination

This method is based on substituting ${X}_{K}={M}_{K}{X}_{o}$ into the cost function J(X), which gives

$J\left({X}_{o}\right)=\frac{1}{2}{\displaystyle {\sum}_{i=1}^{N}{\left({Z}_{Ki}-H{M}_{K}{X}_{o}\right)}^{\text{T}}{R}^{-1}\left({Z}_{Ki}-H{M}_{K}{X}_{o}\right)}$ (45)

Then we set the gradient $\nabla J\left({X}_{o}\right)=0$ .

For simplicity

$J\left({X}_{o}\right)=\frac{1}{2}{X}_{o}^{\text{T}}A{X}_{o}-{b}^{\text{T}}{X}_{o}+C$ and it is quadratic in ${X}_{o}$ (46)

$A={\displaystyle {\sum}_{i=1}^{N}{M}_{K}^{\text{T}}{H}^{\text{T}}{R}^{-1}H{M}_{K}}$ (46-a)

$b={\displaystyle {\sum}_{i=1}^{N}{M}_{K}^{\text{T}}{H}^{\text{T}}{R}^{-1}{Z}_{K}}$ (46-b)

$C=\frac{1}{2}{\displaystyle {\sum}_{i=1}^{N}{Z}_{K}^{\text{T}}{R}^{-1}{Z}_{K}}$ (46-c)

So, the gradient is $\nabla J\left({X}_{o}\right)=A{X}_{o}-b=0$ which leads to

${X}_{o}={A}^{-1}b$ (47)

However, this approach is not practical, since it involves matrix-matrix products in the computation of A and b, so another approach is needed.
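For small toy problems, though, the elimination approach is easy to sketch and useful as a reference solution. In the following, the model matrix, observation operator, and window length are arbitrary choices, and the observations are generated noise-free so the recovered ${X}_{o}$ is exact:

```python
import numpy as np

rng = np.random.default_rng(5)

n, m_obs, N = 2, 2, 4
M = np.array([[0.95, 0.05],
              [0.00, 0.90]])               # illustrative time-invariant model
H = rng.standard_normal((m_obs, n))        # toy observation operator
Rinv = np.linalg.inv(0.1 * np.eye(m_obs))

x0_true = np.array([1.0, -2.0])

# Accumulate A and b of Eqs. (46-a)/(46-b) over the window,
# with X_k = M^k X_0 eliminated from the cost function.
Mk = np.eye(n)
A = np.zeros((n, n))
b = np.zeros(n)
for _ in range(N):
    Mk = M @ Mk                            # M^k at step k = 1..N
    Zk = H @ (Mk @ x0_true)                # noise-free observation at step k
    A += Mk.T @ H.T @ Rinv @ H @ Mk        # Eq. (46-a)
    b += Mk.T @ H.T @ Rinv @ Zk            # Eq. (46-b)

x0_est = np.linalg.solve(A, b)             # Eq. (47): X_0 = A^{-1} b
```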

5.1.2. Linear Case-Lagrangian Multipliers Formulation

Define Lagrangian L:

$L\left({X}_{o},\lambda \right)=\frac{1}{2}{\displaystyle {\sum}_{K=1}^{N}{\left({Z}_{K}-H{X}_{K}\right)}^{\text{T}}{R}^{-1}\left({Z}_{K}-H{X}_{K}\right)}+{\displaystyle {\sum}_{K=1}^{N}{\lambda}_{K}^{\text{T}}\left({X}_{K}-M{X}_{K-1}\right)}$ (48)

where the first sum is the objective function and the second enforces the model as a constraint.

The necessary conditions for a minimum are

${\nabla}_{{X}_{o}}L=0$ (49-a)

${\nabla}_{{X}_{K}}L=0$ (49-b)

${\nabla}_{{\lambda}_{K}}L=0$ (49-c)

So ${\nabla}_{{\lambda}_{K}}L=0\to {X}_{K}-M{X}_{K-1}=0\to {X}_{K}=M{X}_{K-1}$ (50-a)

${\nabla}_{{X}_{K}}L=0\to {H}^{\text{T}}{R}^{-1}\left[H{X}_{K}-{Z}_{K}\right]+{\lambda}_{K}-{M}^{\text{T}}{\lambda}_{K+1}=0$ (50-b)

${\nabla}_{{X}_{N}}L=0\to {H}^{\text{T}}{R}^{-1}\left[H{X}_{N}-{Z}_{N}\right]+{\lambda}_{N}=0$ (50-c)

Defining:

${f}_{K}={H}^{\text{T}}{R}^{-1}\left[{Z}_{K}-H{X}_{K}\right]=\text{the normalized forecast error viewed from model space}$ (51)

and substituting it into the last two equations:

$-{f}_{K}+{\lambda}_{K}-{M}^{T}{\lambda}_{K+1}=0\to {\lambda}_{K}={M}^{\text{T}}{\lambda}_{K+1}+{f}_{K}$ (52-a)

$-{f}_{N}+{\lambda}_{N}=0\to {\lambda}_{N}={f}_{N}$ (52-b)

So, as shown, the formulation for computing $\lambda $ is a backward relation: we can iterate backward starting from ${\lambda}_{N}$ to compute ${\lambda}_{1}$ . This technique is known as the backward adjoint dynamics. After obtaining ${\lambda}_{1}$ , substitute it into ${\nabla}_{{X}_{o}}J\left({X}_{o}\right)=-{M}^{\text{T}}{\lambda}_{1}$ to get the gradient, then use any minimization algorithm to obtain ${X}_{o}$ . The 4D-Var algorithm (first-order adjoint method) can be summarized as follows:

1) Start with an arbitrary ${X}_{o}$ , and compute the model solution using ${X}_{K+1}=M{X}_{K}$

2) Given the observations $\left\{{Z}_{K}\right\}$ , $1\le K\le N$ , compute ${f}_{K}={H}^{\text{T}}{R}^{-1}\left[{Z}_{K}-H{X}_{K}\right]$

3) Set ${\lambda}_{N}={f}_{N}$ and solve ${\lambda}_{K}={M}^{\text{T}}{\lambda}_{K+1}+{f}_{K}$ backward to find ${\lambda}_{1}$

4) Compute the gradient ${\nabla}_{{X}_{o}}J\left({X}_{o}\right)=-{M}^{\text{T}}{\lambda}_{1}$

5) Use this gradient in minimization algorithm to find the optimal ${X}_{o}$ by repeating the steps 1 through 4 until convergence.
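Steps 1 through 4 can be sketched as follows; the model matrix, observation operator, and window length are arbitrary toy choices, and the observations are generated noise-free so the gradient vanishes at the true initial condition:

```python
import numpy as np

rng = np.random.default_rng(6)

n, m_obs, N = 2, 2, 6
M = np.array([[0.95, 0.05],
              [-0.05, 0.95]])              # illustrative model matrix
H = rng.standard_normal((m_obs, n))        # toy observation operator
Rinv = np.linalg.inv(0.1 * np.eye(m_obs))

# Synthetic noise-free observations along the true trajectory
x0_true = np.array([0.5, -0.5])
xs = [x0_true]
for _ in range(N):
    xs.append(M @ xs[-1])
Z = [H @ x for x in xs[1:]]                # Z_1 .. Z_N

def J(x0):
    """4D-Var cost function (Eq. (41)) for a candidate initial condition."""
    x = [x0]
    for _ in range(N):
        x.append(M @ x[-1])
    return 0.5 * sum((Z[k] - H @ x[k + 1]) @ Rinv @ (Z[k] - H @ x[k + 1])
                     for k in range(N))

def grad_J(x0):
    """Steps 1-4: forward model sweep, then backward adjoint sweep."""
    x = [x0]
    for _ in range(N):                     # step 1: forward solution
        x.append(M @ x[-1])
    f = [H.T @ Rinv @ (Z[k] - H @ x[k + 1]) for k in range(N)]  # f_1 .. f_N
    lam = f[-1]                            # step 3: lambda_N = f_N
    for k in range(N - 2, -1, -1):
        lam = M.T @ lam + f[k]             # lambda_K = M^T lambda_{K+1} + f_K
    return -M.T @ lam                      # step 4: gradient = -M^T lambda_1
```

The adjoint gradient can be verified against a finite-difference approximation of J; in step 5 it would be handed to any descent method (steepest descent, conjugate gradient, L-BFGS).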

5.2. Recursive Least Squares Formulation of 4D Var (Online Approach)

The previous part of the 4D-Var section gave the solution of the offline 4D-Var problem of assimilating a given set of observations into a deterministic-dynamic model using the classical least squares method.

Now we need to develop an online or recursive method for computing the estimate of the state as new observations arrive; that is, we need to compute ${X}_{N+1}$ in terms of ${X}_{N}$ and the new observation ${Z}_{N+1}$ .

Consider a linear deterministic dynamical system without model noise

${X}_{K+1}=M{X}_{K}$ (53)

where the initial condition ${X}_{o}$ is a random variable with $E\left({X}_{o}\right)={m}_{o}$ and $Cov\left({X}_{o}\right)={P}_{o}$ , and the observations ${Z}_{K}$ for $K=1,2,3,\cdots $ are given as

${Z}_{K}={H}_{K}{X}_{K}+{V}_{K}$ (54)

where ${H}_{K}$ is full rank and ${V}_{K}$ is the observation noise vector with the following known properties

$E\left({V}_{k}\right)=0$ and $Cov\left({V}_{K}\right)={R}_{K}$ (55)

So, the objective function is

${J}_{N}={J}_{N}^{P}+{J}_{N}^{o}$ (56)

where

${J}_{N}^{P}\text{}=\frac{1}{2}{\left({m}_{o}-{X}_{o}\right)}^{\text{T}}{P}_{o}^{-1}\left({m}_{o}-{X}_{o}\right)$ (57)

${J}_{N}^{o}=\frac{1}{2}{\displaystyle {\sum}_{K=1}^{N}{\left({Z}_{K}-{H}_{K}{X}_{K}\right)}^{\text{T}}{R}_{K}^{-1}\left({Z}_{K}-{H}_{K}{X}_{K}\right)}$ (58)

Since our goal is to find the optimal ${X}_{N}$ that minimizes ${J}_{N}$ , we need to express ${m}_{o}$ and ${X}_{K}$ in terms of the corresponding values ${X}_{N}$ and ${m}_{N}$ . So,

Since, ${X}_{N}={M}_{N-1}{M}_{N-2}\cdots {M}_{K}{X}_{K}=M\left(N-1:K\right){X}_{K}$ (59)

Then ${X}_{K}={M}^{-1}\left(N-1:K\right){X}_{N}=B\left(N-1:K\right){X}_{N}$ (60)

Since ${m}_{K+1}={M}_{K}{m}_{K}=M\left(K:0\right){m}_{o}$ (61)

Hence, for the model trajectory starting from ${m}_{o}$ , we have ${m}_{o}=B\left(N-1:0\right){m}_{N}$ .

Substituting for ${X}_{K}$ and ${m}_{o}$ into ${J}_{N}$ :

$\begin{array}{c}{J}_{N}\left({X}_{N}\right)=\frac{1}{2}{\left({m}_{N}-{X}_{N}\right)}^{\text{T}}\left[{B}^{\text{T}}\left(N-1:0\right){P}_{0}^{-1}B\left(N-1:0\right)\right]\left({m}_{N}-{X}_{N}\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+\frac{1}{2}{\displaystyle {\sum}_{K=1}^{N}{\left({Z}_{K}-{H}_{K}B\left(N-1:K\right){X}_{N}\right)}^{\text{T}}{R}_{K}^{-1}\left({Z}_{K}-{H}_{K}B\left(N-1:K\right){X}_{N}\right)}\end{array}$ (62)

Differentiating ${J}_{N}\left({X}_{N}\right)$ twice w.r.t. ${X}_{N}$ gives the gradient and the Hessian. Setting the gradient to zero and simplifying the notation:

$\left({F}_{N}^{p}+{F}_{N}^{o}\right){X}_{N}=\left({f}_{N}^{p}+{f}_{N}^{o}\right)$ (63)

where

${F}_{N}^{p}={B}^{\text{T}}\left(N-1:0\right){P}_{0}^{-1}B\left(N-1:0\right)$ (64)

${F}_{N}^{o}={\displaystyle {\sum}_{K=1}^{N}{B}^{\text{T}}\left(N-1:K\right){H}_{K}^{\text{T}}{R}_{K}^{-1}{H}_{K}B\left(N-1:K\right)}$ (65)

${f}_{N}^{p}={F}_{N}^{p}{m}_{N}$ (66)

${f}_{N}^{o}={\displaystyle {\sum}_{K=1}^{N}{B}^{\text{T}}\left(N-1:K\right){H}_{K}^{\text{T}}{R}_{K}^{-1}{Z}_{K}}$ (67)

By induction, the minimizer ${X}_{N+1}$ of ${J}_{N+1}\left({X}_{N+1}\right)$ is given by

$\left({F}_{N+1}^{p}+{F}_{N+1}^{o}\right){X}_{N+1}=\left({f}_{N+1}^{p}+{f}_{N+1}^{o}\right)$ (68)

The goal of the recursive framework is to express ${X}_{N+1}$ as a function of ${X}_{N}$ and ${Z}_{N+1}$ . This calls for expressing ${F}_{N+1}^{p}$ , ${F}_{N+1}^{o}$ , ${f}_{N+1}^{p}$ , ${f}_{N+1}^{o}$ in terms of ${F}_{N}^{p}$ , ${F}_{N}^{o}$ , ${f}_{N}^{p}$ , ${f}_{N}^{o}$ .

So, using Equations (64) to (67), the relation ${B}_{K}={M}_{K}^{-1}$ , and the following equation

$M\left(j:i\right)=\{\begin{array}{ll}{M}_{j}{M}_{j-1}\cdots {M}_{i+1}{M}_{i}& \text{if}\text{\hspace{0.17em}}j\ge i\\ I& \text{if}\text{\hspace{0.17em}}j<i\end{array}$ (69)

the formulas at step N + 1 can be obtained in terms of those at step N:

${F}_{N+1}^{p}+{F}_{N+1}^{o}={B}_{N}^{\text{T}}\left[{F}_{N}^{p}+{F}_{N}^{o}\right]{B}_{N}+{H}_{N+1}^{\text{T}}{R}_{N+1}^{-1}{H}_{N+1}$ (70)

${f}_{N+1}^{p}+{f}_{N+1}^{o}={B}_{N}^{\text{T}}{F}_{N}^{p}{B}_{N}{m}_{N+1}+{B}_{N}^{\text{T}}{f}_{N}^{o}+{H}_{N+1}^{\text{T}}{R}_{N+1}^{-1}{Z}_{N+1}$ (71)

And since

${\left({P}_{N+1}\right)}^{-1}={F}_{N+1}^{p}+{F}_{N+1}^{o}$ (72)

${\left({P}_{N}\right)}^{-1}={F}_{N}^{p}+{F}_{N}^{o}$ (73)

${B}_{N}{m}_{N+1}={B}_{N}{M}_{N}{m}_{N}={m}_{N}$ (74)

${X}_{N+1}^{f}={M}_{N}{X}_{N}$ or ${X}_{N}={B}_{N}{X}_{N+1}^{f}$ (75)

${f}_{N+1}^{p}+{f}_{N+1}^{o}={B}_{N}^{\text{T}}{\left({P}_{N}\right)}^{-1}{B}_{N}{X}_{N+1}^{f}+{H}_{N+1}^{\text{T}}{R}_{N+1}^{-1}{Z}_{N+1}$ (76)

Combining Equations (70) and (76) into Equation (68) and defining ${\left({P}_{N+1}^{f}\right)}^{-1}={B}_{N}^{\text{T}}{\left({P}_{N}\right)}^{-1}{B}_{N}$ , after the combination we get

${X}_{N+1}={\left[{\left({P}_{N+1}^{f}\right)}^{-1}+{H}_{N+1}^{\text{T}}{R}_{N+1}^{-1}{H}_{N+1}\right]}^{-1}\left[{\left({P}_{N+1}^{f}\right)}^{-1}{X}_{N+1}^{f}+{H}_{N+1}^{\text{T}}{R}_{N+1}^{-1}{Z}_{N+1}\right]$ (77)

The right-hand side of this equation is the sum of two terms; the first is

${\left[{\left({P}_{N+1}^{f}\right)}^{-1}+{H}_{N+1}^{\text{T}}{R}_{N+1}^{-1}{H}_{N+1}\right]}^{-1}\left[{\left({P}_{N+1}^{f}\right)}^{-1}{X}_{N+1}^{f}\right]$ (78)

Adding and subtracting ${H}_{N+1}^{\text{T}}{R}_{N+1}^{-1}{H}_{N+1}{X}_{N+1}^{f}$ , this term is equal to

$\begin{array}{l}{\left[{\left({P}_{N+1}^{f}\right)}^{-1}+{H}_{N+1}^{\text{T}}{R}_{N+1}^{-1}{H}_{N+1}\right]}^{-1}\left[{\left({P}_{N+1}^{f}\right)}^{-1}+{H}_{N+1}^{\text{T}}{R}_{N+1}^{-1}{H}_{N+1}-{H}_{N+1}^{\text{T}}{R}_{N+1}^{-1}{H}_{N+1}\right]{X}_{N+1}^{f}\\ ={X}_{N+1}^{f}-{K}_{N+1}{H}_{N+1}{X}_{N+1}^{f}\end{array}$ (79)

where

where ${K}_{N+1}={\left[{\left({P}_{N+1}^{f}\right)}^{-1}+{H}_{N+1}^{\text{T}}{R}_{N+1}^{-1}{H}_{N+1}\right]}^{-1}{H}_{N+1}^{\text{T}}{R}_{N+1}^{-1}$ is called the Kalman gain matrix. Combining it with Equation (77), the desired recursive expression is obtained:

${X}_{N+1}={X}_{N+1}^{f}+{K}_{N+1}\left[{Z}_{N+1}-{H}_{N+1}{X}_{N+1}^{f}\right]$ (80)

6. Stochastic-Dynamic Model

This type of data assimilation problem is the same as the deterministic-dynamic problem, except that it introduces an additional term in the forecast equation: a noise vector associated with the model (i.e., model error).

For the stochastic-dynamic model we can divide the filters into linear and nonlinear filters. A linear filter has a linear evolution function $M(.)$ in the model and a linear mapping function $h(.)$ , while in nonlinear filters those two functions are nonlinear.

6.1. Linear Filters

Kalman Linear Filter

The Kalman filter approach was first introduced in [6] [7] .

Problem formulation:

This section shows how the model and the observations with error are represented, and then formulates the algorithm:

A1-Dynamic model: we assume a linear, non-autonomous dynamical system that evolves according to

${X}_{K+1}={M}_{K}{X}_{K}+{w}_{K+1}$ (81)

where ${M}_{K}$ is a nonsingular system matrix that varies with time K and ${w}_{K}$ denotes the model error. It is assumed that ${X}_{o}$ and ${w}_{K}$ satisfy the following conditions: (A) ${X}_{o}$ is a random variable with known mean vector $E\left({X}_{o}\right)={m}_{o}$ and known covariance matrix $E\left[\left({X}_{o}-{m}_{o}\right){\left({X}_{o}-{m}_{o}\right)}^{\text{T}}\right]={P}_{o}$ ; (B) the model error is unbiased, $E\left({w}_{K}\right)=0$ for all k, and temporally uncorrelated (white noise): $E\left({w}_{K}{w}_{j}^{\text{T}}\right)={Q}_{K}$ if $j=k$ and $E\left({w}_{K}{w}_{j}^{\text{T}}\right)=0$ otherwise; (C) the model error ${w}_{K}$ and the initial state are uncorrelated: $E\left({w}_{K}{X}_{o}^{\text{T}}\right)=0$ for all k.

B1-Observations: ${Z}_{K}$ denotes the observation at time k and is related to ${X}_{K}$ via

${Z}_{K}={H}_{K}{X}_{K}+{V}_{K}$ (82)

where ${H}_{K}$ represents the time-varying measurement system and ${V}_{K}$ represents the measurement noise with the following properties: (A) ${V}_{K}$ has zero mean, $E\left({V}_{K}\right)=0$ ; (B) ${V}_{K}$ is temporally uncorrelated: $E\left({V}_{K}{V}_{j}^{\text{T}}\right)={R}_{K}$ if $j=k$ and $E\left({V}_{K}{V}_{j}^{\text{T}}\right)=0$ otherwise; (C) ${V}_{K}$ is uncorrelated with both the initial state ${X}_{o}$ and the model error ${w}_{K}$ , that is, $E\left({X}_{o}{V}_{K}^{\text{T}}\right)=0$ for all $K>0$ and $E\left({V}_{K}{w}_{j}^{\text{T}}\right)=0$ for all K and j.

C1-Statement of the filtering problem: given that ${X}_{K}$ evolves according to Equation (81) and a set of observations, our goal is to find an estimate of ${X}_{K}$ that minimizes the mean square error.

The following is a summary of the Kalman filter procedure (covariance form):

Model ${x}_{k+1}={M}_{k}{x}_{k}+{w}_{k+1}$

$E\left({w}_{k}\right)=0$ , $Cov\left({w}_{k}\right)={Q}_{k}$

${x}_{0}$ is random with mean ${m}_{0}$ and $Cov\left({x}_{0}\right)={P}_{0}$

Observation ${z}_{k}={H}_{k}{x}_{k}+{v}_{k}$

$E\left({v}_{k}\right)=0$ , $Cov\left({v}_{k}\right)={R}_{k}$

Model Forecast ${\stackrel{^}{x}}_{0}=E\left({x}_{0}\right)$ , ${\stackrel{^}{P}}_{0}={P}_{0}$

${x}_{k}^{f}={M}_{k-1}{\stackrel{^}{x}}_{k-1}$

${P}_{k}^{f}={M}_{k-1}{\stackrel{^}{P}}_{k-1}{M}_{k-1}^{\text{T}}+{Q}_{k}$

Data Assimilation

${\stackrel{^}{x}}_{k}={x}_{k}^{f}+{K}_{k}\left[{z}_{k}-{H}_{k}{x}_{k}^{f}\right]$

${K}_{k}={P}_{k}^{f}{H}_{k}^{\text{T}}{\left[{H}_{k}{P}_{k}^{f}{H}_{k}^{\text{T}}+{R}_{k}\right]}^{-1}={\stackrel{^}{P}}_{k}{H}_{k}^{\text{T}}{D}_{k}^{-1}$

${\stackrel{^}{P}}_{k}={P}_{k}^{f}-{P}_{k}^{f}{H}_{k}^{\text{T}}{\left[{H}_{k}{P}_{k}^{f}{H}_{k}^{\text{T}}+{R}_{k}\right]}^{-1}{H}_{k}{P}_{k}^{f}=\left[I-{K}_{k}{H}_{k}\right]{P}_{k}^{f}$
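The covariance-form procedure above can be sketched in a few lines of NumPy. The model, observation operator, and noise covariances below are made-up illustrative choices, not taken from the paper:

```python
import numpy as np

def kalman_step(x_hat, P_hat, z, M, H, Q, R):
    """One forecast + data assimilation cycle of the covariance-form Kalman filter."""
    # Forecast step: x^f = M x_hat, P^f = M P_hat M^T + Q
    x_f = M @ x_hat
    P_f = M @ P_hat @ M.T + Q
    # Data assimilation step: gain, state update, covariance update
    S = H @ P_f @ H.T + R                      # innovation covariance D_k
    K = P_f @ H.T @ np.linalg.inv(S)           # Kalman gain K_k
    x_new = x_f + K @ (z - H @ x_f)
    P_new = (np.eye(len(x_f)) - K @ H) @ P_f
    return x_new, P_new

# Toy constant-velocity example with made-up matrices (assumptions for illustration)
M = np.array([[1.0, 1.0], [0.0, 1.0]])   # linear model M_k
H = np.array([[1.0, 0.0]])               # observe the first component only
Q = 0.01 * np.eye(2)
R = np.array([[0.25]])
x_hat, P_hat = np.zeros(2), np.eye(2)    # m_0 and P_0
for z in ([1.0], [2.1], [2.9]):
    x_hat, P_hat = kalman_step(x_hat, P_hat, np.array(z), M, H, Q, R)
```

Each pass through the loop performs one forecast and one assimilation, exactly following the summary above.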

The computation of the covariance matrices ${P}_{k+1}^{f}$ and ${P}_{k+1}$ is the most time-consuming part, since in many applications n > m, so reduced-order filters have been introduced [1] .

6.2. Non Linear Filters

6.2.1. First Order Filter/Extended Kalman Filter (EKF)

The extended Kalman filter extends the Kalman filter idea to the case where the evolution function $M(.)$ in the model and the mapping function $h(.)$ are nonlinear.

For nonlinear model

${X}_{K+1}=M\left({X}_{K}\right)+{w}_{K+1}$

For nonlinear observation

${Z}_{K}=h\left({X}_{K}\right)+{V}_{K}$

The main idea of the first-order filter/extended Kalman filter is to expand $M\left({X}_{K}\right)$ around the estimate ${\stackrel{^}{X}}_{K}$ and $h\left({X}_{K+1}\right)$ around ${X}_{K+1}^{f}$ in a first-order Taylor series expansion. When $M\left(X\right)$ and $h\left(X\right)$ are linear, it reduces to the Kalman filter. The following is a summary of the extended Kalman filter steps:

Model ${x}_{k+1}=M\left({x}_{k}\right)+{w}_{k+1}$

Observation ${z}_{k}=h\left({x}_{k}\right)+{v}_{k}$

Forecast Step ${x}_{k+1}^{f}=M\left({\stackrel{^}{x}}_{k}\right)$

${P}_{k+1}^{f}={D}_{M}{\stackrel{^}{P}}_{k}{D}_{M}^{\text{T}}+{Q}_{k+1}$

Data Assimilation Step

${\stackrel{^}{x}}_{k+1}={x}_{k+1}^{f}+K\left[{z}_{k+1}-h\left({x}_{k+1}^{f}\right)\right]$

$K={P}_{k+1}^{f}{D}_{h}^{\text{T}}{\left[{D}_{h}{P}_{k+1}^{f}{D}_{h}^{\text{T}}+{R}_{k+1}\right]}^{-1}$

${\stackrel{^}{P}}_{k+1}=\left(I-K{D}_{h}\right){P}_{k+1}^{f}$
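A minimal sketch of one EKF cycle follows, assuming user-supplied Jacobian functions for $M$ and $h$ (the scalar model and observation below are made up purely for illustration):

```python
import numpy as np

def ekf_step(x_hat, P_hat, z, M, h, DM, Dh, Q, R):
    """One cycle of the extended Kalman filter. M and h are nonlinear maps;
    DM and Dh return their Jacobians (the first-order Taylor linearization)."""
    # Forecast step: propagate the estimate, linearize M at the estimate
    x_f = M(x_hat)
    Fm = DM(x_hat)                       # Jacobian D_M
    P_f = Fm @ P_hat @ Fm.T + Q
    # Data assimilation step: linearize h at the forecast
    Hm = Dh(x_f)                         # Jacobian D_h
    K = P_f @ Hm.T @ np.linalg.inv(Hm @ P_f @ Hm.T + R)
    x_new = x_f + K @ (z - h(x_f))
    P_new = (np.eye(len(x_f)) - K @ Hm) @ P_f
    return x_new, P_new

# Made-up scalar nonlinear example (an assumption, purely illustrative)
M = lambda x: np.array([np.sin(x[0])])
DM = lambda x: np.array([[np.cos(x[0])]])
h = lambda x: np.array([x[0] ** 2])
Dh = lambda x: np.array([[2.0 * x[0]]])
x_hat, P_hat = np.array([0.5]), np.eye(1)
x_hat, P_hat = ekf_step(x_hat, P_hat, np.array([0.2]),
                        M, h, DM, Dh, 0.01 * np.eye(1), 0.1 * np.eye(1))
```

If `M` and `h` are linear maps and `DM`, `Dh` return the corresponding constant matrices, this reduces to the Kalman filter of the previous section.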

6.2.2. Second Order Filter

The second-order filter follows the same idea as the first-order filter, except that $M\left({X}_{K}\right)$ is expanded around the estimate ${\stackrel{^}{X}}_{K}$ and $h\left({X}_{K+1}\right)$ around ${X}_{K+1}^{f}$ in a second-order Taylor series expansion. The following is a summary of the second-order nonlinear filter:

Model ${x}_{k+1}=M\left({x}_{k}\right)+{w}_{k+1}$

Observation ${z}_{k}=h\left({x}_{k}\right)+{v}_{k}$

Forecast Step

${x}_{k+1}^{f}=M\left({\stackrel{^}{x}}_{k}\right)+\frac{1}{2}{\partial}^{2}\left(M,{\stackrel{^}{P}}_{k}\right)$

${P}_{k+1}^{f}={D}_{M}{\stackrel{^}{P}}_{k}{D}_{M}^{\text{T}}+{Q}_{k+1}$

Data Assimilation Step

${\stackrel{^}{x}}_{k+1}={x}_{k+1}^{f}+K\left[{z}_{k+1}-h\left({x}_{k+1}^{f}\right)-\frac{1}{2}{\partial}^{2}\left(h,{P}_{k+1}^{f}\right)\right]$

$K={P}_{k+1}^{f}{D}_{h}^{\text{T}}{\left[{D}_{h}{P}_{k+1}^{f}{D}_{h}^{\text{T}}+{R}_{k+1}\right]}^{-1}$

${\stackrel{^}{P}}_{k+1}=\left(I-K{D}_{h}\right){P}_{k+1}^{f}$
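The second-order correction term $\frac{1}{2}{\partial }^{2}\left(M,P\right)$ is commonly read componentwise as half the trace of the Hessian of each component of the map against the covariance; the following sketch assumes that interpretation of the notation, and the Hessians are made up for illustration:

```python
import numpy as np

def second_order_term(hessians, P):
    """Compute (1/2) * d^2(f, P): component i is 0.5 * trace(Hess_i @ P),
    where Hess_i is the Hessian of the i-th component of f.
    (This componentwise-trace reading of the notation is an assumption.)"""
    return 0.5 * np.array([np.trace(Hi @ P) for Hi in hessians])

# Made-up Hessians for a 2-component map on a 2D state (illustrative only)
hessians = [np.array([[0.2, 0.0], [0.0, 0.1]]),
            np.array([[0.0, 0.1], [0.1, 0.0]])]
P = np.eye(2)
correction = second_order_term(hessians, P)
```

This correction is added to the forecast $M\left({\stackrel{^}{x}}_{k}\right)$ and subtracted inside the innovation, as in the summary above.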

6.3. Reduced Rank Filters

Ensemble Kalman Filter

The ensemble Kalman filter [8] originated from the merger of Kalman filter theory and Monte Carlo estimation methods. The basic principles of linear and nonlinear filtering had been introduced, but they were not used in day-to-day operations at the national centers for weather prediction, because the cost of updating the covariance matrix was very high.

There are mainly two ways to avoid the high cost of computing the covariance matrix:

1) The first method is parallel computation, which mainly depends on:

a) The Algorithm

b) The number of Processors

c) The topology of the interconnection of the network

d) How the tasks of the algorithm are mapped on the processor

2) The second method, which became more popular, is to compute a low/reduced-rank approximation to the full-rank covariance matrix; most low-rank filters differ only in the way the approximation is derived. An excellent review of the ensemble Kalman filter is given in reference [9] .

Formulation of the problem

It is assumed that the model is nonlinear and observations are linear functions of the state

${X}_{K+1}=M\left({X}_{K}\right)+{w}_{K+1}$

${Z}_{K}={H}_{K}{X}_{K}+{V}_{K}$

And it is assumed that

1) The initial condition ${X}_{0}~N\left({m}_{o},{P}_{o}\right)$

2) The dynamic system noise ${w}_{K}$ is white Gaussian noise with ${w}_{K}~N\left(0,{Q}_{K}\right)$

3) The observation noise ${V}_{K}$ is white noise with ${V}_{K}~N\left(0,{R}_{K}\right)$

4) ${X}_{o}$ , $\left\{{w}_{K}\right\}$ , $\left\{{V}_{K}\right\}$ are mutually uncorrelated

Model ${x}_{k+1}=M\left({x}_{k}\right)+{w}_{k+1}$

Observation ${z}_{k}={H}_{k}{x}_{k}+{v}_{k}$

Initial ensemble

・ Create the initial ensemble

Forecast step

1) Create the ensemble of forecasts at time $\left(k+1\right)$ using the following

The $N$ members of the ensemble forecast at time $\left(k+1\right)$ are generated as ${\xi }_{k+1}^{f}\left(i\right)=M\left({\stackrel{^}{\xi }}_{k}\left(i\right)\right)+{w}_{k+1}\left(i\right)$ , where ${w}_{k+1}\left(i\right)~N\left(0,{Q}_{k+1}\right)$

2) Compute ${x}_{k+1}^{f}\left(N\right)$ and ${P}_{k+1}^{f}\left(N\right)$ using

${x}_{k+1}^{f}\left(N\right)=\frac{1}{N}{\displaystyle {\sum }_{i=1}^{N}{\xi }_{k+1}^{f}\left(i\right)}$

${P}_{k+1}^{f}\left(N\right)=\frac{1}{N-1}{\displaystyle {\sum }_{i=1}^{N}{e}_{k+1}^{f}\left(i\right){\left[{e}_{k+1}^{f}\left(i\right)\right]}^{\text{T}}}$ where ${e}_{k+1}^{f}\left(i\right)={\xi }_{k+1}^{f}\left(i\right)-{x}_{k+1}^{f}\left(N\right)$

Data assimilation step

1) Create the ensemble of estimates at time $\left(k+1\right)$ using

${\stackrel{^}{\xi}}_{k+1}\left(i\right)={\xi}_{k+1}^{f}\left(i\right)+K\left[{z}_{k+1}\left(i\right)-{H}_{k+1}{\xi}_{k+1}^{f}\left(i\right)\right].$

and

$K={P}_{k+1}^{f}\left(N\right){H}_{k+1}^{\text{T}}{\left[{H}_{k+1}{P}_{k+1}^{f}\left(N\right){H}_{k+1}^{\text{T}}+{R}_{k+1}\right]}^{-1}$

2) Compute ${\stackrel{^}{x}}_{k+1}\left(N\right)$ and ${\stackrel{^}{P}}_{k+1}\left(N\right)$ using

The sample mean of the estimate at time $\left(k+1\right)$ is then given by

${\stackrel{^}{x}}_{k+1}\left(N\right)=\frac{1}{N}{\displaystyle {\sum }_{i=1}^{N}{\stackrel{^}{\xi }}_{k+1}\left(i\right)}={x}_{k+1}^{f}\left(N\right)+K\left[{\bar{z}}_{k+1}\left(N\right)-{H}_{k+1}{x}_{k+1}^{f}\left(N\right)\right]$

where

${\bar{z}}_{k+1}\left(N\right)=\frac{1}{N}{\displaystyle {\sum }_{i=1}^{N}{z}_{k+1}\left(i\right)}={z}_{k+1}+\frac{1}{N}{\displaystyle {\sum }_{i=1}^{N}{v}_{k+1}\left(i\right)}={z}_{k+1}+{\bar{v}}_{k+1}\left(N\right)$

and the analysis covariance is

${P}_{k+1}\left(N\right)=\left(I-K{H}_{k+1}\right){P}_{k+1}^{f}\left(N\right){\left(I-K{H}_{k+1}\right)}^{\text{T}}+K{R}_{k+1}\left(N\right){K}^{\text{T}}$

where for large N

${R}_{k+1}\left(N\right)=\frac{1}{N-1}{\displaystyle {\sum }_{i=1}^{N}\left[{v}_{k+1}\left(i\right)-{\bar{v}}_{k+1}\left(N\right)\right]{\left[{v}_{k+1}\left(i\right)-{\bar{v}}_{k+1}\left(N\right)\right]}^{\text{T}}}$
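One perturbed-observation EnKF cycle can be sketched as follows; the ensemble size, the nonlinear model, and the covariances are illustrative assumptions, not values from the paper:

```python
import numpy as np

def enkf_step(ens, z, M, H, Q, R, rng):
    """One perturbed-observation EnKF cycle; ens has shape (N, n)."""
    N, n = ens.shape
    # Forecast ensemble with model noise w(i) ~ N(0, Q)
    ens_f = np.array([M(xi) for xi in ens]) \
            + rng.multivariate_normal(np.zeros(n), Q, N)
    x_f = ens_f.mean(axis=0)
    E = ens_f - x_f                                   # perturbations e^f(i)
    P_f = E.T @ E / (N - 1)                           # low-rank sample covariance
    # Gain computed from the sample covariance
    K = P_f @ H.T @ np.linalg.inv(H @ P_f @ H.T + R)
    # Assimilate perturbed observations z(i) = z + v(i), v(i) ~ N(0, R)
    zs = z + rng.multivariate_normal(np.zeros(len(z)), R, N)
    ens_a = ens_f + (zs - ens_f @ H.T) @ K.T
    return ens_a

# Made-up example (assumed model and sizes, for illustration only)
rng = np.random.default_rng(1)
M = lambda x: np.tanh(x)                              # nonlinear model
H = np.array([[1.0, 0.0]])                            # linear observation operator
ens = rng.standard_normal((20, 2))                    # initial ensemble, N = 20
ens = enkf_step(ens, np.array([0.3]), M, H,
                0.01 * np.eye(2), 0.1 * np.eye(1), rng)
```

The full covariance never has to be propagated through the model; only the $N$ ensemble members are, which is the source of the cost reduction discussed above.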

All summaries, derivations, and details can be found in references [1] [9] , and a full review of the ensemble Kalman filter for atmospheric data assimilation is introduced by P. L. Houtekamer and Fuqing Zhang, 2016 [10] .

6.4. Hybrid Filters

3DVar uses a static climatological background error covariance, while 4DVar uses implicit flow-dependent information but still starts from a static background error. Since the B-matrix heavily affects the performance of the assimilation [11] , it is important to use a B-matrix that is a realistic representation of the actual forecast error covariance [12] . Many hybrid filters have therefore been proposed; they use a flow-dependent background error in the variational data assimilation system by combining the 3DVar climatological background error covariance with the error of the day from an ensemble.

In Equation (37), replace B by a weighted sum of the 3DVar B and the ensemble covariance as follows [13] :

$B={a}_{1}{B}_{1}+{a}_{2}{B}_{2}$

where

${a}_{1}=1-{a}_{2}$
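The weighted blend is trivial to compute; the following sketch uses made-up placeholder matrices for the static and ensemble covariances:

```python
import numpy as np

def hybrid_B(B_static, B_ens, a2):
    """Hybrid background error covariance B = a1*B1 + a2*B2 with a1 = 1 - a2."""
    return (1.0 - a2) * B_static + a2 * B_ens

# Illustrative blend (B1 and B2 are made-up placeholders, not real statistics)
B1 = np.eye(2)                            # static 3DVar climatological B
B2 = np.array([[2.0, 0.5], [0.5, 1.0]])   # flow-dependent ensemble covariance
B = hybrid_B(B1, B2, a2=0.5)
```

The weight ${a}_{2}$ controls how much the flow-dependent ensemble information is trusted relative to the static climatology.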

The ensemble covariance is included in the 3DVar cost function through augmentation of the control variables [14] ; this formulation is mathematically equivalent to [13] .

This is the well-known hybrid 3DVar-EnKF method. The 4DVar-EnKF method follows the same idea if we apply the same methodology to Equation (43). More advanced hybrid filters are highlighted in references [15] [16] .

7. Conclusions

This paper presents the mathematical perspective on the basic foundations of data assimilation, as a journey from least squares to the advanced filters used in data assimilation. This work is the first of its type to summarize the mathematical perspective on data assimilation in an extensive way, highlighting both classical and advanced data assimilation methods, and it can be used as a reference for understanding the mathematics behind data assimilation. It started with the least square method and its different versions, then explained the classical 3DVar method; 4DVar was also introduced. Advanced filters such as the Kalman filter and its family were highlighted, and finally the idea of hybrid filters was introduced.

For future work, hybrid filters should be examined in detail, since several different hybrid filter structures have been introduced, together with generic case studies that evaluate the performance of the different assimilation techniques.

References

[1] Lewis, J.M. (2009) Dynamic Data Assimilation: A Least Square Approach. Cambridge University Press.

[2] Lakshmivarahan, S. Dynamic Data Assimilation: An Introduction. School of Computer Science, University of Oklahoma.

http://nptel.ac.in/courses/111106082/#

[3] Gibbs, B.P. (2011) Advanced Kalman Filtering, Least-Squares and Modeling: A Practical Handbook. Wiley.

[4] Bjorck, A. (1996) Numerical Methods for Least Squares Problems, Linköping University, Linköping, Sweden.

https://doi.org/10.1137/1.9781611971484

[5] Kreyszig, E. (2011) Advanced Engineering Mathematics.

https://www-elec.inaoep.mx/~jmram/Kreyzig-ECS-DIF1.pdf

[6] Zarchan, P. and Musoff, H. (2000) Fundamentals of Kalman Filtering: A Practical Approach. American Inst of Aeronautics & Astronautics, United States, 2.

[7] Kalman, R.E. (1960) A New Approach to Linear Filtering and Prediction Problems. Journal of Basic Engineering, 82, 35-45.

https://doi.org/10.1115/1.3662552

[8] Evensen, G. (1994) Sequential Data Assimilation with a Nonlinear Quasi-Geostrophic Model Using Monte Carlo Methods to Forecast Error Statistics. Journal of Geophysical Research, 99, 10143-10162.

https://doi.org/10.1029/94JC00572

[9] Evensen, G. (2003) The Ensemble Kalman Filter: Theoretical Formulation and Practical Implementation.

[10] Houtekamer, P.L. and Zhang, F. (2016) Review of the Ensemble Kalman Filter for Atmospheric Data Assimilation. Monthly Weather Review, 144.

https://doi.org/10.1175/MWR-D-15-0440.1

[11] Bannister, R.N. (2008) A Review of Forecast Error Covariance Statistics in Atmospheric Variational Data Assimilation. I: Characteristics and Measurements of Forecast Error Covariances. Quarterly Journal of the Royal Meteorological Society, 134, 1951-1970.

https://doi.org/10.1002/qj.339

[12] Bannister, R.N. (2008) A Review of Forecast Error Covariance Statistics in Atmospheric Variational Data Assimilation. II: Modelling the Forecast Error Covariance Statistics. Quarterly Journal of the Royal Meteorological Society, 134, 1971-1996.

https://doi.org/10.1002/qj.340

[13] Hamill, T.M. and Snyder, C. (2000) A Hybrid Ensemble Kalman Filter-3D Variational Analysis Scheme. Monthly Weather Review, 128, 2905-2919.

https://doi.org/10.1175/1520-0493(2000)128<2905:AHEKFV>2.0.CO;2

[14] Lorenc, A.C. (2003) The Potential of the Ensemble Kalman Filter for NWP: A Comparison with 4D-Var. Quarterly Journal of the Royal Meteorological Society, 129, 3183-3203.

https://doi.org/10.1256/qj.02.132

[15] Nawinda, et al. (2016) A Hybrid Ensemble Transform Particle Filter for Nonlinear and Spatially Extended Dynamical Systems.

[16] Laura, S., et al. (2015) A Hybrid Particle-Ensemble Kalman Filter for High Dimensional Lagrangian Data Assimilation.