13.2. Calculation algorithm “4DVAR”
Warning
In this particular version, this algorithm or some of its variants are experimental, and therefore remain subject to change in future versions.
Description
This algorithm realizes an estimation of the state of a dynamic system, by a
variational minimization method of the classical functional in data
assimilation:

J(\mathbf{x}) = (\mathbf{x}-\mathbf{x}^b)^T \mathbf{B}^{-1} (\mathbf{x}-\mathbf{x}^b) + \sum_{t\in T} (\mathbf{y}^o(t)-H(\mathbf{x},t))^T \mathbf{R}^{-1} (\mathbf{y}^o(t)-H(\mathbf{x},t))

which is usually referred to as the “4D-Var” functional (see for example [Talagrand97]). The terms “4D-Var”, “4D-VAR” and “4DVAR” are equivalent. It is well suited to cases of non-linear observation and evolution operators; its application domain is similar to the one of Kalman filters, especially the Calculation algorithm “ExtendedKalmanFilter” or the Calculation algorithm “UnscentedKalmanFilter”.
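As a minimal illustration, the following sketch shows how such a case can be declared and executed through the textual interface of the module (the adaoBuilder API); all the numerical values, the sizes, and the identity operators are purely illustrative assumptions:

from numpy import array
from adao import adaoBuilder

case = adaoBuilder.New()
case.set('AlgorithmParameters',
    Algorithm='4DVAR',
    Parameters={"MaximumNumberOfIterations": 100},
)
# Illustrative 3-component state and two successive observations
case.set('Background',          Vector=array([0., 1., 2.]))
case.set('BackgroundError',     ScalarSparseMatrix=1.)
case.set('Observation',         VectorSerie=[array([0.5, 1.5, 2.5]),
                                             array([0.6, 1.6, 2.6])])
case.set('ObservationError',    ScalarSparseMatrix=1.)
case.set('ObservationOperator', Matrix="1 0 0;0 1 0;0 0 1")
case.set('EvolutionError',      ScalarSparseMatrix=0.1)
case.set('EvolutionModel',      Matrix="1 0 0;0 1 0;0 0 1")
case.execute()
Xa = case.get('Analysis')[-1]

The analysis \mathbf{x}^a obtained at the end is the optimal state over the assimilation window, consistent with both the background and the whole series of observations.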
Optional and required commands
The general required commands, available in the editing user graphical or textual interface, are the following:
- Background
Vector. The variable indicates the background or initial vector used, previously noted as \mathbf{x}^b. Its value is defined as a “Vector” or “VectorSerie” type object. Its availability in output is conditioned by the boolean “Stored” associated with input.
- BackgroundError
Matrix. This indicates the background error covariance matrix, previously noted as \mathbf{B}. Its value is defined as a “Matrix” type object, a “ScalarSparseMatrix” type object, or a “DiagonalSparseMatrix” type object, as described in detail in the section Requirements to describe covariance matrices. Its availability in output is conditioned by the boolean “Stored” associated with input.
- EvolutionError
Matrix. The variable indicates the evolution error covariance matrix, usually noted as \mathbf{Q}. It is defined as a “Matrix” type object, a “ScalarSparseMatrix” type object, or a “DiagonalSparseMatrix” type object, as described in detail in the section Requirements to describe covariance matrices. Its availability in output is conditioned by the boolean “Stored” associated with input.
- EvolutionModel
Operator. The variable indicates the evolution model operator, usually noted M, which describes an elementary step of evolution. Its value is defined as a “Function” type object or a “Matrix” type one. In the case of “Function” type, different functional forms can be used, as described in the section Requirements for functions describing an operator, and as illustrated in the sketch given after this list. If there is some control U included in the evolution model, the operator has to be applied to a pair (X,U).
- Observation
List of vectors. The variable indicates the observation vector used for data assimilation or optimization, usually noted \mathbf{y}^o. Its value is defined as an object of type “Vector” if it is a single observation (temporal or not) or “VectorSerie” if it is a succession of observations. Its availability in output is conditioned by the boolean “Stored” associated with input.
- ObservationError
Matrix. The variable indicates the observation error covariance matrix, usually noted as \mathbf{R}. It is defined as a “Matrix” type object, a “ScalarSparseMatrix” type object, or a “DiagonalSparseMatrix” type object, as described in detail in the section Requirements to describe covariance matrices. Its availability in output is conditioned by the boolean “Stored” associated with input.
- ObservationOperator
Operator. The variable indicates the observation operator, usually noted as H, which transforms the input parameters \mathbf{x} to results \mathbf{y} to be compared to observations \mathbf{y}^o. Its value is defined as a “Function” type object or a “Matrix” type one. In the case of “Function” type, different functional forms can be used, as described in the section Requirements for functions describing an operator, and as illustrated in the sketch given after this list. If there is some control U included in the observation, the operator has to be applied to a pair (X,U).
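When the “Function” form is chosen for these operators, a minimal sketch is the following, assuming the one-function declaration form of the adaoBuilder interface; the damping dynamics and the component selection are purely illustrative:

import numpy

def evolution(x):
    # Illustrative elementary evolution step M: simple linear damping of the state
    return 0.95 * numpy.ravel(x)

def observation(x):
    # Illustrative observation operator H: observe the first two state components
    return numpy.ravel(x)[:2]

# Declaration in an already created adaoBuilder case object named "case":
case.set('EvolutionModel',      OneFunction=evolution)
case.set('ObservationOperator', OneFunction=observation)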
The general optional commands, available in the editing user graphical or textual interface, are indicated in List of commands and keywords for data assimilation or optimisation case. Moreover, the parameters of the command “AlgorithmParameters” allow one to choose the specific options of the algorithm, described hereafter. See Description of options of an algorithm by “AlgorithmParameters” for the proper use of this command.
The options are the following (a combined declaration gathering several of them is sketched after the list):
- Bounds
List of pairs of real values. This key allows one to define pairs of upper and lower bounds for every state variable being optimized. Bounds have to be given as a list of pairs of lower/upper bounds for each variable, with None every time there is no bound. The bounds can always be specified, but they are taken into account only by the constrained optimizers. Example:
{"Bounds":[[2.,5.],[1.e-2,10.],[-30.,None],[None,None]]}
- ConstrainedBy
Predefined name. This key allows one to choose the method used to take the bounds constraints into account. The only one available is “EstimateProjection”, which projects the current state estimate on the bounds constraints.
Example:
{"ConstrainedBy":"EstimateProjection"}
- CostDecrementTolerance
Real value. This key indicates a limit value, leading to stop successfully the iterative optimization process when the cost function decreases less than this tolerance at the last step. The default is 1.e-7, and it is recommended to adapt it to the needs on real problems. One can refer to the section describing ways for Convergence control for calculation cases and iterative algorithms for more detailed recommendations.
Example:
{"CostDecrementTolerance":1.e-7}
- EstimationOf
Predefined name. This key allows one to choose the type of estimation to be performed. It can be either state estimation, with a value of “State”, or parameter estimation, with a value of “Parameters”. The default choice is “State”.
Example:
{"EstimationOf":"Parameters"}
- GradientNormTolerance
Real value. This key indicates a limit value, leading to stop successfully the iterative optimization process when the norm of the gradient is under this limit. It is only used for non-constrained optimizers. The default is 1.e-5 and it is not recommended to change it.
Example:
{"GradientNormTolerance":1.e-5}
- InitializationPoint
Vector. The variable specifies one vector to be used as the initial state around which the iterative algorithm starts. By default, this initial state is not required and is set equal to the background \mathbf{x}^b. Its value must allow building a vector of the same size as the background. If provided, it replaces the background only for initialization.
Example:
{"InitializationPoint":[1, 2, 3, 4, 5]}
- MaximumNumberOfIterations
Integer value. This key indicates the maximum number of internal iterations allowed for iterative optimization. The default is 15000, which is very similar to no limit on iterations. It is then recommended to adapt this parameter to the needs on real problems. For some optimizers, the effective stopping step can be slightly different from the limit due to algorithm internal control requirements. One can refer to the section describing ways for Convergence control for calculation cases and iterative algorithms for more detailed recommendations.
Example:
{"MaximumNumberOfIterations":100}
- Minimizer
Predefined name. This key allows one to choose the optimization minimizer. The default choice is “LBFGSB”, and the possible ones are “LBFGSB” (nonlinear constrained minimizer, see [Byrd95], [Morales11], [Zhu97]), “TNC” (nonlinear constrained minimizer), “CG” (nonlinear unconstrained minimizer), “BFGS” (nonlinear unconstrained minimizer) and “NCG” (Newton CG minimizer). It is strongly recommended to stay with the default.
Example:
{"Minimizer":"LBFGSB"}
- ProjectedGradientTolerance
Real value. This key indicates a limit value, leading to stop successfully the iterative optimization process when all the components of the projected gradient are under this limit. It is only used for constrained optimizers. The default is -1, that is the internal default of each minimizer (generally 1.e-5), and it is not recommended to change it.
Example:
{"ProjectedGradientTolerance":-1}
- StoreSupplementaryCalculations
List of names. This list indicates the names of the supplementary variables that can be available during or at the end of the algorithm, if they are initially required by the user. Their availability potentially involves costly calculations or memory consumption. The default is thus an empty list, none of these variables being calculated and stored by default (except the unconditional variables). The possible names are in the following list (the detailed description of each named variable is given in the following part of this specific algorithmic documentation, in the sub-section “Information and variables available at the end of the algorithm”): [ “Analysis”, “BMA”, “CostFunctionJ”, “CostFunctionJAtCurrentOptimum”, “CostFunctionJb”, “CostFunctionJbAtCurrentOptimum”, “CostFunctionJo”, “CostFunctionJoAtCurrentOptimum”, “CurrentIterationNumber”, “CurrentOptimum”, “CurrentState”, “IndexOfOptimum” ].
Example:
{"StoreSupplementaryCalculations":["BMA", "CurrentState"]}
Information and variables available at the end of the algorithm
At the output, after executing the algorithm, there are information and variables originating from the calculation. The description of Variables and informations available at the output shows the way to obtain them through the method named get, applied to the variable “ADD” of the post-processing in graphical interface, or to the case in textual interface. The input variables, available to the user at the output in order to facilitate the writing of post-processing procedures, are described in the Inventory of potentially available information at the output.
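In the textual interface for instance, the same get method applies directly to the case object, as in this short sketch (assuming an executed case as above):

Xa = case.get("Analysis")[-1]      # last analysis state
J  = case.get("CostFunctionJ")[:]  # whole cost function history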
Permanent outputs (non-conditional)
The unconditional outputs of the algorithm are the following:
- Analysis
List of vectors. Each element of this variable is an optimal state \mathbf{x}^* in optimization or an analysis \mathbf{x}^a in data assimilation.
Example:
Xa = ADD.get("Analysis")[-1]
- CostFunctionJ
List of values. Each element is a value of the chosen error function J.
Example:
J = ADD.get("CostFunctionJ")[:]
- CostFunctionJb
List of values. Each element is a value of the error function J^b, that is of the background difference part. If this part does not exist in the error function, its value is zero.
Example:
Jb = ADD.get("CostFunctionJb")[:]
- CostFunctionJo
List of values. Each element is a value of the error function J^o, that is of the observation difference part.
Example:
Jo = ADD.get("CostFunctionJo")[:]
Set of on-demand outputs (conditional or not)
The whole set of algorithm outputs (conditional or not), sorted by alphabetical order, is the following (a usage sketch combining some of them is given after the list):
- Analysis
List of vectors. Each element of this variable is an optimal state \mathbf{x}^* in optimization or an analysis \mathbf{x}^a in data assimilation.
Example:
Xa = ADD.get("Analysis")[-1]
- BMA
List of vectors. Each element is a vector of difference between the background and the optimal state.
Example:
bma = ADD.get("BMA")[-1]
- CostFunctionJ
List of values. Each element is a value of the chosen error function J.
Example:
J = ADD.get("CostFunctionJ")[:]
- CostFunctionJAtCurrentOptimum
List of values. Each element is a value of the error function J. At each step, the value corresponds to the optimal state found from the beginning.
Example:
JACO = ADD.get("CostFunctionJAtCurrentOptimum")[:]
- CostFunctionJb
List of values. Each element is a value of the error function J^b, that is of the background difference part. If this part does not exist in the error function, its value is zero.
Example:
Jb = ADD.get("CostFunctionJb")[:]
- CostFunctionJbAtCurrentOptimum
List of values. Each element is a value of the error function J^b. At each step, the value corresponds to the optimal state found from the beginning. If this part does not exist in the error function, its value is zero.
Example:
JbACO = ADD.get("CostFunctionJbAtCurrentOptimum")[:]
- CostFunctionJo
List of values. Each element is a value of the error function J^o, that is of the observation difference part.
Example:
Jo = ADD.get("CostFunctionJo")[:]
- CostFunctionJoAtCurrentOptimum
List of values. Each element is a value of the error function J^o, that is of the observation difference part. At each step, the value corresponds to the optimal state found from the beginning.
Example:
JoACO = ADD.get("CostFunctionJoAtCurrentOptimum")[:]
- CurrentIterationNumber
List of integers. Each element is the iteration index at the current step during the iterative algorithm procedure. There is one iteration index value per assimilation step corresponding to an observed state.
Example:
i = ADD.get("CurrentIterationNumber")[-1]
- CurrentOptimum
List of vectors. Each element is the optimal state obtained at the usual step of the iterative algorithm procedure of the optimization algorithm. It is not necessarily the last state.
Example:
Xo = ADD.get("CurrentOptimum")[:]
- CurrentState
List of vectors. Each element is a usual state vector used during the iterative algorithm procedure.
Example:
Xs = ADD.get("CurrentState")[:]
- IndexOfOptimum
List of integers. Each element is the iteration index of the optimum obtained at the current step of the iterative algorithm procedure of the optimization algorithm. It is not necessarily the number of the last iteration.
Example:
i = ADD.get("IndexOfOptimum")[-1]
See also
References to other sections:
- Calculation algorithm “3DVAR”
- Calculation algorithm “KalmanFilter”
- Calculation algorithm “ExtendedKalmanFilter”
- Calculation algorithm “EnsembleKalmanFilter”
Bibliographical references:
- [Byrd95]
- [Morales11]
- [Talagrand97]
- [Zhu97]