13.11. Calculation algorithm “LinearLeastSquares”¶
Description¶
This algorithm realizes a “Least Squares” linear type estimation of the state of a system. It is similar to a Calculation algorithm “Blue”, without its background part.
This algorithm is always the fastest of all the optimization algorithms of ADAO. It is theoretically reserved for observation operator cases which are explicitly linear, even if it sometimes works in “slightly” non-linear cases. One can verify the linearity of the observation operator with the help of a Checking algorithm “LinearityTest”.
This algorithm is naturally written for a single estimate, without any dynamic or iterative notion (there is no need in this case for an incremental evolution operator, nor for an evolution error covariance). In ADAO, it can also be used on a succession of observations, placing the estimate in a recursive framework partly similar to a Kalman Filter. A standard estimate is made at each observation step on the state predicted by the incremental evolution model.
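For reference, here is a sketch of the standard weighted linear least-squares formulation consistent with this description, written with the observation y^o, the linear observation operator H and the observation error covariance R defined by the commands below:

\[
J(\mathbf{x}) \;=\; (\mathbf{y}^o - \mathbf{H}\,\mathbf{x})^T\,\mathbf{R}^{-1}\,(\mathbf{y}^o - \mathbf{H}\,\mathbf{x}),
\qquad
\mathbf{x}^a \;=\; \arg\min_{\mathbf{x}} J(\mathbf{x}) \;=\; (\mathbf{H}^T\,\mathbf{R}^{-1}\,\mathbf{H})^{-1}\,\mathbf{H}^T\,\mathbf{R}^{-1}\,\mathbf{y}^o
\]

The absence of any background term in this error function is what distinguishes the algorithm from the Calculation algorithm “Blue” mentioned above.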
In all cases, it is recommended to prefer at least a Calculation algorithm “Blue”, a Calculation algorithm “ExtendedBlue” or a Calculation algorithm “3DVAR”.
Optional and required commands¶
The general required commands, available in the editing user graphical or textual interface, are the following:
- Observation
- List of vectors. The variable indicates the observation vector used for data assimilation or optimization, and usually noted y^o. Its value is defined as an object of type “Vector” if it is a single observation (temporal or not) or “VectorSeries” if it is a succession of observations. Its availability in output is conditioned by the boolean “Stored” associated with input.
- ObservationError
Matrix. The variable indicates the observation error covariance matrix, usually noted as R. It is defined as a “Matrix” type object, a “ScalarSparseMatrix” type object, or a “DiagonalSparseMatrix” type object, as described in detail in the section Requirements to describe covariance matrices. Its availability in output is conditioned by the boolean “Stored” associated with input.
- ObservationOperator
Operator. The variable indicates the observation operator, usually noted as H, which transforms the input parameters x to results y to be compared to observations y^o. Its value is defined as a “Function” type object or a “Matrix” type one. In the case of “Function” type, different functional forms can be used, as described in the section Requirements for functions describing an operator. If there is some control u included in the observation, the operator has to be applied to a pair (x, u).
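As an illustration, here is a minimal sketch, in the ADAO textual interface (Python), of how these three required commands can be set. It assumes that ADAO is available as the adao Python module; the numerical values, the vector sizes and the 3x2 operator matrix are purely illustrative assumptions, not values prescribed by the algorithm:

from numpy import array
from adao import adaoBuilder

case = adaoBuilder.New()
# Observation vector, usually noted y^o (illustrative values)
case.set('Observation', Vector=array([0.5, 1.5, 2.5]))
# Observation error covariance R, here a simple scalar sparse matrix (scaled identity)
case.set('ObservationError', ScalarSparseMatrix=1.)
# Linear observation operator H, given in "Matrix" form (illustrative 3x2 matrix)
case.set('ObservationOperator', Matrix=array([[1., 0.], [0., 1.], [1., 1.]]))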
The general optional commands, available in the editing user graphical or textual interface, are indicated in List of commands and keywords for data assimilation or optimisation case. Moreover, the parameters of the command “AlgorithmParameters” allow one to choose the specific options, described hereafter, of the algorithm. See Description of options of an algorithm by “AlgorithmParameters” for the good use of this command.
The options are the following:
- EstimationOf
Predefined name. This key allows one to choose the type of estimation to be performed. It can be either state estimation, with a value of “State”, or parameter estimation, with a value of “Parameters”. The default choice is “Parameters”.
Example:
{"EstimationOf":"Parameters"}
- StoreSupplementaryCalculations
List of names. This list indicates the names of the supplementary variables that can be available during or at the end of the algorithm, if they are initially required by the user. Their availability involves, potentially, costly calculations or memory consumption. The default is then an empty list, none of these variables being calculated and stored by default (except the unconditional variables). The possible names are in the following list (the detailed description of each named variable is given in the following part of this specific algorithmic documentation, in the sub-section “Information and variables available at the end of the algorithm”): [ “Analysis”, “CostFunctionJ”, “CostFunctionJAtCurrentOptimum”, “CostFunctionJb”, “CostFunctionJbAtCurrentOptimum”, “CostFunctionJo”, “CostFunctionJoAtCurrentOptimum”, “CurrentOptimum”, “CurrentState”, “CurrentStepNumber”, “ForecastState”, “InnovationAtCurrentAnalysis”, “OMA”, “SimulatedObservationAtCurrentOptimum”, “SimulatedObservationAtCurrentState”, “SimulatedObservationAtOptimum” ].
Example:
{"StoreSupplementaryCalculations":["CurrentState", "Analysis"]}
Tips for this algorithm:
As the “Background” and “BackgroundError” commands are required for ALL the calculation algorithms in the interface, you have to provide a value for them, even though these commands are not required for this algorithm and will not be used. The simplest way is to give “1” as a STRING for both.
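Continuing the sketch above, the algorithm and its options can then be declared and the case executed. The dummy “Background” and “BackgroundError” values below are assumptions of this sketch, only there to satisfy an interface that insists on them, as discussed in the tip above:

# Choice of the algorithm and of its optional parameters
case.set('AlgorithmParameters',
         Algorithm='LinearLeastSquares',
         Parameters={
             "EstimationOf": "Parameters",
             "StoreSupplementaryCalculations": ["CurrentState", "OMA"],
             })
# Dummy background information, unused by this algorithm (sized here to match
# the two parameters of the illustrative operator above)
case.set('Background', Vector=array([0., 0.]))
case.set('BackgroundError', ScalarSparseMatrix=1.)
case.execute()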
Information and variables available at the end of the algorithm¶
At the output, after executing the algorithm, there are information and variables originating from the calculation. The description of Variables and informations available at the output shows the way to obtain them by the method named get, of the variable “ADD” of the post-processing in graphical interface, or of the case in textual interface. The input variables, available to the user at the output in order to facilitate the writing of post-processing procedures, are described in the Inventory of potentially available information at the output.
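For instance, with the textual interface sketch given above, the same outputs can be retrieved directly from the case object, which plays here the role of the “ADD” variable of the graphical post-processing:

# Optimal state: last element of the "Analysis" series
Xa = case.get("Analysis")[-1]
# Values of the error function J along the procedure
J = case.get("CostFunctionJ")[:]
print("Analysis:", Xa)
print("J values:", J)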
Permanent outputs (non conditional)
The unconditional outputs of the algorithm are the following:
- Analysis
List of vectors. Each element of this variable is an optimal state x^* in optimization or an analysis x^a in data assimilation.
Example:
Xa = ADD.get("Analysis")[-1]
- CostFunctionJ
List of values. Each element is a value of the chosen error function J.
Example:
J = ADD.get("CostFunctionJ")[:]
- CostFunctionJb
List of values. Each element is a value of the error function J^b, that is of the background difference part. If this part does not exist in the error function, its value is zero.
Example:
Jb = ADD.get("CostFunctionJb")[:]
- CostFunctionJo
List of values. Each element is a value of the error function J^o, that is of the observation difference part.
Example:
Jo = ADD.get("CostFunctionJo")[:]
Set of on-demand outputs (conditional or not)
The whole set of algorithm outputs (conditional or not), sorted by alphabetical order, is the following:
- Analysis
List of vectors. Each element of this variable is an optimal state x^* in optimization or an analysis x^a in data assimilation.
Example:
Xa = ADD.get("Analysis")[-1]
- CostFunctionJ
List of values. Each element is a value of the chosen error function J.
Example:
J = ADD.get("CostFunctionJ")[:]
- CostFunctionJAtCurrentOptimum
List of values. Each element is a value of the error function J. At each step, the value corresponds to the optimal state found from the beginning.
Example:
JACO = ADD.get("CostFunctionJAtCurrentOptimum")[:]
- CostFunctionJb
List of values. Each element is a value of the error function J^b, that is of the background difference part. If this part does not exist in the error function, its value is zero.
Example:
Jb = ADD.get("CostFunctionJb")[:]
- CostFunctionJbAtCurrentOptimum
List of values. Each element is a value of the error function J^b. At each step, the value corresponds to the optimal state found from the beginning. If this part does not exist in the error function, its value is zero.
Example:
JbACO = ADD.get("CostFunctionJbAtCurrentOptimum")[:]
- CostFunctionJo
List of values. Each element is a value of the error function J^o, that is of the observation difference part.
Example:
Jo = ADD.get("CostFunctionJo")[:]
- CostFunctionJoAtCurrentOptimum
List of values. Each element is a value of the error function J^o, that is of the observation difference part. At each step, the value corresponds to the optimal state found from the beginning.
Example:
JoACO = ADD.get("CostFunctionJoAtCurrentOptimum")[:]
- CurrentOptimum
List of vectors. Each element is the optimal state obtained at the current step of the iterative procedure of the optimization algorithm. It is not necessarily the last state.
Example:
Xo = ADD.get("CurrentOptimum")[:]
- CurrentState
List of vectors. Each element is a usual state vector used during the iterative algorithm procedure.
Example:
Xs = ADD.get("CurrentState")[:]
- CurrentStepNumber
List of integers. Each element is the index of the current step in the iterative process, driven by the series of observations, of the algorithm used. This corresponds to the observation step used. Note: it is not the index of the current iteration of the algorithm even if it coincides for non-iterative algorithms.
Example:
i = ADD.get("CurrentStepNumber")[-1]
- ForecastState
List of vectors. Each element is a state vector forecasted by the model during the iterative algorithm procedure.
Example:
Xp = ADD.get("ForecastState")[:]
- InnovationAtCurrentAnalysis
List of vectors. Each element is an innovation vector at current analysis. This quantity is identical to the innovation vector at analysed state in the case of a single-state assimilation.
Example:
ds = ADD.get("InnovationAtCurrentAnalysis")[-1]
- OMA
List of vectors. Each element is a vector of difference between the observation and the optimal state in the observation space.
Example:
oma = ADD.get("OMA")[-1]
- SimulatedObservationAtCurrentOptimum
List of vectors. Each element is a vector of observation simulated from the optimal state obtained at the current step of the optimization algorithm, that is, in the observation space.
Example:
hxo = ADD.get("SimulatedObservationAtCurrentOptimum")[-1]
- SimulatedObservationAtCurrentState
List of vectors. Each element is an observed vector simulated by the observation operator from the current state, that is, in the observation space.
Example:
hxs = ADD.get("SimulatedObservationAtCurrentState")[-1]
- SimulatedObservationAtOptimum
List of vectors. Each element is a vector of observation obtained by the observation operator from simulation on the analysis or optimal state x^a. It is the observed forecast from the analysis or the optimal state, and it is sometimes called “Forecast”.
Example:
hxa = ADD.get("SimulatedObservationAtOptimum")[-1]
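When the algorithm is used on a succession of observations (an “Observation” given as a “VectorSeries”, as described above), these outputs become series with one element per observation step. A minimal sketch, reusing the assumed case object of the examples above:

# One analysis per observation step (unconditional "Analysis" output)
for step, xa in enumerate(case.get("Analysis")):
    print("Step", step, ": analysis =", xa)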
See also¶
References to other sections:
- Calculation algorithm “Blue”
- Calculation algorithm “ExtendedBlue”
- Calculation algorithm “3DVAR”
- Checking algorithm “LinearityTest”