The optimization problem

(P):   minimize f(x) subject to x ∈ K,

where K is a nonempty subset of a normed space X and f : X → ℝ, is said to be Tykhonov well-posed if it satisfies all of the following properties:
a) existence of the solution (i.e. (P) has a solution),
b) uniqueness of the solution (i.e. the solution set for (P) is a singleton),
c) every point x ∈ K at which f(x) is close to inf_K f is a good approximation of the solution of (P).
In other words, (P) is said to be Tykhonov well-posed if there exists exactly one minimum point x̄ ∈ K, and if x_n → x̄ for any sequence (x_n) ⊆ K such that f(x_n) → inf_{x∈K} f(x).
Recalling that a sequence (x_n) ⊆ K such that f(x_n) → inf_K f is said to be a minimizing sequence for problem (P), the previous definition can be rephrased in an equivalent way:
Definition 3.1: The problem (P) is said to be Tykhonov well-posed if it has, on K, a unique global minimum point x̄ and, moreover, every minimizing sequence for (P) converges to x̄.
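As an illustration (not taken from the source), the definition can be checked numerically for the simplest well-posed problem: f(x) = x² on K = ℝ, whose unique minimum point is x̄ = 0. Any sequence driving f towards its infimum is forced towards x̄; the names below are our own.

```python
def f(x):
    # Strictly convex objective with unique global minimizer x_bar = 0.
    return x * x

# A minimizing sequence: f(x_n) -> inf_K f = 0 and, since this problem
# is Tykhonov well-posed, x_n is forced to converge to x_bar = 0.
xs = [1.0 / (n + 1) for n in range(1000)]
values = [f(x) for x in xs]
```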
Definition 3.1 is motivated by the fact that, usually, every numerical method for solving (P) iteratively provides minimizing sequences (x_n); such sequences are also called sequences of approximate solutions for the problem (P), and it is therefore important to be sure that the approximate solutions x_n are not far from the (unique) minimum point x̄. In other words, the Tykhonov well-posedness of the optimization problem (P) requires existence and uniqueness of the minimum point, towards which every sequence of approximate solutions of the problem converges. More precisely, to consider well-posedness of Tykhonov type, the notion of “approximating sequence” for the solutions of optimization problems is introduced, and convergence of such sequences to a solution of the problem is required. For more details see  .
When K is compact, the uniqueness of the solution of a minimization problem (P) is enough to guarantee its well-posedness. There are, however, simple examples in which the uniqueness of the solution of (P) is not enough to guarantee its Tykhonov well-posedness, even for continuous functions.
A simple example of a problem with a unique solution which is nevertheless not Tykhonov well-posed is the following: a continuous function f on K = ℝ which has a unique solution at zero, namely argmin(f, K) = {0}, but whose infimum is also approached along an unbounded sequence; such a sequence provides a minimizing sequence which does not converge to this unique solution. Hence the problem is not Tykhonov well-posed. Therefore, for continuous functions, the Tykhonov well-posedness of an optimization problem (P) means precisely that every minimizing sequence of (P) converges to the unique minimum point. In the example just described, the problem has a unique minimum x̄ = 0 but it is not Tykhonov well-posed, since the unbounded sequence is minimizing but does not converge to x̄; restricted to a compact subset of K containing x̄, on the other hand, the same problem is Tykhonov well-posed.
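A classical concrete function exhibiting this failure (our choice of illustration, assuming f(x) = x²·e^(−x) on K = ℝ) has its unique global minimum at x̄ = 0 with value 0, while f(n) → 0 as n → ∞, so x_n = n is minimizing but divergent. A quick numeric check:

```python
import math

def f(x):
    # f(x) = x^2 * exp(-x): unique global minimizer x_bar = 0 with f(0) = 0,
    # yet f(x) -> 0 again as x -> +infinity.
    return x * x * math.exp(-x)

# x_n = n is a minimizing sequence (f(n) -> 0 = inf f) that does NOT
# converge to the unique solution x_bar = 0.
divergent = [float(n) for n in range(1, 51)]
divergent_values = [f(x) for x in divergent]
```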
For convex functions in finite dimensions, the uniqueness of the solution is enough to guarantee Tykhonov well-posedness, while this is no longer valid in infinite dimensions  . In fact, the following result is known:
Proposition 3.1: (  ) Let f : ℝⁿ → ℝ be a convex function and let K ⊆ ℝⁿ be convex. If (P) has a unique solution, then (P) is Tykhonov well-posed.
Different characterizations of Tykhonov well-posedness for minimization problems determined by convex functions in Banach spaces can be found in  .
The next fundamental theorem  gives an alternative characterization of Tykhonov well-posed problems: it uses the set of ε-optimal solutions and states that the Tykhonov well-posedness of (P) can be characterized by the behaviour of this set as ε → 0⁺.
Theorem 3.1: If the minimization problem (P) is Tykhonov well-posed, then

diam(ε-argmin(f, K)) → 0 as ε → 0⁺,

where ε-argmin(f, K) = { x ∈ K : f(x) ≤ inf_K f + ε } is the set of ε-minimizers (approximate solutions) of f over K and diam denotes the diameter of a given set. Conversely, if f is lower semicontinuous and bounded from below on K, diam(ε-argmin(f, K)) → 0 as ε → 0⁺ implies Tykhonov well-posedness of (P).
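The ε-argmin criterion can be illustrated numerically on a discretized interval (a sketch with illustrative names, not the source's code): for the well-posed problem f(x) = x² on K = [−2, 2], the diameter of the set of ε-minimizers shrinks as ε decreases.

```python
def eps_argmin(f, grid, eps):
    # Grid approximation of eps-argmin(f, K) = {x in K : f(x) <= inf_K f + eps}.
    inf_f = min(f(x) for x in grid)
    return [x for x in grid if f(x) <= inf_f + eps]

def diam(points):
    # Diameter of a finite set of reals: largest pairwise distance.
    return max(points) - min(points)

grid = [i / 1000.0 for i in range(-2000, 2001)]  # K = [-2, 2], step 0.001

d_large = diam(eps_argmin(lambda x: x * x, grid, 1e-1))   # eps = 0.1
d_small = diam(eps_argmin(lambda x: x * x, grid, 1e-4))   # eps = 0.0001
```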
When K is closed and f is lower semicontinuous and bounded from below, it is possible to use the sets ε-argmin(f, K) to introduce the notion of well-posedness of (P):
Definition 3.2: Let K be closed and let f be lower semicontinuous. The minimization problem (P) is said to be well-posed if diam(ε-argmin(f, K)) → 0 as ε → 0⁺.
Of course, if the uniqueness of the solution is added to any of the notions of generalized well-posedness, the corresponding non-generalized notion is obtained.
2) Hadamard well-posedness
The second notion of well-posedness is inspired by the classical idea of J. Hadamard at the beginning of the previous century: it requires existence and uniqueness of the solution of the optimization problem, together with continuous dependence of the optimal solution and of the optimal value on the data of the problem.
Definition 3.3: The minimization problem (P) is said to be Hadamard well-posed if it has a unique solution x̄ which depends continuously on the data of the problem.
This is the well-known condition of well-posedness considered in the study of differential equations, translated to minimum problems. The essence of this notion is that a “small” change of the data of the problem yields a “small” change of the solution.
In fact, very often the mathematical model of a phenomenon is so complicated that it is necessary to simplify it and replace it by another model which is “near” the original; at the same time, it is important to be sure that the new problem will have a solution which is “near” the original one. The well-known variational principle of Ekeland  , an important tool for nonlinear analysis and optimization, asserts exactly that a particular optimization problem can be replaced by another one which is near the original and has a unique solution.
3) Relations between Hadamard and Tykhonov well-posedness
Almost all the literature deals with the different notions of well-posedness, and especially with Tykhonov well-posedness. Some researchers have investigated the relations between these notions, but there is no systematic study of such relations. At first sight the two notions seem to be independent but, at least in the convex case, there are papers showing a connection between the two properties: for instance    . In the convex setting, the two notions (Tykhonov and Hadamard well-posedness) turn out to be essentially equivalent, at least for continuous objective functions. The links between Hadamard and Tykhonov well-posedness have been studied in    ; there, besides uniqueness, additional structure is involved: in  , for example, the basic ingredient is convexity. The object of this section is to describe in general terms the relations between Hadamard and Tykhonov well-posedness; a central role is played by the well-known Hausdorff convergence.
We recall the concept of Hausdorff convergence of sequences of sets. Let D, E be subsets of ℝⁿ and let e(D, E) = sup_{x∈D} d(x, E) denote the excess of D over E; the Hausdorff distance between D and E is then haus(D, E) = max{ e(D, E), e(E, D) }.
Definition 3.4: Let (A_n) be a sequence of subsets of ℝⁿ. We say that (A_n) converges to a set A in the sense of Hausdorff, and we write A_n → A, if haus(A_n, A) → 0.
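A minimal sketch of the Hausdorff distance for finite subsets of ℝ² (illustrative code; the function names are our own):

```python
def dist_point_set(x, S):
    # Euclidean distance from the point x to the finite set S.
    return min(((x[0] - s[0]) ** 2 + (x[1] - s[1]) ** 2) ** 0.5 for s in S)

def hausdorff(D, E):
    # haus(D, E) = max( sup_{d in D} dist(d, E), sup_{e in E} dist(e, D) ).
    return max(max(dist_point_set(d, E) for d in D),
               max(dist_point_set(e, D) for e in E))

A = [(0.0, 0.0), (1.0, 0.0)]
B = [(0.0, 0.0), (1.0, 0.5)]
```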
The following theorems  show the relations between the Tykhonov and the Hadamard well-posedness:
Theorem 3.2: Let K be a closed convex subset of ℝⁿ and let f be a convex continuous function with one and only one minimum point on every closed and convex subset of K. If (P) is Hadamard well-posed, with respect to the Hausdorff convergence, then (P) is Tykhonov well-posed on every closed and convex subset of K.
Theorem 3.3: Let f be a convex function, uniformly continuous on every bounded set. If (P) is Tykhonov well-posed on every closed and convex set, then (P) is Hadamard well-posed, with respect to the Hausdorff convergence.
The Tykhonov well-posedness does not, in general, imply the Hadamard well-posedness if the objective function is only continuous.
4. Some Generalizations
In the above definitions, the existence and the uniqueness of the solution, towards which every minimizing sequence converges, are required. The different notions of well-posedness, however, admit generalizations which do not require uniqueness of the solution. In other words, the uniqueness requirement can be relaxed, and well-posed optimization problems with several solutions can be considered. Therefore, while the requirement of existence in the previous definitions is crucial, the uniqueness condition is more debatable. In fact, many problems in linear and quadratic programming, and many multicriteria optimization problems, are usually considered well-posed, although uniqueness is usually not satisfied  .
More precisely, in scalar optimization problems it is difficult to guarantee the uniqueness of the optimal solutions, a uniqueness that is critical to solution stability and computation. The concept of Tykhonov well-posedness can thus be extended to minimum problems without uniqueness of the optimal solutions: it becomes necessary to generalize the notion of well-posedness for a minimization problem introduced by Tykhonov, which is based on the fact that every minimizing sequence converges towards the unique minimum solution, and to discuss well-posedness for problems having more than one solution.
This new definition requires existence, but not uniqueness, of the solution of (P) and, for every minimizing sequence, the convergence of some subsequence of the minimizing sequence towards some optimal solution.
Definition 4.1: The problem (P) is called Tykhonov well-posed in the generalized sense if every minimizing sequence for (P) has some subsequence converging to an optimal solution of (P), i.e. to an element of argmin(f, K). More precisely, the problem (P) is called Tykhonov well-posed in the generalized sense if argmin(f, K) ≠ ∅ and every minimizing sequence (x_n) ⊆ K has some subsequence (x_{n_k}) converging to an element of argmin(f, K).
From the definition it follows, obviously, that if the problem (P) is Tykhonov well-posed in the generalized sense, then it has a non-empty compact set of solutions, i.e. argmin(f, K) is nonempty and compact. Moreover, when (P) is well-posed in the generalized sense and argmin(f, K) is a singleton (i.e. its solution is unique), then (P) is Tykhonov well-posed. In other words, when argmin(f, K) is a singleton the previous definition reduces to the classical notion of Tykhonov well-posedness: the problem (P) is Tykhonov well-posed if and only if it is Tykhonov well-posed in the generalized sense and argmin(f, K) is a singleton; thus generalized well-posedness is really a generalization of Tykhonov well-posedness.
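A numeric sketch of the generalized notion (illustrative, with an objective of our own choosing): f(x) = (x² − 1)² has the two global minimizers ±1, so its solution set is not a singleton, and an alternating minimizing sequence converges only along subsequences, each towards an element of argmin(f, ℝ).

```python
def f(x):
    # Two global minimizers, x = -1 and x = 1, both with value 0:
    # argmin(f, R) = {-1, 1} is not a singleton.
    return (x * x - 1.0) ** 2

# Alternating minimizing sequence: f(x_n) -> 0, but (x_n) itself does not
# converge; its positive and negative subsequences converge to +1 and -1.
xs = [(-1.0) ** n * (1.0 + 1.0 / (n + 1)) for n in range(1, 200)]
pos_subseq = [x for x in xs if x > 0]   # tends to +1
neg_subseq = [x for x in xs if x < 0]   # tends to -1
```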
In order to weaken the requirement of uniqueness of the solution, other more general notions of well-posedness have been introduced, depending on the hypotheses made on f (and K). Here we recall the concept of well-setness introduced in  .
Definition 4.2: Problem (P) is said to be well-set when, for every minimizing sequence (x_n), d(x_n, argmin(f, K)) → 0, where argmin(f, K) denotes the set of solutions of problem (P) and d(x, S) is the distance of the point x from the set S.
The idea of exploiting the behaviour of the minimizing sequences was also used by different authors to extend this concept to strengthened notions. These notions are not suitable for numerical methods in which the function f is approximated by a family or a sequence of functions; for this reason, new notions of well-posedness have been introduced and studied. Before that, however, we consider two generalizations of the notion of minimizing sequence.
The first was introduced and studied in  : a new notion of well-posedness that strengthened Tykhonov's concept, as it required the convergence to the optimal solution of each sequence belonging to a larger set of minimizing sequences. Levitin-Polyak well-posedness has been investigated intensively in the literature; see, for instance,     . Konsulova and Revalski  studied Levitin-Polyak well-posedness for convex scalar optimization problems with functional constraints, while, recently,  generalized the results of Konsulova and Revalski  to nonconvex optimization problems with abstract and functional constraints.
The well-posedness of the minimization problem (P) in the sense of Tykhonov concerns the behaviour of the function f on the set K, but it does not take into account the behaviour of f outside K  . Of course, one often comes across minimizing sequences that do not necessarily lie in K, and one wants to control the behaviour of these minimizing sequences as well. Levitin and Polyak in  considered this kind of sequence.
Definition 4.3: Let K be a nonempty subset of ℝⁿ. A sequence (x_n) ⊆ ℝⁿ is a Levitin-Polyak minimizing sequence for the minimization problem (P) if f(x_n) → inf_K f and d(x_n, K) → 0, where d(x_n, K) = inf_{y∈K} ‖x_n − y‖ is the distance from the point x_n to the set K and ‖·‖ is the Euclidean norm. In other words, a sequence (x_n) is a Levitin-Polyak minimizing sequence for (P) if not only f(x_n) approaches the greatest lower bound of f over K, but also the sequence (x_n) tends to K.
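A small numeric illustration (our own example): minimize f(x) = x over K = [0, 1]. The sequence x_n = −1/n lies entirely outside K, yet f(x_n) → inf_K f = 0 and d(x_n, K) → 0, so it is Levitin-Polyak minimizing although none of its terms belongs to K.

```python
def f(x):
    # Objective f(x) = x, to be minimized over K = [0, 1]; inf_K f = 0 at x_bar = 0.
    return x

def dist_to_K(x):
    # Distance from the point x to the interval K = [0, 1].
    return max(0.0, -x, x - 1.0)

# A Levitin-Polyak minimizing sequence lying outside K:
xs = [-1.0 / n for n in range(1, 101)]
```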
Then, the well-posedness concept can be strengthened as follows:
Definition 4.4: The minimization problem (P) is called Levitin-Polyak well-posed if it has a unique solution x̄ and, moreover, every Levitin-Polyak minimizing sequence for (P) converges to x̄.
Of course, this definition is stronger than that of Tykhonov, since it requires that each sequence belonging to a larger set of minimizing sequences converges to the unique solution; namely, Levitin-Polyak well-posedness implies Tykhonov well-posedness.
The converse is true provided that f is uniformly continuous, but it is not necessarily true if f is only continuous: it is enough to consider a suitable continuous function together with a generalized minimizing sequence lying outside K.
As Tykhonov well-posedness can be characterized by the behaviour of the sets ε-argmin(f, K), so Levitin-Polyak well-posedness can be characterized by the behaviour of the set

{ x ∈ ℝⁿ : f(x) ≤ inf_K f + ε, d(x, K) ≤ ε },

for ε > 0 and f bounded from below on K. In analogy with Theorem 3.1, the following result holds  :
Theorem 4.2: If K is closed and f is lower semicontinuous and bounded from below on K, then the condition that the diameter of the above set tends to 0 as ε → 0⁺ implies Levitin-Polyak well-posedness of (P).
A second generalization of the usual notion of minimizing sequence is the following:
Definition 4.5: A sequence (x_n) ⊆ ℝⁿ is said to be a generalized minimizing sequence for the minimization problem (P) if both of the following are fulfilled: d(x_n, K) → 0 and lim sup_n f(x_n) ≤ inf_K f.
Consequently, another strengthened version of the well-posedness is the following:
Definition 4.6: The minimization problem (P) is said to be strongly well-posed if it has a unique solution x̄ and, moreover, every generalized minimizing sequence for (P) converges to x̄.
Obviously, in general, strong well-posedness of the problem (P) implies Levitin-Polyak well-posedness, which in its turn implies Tykhonov well-posedness. It is important to underline that each of the previous definitions, widely studied in many papers    , is based on the behaviour of a certain set of minimizing sequences.
The corresponding generalization of Levitin-Polyak well-posedness to the case of non-uniqueness of the solution, i.e. when the uniqueness of the solution is dropped, is:
Definition 4.7: The minimization problem (P) is called generalized Levitin-Polyak well-posed if every Levitin-Polyak minimizing sequence (x_n) has a subsequence converging to a solution of (P).
Of course, any of the notions of generalized well-posedness to which the uniqueness of the solution is added is equivalent to the corresponding non-generalized notion.
5. Well-Posedness of Vector Optimization Problems
In scalar optimization, the different notions of well-posedness are based either on the behaviour of “appropriate” minimizing sequences or on the dependence of the optimal solution on the data of the optimization problem. In vector optimization, instead, there is no commonly accepted definition of well-posedness, but rather several different notions of well-posedness of vector optimization problems. For a detailed survey on these problems it is possible to refer to      .
In this section, we present some of these definitions of well-posedness for a vector optimization problem; in particular, among the various vector well-posedness notions known in the literature, attention is focused on the concept of pointwise well-posedness, introduced in  .
We consider the vector optimization problem:

(VP):   min_C f(x) subject to x ∈ K,

where K is a nonempty, closed, convex subset of ℝⁿ, f : ℝⁿ → ℝᵐ is a continuous function and C ⊆ ℝᵐ is a closed, convex, pointed cone with nonempty interior. Denote by int C the interior of C.
A point x̄ ∈ K is said to be an efficient solution or minimal solution of problem (VP) if there is no x ∈ K such that f(x̄) − f(x) ∈ C \ {0}. If, in the above definition, instead of the cone C the cone {0} ∪ int C is used, x̄ is said to be a weak minimal solution; that is, a point x̄ ∈ K is said to be a weakly efficient solution or weak minimal solution of problem (VP) if there is no x ∈ K such that f(x̄) − f(x) ∈ int C. The set of all efficient solutions (minimal solutions) of problem (VP) is denoted by Eff(f, K), while WEff(f, K) denotes the set of weakly efficient solutions (weak minimal solutions) of (VP). Moreover, every minimal solution is also a weak minimal solution, but the converse is not generally true.
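For a finite image set and the ordering cone C = ℝ²₊, the efficient points can be computed directly; a minimal sketch (the function names are ours, not the source's):

```python
def dominates(a, b):
    # a dominates b w.r.t. C = R^2_+, i.e. b - a lies in C \ {0}:
    # a <= b componentwise, with strict inequality in some coordinate.
    return a[0] <= b[0] and a[1] <= b[1] and a != b

def efficient(points):
    # Efficient (minimal) points: those dominated by no other point.
    return [p for p in points if not any(dominates(q, p) for q in points)]

pts = [(1.0, 3.0), (2.0, 2.0), (3.0, 1.0), (2.0, 3.0), (3.0, 3.0)]
eff = efficient(pts)
```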
In this section we recall a notion of well-posedness that considers a single point (a fixed efficient solution) and not the whole solution set: a particular type of pointwise well-posedness and of strong pointwise well-posedness for vector optimization problems. This definition can be introduced by considering, as in the scalar case, the diameter of the level sets of the function f.
Generalizing Tykhonov's definition of well-posedness for a scalar optimization problem, in  the notions of well-posedness and of strong well-posedness of the vector optimization problem (VP) at a point x̄ are introduced, and some conditions guaranteeing well-posedness according to these definitions are also provided.
Definition 5.1: The vector optimization problem (VP) is said to be pointwise well-posed at the efficient solution x̄, or Tykhonov well-posed at x̄, if
Definition 5.2: The vector optimization problem (VP) is said to be strongly pointwise well-posed at the efficient solution x̄, or Tykhonov strongly well-posed at x̄, if