Some Convexificators-Based Optimality Conditions for Nonsmooth Mathematical Program with Vanishing Constraints
Abstract: In this paper, by using the notion of convexificator, we introduce the generalized standard Abadie constraint qualification and the generalized MPVC Abadie constraint qualification, and define generalized stationarity conditions for the nonsmooth mathematical program with vanishing constraints (MPVC for short). We show that generalized strong stationarity is a first-order necessary optimality condition for the nonsmooth MPVC under the generalized standard Abadie constraint qualification. Sufficient conditions for global or local optimality of the nonsmooth MPVC are also derived under some generalized convexity assumptions.

1. Introduction

We consider the following mathematical program with vanishing constraints

$\begin{array}{ll}\mathrm{min}\hfill & f\left(x\right)\hfill \\ \text{s}\text{.t}\text{.}\hfill & {g}_{i}\left(x\right)\le 0,\text{\hspace{0.17em}}i=1,2,\cdots ,m;\hfill \\ \hfill & {h}_{j}\left(x\right)=0,\text{\hspace{0.17em}}j=1,2,\cdots ,p;\hfill \\ \hfill & {H}_{i}\left(x\right)\ge 0,\text{\hspace{0.17em}}i=1,2,\cdots ,l;\hfill \\ \hfill & {G}_{i}\left(x\right){H}_{i}\left(x\right)\le 0,\text{\hspace{0.17em}}i=1,2,\cdots ,l,\hfill \end{array}$ (1.1)

where $f:{R}^{n}\to R,\text{\hspace{0.17em}}g:{R}^{n}\to {R}^{m},\text{\hspace{0.17em}}h:{R}^{n}\to {R}^{p}$ and $G,H:{R}^{n}\to {R}^{l}$ are given functions.

The MPVC was first introduced by Achtziger and Kanzow [1], originating from topology design problems in mechanical structures [1]. Recent research shows that the robot motion planning problem [2], the economic dispatch problem [3] and nonlinear integer optimal control [4] [5] can all be transformed into an MPVC. As pointed out in [1], the major difficulty in solving problem (1.1) is that most of the standard constraint qualifications, including the linear independence constraint qualification (LICQ for short) and the Mangasarian-Fromovitz constraint qualification (MFCQ for short), fail at any interesting feasible point, so that standard optimization methods are likely to fail for this problem. The MPVC can be formulated as a mathematical program with equilibrium constraints (MPEC for short) and vice versa. However, this formulation has certain disadvantages, such as the introduction of additional solutions and the increase in dimension. In [6], the MPVC is reformulated as a nonsmooth MPEC, but the reformulation violates the MPEC-type constraint qualifications, which causes trouble when solving the MPEC formulation by suitable algorithms. These observations motivate us to treat the MPVC as an independent class of interesting optimization problems. The MPVC has attracted much attention in recent years; several theoretical properties and various numerical approaches can be found in [1] - [25].

It is well known that the convexificator is one of the important tools of nonsmooth analysis. The concept was first introduced by Demyanov [26] in 1994 as a generalization of the notion of upper convex and lower concave approximations, and it can be viewed as a weaker version of the notion of subdifferential. Indeed, a convexificator is in general only a closed set, unlike the well-known subdifferentials, which are convex and compact sets. Moreover, for a locally Lipschitz function, most known subdifferentials are convexificators, and these subdifferentials may strictly contain the convex hull of a convexificator [27]. Therefore, from the viewpoint of optimization and applications, optimality conditions stated in terms of convexificators are sharper than those using the Clarke or Michel-Penot subdifferentials. Convexificators were further studied by Demyanov and Jeyakumar [28], Jeyakumar and Luc [27], Dutta and Chandra [29] [30], among others. Recently, the notion of convexificators has been used to extend various results in nonsmooth analysis; see, e.g., [31] [32] [33] [34]. For nonsmooth optimization problems, various convexificator-based results concerning Fritz-John type and Karush-Kuhn-Tucker type necessary optimality conditions have been developed in [32] [33] [34] [35] [36]. Very recently, Ansari, Movahedian and Nobakhtian [37] dealt with constraint qualifications, stationarity concepts and optimality conditions for a nonsmooth mathematical program with equilibrium constraints by using the notion of convexificators. However, corresponding results for the nonsmooth mathematical program with vanishing constraints are still very few. Recently, based on the Clarke subdifferential, Kazemi and Kanzi [21] studied a broad class of mathematical programs with non-differentiable vanishing constraints. They first proposed various qualification conditions for this problem, and then applied these constraint qualifications to obtain, under different assumptions, several stationarity conditions of Karush-Kuhn-Tucker type.

In this paper, in contrast to [21], by utilizing the concept of convexificator, which is weaker than the Clarke subdifferential, we introduce the generalized standard Abadie constraint qualification and the generalized MPVC Abadie constraint qualification, and define generalized stationarity conditions for the nonsmooth MPVC. We derive necessary and sufficient optimality conditions for the nonsmooth MPVC under the generalized standard Abadie constraint qualification and some generalized convexity assumptions.

The rest of the paper is organized as follows. Section 2 contains the preliminaries and basic definitions used in the sequel. In Section 3, some necessary and sufficient optimality conditions are derived for the nonsmooth MPVC based on the notion of convexificators. We close with some final remarks.

2. Preliminaries

In this section, we will give some basic definitions, which will be used in the sequel.

Let $S\subseteq {R}^{n}$ be a nonempty subset containing the origin. The convex hull of S, the closure of S and the convex cone generated by S are denoted by $coS$, $clS$ and $coneS$, respectively. The negative polar cone of S is defined by ${S}^{-}=\left\{v\in {R}^{n}:〈x,v〉\le 0,\text{\hspace{0.17em}}\forall x\in S\right\}$.
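For a finite set S, membership in the negative polar cone reduces to finitely many inner-product checks. The following minimal sketch illustrates this (the helper name is ours, not from the paper; for an infinite S it would only test a sample of generators):

```python
def in_negative_polar(v, S):
    """Check v ∈ S⁻ = {v : <x, v> <= 0 for all x in S} for a finite set S
    of vectors, given as tuples of equal length."""
    return all(sum(xi * vi for xi, vi in zip(x, v)) <= 0 for x in S)

# The negative polar of the generators of the first orthant is the
# non-positive orthant:
S = [(1.0, 0.0), (0.0, 1.0)]
print(in_negative_polar((-1.0, -2.0), S))  # True: both inner products <= 0
print(in_negative_polar((1.0, -2.0), S))   # False: <(1,0), (1,-2)> = 1 > 0
```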

Let $x\in clS$. The contingent cone $T\left(x,S\right)$ to S at x is defined by

$T\left(x,S\right)=\left\{v\in {R}^{n}|\exists {t}_{n}↓0,\text{\hspace{0.17em}}\exists {v}_{n}\to v,\text{\hspace{0.17em}}\text{s}\text{.t}\text{.}\text{\hspace{0.17em}}x+{t}_{n}{v}_{n}\in S\right\}.$

Let $f:{R}^{n}\to R\cup \left\{+\infty \right\}$ be an extended real valued function. The lower and upper Dini directional derivatives of f at x in the direction v are defined, respectively, by

${f}^{-}\left(x,v\right)=\underset{t\to {0}^{+}}{\mathrm{lim}\mathrm{inf}}\frac{f\left(x+tv\right)-f\left(x\right)}{t}$

and

${f}^{+}\left(x,v\right)=\underset{t\to {0}^{+}}{\mathrm{lim}\mathrm{sup}}\frac{f\left(x+tv\right)-f\left(x\right)}{t}.$
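The gap between the lower and upper Dini derivatives can be observed numerically. The sketch below (our own illustration, not part of the paper) estimates both quotient limits for $f(x)=x\sin(\ln|x|)$ at $x=0$ in the direction $v=1$, where the difference quotient $\sin(\ln t)$ oscillates, so the lower and upper Dini derivatives equal $-1$ and $+1$:

```python
import math

def dini(f, x, v, ts):
    """Crudely approximate the lower/upper Dini directional derivatives of f
    at x along v by taking min/max of difference quotients over a grid of
    step sizes t -> 0+ (a numerical sketch, not a proof)."""
    q = [(f(x + t * v) - f(x)) / t for t in ts]
    return min(q), max(q)

# f(x) = x*sin(ln|x|), f(0) = 0: the quotient at 0 is sin(ln t).
f = lambda x: x * math.sin(math.log(abs(x))) if x != 0 else 0.0
ts = [10 ** (-k / 50) for k in range(200, 1200)]  # t from 1e-4 down to ~1e-24
lo, hi = dini(f, 0.0, 1.0, ts)
print(round(lo, 2), round(hi, 2))  # close to -1.0 and 1.0
```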

Definition 2.1 [27] A function $f:{R}^{n}\to R\cup \left\{+\infty \right\}$ is said to admit an upper convexificator, ${\partial }^{*}f\left(x\right)$ at $x\in {R}^{n}$ if ${\partial }^{*}f\left(x\right)$ is a closed set and for every $v\in {R}^{n}$ ,

${f}^{-}\left(x,v\right)\le \underset{\xi \in {\partial }^{*}f\left(x\right)}{\mathrm{sup}}〈\xi ,v〉.$

Definition 2.2 [27] A function $f:{R}^{n}\to R\cup \left\{+\infty \right\}$ is said to admit a lower convexificator, ${\partial }_{*}f\left(x\right)$ at $x\in {R}^{n}$ if ${\partial }_{*}f\left(x\right)$ is a closed set and for every $v\in {R}^{n}$ ,

${f}^{+}\left(x,v\right)\ge \underset{\xi \in {\partial }_{*}f\left(x\right)}{\mathrm{inf}}〈\xi ,v〉.$

A closed set ${\partial }^{*}f\left(x\right)\subset {R}^{n}$ is said to be a convexificator of f at x if it is both an upper and a lower convexificator of f at x.

Definition 2.3 [30] A function $f:{R}^{n}\to R\cup \left\{+\infty \right\}$ is said to admit an upper semi-regular convexificator, ${\partial }^{*}f\left(x\right)$ at $x\in {R}^{n}$ if ${\partial }^{*}f\left(x\right)$ is a closed set and for every $v\in {R}^{n}$ ,

${f}^{+}\left(x,v\right)\le \underset{\xi \in {\partial }^{*}f\left(x\right)}{\mathrm{sup}}〈\xi ,v〉.$

If equality holds in the above inequality, then ${\partial }^{*}f\left(x\right)$ is called an upper regular convexificator of f at x.

Definition 2.4 [30] Let $f:{R}^{n}\to R\cup \left\{+\infty \right\}$ be an extended real valued function that has an upper semi-regular convexificator at $x\in {R}^{n}$. Then f is said to be

(i) ${\partial }^{*}$ convex at $\stackrel{¯}{x}$ if for every $x\in {R}^{n}$, $f\left(x\right)\ge f\left(\stackrel{¯}{x}\right)+〈\xi ,x-\stackrel{¯}{x}〉,\text{\hspace{0.17em}}\forall \xi \in {\partial }^{*}f\left(\stackrel{¯}{x}\right)$.

(ii) ${\partial }^{*}$ pseudoconvex at $\stackrel{¯}{x}$ if for every $x\in {R}^{n}$, $f\left(x\right)<f\left(\stackrel{¯}{x}\right)⇒〈\xi ,x-\stackrel{¯}{x}〉<0,\text{\hspace{0.17em}}\forall \xi \in {\partial }^{*}f\left(\stackrel{¯}{x}\right).$

(iii) ${\partial }^{*}$ quasiconvex at $\stackrel{¯}{x}$ if for every $x\in {R}^{n}$, $f\left(x\right)\le f\left(\stackrel{¯}{x}\right)⇒〈\xi ,x-\stackrel{¯}{x}〉\le 0,\text{\hspace{0.17em}}\forall \xi \in {\partial }^{*}f\left(\stackrel{¯}{x}\right).$

3. Optimality Conditions for Nonsmooth Mathematical Program with Vanishing Constraints

In this section, we will develop several optimality conditions for the nonsmooth MPVC in terms of the concept of convexificator. It is worth mentioning that, since the upper convexificator is not necessarily unique, all the new definitions given in this section depend on the choice of the convexificator.

First, we introduce some notations. For problem (1.1), we denote the feasible region by X, that is,

$X=\left\{x\in {R}^{n}|g\left(x\right)\le 0,h\left(x\right)=0,\text{\hspace{0.17em}}{H}_{i}\left(x\right)\ge 0,\text{\hspace{0.17em}}{G}_{i}\left(x\right){H}_{i}\left(x\right)\le 0,\text{\hspace{0.17em}}i=1,2,\cdots ,l\right\}.$

For $\stackrel{¯}{x}\in X$, we define the following index sets:

${I}_{g}=\left\{i|{g}_{i}\left(\stackrel{¯}{x}\right)=0\right\},$

${I}_{+0}=\left\{i|{H}_{i}\left(\stackrel{¯}{x}\right)>0,{G}_{i}\left(\stackrel{¯}{x}\right)=0\right\},$

${I}_{+-}=\left\{i|{H}_{i}\left(\stackrel{¯}{x}\right)>0,{G}_{i}\left(\stackrel{¯}{x}\right)<0\right\},$

${I}_{0+}=\left\{i|{H}_{i}\left(\stackrel{¯}{x}\right)=0,{G}_{i}\left(\stackrel{¯}{x}\right)>0\right\},$

${I}_{0-}=\left\{i|{H}_{i}\left(\stackrel{¯}{x}\right)=0,{G}_{i}\left(\stackrel{¯}{x}\right)<0\right\},$

${I}_{00}=\left\{i|{H}_{i}\left(\stackrel{¯}{x}\right)=0,{G}_{i}\left(\stackrel{¯}{x}\right)=0\right\}.$
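The partition above depends only on the signs of ${H}_{i}\left(\stackrel{¯}{x}\right)$ and ${G}_{i}\left(\stackrel{¯}{x}\right)$, so it can be computed directly. A small helper of our own (not part of the paper; indices are 0-based and an absolute tolerance stands in for exact zeros):

```python
def mpvc_index_sets(G_vals, H_vals, tol=1e-10):
    """Partition the indices i = 0, ..., l-1 of the vanishing constraints at
    a feasible point according to the signs of H_i and G_i there."""
    sets = {"I+0": [], "I+-": [], "I0+": [], "I0-": [], "I00": []}
    for i, (g, h) in enumerate(zip(G_vals, H_vals)):
        if h > tol:
            # feasibility G_i * H_i <= 0 forces G_i <= 0 whenever H_i > 0
            sets["I+0" if abs(g) <= tol else "I+-"].append(i)
        elif g > tol:
            sets["I0+"].append(i)
        elif g < -tol:
            sets["I0-"].append(i)
        else:
            sets["I00"].append(i)
    return sets

print(mpvc_index_sets([0.0, -1.0, 2.0, -3.0, 0.0], [1.0, 2.0, 0.0, 0.0, 0.0]))
# {'I+0': [0], 'I+-': [1], 'I0+': [2], 'I0-': [3], 'I00': [4]}
```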

Now, we assume that all the functions have an upper convexificator at $\stackrel{¯}{x}$. For the above index sets, we introduce the following notations:

$g=\underset{i\in {I}_{g}}{\cup }\text{ }\text{ }co{\partial }^{*}{g}_{i}\left(\stackrel{¯}{x}\right),$

$h=\underset{i=1}{\overset{p}{\cup }}\text{ }\text{ }co{\partial }^{*}{h}_{i}\left(\stackrel{¯}{x}\right)\cup co{\partial }^{*}\left(-{h}_{i}\right)\left(\stackrel{¯}{x}\right),$

${G}_{{I}_{+0}}=\underset{i\in {I}_{+0}}{\cup }\text{ }\text{ }co{\partial }^{*}{G}_{i}\left(\stackrel{¯}{x}\right),$

${H}_{{I}_{0+}}=\underset{i\in {I}_{0+}}{\cup }\text{ }\text{ }co{\partial }^{*}{H}_{i}\left(\stackrel{¯}{x}\right)\cup co{\partial }^{*}\left(-{H}_{i}\right)\left(\stackrel{¯}{x}\right),$

${H}_{{I}_{0-}}=\underset{i\in {I}_{0-}}{\cup }\text{ }\text{ }co{\partial }^{*}\left(-{H}_{i}\right)\left(\stackrel{¯}{x}\right),$

${H}_{{I}_{00}}=\underset{i\in {I}_{00}}{\cup }\text{ }\text{ }co{\partial }^{*}\left(-{H}_{i}\right)\left(\stackrel{¯}{x}\right),$

${\left(GH\right)}_{{I}_{00}}=\underset{i\in {I}_{00}}{\cup }\text{ }\text{ }co{\partial }^{*}{G}_{i}\left(\stackrel{¯}{x}\right)\cup co{\partial }^{*}{H}_{i}\left(\stackrel{¯}{x}\right),$

$\Gamma \left(\stackrel{¯}{x}\right)={g}^{-}\cap {h}^{-}\cap {G}_{{I}_{+0}}^{-}\cap {H}_{{I}_{0+}}^{-}\cap {H}_{{I}_{0-}}^{-}\cap {H}_{{I}_{00}}^{-},$

$\Lambda \left(\stackrel{¯}{x}\right)={g}^{-}\cap {h}^{-}\cap {G}_{{I}_{+0}}^{-}\cap {H}_{{I}_{0+}}^{-}\cap {H}_{{I}_{0-}}^{-}\cap {H}_{{I}_{00}}^{-}\cap {\left(GH\right)}_{{I}_{00}}^{-}.$

Utilizing the above notations and motivated by [37], we are ready to introduce Abadie-type constraint qualifications in terms of convexificators, which are crucial for establishing the optimality conditions.

Definition 3.1. Let $\stackrel{¯}{x}\in X$, and assume that all of the functions have an upper convexificator at $\stackrel{¯}{x}$. We say that the generalized standard Abadie constraint qualification (GS-ACQ for short) holds at $\stackrel{¯}{x}$ if at least one of the dual sets used in the definition of $\Gamma \left(\stackrel{¯}{x}\right)$ contains a nonzero element and $\Gamma \left(\stackrel{¯}{x}\right)\subset T\left(\stackrel{¯}{x},X\right)$.

Definition 3.2. Let $\stackrel{¯}{x}\in X$, and assume that all of the functions have an upper convexificator at $\stackrel{¯}{x}$. We say that the generalized MPVC Abadie constraint qualification (GMPVC-ACQ for short) holds at $\stackrel{¯}{x}$ if at least one of the dual sets used in the definition of $\Lambda \left(\stackrel{¯}{x}\right)$ contains a nonzero element and $\Lambda \left(\stackrel{¯}{x}\right)\subset T\left(\stackrel{¯}{x},X\right)$.

Remark 3.1. Since $\Lambda \left(\stackrel{¯}{x}\right)\subset \Gamma \left(\stackrel{¯}{x}\right)$ , the GS ACQ implies the GMPVC ACQ.

Following the above preparations, we now formulate several extended versions of the stationarity concepts for MPVC in the context of convexificators.

Definition 3.3. A feasible point $\stackrel{¯}{x}$ of MPVC is called a generalized weakly stationary point (GW stationary point for short) if there are vectors $\lambda =\left({\lambda }^{g},{\lambda }^{h},{\lambda }^{H}\right)\in {R}^{m+p+l}$ and $\mu =\left({\mu }^{h},{\mu }^{G},{\mu }^{H}\right)\in {R}^{p+2l}$ such that the following conditions hold:

$\begin{array}{l}0\in co{\partial }^{*}f\left(\stackrel{¯}{x}\right)+\underset{i\in {I}_{g}}{\sum }\text{ }\text{ }{\lambda }_{i}^{g}co{\partial }^{*}{g}_{i}\left(\stackrel{¯}{x}\right)+\underset{j=1}{\overset{p}{\sum }}\left[{\lambda }_{j}^{h}co{\partial }^{*}{h}_{j}\left(\stackrel{¯}{x}\right)+{\mu }_{j}^{h}co{\partial }^{*}\left(-{h}_{j}\right)\left(\stackrel{¯}{x}\right)\right]\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}+\underset{i=1}{\overset{l}{\sum }}\text{ }\text{ }{\lambda }_{i}^{H}co{\partial }^{*}\left(-{H}_{i}\right)\left(\stackrel{¯}{x}\right)+\underset{i=1}{\overset{l}{\sum }}\left[{\mu }_{i}^{G}co{\partial }^{*}{G}_{i}\left(\stackrel{¯}{x}\right)+{\mu }_{i}^{H}co{\partial }^{*}{H}_{i}\left(\stackrel{¯}{x}\right)\right],\end{array}$ (3.1)

${\lambda }_{{I}_{g}}^{g}\ge 0,\text{\hspace{0.17em}}{\lambda }_{j}^{h},{\mu }_{j}^{h}\ge 0,\text{\hspace{0.17em}}j=1,2,3,\cdots ,p,\text{\hspace{0.17em}}{\lambda }_{i}^{H},\text{\hspace{0.17em}}{\mu }_{i}^{G},\text{\hspace{0.17em}}{\mu }_{i}^{H}\ge 0,\text{\hspace{0.17em}}i=1,2,3,\cdots ,l.$ (3.2)

${\lambda }_{{I}_{+0}\cup {I}_{+-}}^{H}=0,\text{\hspace{0.17em}}{\mu }_{{I}_{+0}\cup {I}_{+-}}^{H}=0,\text{\hspace{0.17em}}{\mu }_{{I}_{0+}\cup {I}_{+-}\cup {I}_{0-}}^{G}=0,\text{\hspace{0.17em}}{\lambda }_{i}^{H}-{\mu }_{i}^{H}\ge 0,\text{\hspace{0.17em}}i\in {I}_{0-}.$ (3.3)

Definition 3.4. A feasible point $\stackrel{¯}{x}$ of MPVC is called a generalized T stationary point (GT stationary point for short) if there are vectors $\lambda =\left({\lambda }^{g},{\lambda }^{h},{\lambda }^{H}\right)\in {R}^{m+p+l}$ and $\mu =\left({\mu }^{h},{\mu }^{G},{\mu }^{H}\right)\in {R}^{p+2l}$ such that (3.1)-(3.3) and the following condition hold:

$\forall i\in {I}_{00},\text{\hspace{0.17em}}{\mu }_{i}^{G}\left({\lambda }_{i}^{H}-{\mu }_{i}^{H}\right)\le 0.$

Definition 3.5. A feasible point $\stackrel{¯}{x}$ of MPVC is called a generalized M stationary point (GM stationary point for short) if there are vectors $\lambda =\left({\lambda }^{g},{\lambda }^{h},{\lambda }^{H}\right)\in {R}^{m+p+l}$ and $\mu =\left({\mu }^{h},{\mu }^{G},{\mu }^{H}\right)\in {R}^{p+2l}$ such that (3.1)-(3.3) and the following condition hold:

$\forall i\in {I}_{00},\text{\hspace{0.17em}}{\mu }_{i}^{G}\left({\lambda }_{i}^{H}-{\mu }_{i}^{H}\right)=0.$

Definition 3.6. A feasible point $\stackrel{¯}{x}$ of MPVC is called a generalized S stationary point (GS stationary point for short) if there are vectors $\lambda =\left({\lambda }^{g},{\lambda }^{h},{\lambda }^{H}\right)\in {R}^{m+p+l}$ and $\mu =\left({\mu }^{h},{\mu }^{G},{\mu }^{H}\right)\in {R}^{p+2l}$ such that (3.1)-(3.3) and the following condition hold:

$\forall i\in {I}_{00},\text{\hspace{0.17em}}{\mu }_{i}^{G}=0,\text{\hspace{0.17em}}\left({\lambda }_{i}^{H}-{\mu }_{i}^{H}\right)\ge 0.$

Remark 3.2. If all the functions are differentiable, then these notions reduce to the stationarity concepts defined in [25]. Directly from the definitions, we get the following relationships between these stationarity concepts.

$GS\text{-}stationary⇒GM\text{-}stationary⇒GT\text{-}stationary⇒GW\text{-}stationary.$
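The four concepts differ only in the extra conditions imposed on the multipliers for $i\in {I}_{00}$, so the hierarchy can be checked mechanically. A small helper of our own (not from the paper), which assumes (3.1)-(3.3) already hold and evaluates only the additional ${I}_{00}$ conditions:

```python
def i00_stationarity(muG, lamH, muH):
    """Given the I00-components of the multipliers (lists of equal length),
    report which of the extra GW/GT/GM/GS conditions they satisfy."""
    d = [l - m for l, m in zip(lamH, muH)]  # d_i = lambda_i^H - mu_i^H
    return {
        "GW": True,  # GW imposes no extra condition on I00
        "GT": all(g * di <= 0 for g, di in zip(muG, d)),
        "GM": all(g * di == 0 for g, di in zip(muG, d)),
        "GS": all(g == 0 and di >= 0 for g, di in zip(muG, d)),
    }

flags = i00_stationarity(muG=[0.0], lamH=[2.0], muH=[1.0])
print(flags)  # GS holds here, hence GM, GT and GW hold as well
```

Evaluating instead `i00_stationarity(muG=[1.0], lamH=[1.0], muH=[2.0])` gives a point that is GT but not GM, matching the strictness of the chain of implications.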

On the other hand, it is obvious that the above stationarity concepts include the Karush-Kuhn-Tucker type concepts proposed by Kazemi et al. in [21] as a special case.

In the rest of this section, we focus our attention on necessary and sufficient optimality conditions for the nonsmooth MPVC within the framework of convexificators. The following theorem is the first main result of this paper. Note that it is proved under very weak assumptions: only the objective function is assumed to be locally Lipschitz, while the other functions need not satisfy any continuity assumption.

Theorem 3.1. Let $\stackrel{¯}{x}$ be a local optimal solution of MPVC (1.1). Suppose that f is locally Lipschitz at $\stackrel{¯}{x}$ and admits a bounded upper semi-regular convexificator ${\partial }^{*}f\left(\stackrel{¯}{x}\right)$. Assume that the GS-ACQ holds at $\stackrel{¯}{x}$ and that the cone

$\begin{array}{c}K=cone\text{\hspace{0.17em}}co\text{\hspace{0.17em}}g+cone\text{\hspace{0.17em}}co\text{\hspace{0.17em}}h+cone\text{\hspace{0.17em}}co\text{\hspace{0.17em}}{H}_{{I}_{0+}}+cone\text{\hspace{0.17em}}co\text{\hspace{0.17em}}{H}_{{I}_{0-}}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+cone\text{\hspace{0.17em}}co\text{\hspace{0.17em}}{G}_{{I}_{+0}}+cone\text{\hspace{0.17em}}co\text{\hspace{0.17em}}{H}_{{I}_{00}}\end{array}$ (3.4)

is closed. Then $\stackrel{¯}{x}$ is a GS stationary point.

Proof. First, we prove that

$0\in co\text{\hspace{0.17em}}{\partial }^{*}f\left(\stackrel{¯}{x}\right)+K.$ (3.5)

Assume that (3.5) does not hold; then $co\text{\hspace{0.17em}}{\partial }^{*}f\left(\stackrel{¯}{x}\right)\cap \left(-K\right)=\varnothing$. Since ${\partial }^{*}f\left(\stackrel{¯}{x}\right)$ is a bounded upper semi-regular convexificator, $co\text{\hspace{0.17em}}{\partial }^{*}f\left(\stackrel{¯}{x}\right)$ is compact and convex. Since K is a closed convex cone, the convex separation theorem yields a nonzero vector $v\in {R}^{n}$ and a real number $\rho \in R$ satisfying

$\mathrm{sup}〈\xi ,v〉<\rho <\mathrm{inf}〈\eta ,v〉,\text{\hspace{0.17em}}\forall \xi \in co\text{\hspace{0.17em}}{\partial }^{*}f\left(\stackrel{¯}{x}\right),\text{\hspace{0.17em}}\forall \eta \in -K.$ (*)

Since $-K$ is a cone containing the origin, the infimum on the right-hand side of (*) equals zero; hence $\rho <0$ and

$\mathrm{sup}〈\xi ,v〉<0,\text{\hspace{0.17em}}\forall \xi \in co\text{\hspace{0.17em}}{\partial }^{*}f\left(\stackrel{¯}{x}\right).$ (3.6)

By the definition of the upper semi-regular convexificator and (3.6), one gets ${f}^{+}\left(\stackrel{¯}{x};v\right)<0$. Hence, there is a $\delta >0$ such that

$f\left(\stackrel{¯}{x}+tv\right)<f\left(\stackrel{¯}{x}\right),\text{\hspace{0.17em}}\forall t\in \left(0,\delta \right).$ (3.7)

On the other hand, since $-K$ is a cone, the infimum of $〈\eta ,v〉$ over $\eta \in -K$ in (*) equals zero, so that $〈\eta ,v〉\ge 0$ for all $\eta \in -K$; equivalently,

$〈\zeta ,v〉\le 0,\text{\hspace{0.17em}}\forall \zeta \in K.$

This implies that

$\begin{array}{l}〈{\zeta }_{i}^{1},v〉\le 0,\text{\hspace{0.17em}}\forall {\zeta }_{i}^{1}\in co\text{\hspace{0.17em}}{\partial }^{*}{g}_{i}\left(\stackrel{¯}{x}\right),\text{\hspace{0.17em}}\forall i\in {I}_{g},\\ 〈{\zeta }_{i}^{2},v〉\le 0,\text{\hspace{0.17em}}\forall {\zeta }_{i}^{2}\in co\text{\hspace{0.17em}}{\partial }^{*}{h}_{i}\left(\stackrel{¯}{x}\right)\cup co\text{\hspace{0.17em}}{\partial }^{*}\left(-{h}_{i}\right)\left(\stackrel{¯}{x}\right),\text{\hspace{0.17em}}\forall i=1,2,3,\cdots ,p,\\ 〈{\zeta }_{i}^{3},v〉\le 0,\text{\hspace{0.17em}}\forall {\zeta }_{i}^{3}\in co\text{\hspace{0.17em}}{\partial }^{*}{H}_{i}\left(\stackrel{¯}{x}\right)\cup co\text{\hspace{0.17em}}{\partial }^{*}\left(-{H}_{i}\right)\left(\stackrel{¯}{x}\right),\text{\hspace{0.17em}}\forall i\in {I}_{0+},\\ 〈{\zeta }_{i}^{4},v〉\le 0,\text{\hspace{0.17em}}\forall {\zeta }_{i}^{4}\in co\text{\hspace{0.17em}}{\partial }^{*}\left(-{H}_{i}\right)\left(\stackrel{¯}{x}\right),\text{\hspace{0.17em}}\forall i\in {I}_{0-},\\ 〈{\zeta }_{i}^{5},v〉\le 0,\text{\hspace{0.17em}}\forall {\zeta }_{i}^{5}\in co\text{\hspace{0.17em}}{\partial }^{*}{G}_{i}\left(\stackrel{¯}{x}\right),\text{\hspace{0.17em}}\forall i\in {I}_{+0},\\ 〈{\zeta }_{i}^{6},v〉\le 0,\text{\hspace{0.17em}}\forall {\zeta }_{i}^{6}\in co\text{\hspace{0.17em}}{\partial }^{*}\left(-{H}_{i}\right)\left(\stackrel{¯}{x}\right),\text{\hspace{0.17em}}\forall i\in {I}_{00}.\end{array}$ (3.8)

This is to say

$v\in {g}^{-}\cap {h}^{-}\cap {G}_{{I}_{+0}}^{-}\cap {H}_{{I}_{0+}}^{-}\cap {H}_{{I}_{0-}}^{-}\cap {H}_{{I}_{00}}^{-}=\Gamma \left(\stackrel{¯}{x}\right).$

Taking into account the GS-ACQ at $\stackrel{¯}{x}$, we obtain $v\in T\left(\stackrel{¯}{x},X\right)$. Thus, there exist sequences ${t}_{k}↓0$ and ${v}_{k}\to v$ such that $\stackrel{¯}{x}+{t}_{k}{v}_{k}\in X$ for all $k\in \text{N}$, where N denotes the set of natural numbers. On the other hand, since f is Lipschitz near $\stackrel{¯}{x}$ with modulus $L>0$, we have, for all sufficiently large k,

$\begin{array}{c}f\left(\stackrel{¯}{x}+{t}_{k}{v}_{k}\right)\le f\left(\stackrel{¯}{x}+{t}_{k}v\right)+L{t}_{k}‖{v}_{k}-v‖\\ \le f\left(\stackrel{¯}{x}\right)+L{t}_{k}‖{v}_{k}-v‖.\end{array}$ (3.9)

Combining (3.7) and (3.9), we get for all sufficiently large k,

$f\left(\stackrel{¯}{x}+{t}_{k}{v}_{k}\right)<f\left(\stackrel{¯}{x}\right),$

which contradicts the local optimality of $\stackrel{¯}{x}$. Thus, (3.5) is true. This implies that there exist nonnegative multipliers ${\lambda }_{i}^{g},\text{\hspace{0.17em}}i\in {I}_{g}$, ${\lambda }_{j}^{h},\text{\hspace{0.17em}}{\mu }_{j}^{h},\text{\hspace{0.17em}}j=1,2,3,\cdots ,p$, ${\mu }_{i}^{H},\text{\hspace{0.17em}}i\in {I}_{0+}$, ${\lambda }_{i}^{H},\text{\hspace{0.17em}}i\in {I}_{0+}\cup {I}_{0-}\cup {I}_{00}$, ${\mu }_{i}^{G},\text{\hspace{0.17em}}i\in {I}_{+0}$ such that

$\begin{array}{l}0\in co{\partial }^{*}f\left(\stackrel{¯}{x}\right)+\underset{i\in {I}_{g}}{\sum }\text{ }\text{ }{\lambda }_{i}^{g}co{\partial }^{*}{g}_{i}\left(\stackrel{¯}{x}\right)+\underset{j=1}{\overset{p}{\sum }}\left[{\lambda }_{j}^{h}co{\partial }^{*}{h}_{j}\left(\stackrel{¯}{x}\right)+{\mu }_{j}^{h}co{\partial }^{*}\left(-{h}_{j}\right)\left(\stackrel{¯}{x}\right)\right]\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{ }\text{ }+\underset{i\in {I}_{0+}\cup {I}_{0-}\cup {I}_{00}}{\sum }\text{ }\text{ }{\lambda }_{i}^{H}co{\partial }^{*}\left(-{H}_{i}\right)\left(\stackrel{¯}{x}\right)+\underset{i\in {I}_{0+}}{\sum }\text{ }\text{ }{\mu }_{i}^{H}co{\partial }^{*}{H}_{i}\left(\stackrel{¯}{x}\right)+\underset{i\in {I}_{+0}}{\sum }\text{ }\text{ }{\mu }_{i}^{G}co{\partial }^{*}{G}_{i}\left(\stackrel{¯}{x}\right).\end{array}$ (3.10)

Setting ${\lambda }_{{I}_{+0}\cup {I}_{+-}}^{H}=0,\text{\hspace{0.17em}}{\mu }_{{I}_{00}\cup {I}_{0-}\cup {I}_{+-}\cup {I}_{+0}}^{H}=0,\text{\hspace{0.17em}}{\mu }_{{I}_{+-}\cup {I}_{0+}\cup {I}_{0-}\cup {I}_{00}}^{G}=0$, we obtain from (3.10):

$\begin{array}{l}0\in co{\partial }^{*}f\left(\stackrel{¯}{x}\right)+\underset{i\in {I}_{g}}{\sum }\text{ }\text{ }{\lambda }_{i}^{g}co{\partial }^{*}{g}_{i}\left(\stackrel{¯}{x}\right)+\underset{j=1}{\overset{p}{\sum }}\left[{\lambda }_{j}^{h}co{\partial }^{*}{h}_{j}\left(\stackrel{¯}{x}\right)+{\mu }_{j}^{h}co{\partial }^{*}\left(-{h}_{j}\right)\left(\stackrel{¯}{x}\right)\right]\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}+\underset{i=1}{\overset{l}{\sum }}\text{ }\text{ }{\lambda }_{i}^{H}co{\partial }^{*}\left(-{H}_{i}\right)\left(\stackrel{¯}{x}\right)+\underset{i=1}{\overset{l}{\sum }}\left[{\mu }_{i}^{G}co{\partial }^{*}{G}_{i}\left(\stackrel{¯}{x}\right)+{\mu }_{i}^{H}co{\partial }^{*}{H}_{i}\left(\stackrel{¯}{x}\right)\right],\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}{\lambda }_{{I}_{g}}^{g}\ge 0,\text{\hspace{0.17em}}{\lambda }_{j}^{h},{\mu }_{j}^{h}\ge 0,\text{\hspace{0.17em}}j=1,2,3,\cdots ,p,\text{\hspace{0.17em}}{\lambda }_{i}^{H},\text{\hspace{0.17em}}{\mu }_{i}^{G},\text{\hspace{0.17em}}{\mu }_{i}^{H}\ge 0,\text{\hspace{0.17em}}i=1,2,3,\cdots ,l.\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}{\lambda }_{{I}_{+0}\cup {I}_{+-}}^{H}=0,\text{\hspace{0.17em}}{\mu }_{{I}_{+0}\cup {I}_{+-}\cup {I}_{0-}\cup {I}_{00}}^{H}=0,\text{\hspace{0.17em}}{\mu }_{{I}_{0+}\cup {I}_{+-}\cup {I}_{0-}}^{G}=0,\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\forall i\in {I}_{00},\text{\hspace{0.17em}}{\mu }_{i}^{G}=0,\text{\hspace{0.17em}}{\lambda }_{i}^{H}-{\mu }_{i}^{H}={\lambda }_{i}^{H}\ge 0.\end{array}$

This shows that $\stackrel{¯}{x}$ is a GS stationary point and the proof is complete.

Since the assumption that the constraint functions admit bounded upper semi-regular convexificators ensures that the set K in (3.4) is closed, we immediately obtain the following corollary of Theorem 3.1.

Corollary 3.1. Let $\stackrel{¯}{x}$ be a local optimal solution of MPVC (1.1). Suppose that f is a locally Lipschitz function at $\stackrel{¯}{x}$. Assume also that f and the constraint functions admit a bounded upper semi-regular convexificator. If the GS-ACQ holds at $\stackrel{¯}{x}$ , then $\stackrel{¯}{x}$ is a GS stationary point.

Now, we provide the following example to illustrate Theorem 3.1; it is a modified version of Example 4.7 in [37].

Example 3.1. Consider the following two-dimensional nonsmooth MPVC problem:

$\begin{array}{ll}\mathrm{min}\hfill & f\left(x\right)=|{x}_{1}|-|{x}_{2}|\hfill \\ \text{s}\text{.t}\text{.}\hfill & g\left(x\right)=|{x}_{2}|\le 0;\hfill \\ \hfill & H\left(x\right)={x}_{2}\ge 0;\hfill \\ \hfill & G\left(x\right)H\left(x\right)={x}_{1}{x}_{2}\le 0.\hfill \end{array}$

Obviously, 0 is the global optimal solution of the above problem and we have

${f}^{+}\left(0;v\right)=|{v}_{1}|-|{v}_{2}|,\text{\hspace{0.17em}}{g}^{+}\left(0;v\right)=|{v}_{2}|,\text{\hspace{0.17em}}{\left(-H\right)}^{+}\left(0;v\right)=-{v}_{2},$

${G}^{+}\left(0;v\right)={v}_{1},\text{\hspace{0.17em}}{H}^{+}\left(0;v\right)={v}_{2}.$

Moreover, we obtain the following bounded upper semi-regular convexificators for these functions

${\partial }^{*}f\left(0\right)=\left\{{\left(1,-1\right)}^{\text{T}},{\left(-1,1\right)}^{\text{T}}\right\},\text{\hspace{0.17em}}{\partial }^{*}g\left(0\right)=\left\{{\left(0,-1\right)}^{\text{T}},{\left(0,1\right)}^{\text{T}}\right\},$

${\partial }^{*}\left(-H\right)\left(0\right)=\left\{{\left(0,-1\right)}^{\text{T}}\right\},\text{\hspace{0.17em}}{\partial }^{*}G\left(0\right)=\left\{{\left(1,0\right)}^{\text{T}}\right\},\text{\hspace{0.17em}}{\partial }^{*}H\left(0\right)=\left\{{\left(0,1\right)}^{\text{T}}\right\}.$

Hence, we get

${g}^{-}=\left\{v|{v}_{2}=0\right\},\text{\hspace{0.17em}}{H}_{{I}_{00}}^{-}=\left\{v|{v}_{2}\ge 0\right\}.$

This implies that the GS-ACQ is satisfied at 0 and that K is closed. Taking ${\lambda }^{g}={\lambda }^{H}={\mu }^{G}={\mu }^{H}=0$, one gets

$0\in co{\partial }^{*}f\left(0\right)+{\lambda }^{g}co{\partial }^{*}g\left(0\right)+{\lambda }^{H}co{\partial }^{*}\left(-H\right)\left(0\right)+{\mu }^{G}co{\partial }^{*}G\left(0\right)+{\mu }^{H}co{\partial }^{*}H\left(0\right).$

This shows the GS-stationarity of 0.
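The computations in the example can be verified numerically; a short sketch of our own (not part of the paper):

```python
# With all multipliers set to zero, GS-stationarity at 0 reduces to
# 0 ∈ co ∂*f(0) = co{(1,-1), (-1,1)}, and the midpoint of the two
# convexificator elements witnesses this membership.
xi1, xi2 = (1.0, -1.0), (-1.0, 1.0)
mid = tuple(0.5 * (a + b) for a, b in zip(xi1, xi2))
print(mid)  # (0.0, 0.0)

# 0 is globally optimal: any feasible x has x2 = 0 (from |x2| <= 0),
# so f(x) = |x1| >= 0 = f(0); check on a grid of feasible points.
feasible = [(x1 / 10.0, 0.0) for x1 in range(-50, 51)]
assert all(abs(x1) - abs(x2) >= 0.0 for x1, x2 in feasible)
```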

The next result shows that the GM-stationarity is a necessary optimality condition for MPVC if GMPVC ACQ is satisfied at a local optimal solution of MPVC.

Theorem 3.2. Let $\stackrel{¯}{x}$ be a local optimal solution of MPVC (1.1). Suppose that f is locally Lipschitz at $\stackrel{¯}{x}$, and assume that f and the constraint functions admit bounded upper semi-regular convexificators. If the GMPVC-ACQ holds at $\stackrel{¯}{x}$, then $\stackrel{¯}{x}$ is a GM stationary point.

Proof. First, we claim that

$0\in co\text{\hspace{0.17em}}{\partial }^{*}f\left(\stackrel{¯}{x}\right)+K+cone\text{\hspace{0.17em}}{\cup }_{i\in {I}_{00}}co{\partial }^{*}{G}_{i}\left(\stackrel{¯}{x}\right)\cup co{\partial }^{*}{H}_{i}\left(\stackrel{¯}{x}\right).$ (3.11)

Suppose that (3.11) does not hold. Since $co\text{\hspace{0.17em}}{\partial }^{*}f\left(\stackrel{¯}{x}\right)$ is compact and convex, and $K+cone\text{\hspace{0.17em}}{\cup }_{i\in {I}_{00}}\text{ }\text{ }co{\partial }^{*}{G}_{i}\left(\stackrel{¯}{x}\right)\cup co{\partial }^{*}{H}_{i}\left(\stackrel{¯}{x}\right)$ is closed and convex, similarly to the proof of Theorem 3.1 we can find a nonzero vector $v\in {R}^{n}$ and sequences ${t}_{k}↓0$ and ${v}_{k}\to v$ such that, for all sufficiently large k,

$f\left(\stackrel{¯}{x}+{t}_{k}{v}_{k}\right)<f\left(\stackrel{¯}{x}\right),$

which contradicts the local optimality of $\stackrel{¯}{x}$. Thus, (3.11) holds true. This implies that there exist nonnegative multipliers ${\lambda }_{i}^{g},\text{\hspace{0.17em}}i\in {I}_{g}$, ${\lambda }_{j}^{h},\text{\hspace{0.17em}}{\mu }_{j}^{h},\text{\hspace{0.17em}}j=1,2,3,\cdots ,p$, ${\mu }_{i}^{H},\text{\hspace{0.17em}}i\in {I}_{0+}$, ${\lambda }_{i}^{H},\text{\hspace{0.17em}}i\in {I}_{0+}\cup {I}_{0-}\cup {I}_{00}$, ${\mu }_{i}^{G},\text{\hspace{0.17em}}i\in {I}_{+0}$, ${\mu }_{i}^{H},\text{\hspace{0.17em}}i\in {I}_{00}$ such that

$\begin{array}{l}0\in co{\partial }^{*}f\left(\stackrel{¯}{x}\right)+\underset{i\in {I}_{g}}{\sum }\text{ }\text{ }{\lambda }_{i}^{g}co{\partial }^{*}{g}_{i}\left(\stackrel{¯}{x}\right)+\underset{j=1}{\overset{p}{\sum }}\left[{\lambda }_{j}^{h}co{\partial }^{*}{h}_{j}\left(\stackrel{¯}{x}\right)+{\mu }_{j}^{h}co{\partial }^{*}\left(-{h}_{j}\right)\left(\stackrel{¯}{x}\right)\right]\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}+\underset{i\in {I}_{0+}\cup {I}_{0-}\cup {I}_{00}}{\sum }\text{ }\text{ }{\lambda }_{i}^{H}co{\partial }^{*}\left(-{H}_{i}\right)\left(\stackrel{¯}{x}\right)+\underset{i\in {I}_{0+}\cup {I}_{00}}{\sum }\text{ }\text{ }{\mu }_{i}^{H}co{\partial }^{*}{H}_{i}\left(\stackrel{¯}{x}\right)+\underset{i\in {I}_{+0}}{\sum }\text{ }\text{ }{\mu }_{i}^{G}co{\partial }^{*}{G}_{i}\left(\stackrel{¯}{x}\right).\end{array}$ (3.12)

Setting ${\lambda }_{{I}_{+0}\cup {I}_{+-}}^{H}=0,\text{\hspace{0.17em}}{\mu }_{{I}_{0-}\cup {I}_{+-}\cup {I}_{+0}}^{H}=0,\text{\hspace{0.17em}}{\mu }_{{I}_{+-}\cup {I}_{0+}\cup {I}_{0-}\cup {I}_{00}}^{G}=0$, we obtain from (3.12):

$\begin{array}{l}0\in co{\partial }^{*}f\left(\stackrel{¯}{x}\right)+\underset{i\in {I}_{g}}{\sum }\text{ }\text{ }{\lambda }_{i}^{g}co{\partial }^{*}{g}_{i}\left(\stackrel{¯}{x}\right)+\underset{j=1}{\overset{p}{\sum }}\left[{\lambda }_{j}^{h}co{\partial }^{*}{h}_{j}\left(\stackrel{¯}{x}\right)+{\mu }_{j}^{h}co{\partial }^{*}\left(-{h}_{j}\right)\left(\stackrel{¯}{x}\right)\right]\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}+\underset{i=1}{\overset{l}{\sum }}\text{ }\text{ }{\lambda }_{i}^{H}co{\partial }^{*}\left(-{H}_{i}\right)\left(\stackrel{¯}{x}\right)+\underset{i=1}{\overset{l}{\sum }}\left[{\mu }_{i}^{G}co{\partial }^{*}{G}_{i}\left(\stackrel{¯}{x}\right)+{\mu }_{i}^{H}co{\partial }^{*}{H}_{i}\left(\stackrel{¯}{x}\right)\right],\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}{\lambda }_{{I}_{g}}^{g}\ge 0,\text{\hspace{0.17em}}{\lambda }_{j}^{h},{\mu }_{j}^{h}\ge 0,\text{\hspace{0.17em}}j=1,2,3\cdots ,p,\text{\hspace{0.17em}}{\lambda }_{i}^{H},\text{\hspace{0.17em}}{\mu }_{i}^{G},\text{\hspace{0.17em}}{\mu }_{i}^{H}\ge 0,\text{\hspace{0.17em}}i=1,2,3,\cdots ,l,\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}{\lambda }_{{I}_{+0}\cup {I}_{+-}}^{H}=0,\text{\hspace{0.17em}}{\mu }_{{I}_{+0}\cup {I}_{+-}}^{H}=0,\text{\hspace{0.17em}}{\mu }_{{I}_{0+}\cup {I}_{+-}\cup {I}_{0-}}^{G}=0,\text{\hspace{0.17em}}{\lambda }_{i}^{H}-{\mu }_{i}^{H}\ge 0,\text{\hspace{0.17em}}i\in {I}_{0-},\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\forall i\in {I}_{00},\text{\hspace{0.17em}}{\mu }_{i}^{G}=0,\text{\hspace{0.17em}}{\mu }_{i}^{G}\left({\lambda }_{i}^{H}-{\mu }_{i}^{H}\right)=0.\end{array}$

This shows that $\stackrel{¯}{x}$ is a GM stationary point and the proof is complete.

Next, we show that GW stationarity is a sufficient condition for global or local optimality under certain generalized convexity assumptions.

Theorem 3.3. Let $\stackrel{¯}{x}$ be a feasible GW stationary point of MPVC (1.1) and define the following index sets:

$\begin{array}{l}{I}_{00}^{G}=\left\{i\in {I}_{00}|{\mu }_{i}^{G}>0\right\},\text{\hspace{0.17em}}{I}_{00}^{H}=\left\{i\in {I}_{00}|{\mu }_{i}^{H}>0\right\},\text{\hspace{0.17em}}{I}_{0-}^{H}=\left\{i\in {I}_{0-}|{\mu }_{i}^{H}>0\right\},\\ {I}_{0+}^{H}=\left\{i\in {I}_{0+}|{\mu }_{i}^{H}>0\right\},\text{\hspace{0.17em}}{I}_{+0}^{G}=\left\{i\in {I}_{+0}|{\mu }_{i}^{G}>0\right\}.\end{array}$

Assume that f is ${\partial }^{*}$ pseudoconvex at $\stackrel{¯}{x}$ and that ${g}_{i}\left(i\in {I}_{g}\right)$, $±{h}_{i}\left(i=1,2,3,\cdots ,p\right)$ and $-{H}_{i}\left(i\in {I}_{0+}\cup {I}_{0-}\cup {I}_{00}\right)$ are ${\partial }^{*}$ quasiconvex at $\stackrel{¯}{x}$. Then the following assertions hold true:

(i) If ${I}_{00}^{G}\cup {I}_{00}^{H}\cup {I}_{0+}^{H}\cup {I}_{0-}^{H}\cup {I}_{+0}^{G}=\varnothing$, then $\stackrel{¯}{x}$ is a global optimal solution of MPVC.

(ii) If ${G}_{i}\left(i\in {I}_{+0}^{G}\right)$ and ${H}_{i}\left(i\in {I}_{0+}^{H}\right)$ are continuous and ${\partial }^{*}$ quasiconvex at $\stackrel{¯}{x}$, and ${I}_{00}^{G}\cup {I}_{00}^{H}\cup {I}_{0-}^{H}=\varnothing$, then $\stackrel{¯}{x}$ is a local optimal solution of MPVC.

(iii) If ${G}_{i}\left(i\in {I}_{+0}^{G}\right)$ and ${H}_{i}\left(i\in {I}_{0+}^{H}\cup {I}_{0-}^{H}\right)$ are continuous and ${\partial }^{*}$ quasiconvex at $\stackrel{¯}{x}$, and $\stackrel{¯}{x}$ is an interior point relative to the set $X\cap \left\{x|{G}_{i}\left(x\right)=0,\text{\hspace{0.17em}}{H}_{i}\left(x\right)=0,i\in {I}_{00}^{G}\cup {I}_{00}^{H}\cup {I}_{0-}^{H}\right\}$, then $\stackrel{¯}{x}$ is a local optimal solution of MPVC.
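As a minimal sketch of the bookkeeping behind these index sets, the following Python fragment partitions the indices by the signs of ${H}_{i}\left(\stackrel{¯}{x}\right)$ and ${G}_{i}\left(\stackrel{¯}{x}\right)$ (the standard MPVC partition) and then extracts the refined sets of Theorem 3.3; the numerical values below are hypothetical, chosen only for illustration.

```python
# Standard MPVC partition at a feasible point xbar, by the signs of
# H_i(xbar) and G_i(xbar); the refined sets of Theorem 3.3 then keep
# the indices whose multiplier is strictly positive.  All values are
# hypothetical.
H_vals = [0.0, 0.0, 0.0, 2.0, 1.0]      # H_i(xbar)
G_vals = [1.0, 0.0, -1.0, 0.0, -2.0]    # G_i(xbar)
mu_G   = [0.0, 0.5, 0.0, 1.0, 0.0]      # multipliers mu_i^G
mu_H   = [2.0, 0.0, 0.3, 0.0, 0.0]      # multipliers mu_i^H

I_0p = [i for i in range(5) if H_vals[i] == 0 and G_vals[i] > 0]   # I_{0+}
I_00 = [i for i in range(5) if H_vals[i] == 0 and G_vals[i] == 0]  # I_{00}
I_0m = [i for i in range(5) if H_vals[i] == 0 and G_vals[i] < 0]   # I_{0-}
I_p0 = [i for i in range(5) if H_vals[i] > 0 and G_vals[i] == 0]   # I_{+0}

I_00_G = [i for i in I_00 if mu_G[i] > 0]   # = [1]
I_00_H = [i for i in I_00 if mu_H[i] > 0]   # = []
I_0m_H = [i for i in I_0m if mu_H[i] > 0]   # = [2]
I_0p_H = [i for i in I_0p if mu_H[i] > 0]   # = [0]
I_p0_G = [i for i in I_p0 if mu_G[i] > 0]   # = [3]

print(I_00_G, I_00_H, I_0m_H, I_0p_H, I_p0_G)
```

With these hypothetical data, only index 4 (where ${H}_{4}\left(\stackrel{¯}{x}\right)>0$ and ${G}_{4}\left(\stackrel{¯}{x}\right)<0$, i.e. $4\in {I}_{+-}$) falls outside every refined set.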

Proof. Let x be an arbitrary feasible point of (1.1). Since ${g}_{i}\left(x\right)\le 0={g}_{i}\left(\stackrel{¯}{x}\right)$ for $i\in {I}_{g}$, by the ${\partial }^{*}$ quasiconvexity of ${g}_{i}$ at $\stackrel{¯}{x}$, we get

$〈{\zeta }_{i},x-\stackrel{¯}{x}〉\le 0,\text{\hspace{0.17em}}\forall {\zeta }_{i}\in {\partial }^{*}{g}_{i}\left(\stackrel{¯}{x}\right),\text{\hspace{0.17em}}\forall i\in {I}_{g}.$ (3.13)

Similarly, we get

$\begin{array}{l}〈{\eta }_{i},x-\stackrel{¯}{x}〉\le 0,\text{\hspace{0.17em}}\forall {\eta }_{i}\in {\partial }^{*}{h}_{i}\left(\stackrel{¯}{x}\right),\text{\hspace{0.17em}}\forall i=1,2,3,\cdots ,p.\\ 〈{\nu }_{i},x-\stackrel{¯}{x}〉\le 0,\text{\hspace{0.17em}}\forall {\nu }_{i}\in {\partial }^{*}\left(-{h}_{i}\right)\left(\stackrel{¯}{x}\right),\text{\hspace{0.17em}}\forall i=1,2,3,\cdots ,p.\\ 〈{\xi }_{i},x-\stackrel{¯}{x}〉\le 0,\text{\hspace{0.17em}}\forall {\xi }_{i}\in {\partial }^{*}\left(-{H}_{i}\right)\left(\stackrel{¯}{x}\right),\text{\hspace{0.17em}}\forall i\in {I}_{0+}\cup {I}_{0-}\cup {I}_{00}.\end{array}$ (3.14)
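The sublevel-set inequalities (3.13) and (3.14) can be illustrated numerically. The sketch below uses the hypothetical smooth convex function $g\left(x\right)={x}_{1}^{2}+{x}_{2}^{2}-1$ (not taken from the paper), for which the convexificator reduces to the singleton gradient, and checks that sampled points in the sublevel set of $\stackrel{¯}{x}$ satisfy $〈\nabla g\left(\stackrel{¯}{x}\right),x-\stackrel{¯}{x}〉\le 0$.

```python
import random

# Hypothetical smooth convex function g(x) = x1^2 + x2^2 - 1.  Since g is
# differentiable, its convexificator reduces to {grad g(x)}, and convexity
# implies quasiconvexity, so g(x) <= g(xbar) should force
# <grad g(xbar), x - xbar> <= 0, mirroring inequality (3.13).
def g(x):
    return x[0] ** 2 + x[1] ** 2 - 1.0

def grad_g(x):
    return [2.0 * x[0], 2.0 * x[1]]

xbar = [1.0, 0.0]  # boundary point: g(xbar) = 0
random.seed(0)

violations = 0
for _ in range(10000):
    x = [random.uniform(-1.5, 1.5), random.uniform(-1.5, 1.5)]
    if g(x) <= g(xbar):  # x lies in the sublevel set of xbar (unit disk)
        inner = sum(gi * (xi - xbi)
                    for gi, xi, xbi in zip(grad_g(xbar), x, xbar))
        if inner > 1e-12:
            violations += 1

print(violations)  # expected: 0
```

Here the check never fails because every point of the unit disk has first coordinate at most 1, so the inner product $2\left({x}_{1}-1\right)$ is nonpositive.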

(i) Multiplying the inequalities in (3.13) and (3.14) by ${\lambda }_{i}^{g},i\in {I}_{g}$, ${\lambda }_{j}^{h},\text{\hspace{0.17em}}{\mu }_{j}^{h},\text{\hspace{0.17em}}j=1,2,3,\cdots ,p$, and ${\lambda }_{i}^{H},\text{\hspace{0.17em}}i\in {I}_{0+}\cup {I}_{0-}\cup {I}_{00}$, respectively, and adding, we get

$〈\underset{i\in {I}_{g}}{\sum }\text{ }\text{ }{\lambda }_{i}^{g}{\zeta }_{i}+\underset{j=1}{\overset{p}{\sum }}\left[{\lambda }_{j}^{h}{\eta }_{j}+{\mu }_{j}^{h}{\nu }_{j}\right]+\underset{i\in {I}_{0+}\cup {I}_{0-}\cup {I}_{00}}{\sum }{\lambda }_{i}^{H}{\xi }_{i},x-\stackrel{¯}{x}〉\le 0.$

Since ${I}_{00}^{G}\cup {I}_{00}^{H}\cup {I}_{0+}^{H}\cup {I}_{0-}^{H}\cup {I}_{+0}^{G}=\varnothing$, taking into account the GW stationarity of $\stackrel{¯}{x}$, one gets

$\begin{array}{l}〈\underset{i\in {I}_{g}}{\sum }\text{ }\text{ }{\lambda }_{i}^{g}{\zeta }_{i}+\underset{j=1}{\overset{p}{\sum }}\left[{\lambda }_{j}^{h}{\eta }_{j}+{\mu }_{j}^{h}{\nu }_{j}\right]+\underset{i=1}{\overset{l}{\sum }}\text{ }\text{ }{\lambda }_{i}^{H}{\xi }_{i}+\underset{i=1}{\overset{l}{\sum }}\left[{\mu }_{i}^{G}{\tau }_{i}+{\mu }_{i}^{H}{\delta }_{i}\right],x-\stackrel{¯}{x}〉\le 0,\\ \forall \text{\hspace{0.17em}}{\tau }_{i}\in {\partial }^{*}{G}_{i}\left(\stackrel{¯}{x}\right),\text{\hspace{0.17em}}\forall \text{\hspace{0.17em}}{\delta }_{i}\in {\partial }^{*}{H}_{i}\left(\stackrel{¯}{x}\right).\end{array}$

Since the inequality above extends to convex combinations of convexificator elements, the GW stationarity of $\stackrel{¯}{x}$ yields $\vartheta \in co\text{\hspace{0.17em}}{\partial }^{*}f\left(\stackrel{¯}{x}\right)$ with $-\vartheta$ equal to such a sum, and hence $〈\vartheta ,x-\stackrel{¯}{x}〉\ge 0$. The ${\partial }^{*}$ pseudoconvexity of f at $\stackrel{¯}{x}$ then shows that $f\left(x\right)\ge f\left(\stackrel{¯}{x}\right)$ for all $x\in X$. Hence, $\stackrel{¯}{x}$ is a global optimal solution of MPVC.

(ii) For any $i\in {I}_{+0}$, since ${H}_{i}\left(\stackrel{¯}{x}\right)>0$, the continuity of ${H}_{i}$ implies that ${H}_{i}\left(x\right)>0$ for all feasible points x sufficiently close to $\stackrel{¯}{x}$. Together with the constraint ${G}_{i}\left(x\right){H}_{i}\left(x\right)\le 0$, this shows that ${G}_{i}\left(x\right)\le 0$ for such x. Hence, for feasible x sufficiently close to $\stackrel{¯}{x}$, one has

${G}_{i}\left(x\right)\le 0={G}_{i}\left(\stackrel{¯}{x}\right),\text{\hspace{0.17em}}\forall i\in {I}_{+0}.$

Utilizing the ${\partial }^{*}$ quasiconvexity of ${G}_{i}\left(i\in {I}_{+0}^{G}\right)$ at $\stackrel{¯}{x}$, we deduce that, for every feasible point x sufficiently close to $\stackrel{¯}{x}$,

$〈{\tau }_{i},x-\stackrel{¯}{x}〉\le 0,\text{\hspace{0.17em}}\forall {\tau }_{i}\in {\partial }^{*}{G}_{i}\left(\stackrel{¯}{x}\right),\text{\hspace{0.17em}}\forall i\in {I}_{+0}^{G}.$ (3.15)

Similarly, one gets, for every feasible point x sufficiently close to $\stackrel{¯}{x}$,

$〈{\delta }_{i},x-\stackrel{¯}{x}〉\le 0,\text{\hspace{0.17em}}\forall {\delta }_{i}\in {\partial }^{*}{H}_{i}\left(\stackrel{¯}{x}\right),\text{\hspace{0.17em}}\forall i\in {I}_{0+}^{H}.$ (3.16)

Similar to the proof of case (i), we can find $\vartheta \in co\text{\hspace{0.17em}}{\partial }^{*}f\left(\stackrel{¯}{x}\right)$ such that $〈\vartheta ,x-\stackrel{¯}{x}〉\ge 0$. The ${\partial }^{*}$ pseudoconvexity of f at $\stackrel{¯}{x}$ shows that $f\left(x\right)\ge f\left(\stackrel{¯}{x}\right)$ for all feasible points x sufficiently close to $\stackrel{¯}{x}$. Hence, $\stackrel{¯}{x}$ is a local optimal solution of MPVC.
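The continuity step used in part (ii), where ${H}_{i}\left(\stackrel{¯}{x}\right)>0$ forces ${G}_{i}\left(x\right)\le 0$ for feasible x near $\stackrel{¯}{x}$ via the product constraint, can be checked on a hypothetical one-dimensional instance; $G\left(x\right)=x$ and $H\left(x\right)=1+x$ below are illustrative choices, not taken from the paper.

```python
# Hypothetical 1-D illustration of the argument for indices in I_{+0}:
# take G(x) = x and H(x) = 1 + x with xbar = 0, so H(xbar) = 1 > 0 and
# G(xbar) = 0.  For feasible x near xbar (H(x) >= 0, G(x)H(x) <= 0),
# continuity keeps H(x) > 0, and the product constraint then forces
# G(x) <= 0 = G(xbar).
G = lambda x: x
H = lambda x: 1.0 + x

grid = [k * 1e-3 for k in range(-500, 501)]  # points within 0.5 of xbar = 0
feasible_near = [x for x in grid if H(x) >= 0 and G(x) * H(x) <= 0]

assert all(H(x) > 0 for x in feasible_near)   # H stays positive near xbar
assert all(G(x) <= 0 for x in feasible_near)  # hence G(x) <= 0 = G(xbar)
print(len(feasible_near))  # expected: 501, i.e. exactly the x in [-0.5, 0]
```

On this instance the feasible points near $\stackrel{¯}{x}$ are exactly those in $\left[-0.5,0\right]$, matching the inequality displayed above.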

(iii) Taking into account that $\stackrel{¯}{x}$ is an interior point relative to the set $X\cap \left\{x|{G}_{i}\left(x\right)=0,\text{\hspace{0.17em}}{H}_{i}\left(x\right)=0,i\in {I}_{00}^{G}\cup {I}_{00}^{H}\cup {I}_{0-}^{H}\right\}$, we know that, for all feasible points x sufficiently close to $\stackrel{¯}{x}$,

${G}_{i}\left(x\right)=0={G}_{i}\left(\stackrel{¯}{x}\right),\text{\hspace{0.17em}}{H}_{i}\left(x\right)=0={H}_{i}\left(\stackrel{¯}{x}\right),i\in {I}_{00}^{G}\cup {I}_{00}^{H}\cup {I}_{0-}^{H}.$

By the ${\partial }^{*}$ quasiconvexity of the above functions at $\stackrel{¯}{x}$, we have

$\begin{array}{l}〈{\tau }_{i},x-\stackrel{¯}{x}〉\le 0,\text{\hspace{0.17em}}\forall {\tau }_{i}\in {\partial }^{*}{G}_{i}\left(\stackrel{¯}{x}\right),\text{\hspace{0.17em}}\forall i\in {I}_{00}^{G},\\ 〈{\delta }_{i},x-\stackrel{¯}{x}〉\le 0,\text{\hspace{0.17em}}\forall {\delta }_{i}\in {\partial }^{*}{H}_{i}\left(\stackrel{¯}{x}\right),\text{\hspace{0.17em}}\forall i\in {I}_{00}^{H}\cup {I}_{0-}^{H}.\end{array}$ (3.17)

Multiplying the inequalities (3.13)-(3.17) by ${\lambda }_{i}^{g},i\in {I}_{g}$, ${\lambda }_{j}^{h},\text{\hspace{0.17em}}{\mu }_{j}^{h},\text{\hspace{0.17em}}j=1,2,3,\cdots ,p$, ${\lambda }_{i}^{H},\text{\hspace{0.17em}}i\in {I}_{0+}\cup {I}_{0-}\cup {I}_{00}$, ${\mu }_{i}^{G},\text{\hspace{0.17em}}i\in {I}_{00}^{G}\cup {I}_{+0}^{G}$, and ${\mu }_{i}^{H},\text{\hspace{0.17em}}i\in {I}_{00}^{H}\cup {I}_{0+}^{H}\cup {I}_{0-}^{H}$, respectively, and adding, we get

$〈\underset{i\in {I}_{g}}{\sum }\text{ }\text{ }{\lambda }_{i}^{g}{\zeta }_{i}+\underset{j=1}{\overset{p}{\sum }}\left[{\lambda }_{j}^{h}{\eta }_{j}+{\mu }_{j}^{h}{\nu }_{j}\right]+\underset{i=1}{\overset{l}{\sum }}\text{ }\text{ }{\lambda }_{i}^{H}{\xi }_{i}+\underset{i=1}{\overset{l}{\sum }}\left[{\mu }_{i}^{G}{\tau }_{i}+{\mu }_{i}^{H}{\delta }_{i}\right],x-\stackrel{¯}{x}〉\le 0.$

This implies that there exists $\vartheta \in co\text{\hspace{0.17em}}{\partial }^{*}f\left(\stackrel{¯}{x}\right)$ such that $〈\vartheta ,x-\stackrel{¯}{x}〉\ge 0$. The ${\partial }^{*}$ pseudoconvexity of f at $\stackrel{¯}{x}$ shows that $f\left(x\right)\ge f\left(\stackrel{¯}{x}\right)$ for all feasible points x sufficiently close to $\stackrel{¯}{x}$. Hence, $\stackrel{¯}{x}$ is a local optimal solution of MPVC.

4. Concluding Remarks

In this paper, within the framework of convexificators, we introduced two generalized MPVC-type constraint qualifications and the corresponding generalized stationarity concepts, and derived necessary and sufficient optimality conditions for the nonsmooth MPVC. As future work, other generalized MPVC-type constraint qualifications based on convexificators will be investigated.

Funding

This work was supported in part by NNSF (No. 11461015, 11961011, 11761014) of China and Guangxi Natural Science Foundation (2015GXNSFAA139010, 2016GXNSFFA380009, 2017GXNSFAA198243) and Guangxi Key Laboratory of Automatic Detecting Technology (No. YQ17117).

Cite this paper: Hu, Q. , Zhou, Z. and Chen, Y. (2021) Some Convexificators-Based Optimality Conditions for Nonsmooth Mathematical Program with Vanishing Constraints. American Journal of Operations Research, 11, 324-337. doi: 10.4236/ajor.2021.116020.
References

[1]   Achtziger, W. and Kanzow, C. (2008) Mathematical Programs with Vanishing Constraints: Optimality Conditions and Constraint Qualifications. Mathematical Programming, 114, 69-99.
https://doi.org/10.1007/s10107-006-0083-3

[2]   Kirches, C., Potschka, A., Bock, H.G. and Sager, S. (2013) A Parametric Active Set Method for Quadratic Programs with Vanishing Constraints. Pacific Journal of Optimization, 9, 275-299.

[3]   Jabr, R.A. (2012) Solution to Economic Dispatching with Disjoint Feasible Regions via Semidefinite Programming. IEEE Transactions on Power Systems, 27, 572-573.
https://doi.org/10.1109/TPWRS.2011.2166009

[4]   Jung, M.N., Kirches, C. and Sager, S. (2013) On Perspective Functions and Vanishing Constraints in Mixed-Integer Nonlinear Optimal Control. In: Jünger, M. and Reinelt, G., Eds., Facets of Combinatorial Optimization, Springer, Berlin, 387-417.
https://doi.org/10.1007/978-3-642-38189-8_16

[5]   Palagachev, K. and Gerdts, M. (2015) Mathematical Programs with Blocks of Vanishing Constraints Arising in Discretized Mixed-Integer Optimal Control Problems. Set-Valued and Variational Analysis, 23, 149-167.
https://doi.org/10.1007/s11228-014-0297-0

[6]   Hoheisel, T., Kanzow, C. and Outrata, J.V. (2010) Exact Penalty Results for Mathematical Programs with Vanishing Constraints. Nonlinear Analysis: Theory, Methods and Applications, 72, 2514-2526.
https://doi.org/10.1016/j.na.2009.10.047

[7]   Achtziger, W., Hoheisel, T. and Kanzow, C. (2013) A Smoothing-Regularization Approach to Mathematical Programs with Vanishing Constraints. Computational Optimization and Applications, 55, 733-767.
https://doi.org/10.1007/s10589-013-9539-6

[8]   Hu, Q.J., Chen, Y., Zhu, Z.B. and Zhang, B.S. (2014) Notes on Some Convergence Properties for a Smoothing-Regularization Approach to Mathematical Programs with Vanishing Constraints. Abstract and Applied Analysis, 2014, Article ID: 715015.
https://doi.org/10.1155/2014/715015

[9]   Hu, Q.J., Wang, J.G., Chen, Y. and Zhu, Z.B. (2017) On an Exact Penalty Result for Mathematical Programs with Vanishing Constraints. Optimization Letters, 11, 641-653.
https://doi.org/10.1007/s11590-016-1034-4

[10]   Hu, Q.J., Wang, J.G. and Chen, Y. (2020) New Dualities for Mathematical Programs with Vanishing Constraints. Annals of Operations Research, 287, 233-255.
https://doi.org/10.1007/s10479-019-03409-6

[11]   Achtziger, W., Hoheisel, T. and Kanzow, C. (2012) On a Relaxation Method for Mathematical Programs with Vanishing Constraints. GAMM-Mitt., 35, 110-130.
https://doi.org/10.1002/gamm.201210009

[12]   Dorsch, D., Shikhman, V. and Stein, O. (2012) Mathematical Programs with Vanishing Constraints: Critical Point Theory. Journal of Global Optimization, 52, 591-605.
https://doi.org/10.1007/s10898-011-9805-z

[13]   Hoheisel, T. and Kanzow, C. (2009) On the Abadie and Guignard Constraint Qualification for Mathematical Programs with Vanishing Constraints. Optimization, 58, 431-448.
https://doi.org/10.1080/02331930701763405

[14]   Mishra, S.K., Singh, V. and Laha, V. (2016) On Duality for Mathematical Programs with Vanishing Constraints. Annals of Operations Research, 243, 249-272.
https://doi.org/10.1007/s10479-015-1814-8

[15]   Hoheisel, T. and Kanzow, C. (2008) Stationary Conditions for Mathematical Programs with Vanishing Constraints Using Weak Constraint Qualification. Journal of Mathematical Analysis and Applications, 337, 292-310.
https://doi.org/10.1016/j.jmaa.2007.03.087

[16]   Hoheisel, T. and Kanzow, C. (2007) First- and Second-Order Optimality Conditions for Mathematical Programs with Vanishing Constraints. Applications of Mathematics, 52, 495-514.
https://doi.org/10.1007/s10492-007-0029-y

[17]   Hoheisel, T., Kanzow, C. and Schwartz, A. (2012) Convergence of a Local Regularization Approach for Mathematical Programs with Complementarity or Vanishing Constraints. Optimization Methods and Software, 27, 483-512.
https://doi.org/10.1080/10556788.2010.535170

[18]   Izmailov, A.F. and Pogosyan, A.L. (2009) Optimality Conditions and Newton-Type Methods for Mathematical Programs with Vanishing Constraints. Computational Mathematics and Mathematical Physics, 49, 1128-1140.
https://doi.org/10.1134/S0965542509070069

[19]   Izmailov, A.F. and Solodov, M.V. (2009) Mathematical Programs with Vanishing Constraints: Optimality Conditions, Sensitivity, and a Relaxation Method. Journal of Optimization Theory and Applications, 142, 501-532.
https://doi.org/10.1007/s10957-009-9517-4

[20]   Dussault, J.P., Haddou, M. and Migot, T. (2018) Mathematical Programs with Vanishing Constraints: Constraint Qualifications, Their Applications, and a New Regularization Method. Optimization, 68, 1-30.
https://doi.org/10.1080/02331934.2018.1542531

[21]   Kazemi, S. and Kanzi, N. (2018) Constraint Qualifications and Stationary Conditions for Mathematical Programming with Non-Differentiable Vanishing Constraints. Journal of Optimization Theory and Applications, 179, 800-819.
https://doi.org/10.1007/s10957-018-1373-7

[22]   Khare, A. and Nath, T. (2019) Enhanced Fritz John Stationarity, New Constraint Qualifications and Local Error Bound for Mathematical Programs with Vanishing Constraints. Journal of Mathematical Analysis and Applications, 472, 1042-1077.
https://doi.org/10.1016/j.jmaa.2018.11.063

[23]   Khare, A. and Nath, T. (2019) On an Exact Penalty Result and New Constraint Qualifications for Mathematical Programs with Vanishing Constraints. Yugoslav Journal of Operations Research, 29, 18-28.
https://doi.org/10.2298/YJOR180615018N

[24]   Hu, Q.J., Zhang, H.Q. and Chen, Y. (2018) An Improved Exact Penalty Result for Mathematical Programs with Vanishing Constraints. Journal of Advances in Applied Mathematics, 3, 43-49.
https://doi.org/10.22606/jaam.2018.32001

[25]   Hoheisel, T. (2009) Mathematical Programs with Vanishing Constraints. Dissertation, Department of Mathematics, University of Würzburg, Würzburg.

[26]   Demyanov, V.F. (1994) Convexification and Concavification of a Positively Homogeneous Function by the Same Family of Linear Functions. Universita di Pisa, Pisa.

[27]   Jeyakumar, V. and Luc, D.T. (1999) Nonsmooth Calculus, Minimality, and Monotonicity of Convexificators. Journal of Optimization Theory and Applications, 101, 599-621.
https://doi.org/10.1023/A:1021790120780

[28]   Demyanov, V.F. and Jeyakumar, V. (1997) Hunting for a Smaller Convex Subdifferential. Journal of Global Optimization, 10, 305-326.
https://doi.org/10.1023/A:1008246130864

[29]   Dutta, J. and Chandra, S. (2002) Convexifactors, Generalized Convexity and Optimality Con-ditions. Journal of Optimization Theory and Applications, 113, 41-65.
https://doi.org/10.1023/A:1014853129484

[30]   Dutta, J. and Chandra, S. (2004) Convexifactors, Generalized Convexity and Vector Optimization. Optimization, 53, 77-94.
https://doi.org/10.1080/02331930410001661505

[31]   Li, X.F. and Zhang, J.Z. (2006) Necessary Optimality Conditions in Terms of Convexificators in Lipschitz Optimization. Journal of Optimization Theory and Applications, 131, 429-452.
https://doi.org/10.1007/s10957-006-9155-z

[32]   Golestani, M. and Nobakhtian, S. (2012) Convexificators and Strong Kuhn-Tucker Conditions. Computers & Mathematics with Applications, 64, 550-557.
https://doi.org/10.1016/j.camwa.2011.12.047

[33]   Golestani, M. and Nobakhtian, S. (2013) Nonsmooth Multiobjective Programming and Constraint Qualifications. Optimization, 62, 783-795.
https://doi.org/10.1080/02331934.2012.679939

[34]   Jeyakumar, V. and Luc, D.T. (1998) Approximate Jacobian Matrices for Nonsmooth Continuous Maps and C1-Optimization. SIAM Journal on Control and Optimization, 36, 1815-1832.
https://doi.org/10.1137/S0363012996311745

[35]   Golestani, M. and Nobakhtian, S. (2013) Nonsmooth Multiobjective Programming: Strong Kuhn-Tucker Conditions. Positivity, 13, 711-732.
https://doi.org/10.1007/s11117-012-0201-9

[36]   Luc, D.T. (2002) A Multiplier Rule for Multiobjective Programming Problems with Continuous Data. SIAM Journal on Optimization, 13, 168-178.
https://doi.org/10.1137/S1052623400378286

[37]   Ardali, A.A., Movahedian, N. and Nobakhtian, S. (2014) Optimality Conditions for Nonsmooth Mathematical Programs with Equilibrium Constraints, Using Convexificators. Optimization, 65, 67-85.
https://doi.org/10.1080/02331934.2014.987776
