Proximal Methods for Elliptic Optimal Control Problems with Sparsity Cost Functional

Received 17 March 2016; accepted 27 May 2016; published 30 May 2016

1. Introduction

A representative formulation of optimal control problems with control costs is the following

(1.1)
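The displayed formulation (1.1) did not survive extraction. A plausible reconstruction, consistent with the tracking-type cost, L²-regularization, and L¹-control cost described in Section 3 (the symbols y_d, α, β, A, and U_ad are our assumptions), is the following sketch:

```latex
\min_{u \in U_{ad}}\; J(y,u) := \frac{1}{2}\,\|y - y_d\|_{L^2(\Omega)}^2
  + \frac{\alpha}{2}\,\|u\|_{L^2(\Omega)}^2 + \beta\,\|u\|_{L^1(\Omega)},
\qquad \text{subject to}\quad A\,y = u \ \text{in } \Omega,\quad y = 0 \ \text{on } \partial\Omega ,
```

where α, β ≥ 0 and U_ad is a set of box-constrained controls; in the bilinear case discussed below, the control enters the elliptic operator multiplicatively rather than as a source term.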

On the other hand, in the field of signal acquisition and reconstruction, l^{1}-based optimization and sparsity have been exploited to successfully recover “functions” from few samples; see, e.g., [8] - [10] .

In this framework, it was shown [11] that l^{1}-based inverse problems in signal recovery can be very efficiently solved by proximal methods. Nowadays, these iterative schemes are the method of choice in magnetic resonance imaging, and a special proximal method called “Fast Iterative Shrinkage-Thresholding Algorithm” (FISTA) [12] is considered the state-of-the-art method for solving finite-dimensional optimization problems of the following form
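The displayed problem did not survive extraction; in the setting of [12] it takes the following form (a hedged reconstruction, with b denoting the observed data):

```latex
\min_{x \in \mathbb{R}^n}\; \frac{1}{2}\,\|A x - b\|_2^2 + \beta\,\|x\|_1 ,
```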

where the rectangular matrix A represents a blur operator.

The purpose of our work is to contribute to the field of PDE-based optimization with control costs by investigating proximal methods in this infinite-dimensional setting. In particular, we aim at implementing and analysing proximal schemes for solving (1.1) that exploit first-order optimality conditions. Our investigation is motivated by the fact that proximal methods may have a computational performance that is comparable to that of semismooth Newton methods. However, in contrast to the latter, proximal schemes do not require the construction of second-order derivatives and the implementation of, e.g., a Krylov solver.

For our investigation, we consider (1.1) with elliptic operators and linear and bilinear control mechanisms. Notice that the latter case has been a much less investigated problem. One of our main contributions is to prove convergence for all variants of the proximal schemes that we discuss in this paper. In particular, we prove an O(1/k^2) convergence rate of the value of the reduced cost functional, where k is the number of proximal iterations. This notion of convergence is used in l^{1}-based optimization and in some application fields [14] .

We remark that many arguments in our analysis are similar to those presented in the finite-dimensional case. However, some additional arguments are necessary in infinite dimensions, especially regarding the structure of our differential constraints and the discussion of our inexact proximal schemes. We refer to [13] for further results concerning the formulation of proximal schemes for infinite-dimensional optimization problems from a different perspective.

In the next section, we discuss linear and bilinear elliptic optimal control problems, where for completeness, some conditions for the existence of a unique control-to-state operator are considered. Section 3 is devoted to optimal control problems with sparsity costs and governed by elliptic equations with linear and bilinear control mechanisms. We discuss conditions for convexity of the bilinear problem and state the optimality conditions. In Section 4, we present a Fast Inexact Proximal method (FIP) that represents an infinite-dimensional extension of the FISTA method. In Section 5, the convergence rate of this method is proven to be O(1/k^2). In Section 6, an inexact semismooth Newton method in function spaces is presented as the state-of-the-art method for comparison purposes. For completeness, the theory of this method is extended to the case of elliptic bilinear control problems. A numerical comparison of the FIP and semismooth Newton methods is presented in Section 7. A section of conclusions completes this work.

2. Elliptic Models with Linear and Bilinear Control Mechanisms

Consider the following boundary value problem

(2.1)

(2.2)

where, with, is a bounded domain and. The operator represents a second-order linear elliptic differential operator of the following form

such that, and satisfies the coercivity condition a.e. in

for some and. For the existence and uniqueness of solutions to (2.1) see ( [15] , Section 6).

Further, we consider the following bilinear elliptic control problem

(2.3)

(2.4)

In both linear and bilinear control settings, we require, with the following set of feasible controls

(2.5)

where,.

Now, we discuss the existence of a unique weak solution to (2.3)-(2.4). For this purpose, we need the Poincaré-Friedrichs lemma and denote with the Poincaré-Friedrichs constant; see, e.g., [15] .

We denote induced by the inner product.

Theorem 2.1. Let, where

(2.6)

Then, there exists a unique weak solution to the bilinear elliptic problem (2.3)-(2.4) and the following property holds

(2.7)

Proof. The proof is immediate using the Lemma of Lax-Milgram and the following result

With, we have that and therefore. In the fourth line, the Poincaré-Friedrichs inequality was used. Hence,. □

Remark 2.1. In the case of, , and, including homogeneous Dirichlet boundary conditions, we have, and such that we can ensure invertibility for.

Remark 2.2. In order to ensure a unique solution, we require condition (2.6) for the choice of in the bilinear case.

Next, we recall the following theorem stating higher regularity of solutions to (2.3)-(2.4); see ( [18] , Theorem 4.3.1.4).

Theorem 2.2. Let, , be a convex and bounded polygonal or polyhedral domain. If in addition to the assumptions of Theorem 2.1, we have that, then and the following holds

(2.8)

for some appropriate constants that only depend on.

Remark 2.3. Because can be embedded in for [19] and using (2.8), we obtain

(2.9)

Theorem 2.1 and Theorem 2.2 ensure the existence of a unique control-to-state operator

(2.10)

where in the linear case represents the unique solution to (2.1) and in the bilinear case is the unique solution to (2.3). In the following, we use the expression when it is valid for both the linear and the bilinear systems.

Remark 2.4. The control-to-state operator is not Fréchet-differentiable in the topology since for every there is always an with such that and therefore it is not necessarily defined. However, we only need the following weaker form of differentiability, which is a directional differentiability in all in the directions for. This is called Q-differentiability; see [16] .

Definition 2.1. (Q-differentiability). Let be a convex set and. Then T is called Q-differentiable in, if there exists a mapping, such that for all the following holds

In the following, we omit the index U and write.

The Q-derivatives of have the following properties.

Lemma 2.3. The control-to-state-operator is at least two times Q-differentiable in with and its derivatives have the following properties for all directions:

i) is the solution of

(2.11)

ii) is the solution of

(2.12)

iii) The following inequalities hold

(2.13)

(2.14)

Proof. Parts (i) and (ii) can be shown by direct calculation (see [16] , Lemma 2.9). It remains to prove part (iii). If is a solution to

for and, by using (2.9), we obtain

(2.15)

where the constants depend on the measure of and not on y. Therefore, we obtain (2.13) and conclude. Furthermore, let be a solution to the following problem

for and. With the same arguments as above and using (2.15), we obtain

Therefore, we obtain (2.14), which completes the proof. □

3. Elliptic Optimal Control Problems with Sparsity Cost Functional

In this section, we discuss optimal control problems governed by the linear- and bilinear-control elliptic systems discussed in the previous section. We consider the following cost functional

(3.1)

where, , and. This functional is made of a Fréchet-differentiable classical tracking type cost with -regularization and a nondifferentiable -control cost. Using the control- to-state operator (2.10), we have the following reduced optimal control problem

(3.2)

The nondifferentiable part is convex. Therefore, in order to state convexity of the reduced

functional, we investigate the second-derivative of the differentiable part.

We have

In particular, in the linear case, we have

(3.3)

We conclude that the reduced functional is strictly convex in the linear case.

In the bilinear case, we have a non-convex optimization problem. However, local convexity can be guaranteed under some conditions. To be specific, we choose the sufficient condition stated in the following lemma.

Lemma 3.1. Let, if the following inequality holds

(3.4)

then the reduced functional is strictly convex in a neighborhood of.

Proof. Since is convex, we have to prove that is strictly

convex in u. Therefore we show that the reduced Hessian is positive definite in as follows

and thus is strictly convex in u. □

We remark that the result of Lemma 3.1 is well known. It expresses local convexity of the reduced objective when the state function is sufficiently close to the target and the weight of the quadratic cost of the control is sufficiently large. Indeed, local convexity may hold under much weaker assumptions. However, since our focus is the investigation of proximal schemes, we make the following strong assumption.

Assumption 1. We assume that (3.4) holds for all.

Because of Lemma 2.3, this assumption holds if the regularization parameter is chosen sufficiently large.

In the next step, the first-order optimality conditions for (3.2) are derived. First, we need the definition of the subdifferential.

Definition 3.1. Let H be a Hilbert space and be convex. We call the set-valued mapping given by

the subdifferential of F in u.

From ( [20] , Remark 3.2), we obtain that is a solution of (3.2), if and only if there exists a such that

(3.5)

where denotes the adjoint operator. From (3.5), one can derive the optimality system by using the Lagrange multipliers. We have the following theorem (see [4] , Theorem 2.1).

Theorem 3.2. The optimal solution of (3.2) is characterized by the existence of such that

(3.6)

(3.7)

(3.8)

(3.9)

(3.10)

(3.11)

If one introduces the parameter, it is shown in [4] that conditions (3.7)-(3.11) are equivalent to

(3.12)

where

where is arbitrary. With this setting, the optimality system (3.6)-(3.11) reduces to the following

(3.13)

(3.14)

In the linear-control case, the Equation (3.13) becomes the following

(3.15)

where. By setting and, (3.15) becomes.

We summarize the previous considerations into the following theorem.

Theorem 3.3. (Linear optimality conditions) The optimal solution to (3.2) in the linear control case is characterized by the existence of the dual pair such that

(3.16)

(3.17)

(3.18)

(3.19)

Furthermore, the reduced gradient and the reduced Hessian of are given by

(3.20)

Notice that with an abuse of notation, we denote the reduced Hessian with, which is also used to denote the second derivative operator.

For the bilinear-control system, we have and therefore

such that (3.13) becomes the following

(3.21)

By setting and this can be written as follows,.

We summarize the previous considerations into the following theorem.

Theorem 3.4. (Bilinear optimality system) The optimal solution to (3.2) in the bilinear control case is characterized by the existence of the dual pair such that

(3.22)

Furthermore, the explicit reduced gradient and the reduced Hessian of are given by

(3.23)

and

4. Proximal Methods for Elliptic Control Problems

In this section, we discuss first-order proximal methods to solve our linear and bilinear optimal control problems. The starting point to discuss proximal methods consists of identifying a smooth and a nonsmooth part in the reduced objective. That is, we consider the following optimization problem

(4.1)

where, we assume

(4.2)

(4.3)

(4.4)

where. Notice that our optimal control problem (3.2) has this additive structure where (4.2) holds for

and is at least two times Q-differentiable, it is convex under

appropriate conditions discussed in the previous section, and it has Lipschitz-continuous gradient that we prove in the following lemma.

Lemma 4.1. The functional has a Lipschitz-continuous gradient for

(linear-control case) and for (bilinear-control case).

Proof. For the linear-control case, we have

such that we have the Lipschitz-constant.

For the bilinear-control case, we use the mean value theorem. There exists a such that

for the last inequality, we use (2.7), (2.13), (2.14), which completes the proof. □

The following lemma is essential in the formulation of proximal methods.

Lemma 4.2. Let be Q-differentiable with respect to, and it has a Lipschitz continuous gradient with Lipschitz constant. Then for all, the following holds

(4.5)

Proof.

□

Notice that represents the smallest value of L such that (4.5) is satisfied. We remark that the discussion that follows is valid for any L as in Lemma 4.2. However, as we discuss below, the efficiency of our proximal schemes depends on how close the chosen L is to this minimal, and hence optimal, value. Now, since this value is usually not available analytically, we discuss and implement below some numerical strategies for determining a sufficiently accurate approximation of it. In particular, we consider a power iteration [21] and the backtracking approach discussed in Remark 5.1.

Further, notice that also in the case of choosing, our proximal scheme still converges with rate (resp.) times a convergence constant. However, this convergence constant grows considerably as L becomes larger, and therefore the convergence of the proximal method becomes noticeably slower. On the other hand, if L is chosen smaller than the Lipschitz constant, then convergence cannot be guaranteed.

The strategy of the proximal scheme is to minimize an upper bound of the objective functional at each iteration, instead of minimizing the functional directly. Lemma 4.2 gives us the following upper bound for all. We have

where, we have equality if. Furthermore, we have the following equation

(4.6)

Now, consider (4.6) and recall that. We have the following lemma.

Lemma 4.3. The following equation holds

where the projected soft thresholding function is defined as follows

Proof. There exists a, the subdifferential of such that the solution

fulfills the following variational inequality; see, e.g., ( [20] , Remark 3.2);

(4.7)

Now, we show that fulfills (4.7). The following case analysis is to be understood pointwise. We have

・ :

It follows that and therefore such that

.

・ :

It follows that and such that

.

・ :

It follows that and such that

.

・ :

It follows that and therefore such that

・ :

It follows that and therefore such that

.

□

Based on this lemma, we conclude that the solution to (4.6) is given by

thus obtaining an approximation to the optimal u sought. Therefore we can use this result to define an iterative scheme as follows

starting from a given. The resulting algorithm implements a proximal scheme as follows
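Since the algorithm listing did not survive extraction, the iteration just described can be sketched numerically. The following is a minimal Python sketch of one proximal (iterative soft-thresholding) scheme of this type; the quadratic smooth part, its data, and the box bounds are hypothetical stand-ins for the discretized reduced functional, not the paper's actual problem:

```python
import numpy as np

def soft_threshold(v, tau, lo=-1.0, hi=1.0):
    """Projected soft-thresholding: shrink by tau, then project onto [lo, hi]."""
    return np.clip(np.sign(v) * np.maximum(np.abs(v) - tau, 0.0), lo, hi)

def proximal_step(u, grad_f, L, beta, lo=-1.0, hi=1.0):
    """One proximal update: gradient step on the smooth part with stepsize 1/L,
    then projected soft-thresholding with threshold beta/L."""
    return soft_threshold(u - grad_f(u) / L, beta / L, lo, hi)

# Toy smooth part f(u) = 0.5*||M u - b||^2 (hypothetical stand-in for the
# discretized tracking functional); its gradient is M^T (M u - b).
M = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([1.0, 0.1])
grad_f = lambda u: M.T @ (M @ u - b)
L = np.linalg.norm(M.T @ M, 2)   # Lipschitz constant of grad f (here 4.0)
beta = 0.2

u = np.zeros(2)
for _ in range(200):
    u = proximal_step(u, grad_f, L, beta)
# u converges to [0.45, 0.0]; the second component is thresholded exactly to zero,
# illustrating the sparsity-promoting effect of the L^1 term.
```

The exact zero in the second component is the hallmark of the soft-thresholding step: components whose gradient-mapped value falls below beta/L are set to zero rather than merely shrunk.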

This scheme is discussed in [9] [12] for the case of finite-dimensional optimization problems. Notice that the iterated thresholding scheme discussed in [9] coincides with Algorithm 1 for the special case. The convergence results for Algorithm 1 presented in [12] can be extended to our elliptic control problems, using the theoretical results presented above. Therefore we can state the following theorem.

Theorem 4.4. Let be a sequence generated by Algorithm 1 and be the solution to (3.2) with linear- or bilinear-control elliptic equality constraints. Then, for every the following holds

In [22] , an acceleration strategy for proximal methods applied to convex optimization problems fulfilling (4.4) is formulated, which improves the rate of convergence of these schemes from O(1/k) to O(1/k^2). Specifically, one defines the sequence with

(4.8)

and

(4.9)

Correspondingly, the optimization variable is updated by the following

This procedure is summarized in the following algorithm
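Assuming that (4.8)-(4.9) follow the FISTA updates of [12] (the displayed formulas did not survive extraction), the acceleration step can be sketched as follows:

```python
import numpy as np

def fista_momentum(t):
    """Momentum update (4.8) of the FISTA scheme [12]:
    t_1 = 1 and t_{k+1} = (1 + sqrt(1 + 4 t_k^2)) / 2."""
    return (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0

def extrapolate(u_new, u_old, t, t_next):
    """Extrapolation step: overshoot along the last proximal step,
    u^k + ((t_k - 1) / t_{k+1}) * (u^k - u^{k-1})."""
    return u_new + ((t - 1.0) / t_next) * (u_new - u_old)

# The momentum sequence grows linearly, t_k >= (k + 1)/2 (cf. Lemma 5.6):
t = 1.0
ts = [t]
for _ in range(9):
    t = fista_momentum(t)
    ts.append(t)
```

The linear growth of the momentum sequence is precisely what upgrades the O(1/k) rate of the plain proximal scheme to O(1/k^2).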

The following convergence result represents an extension of ( [12] , Theorem 4.4). We have

Theorem 4.5. Let be a sequence generated by Algorithm 2 and be the solution of (3.2) with linear- or bilinear-control elliptic equality constraints. Then, for every the following holds

Algorithm 1 and Algorithm 2 require the calculation of

However, the exact inversion of a discretized elliptic differential operator A may become too expensive. Therefore one has to use iterative methods; e.g., the conjugate gradient method [23] . For this reason, we discuss an inexact version of the proximal scheme, where the equality constraints and the corresponding adjoint equations are solved up to a given tolerance quantified by. In the following, we denote with the inexact gradient that corresponds to an inexact inversion of the equation, resp., that results in an approximated state variable, resp., in the following sense

Hence, there exists an with such that

(4.10)

We denote the inexact inversion method for the problem, with an error, with. With this notation, the inexact gradient computation is illustrated in Algorithm 3.
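The inexact gradient computation can be sketched for the linear-control case as follows. Since Algorithm 3 did not survive extraction, the sign conventions of the adjoint equation and the form of the smooth gradient (alpha*u + p) are our assumptions, and the small matrix is a hypothetical stand-in for the discretized elliptic operator:

```python
import numpy as np

def cg(A, b, tol):
    """Plain conjugate gradient for a symmetric positive definite matrix A,
    stopped when the residual norm drops below tol (the inexactness epsilon)."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    while np.sqrt(rs) > tol:
        Ap = A @ p
        a = rs / (p @ Ap)
        x += a * p
        r -= a * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def inexact_gradient(A, u, y_d, alpha, eps):
    """Sketch: solve the state and adjoint equations only up to tolerance eps
    and assemble the smooth part of the reduced gradient."""
    y = cg(A, u, eps)          # state equation  A y = u, solved inexactly
    p = cg(A, y - y_d, eps)    # adjoint equation A p = y - y_d, solved inexactly
    return alpha * u + p

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # hypothetical SPD discretized operator
u = np.array([1.0, 2.0])
y_d = np.array([0.0, 0.0])
g = inexact_gradient(A, u, y_d, alpha=0.1, eps=1e-10)
```

Tightening eps drives the inexact gradient toward the exact one, which is what the error bound of Lemma 5.1 quantifies.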

With this preparation, we formulate our inexact proximal (IP) scheme with Algorithm 4.

We also formulate the accelerated (fast) version of our IP scheme in Algorithm 5. We refer to it as the FIP method.

5. Convergence Analysis of Inexact Proximal Methods

In this section, we investigate the convergence of our IP and FIP schemes. Notice that our analysis differs from that presented in [12] where finite-dimensional problems and exact inversion are considered. We start by investigating the error of the inexact gradient.

Lemma 5.1. The following estimate holds

for some c > 0.

Proof. In the linear case, we have. Using (4.10) there exist the errors

with such that

(5.1)

where

(5.2)

In the bilinear case, we have. Furthermore, Theorem

2.1 implies that the solution of the equation has the following property

(5.3)

We also have.

Using (4.10) there exist the errors with such that

where

(5.4)

For the last three inequalities, we use (5.3), (2.7), and. □

We refer to the estimation error of the inexact gradient in step k as follows

Now, we define

and

(5.5)

such that one step of Algorithm 4, resp. Algorithm 5, can be written as follows

In order to prove the convergence of the IP method, we need the following two lemmas.

Lemma 5.2. For any, one has iff there exists, the subdifferential of, such that

(5.6)

Proof. This is immediate from the variational inequality of (5.5). For a proof see, e.g., [20] . □

Lemma 5.3. Let and, then for any, we have

Proof. From (4.5), we have

and therefore

(5.7)

Now since and are convex, we have

Summing the above inequalities gives

(5.8)

so using (5.6), (5.8), and the definition of in (5.7) gives the following

□

Now, we prove a convergence rate for Algorithm 4 (IP scheme).

Theorem 5.4. Let be the sequence generated by Algorithm 4 and be the solution of (3.2) with linear or bilinear elliptic equality constraints; let c be determined by (5.2) resp. (5.4). Then for any, we have

(5.9)

Proof. Using Lemma 5.3 with, and we obtain

Summing this inequality over gives

(5.10)

Using again Lemma 5.3 with, we obtain

Multiplying this inequality by n and summing again over gives

which simplifies to the following

(5.11)

Adding (5.10) and (5.11) together, we get

and hence with and, it follows that

□

Next, we present a convergence result for the FIP method. For this purpose, we need the following lemma.

Lemma 5.5. Let, and be the sequences generated by Algorithm 5, let be the error of the inexact gradient, and let be the solution to (3.2), then for any, we have

with,.

Proof. We apply Lemma 4.2 at the points and likewise at the points. We obtain the following

where we used the fact that. Now, we multiply the first inequality above by and add it to the second inequality to obtain the following

Multiplying this inequality by and using, which holds due to (4.8), we obtain

Applying the Pythagoras relation, to the right-hand side of the last inequality with, we obtain

Therefore, with (see (4.9)) and defined as

it follows that

□

We also have the following lemmas.

Lemma 5.6. The positive sequence generated by the FIP scheme via (4.8) with satisfies for all.

Proof. The proof is immediate by mathematical induction. □

Lemma 5.7. Let and be positive sequences of reals and be a sequence of reals satisfying

Then,.

Proof. The proof is immediate by mathematical induction. □

Now, we can prove a convergence rate of O(1/k^2) for Algorithm 5 (FIP scheme).

Theorem 5.8. Let be the sequence generated by Algorithm 5, let be the solution to (3.2) with linear or bilinear elliptic equality constraints; let c be determined by (5.2) resp. (5.4). Then for any, the following holds

(5.12)

Proof. Let us define the quantities

As in Lemma 5.5, we define. Then, by Lemma 5.5, the following holds for every

and hence assuming that holds true, invoking Lemma 5.7, we obtain

which combined with (Lemma 5.6) gives the following

(5.13)

Furthermore with Lemma 5.6 and, we have that

which combined with (5.13) and gives the following

What remains to be proved is the validity of the relation. Since, we have

Applying Lemma 4.2 to the points and, we get

that is holds true. □

Remark 5.1. The IP and FIP methods converge also replacing L with an upper bound of it. In particular, we can prove convergence of the FIP method using a backtracking stepsize rule for the Lipschitz constant (Step 1 in Algorithm 5) as in [12] .

We complete this section by formulating a fast inexact proximal scheme where the Lipschitz constant L is obtained by forward tracking (we nevertheless call it backtracking, as in [12] ), thus avoiding any need to compute the reduced Hessian. Our fast inexact proximal backtracking (FIPB) method is presented in Algorithm 6.
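A sketch of such a backtracking rule, following [12] under our notational assumptions (f is the smooth part, prox the proximal map, and the test data are hypothetical):

```python
import numpy as np

def backtrack(f, grad_f, prox, y, L0=1.0, eta=2.0):
    """Backtracking rule of [12]: increase L by the factor eta until the
    quadratic upper bound (4.5) holds at the candidate proximal point,
    so no Lipschitz constant (reduced Hessian) has to be computed."""
    L = L0
    g = grad_f(y)
    while True:
        u = prox(y - g / L, L)          # proximal step with the current L
        d = u - y
        if f(u) <= f(y) + g @ d + 0.5 * L * (d @ d):
            return L, u
        L *= eta

# Hypothetical smooth part f(u) = 2*||u||^2 (curvature 4) with no nonsmooth
# term, so the proximal map is the identity:
f = lambda u: 2.0 * (u @ u)
grad_f = lambda u: 4.0 * u
prox = lambda v, L: v
L, u = backtrack(f, grad_f, prox, np.array([1.0]))
# Backtracking stops at L = 4.0, the true curvature of f.
```

Because L only increases, the accepted value is an upper bound of the true Lipschitz constant, which is exactly the regime in which convergence is retained (Remark 5.1).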

6. The Inexact Semismooth Newton Method

We consider the semismooth Newton method as a benchmark scheme for solving elliptic non-smooth optimal control problems; see, e.g., [4] - [6] . This method is proven to be equivalent to the primal-dual active set method in [24] . The inexact semismooth Newton (ISSN) method is presented in [25] for finite-dimensional problems. In this section, we discuss the ISSN method for infinite-dimensional optimization problems and use it for comparison with our inexact proximal schemes. To support our use of the ISSN scheme to solve bilinear control problems, we extend two theoretical results in [3] [4] . For the analysis that follows, we need the following definition.

Definition 6.1. Let X and Y be Banach spaces, be open and be a nonlinear mapping. We say that is generalized differentiable in an open subset if there exists a set-valued mapping with for all such that

(6.1)

for every and for every. We call the generalized differential and every a generalized derivative.

This definition is similar to the semismoothness stated in [3] and also known under the name “slant differentiability”; see, e.g., [24] . Now, we discuss the solution of the following nonlinear equation. We have the following theorem.

Theorem 6.1. ( [24] , Theorem 1.1) Suppose that is a solution to and that is generalized differentiable in an open neighborhood U containing with a generalized derivative. If is

invertible for all and is bounded, then the semismooth Newton (SSN) iteration

converges superlinearly to, provided that is sufficiently small.

An inexact version of the SSN scheme discussed in this theorem is formulated in ( [3] , Algorithm 3.19), where the update to is obtained as follows. Choose a boundedly invertible operator and compute. For this scheme, superlinear convergence is proven in ( [3] , Theorem 3.20), provided that there exists a such that

However, this procedure is difficult to realize in practice. For this reason, in our ISSN scheme, the “exact” update step with, as discussed in [24] , is replaced by

with satisfying the following inequality

(6.2)

Our ISSN scheme is given in Algorithm 7.
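As an illustration of the generalized-derivative Newton iteration of Definition 6.1 (not of the operator (6.7) itself), consider a toy piecewise-linear system; all data here are hypothetical:

```python
import numpy as np

def ssn(F, G, x0, tol=1e-10, maxit=50):
    """Semismooth Newton: x_{k+1} = x_k - G(x_k)^{-1} F(x_k), where G returns
    an element of the generalized differential (6.1)."""
    x = x0.copy()
    for _ in range(maxit):
        Fx = F(x)
        if np.linalg.norm(Fx) <= tol:
            break
        x = x - np.linalg.solve(G(x), Fx)
    return x

# Toy nonsmooth system F(x) = max(x, 0) + x - b, componentwise, b = (1, -1).
# A generalized derivative of max(x, 0) is 1 where x > 0 and 0 elsewhere.
b = np.array([1.0, -1.0])
F = lambda x: np.maximum(x, 0.0) + x - b
G = lambda x: np.diag(np.where(x > 0.0, 2.0, 1.0))
x = ssn(F, G, np.zeros(2))
# Converges in a few steps to x = (0.5, -1.0).
```

For piecewise-linear equations of this kind, the iteration identifies the active branch in finitely many steps, which mirrors the active-set interpretation of the SSN method mentioned above.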

On the basis of the proof of Theorem 3.20 in [3] , we prove the following theorem that states convergence of Algorithm 7. We have

Theorem 6.2. Suppose that is a solution to and that is generalized differentiable and Lipschitz continuous in an open neighborhood U containing with a generalized derivative. If is

invertible for all and is bounded, then Algorithm 7 converges superlinearly to, provided that is sufficiently small.

Proof. Let and. Furthermore, let be so small that and is Lipschitz continuous in with. Now, we show inductively that for all k. So we assume that for some. Then there holds

. We estimate the Y-norm of as

(6.4)

Next, using, we obtain

(6.5)

This result, the generalized differentiability of at, and (6.4) give the following

(6.6)

Hence, for sufficiently small, we have, with

, and thus

This gives, which inductively proves that in Y. We conclude from (6.6) the following, which completes the proof. □

Our purpose is to solve the nonlinear and nonsmooth equation system (3.13)-(3.14) by the semismooth Newton iteration. We introduce the operator

where is the Sobolev embedding (see [4] and ( [19] , Theorem 5.4)) of into with. This embedding is necessary to show that the function defined in (6.7) is generalized differentiable. Now, by using from (3.13) and choosing, Equation (3.14) becomes, where

(6.7)

The function is generalized differentiable (see ( [4] , Theorem 4.2) for the linear case; the bilinear case is analogous) and a generalized derivative is given by

(6.8)

where and.

Using Theorem 6.2, the following theorem guarantees the superlinear convergence of the semismooth Newton method applied to our problems. To prove this we extend the proof of Theorem 4.3 in [4] .

Theorem 6.3. If (3.4) is fulfilled, then is invertible for all and is

bounded.

Proof. The linear-control case is investigated in [4] , so we focus on the bilinear case. Define, and for and the restriction operator by. The corresponding adjoint operator is the extension-by-zero operator. We assume that. From (6.8) we obtain that. Thus, satisfies

(6.9)

Now, we define and

for. We use

to see that (6.9) is equivalent to.

Using and (3.4) we have coercivity of a for and therefore the

Lax-Milgram-Lemma can be applied to show that (6.9) admits a unique solution. Moreover, this

solution satisfies, with a constant C independent of u. For the last inequality,

we use the fact that is bounded due to the boundedness of, and as shown in (2.7), (2.13), and (2.14). □

7. Numerical Experiments

In this section, we present results of numerical experiments to validate the computational performance of our inexact proximal methods and to demonstrate the convergence rate of O(1/k^2) proved in Theorem 5.8. In the following procedures, for validation purposes, we formulate control problems for which we know the exact solution. We have

Procedure 1. (Linear case)

1) Choose and arbitrary.

2) Set

3) Set, , and.

Lemma 7.1. Procedure 1 provides a solution of the optimal control problem (3.2) with linear-control elliptic equality constraints.

Proof. We show that the optimality conditions (3.16)-(3.19) in Theorem 3.3 are fulfilled. Conditions (3.16)-(3.18) are obviously fulfilled because of 3) in Procedure 1. Now, we consider different cases to show (3.19):

・ : From 2) we have and from 3) and therefore

・ :

*: From 2) we have and from 3) we have, therefore

*: From 2) we have and from 3) we have, therefore

・

*: From 2) we have and from 3) we have. So

*: From 2) we have and from 3) we have, therefore

□

Procedure 2. (Bilinear case)

1) Choose and arbitrary.

2) Set.

3) Set, , and.

Lemma 7.2. Procedure 2 provides a solution to the optimal control problem (3.2) with bilinear- control elliptic equality constraints.

Proof. The proof is similar to the one of the linear case. □

Next, we specify the elliptic operator, the domain of computation, the choice of and, and the optimi- zation and numerical parameters. We consider the following examples.

Case 1. (1 dimensional), , , and. We discretize with gridsize; A is discretized by second-order finite differences. Then we have, and such that (2.6) holds. The results are shown in Table 1.

Case 2. (2 dimensional), , , and. We discretize with gridsize; A is discretized by second-order finite differences. Then we have, and such that (2.6) holds. The results are shown in Table 2.

We compare the FIP, FIPB, and ISSN schemes in terms of computational time. In the FIP method, we estimate an approximation to the Lipschitz constant with a power iteration. This power iteration is stopped if the difference between two iterates of the norm is less than or equal to a tolerance of. For the FIPB method, we use backtracking with and. All algorithms are stopped if. We can see in Table 1 and Table 2 that the computational performance of the FIP and FIPB methods is comparable to that of the ISSN method.
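The power-iteration estimate of the Lipschitz constant mentioned above can be sketched as follows; the diagonal matrix is a hypothetical stand-in for the discretized reduced Hessian:

```python
import numpy as np

def power_iteration(apply_H, n, tol=1e-12, maxit=1000, seed=0):
    """Estimate the largest eigenvalue of a symmetric positive operator
    (a stand-in for the reduced Hessian, whose largest eigenvalue bounds the
    Lipschitz constant of the gradient); stop when two successive eigenvalue
    estimates differ by at most tol."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(n)
    v /= np.linalg.norm(v)
    lam_old = 0.0
    for _ in range(maxit):
        w = apply_H(v)
        lam = v @ w                 # Rayleigh quotient (v is normalized)
        v = w / np.linalg.norm(w)
        if abs(lam - lam_old) <= tol:
            break
        lam_old = lam
    return lam

H = np.diag([4.0, 1.0, 0.5])        # hypothetical discretized Hessian
L_est = power_iteration(lambda v: H @ v, 3)
```

Only matrix-vector products with the Hessian are needed, so the scheme fits naturally into a matrix-free PDE-constrained setting.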

Table 1. Example 1: Comparison of the FIP, FIPB and ISSN methods.

Table 2. Example 2: Comparison of the FIP, FIPB, and ISSN methods.

In order to validate the convergence rate, the theoretical upper bound of Theorem 5.8 and the actual error of the functional for Example 1 and Example 2 with and, are plotted in Figure 1. We see that the observed convergence may be faster than the theoretical prediction.

We conclude this section by considering challenging linear- and bilinear-control cases for which the exact solutions are not known. In these cases, the target function is not attainable. We have

Case 3. (Linear case), , , , and. We discretize with gridsize. A is discretized by second-order finite differences.

Case 4. (Bilinear case), , , , and. We discretize with gridsize. A is discretized by second-order finite differences.

In Figure 2, we present the optimal controls obtained for Cases 3 and 4, respectively. Notice that the controls obtained with the FIP, FIPB, and ISSN schemes are indistinguishable. We observe that in the case of a small there is an abrupt change between and, whereas for bigger the change is continuous. We also see that by increasing the support of u decreases, as expected. The different computational times of the FIP, FIPB, and ISSN schemes are also given in the figure. We see that the FIPB scheme may outperform the ISSN scheme and vice versa. We also have a case where the ISSN scheme has difficulty to converge; see Figure 2, test case (d). Notice that very similar results are also obtained using a globalized version [7] of the ISSN scheme. These results and further results of numerical experiments demonstrate that fast inexact proximal schemes represent an effective alternative to semismooth Newton methods.

8. Conclusion

Inexact proximal schemes for solving linear and bilinear elliptic optimal control problems were discussed. A complete analysis of these methods was presented and a convergence rate of O(1/k^2) was proven. For benchmarking purposes, the proposed inexact proximal schemes were compared to an inexact semismooth Newton method. Results of numerical experiments demonstrated the computational effectiveness of inexact proximal schemes and successfully validated the theoretical estimates.

Figure 1. Validation of the theoretical upper bound (Theorem 5.8).

Figure 2. Optimal controls u for the Case 3 (top) and Case 4 (bottom).

Acknowledgements

Supported in part by the Interdisziplinäres Zentrum für Klinische Forschung der Universität Würzburg (IZKF); Project F-254: Parallel Multigrid Imaging and Compressed Sensing for Dynamic 3D Magnetic Resonance Imaging. This publication was supported by the Open Access Publication Fund of the University of Würzburg.

References

[1] Borzì, A. and Schulz, V. (2011) Computational Optimization of Systems Governed by Partial Differential Equations. SIAM, Philadelphia.

http://dx.doi.org/10.1137/1.9781611972054

[2] Tröltzsch, F. (2009) Optimale Steuerung partieller Differentialgleichungen. Theorie, Verfahren und Anwendungen. Vieweg.

http://dx.doi.org/10.1007/978-3-8348-9357-4

[3] Ulbrich, M. (2011) Semismooth Newton Methods for Variational Inequalities and Constrained Optimization Problems in Function Spaces. SIAM, Philadelphia.

http://dx.doi.org/10.1137/1.9781611970692

[4] Stadler, G. (2009) Elliptic Optimal Control Problems with L^{1}-Control Cost and Applications for the Placement of Control Devices. Computational Optimization and Applications, 44, 159-181.

http://dx.doi.org/10.1007/s10589-007-9150-9

[5] Wachsmuth, G. and Wachsmuth, D. (2010) Convergence and Regularization Results for Optimal Control Problems with Sparsity Functional. ESAIM: Control Optimisation and Calculus of Variations, 17, 858-886.

http://dx.doi.org/10.1051/cocv/2010027

[6] Casas, E., Herzog, R. and Wachsmuth, G. (2012) Optimality Conditions and Error Analysis of Semilinear Elliptic Control Problems with L^{1} Cost Functional. SIAM Journal on Optimization, 22, 795-820.

http://dx.doi.org/10.1137/110834366

[7] Ciaramella, G. and Borzì, A. (2016) A LONE Code for the Sparse Control of Quantum Systems. Computer Physics Communications, 200, 312-323.

http://dx.doi.org/10.1016/j.cpc.2015.10.028

[8] Candes, E., Romberg, J. and Tao, T. (2006) Stable Signal Recovery from Incomplete and Inaccurate Measurements. Communications on Pure and Applied Mathematics, 59, 1207-1223.

http://dx.doi.org/10.1002/cpa.20124

[9] Defrise, M., Daubechies, I. and Mol, C.D. (2004) An Iterative Thresholding Algorithm for Linear Inverse Problems with a Sparsity Constraint. Communications on Pure and Applied Mathematics, 57, 1413-1457.

http://dx.doi.org/10.1002/cpa.20042

[10] Donoho, D.L. and Elad, M. (2003) Maximal Sparsity Representation via l^{1} Minimization. Proceedings of the National Academy of Sciences of the United States of America, 100, 2197-2202.

http://dx.doi.org/10.1073/pnas.0437847100

[11] Combettes, P.L. and Wajs, V.R. (2005) Signal Recovery by Proximal Forward-Backward Splitting. Multiscale Modeling & Simulation, 4, 1168-1200.

http://dx.doi.org/10.1137/050626090

[12] Beck, A. and Teboulle, M. (2009) A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems. SIAM Journal on Imaging Sciences, 2, 183-202.

http://dx.doi.org/10.1137/080716542

[13] Lorenz, D.A., Bredies, K. and Reiterer, S. (2015) Minimization of Non-Smooth, Non-Convex Functionals by Iterative Thresholding. Journal of Optimization Theory and Applications, 165, 78-112.

http://dx.doi.org/10.1007/s10957-014-0614-7

[14] Wech, T., Seiberlich, N., Schindele, A., Grau, V., Diffley, L., Gyngell, M.L., Borzì, A., Köstler, H. and Schneider, J.E. (2016) Development of Real-Time Magnetic Resonance Imaging of Mouse Hearts at 9.4 Tesla—Simulations and First Application. IEEE Transactions on Medical Imaging, 35, 912-920.

http://dx.doi.org/10.1109/TMI.2015.2501832

[15] Evans, L.C. (2010) Partial Differential Equations. 2nd Edition, American Mathematical Society, Providence.

http://dx.doi.org/10.1090/gsm/019

[16] Kröner, A. and Vexler, B. (2009) A Priori Error Estimates for Elliptic Optimal Control Problems with a Bilinear State Equation. Journal of Computational and Applied Mathematics, 230, 781-802.

http://dx.doi.org/10.1016/j.cam.2009.01.023

[17] Lions, J. (1971) Optimal Control of Systems Governed by Partial Differential Equations. Springer, Berlin.

[18] Grisvard, P. (1985) Elliptic Problems in Nonsmooth Domains. Pitman Publishing, Boston.

[19] Adams, R.A. (1975) Sobolev Spaces. Pure and Applied Mathematics. Academic Press, Inc., Cambridge.

[20] Ekeland, I. and Témam, R. (1999) Convex Analysis and Variational Problems. Society for Industrial and Applied Mathematics, Philadelphia.

http://dx.doi.org/10.1137/1.9781611971088

[21] Wilkinson, J.H. (1988) The Algebraic Eigenvalue Problem. Oxford University Press, Oxford.

[22] Nesterov, Y.E. (1983) A Method for Solving the Convex Programming Problem with Convergence Rate O(1/k^{2}). Doklady Akademii Nauk SSSR, 269, 543-547.

[23] Hestenes, M.R. and Stiefel, E. (1952) Methods of Conjugate Gradients for Solving Linear Systems. Journal of Research of the National Bureau of Standards, 49, 409-436.

http://dx.doi.org/10.6028/jres.049.044

[24] Hintermüller, M., Ito, K. and Kunisch, K. (2002) The Primal-Dual Active Set Strategy as a Semismooth Newton Method. SIAM Journal on Optimization, 13, 865-888.

http://dx.doi.org/10.1137/S1052623401383558

[25] Martínez, J. and Qi, L. (1995) Inexact Newton Methods for Solving Nonsmooth Equations. Journal of Computational and Applied Mathematics, 60, 127-145.