On the Stable Method Computing Values of Unbounded Operators
Abstract: Unbounded operators can transform arbitrarily small vectors into arbitrarily large vectors, a phenomenon known as instability. Stabilization methods strive to approximate the value of an unbounded operator at a point by applying a family of bounded operators to rough approximate data that do not necessarily lie within the domain of the unbounded operator. In this paper we are concerned with a stable method of computing values of unbounded operators subject to perturbations, and the stability of this method is established.

1. Introduction

The stable computation of values of unbounded operators is one of the most important problems in computational mathematics. Indeed, let A be a linear operator from X into Y with domain $D\left(A\right)\subset X$ and range $R\left(A\right)\subset Y$, where X and Y are normed spaces and A is unbounded, that is, there exists a sequence of elements ${x}_{n}\in D\left(A\right),n=1,2,\cdots$, such that $‖A{x}_{n}‖\to +\infty$ as $n\to \infty$. Let ${x}_{0}\in D\left(A\right)$ and ${y}_{0}=A{x}_{0}$. We put ${x}_{n,\delta }={x}_{0}+\delta {x}_{n}$, where $\delta$ is an arbitrarily small number. Let ${y}_{n,\delta }=A{x}_{n,\delta }$. Then

$‖{y}_{n,\delta }-{y}_{0}‖=\delta ‖A{x}_{n}‖\to +\infty ,\forall \delta >0,$

while $‖{x}_{n,\delta }-{x}_{0}‖=\delta$ may be arbitrarily small.

Therefore, the problem of computing values of an operator is, in the case considered, unstable. Moreover, if we admit arbitrary $\delta$ -approximations to the element ${x}_{0}$ in X, that is, elements ${x}_{\delta }\in X$ with $‖{x}_{\delta }-{x}_{0}‖\le \delta$, we see that the operator A may not even be defined on the elements ${x}_{\delta }$, that is, ${x}_{\delta }\notin D\left(A\right)$; and even if ${x}_{\delta }\in D\left(A\right)$, it may happen that $A{x}_{\delta }↛A{x}_{0}$ as $\delta \to 0$, since the operator A is unbounded.
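This instability is easy to observe numerically. The sketch below is an illustration not taken from the paper: the grid, the perturbation frequencies, and the use of a central-difference derivative are our assumptions. The data error stays at $\delta$ while the error in the derivative grows with the frequency n.

```python
import numpy as np

# A = d/dt as a model unbounded operator, applied on a fine grid.
t = np.linspace(0.0, 2.0 * np.pi, 2001)

def A(x):
    return np.gradient(x, t)  # central-difference derivative

x0 = np.sin(t)          # exact data, with A x0 = cos(t)
delta = 1e-3            # size of the data perturbation
errs = []
for n in (10, 50, 100):
    x_n_delta = x0 + delta * np.sin(n * t)   # sup-norm data error is exactly delta
    errs.append(np.max(np.abs(A(x_n_delta) - np.cos(t))))
# errs grows roughly like delta * n, although the data error is always delta
```

The higher the frequency of the perturbation, the larger the error in the computed derivative, exactly as in the argument above.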

In the case where A is a closed densely defined unbounded linear operator from a Hilbert space X into a Hilbert space Y, V. A. Morozov studied a stable method for approximating the value $A{x}_{0}$ when only approximate data ${x}_{\delta }$ are available. This method takes as an approximation to ${y}_{0}=A{x}_{0}$ the element ${y}_{\alpha }^{\delta }=A{z}_{\alpha }^{\delta }$, where ${z}_{\alpha }^{\delta }$ minimizes the parametric functional

${\Phi }_{\alpha }^{\delta }\left(z\right)={‖z-{x}_{\delta }‖}^{2}+\alpha {‖Az‖}^{2},z\in D\left(A\right),\alpha >0.$ (1)

He shows that if $\alpha =\alpha \left(\delta \right)\to 0$ as $\delta \to 0$ in such a way that $\frac{\delta }{\sqrt{\alpha }}\to 0$, then ${y}_{\alpha }^{\delta }\to A{x}_{0}$ as $\delta \to 0$. Moreover, order-of-convergence results for $\left\{{y}_{\alpha }^{\delta }\right\}$ have been established.
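In finite dimensions the minimizer of (1) has the closed form ${z}_{\alpha }^{\delta }={\left(I+\alpha {A}^{\text{T}}A\right)}^{-1}{x}_{\delta }$, so Morozov's scheme can be sketched directly. The toy matrix, the seed, and the choice $\alpha =\delta$ (which gives $\delta /\sqrt{\alpha }=\sqrt{\delta }\to 0$) are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 5))       # toy bounded stand-in for the operator
x0 = rng.standard_normal(5)
y0 = A @ x0

def morozov_value(x_delta, alpha):
    # z minimizes ||z - x_delta||^2 + alpha*||A z||^2, i.e. (I + alpha A^T A) z = x_delta
    z = np.linalg.solve(np.eye(A.shape[1]) + alpha * A.T @ A, x_delta)
    return A @ z

noise = rng.standard_normal(5)
noise /= np.linalg.norm(noise)
errs = []
for delta in (1e-1, 1e-2, 1e-3):
    x_delta = x0 + delta * noise      # ||x_delta - x0|| = delta
    errs.append(np.linalg.norm(morozov_value(x_delta, delta) - y0))
# the error decreases as delta (and with it alpha = delta) goes to 0
```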

In another case, where A is a monotone operator from a real strictly convex reflexive Banach space X into its dual ${X}^{\ast }$, an approximation to ${y}_{0}=A{x}_{0}$ is the element ${y}_{\alpha }^{\delta }=-U\left({x}_{\alpha }^{\delta }-{x}_{\delta }\right)/\alpha$, where ${x}_{\alpha }^{\delta }$ is the unique solution of the equation

$\alpha Ax+U\left(x-{x}_{\delta }\right)=0,$

where $U:X\to {X}^{\ast }$ is the dual mapping of X. Then the sequence $\left\{{y}_{\alpha }^{\delta }\right\}$ converges, as $\frac{\delta }{\alpha }\to 0,\alpha \to 0$, in the norm of ${X}^{\ast }$, to a generalized value ${y}_{0}$ of the operator A at ${x}_{0}$.

We now assume that both the operator A and ${x}_{0}\in D\left(A\right)$ are only given approximately by ${A}_{h}$ and ${x}_{\delta }\in X$, which satisfy

$‖{x}_{\delta }-{x}_{0}‖\le \delta ,\text{and}\text{\hspace{0.17em}}‖{A}_{h}x-Ax‖\le h,\forall x\in D=D\left({A}_{h}\right)\cap D\left(A\right),h,\delta >0,$ (2)

where ${A}_{h}$ is also an operator from X into Y. We must approximate the values of A when only the approximations ${A}_{h}$ and ${x}_{\delta }$ are given. Until now this problem has remained open.

In this paper we shall be concerned with the construction of a stable method of computing values of the operator A for the perturbations (2).

2. The Stable Method of Computing Values of Closed Densely Defined Unbounded Linear Operators

In this section, we assume that $A:D\left(A\right)\subset X\to Y$ is a closed densely defined unbounded linear operator from a Hilbert space X into a Hilbert space Y with domain $D\left(A\right)\subset X$ and ${x}_{0}\in D\left(A\right)$. The pair $\left(A,{x}_{0}\right)$ is called the exact data.

Instead of the exact data $\left(A,{x}_{0}\right)$, we have an approximation $\left({A}_{h},{x}_{\delta }\right)$, which satisfies (1.2), where ${A}_{h}$ is also a closed densely defined unbounded linear operator from X into Y with domain $D\left({A}_{h}\right)=D\left(A\right),\forall h>0$.

First, we define the regularization functional

${\Phi }_{\Delta }\left(z\right)={‖z-{x}_{\delta }‖}^{2}+\alpha {‖{A}_{h}z‖}^{2},\forall z\in D\left({A}_{h}\right),$ (1)

where $\alpha >0$ is called the regularization parameter, $\Delta =\left(h,\delta ,\alpha \right)$.

We shall take as an approximation to ${y}_{0}=A{x}_{0}$ the element ${y}_{\Delta }={A}_{h}{z}_{\Delta }$, where ${z}_{\Delta }$ minimizes the regularization functional ${\Phi }_{\Delta }\left(z\right)$ over $D\left({A}_{h}\right)$.

Theorem 2.1.  For any $\Delta =\left(h,\delta ,\alpha \right)$ the minimization problem (1) has a unique solution

${z}_{\Delta }={\left(I+\alpha {A}_{h}^{\ast }{A}_{h}\right)}^{-1}{x}_{\delta }.$ (2)

Hence

${y}_{\Delta }={A}_{h}{\left(I+\alpha {A}_{h}^{\ast }{A}_{h}\right)}^{-1}{x}_{\delta }.$ (3)
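A finite-dimensional sketch of formula (3), with both the operator and the data perturbed as in (1.2). The matrices, the perturbation model, and the coupling $\alpha =\delta =h$ (which gives ${\delta }^{2}/\alpha =\delta \to 0$) are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 4))       # toy stand-in for the exact operator
x0 = rng.standard_normal(4)

E = rng.standard_normal(A.shape)
E /= np.linalg.norm(E, 2)             # operator-norm-one perturbation direction
e = rng.standard_normal(4)
e /= np.linalg.norm(e)                # unit data-noise direction

def y_Delta(A_h, x_delta, alpha):
    # formula (3): y_Delta = A_h (I + alpha A_h^* A_h)^{-1} x_delta
    n = A_h.shape[1]
    return A_h @ np.linalg.solve(np.eye(n) + alpha * A_h.T @ A_h, x_delta)

errs = []
for eps in (1e-1, 1e-2, 1e-3):
    h = delta = alpha = eps           # then delta**2 / alpha = delta -> 0
    A_h = A + h * E                   # ||A_h - A||_2 = h
    x_delta = x0 + delta * e          # ||x_delta - x0|| = delta
    errs.append(np.linalg.norm(y_Delta(A_h, x_delta, alpha) - A @ x0))
# errs shrinks as h, delta -> 0, as Theorem 2.2 predicts
```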

To establish the convergence of (3), it will be convenient to reformulate (3) as

${y}_{\Delta }={A}_{h}{\stackrel{⌣}{A}}_{h}{\left[\alpha I+\left(1-\alpha \right){\stackrel{⌣}{A}}_{h}\right]}^{-1}{x}_{\delta },$ (4)

where ${\stackrel{⌣}{A}}_{h}={\left(I+{A}_{h}^{\ast }{A}_{h}\right)}^{-1}$.

${\stackrel{⌣}{A}}_{h}$ and ${A}_{h}{\stackrel{⌣}{A}}_{h}$ are known to be everywhere defined bounded linear operators, and ${\stackrel{⌣}{A}}_{h}$ is self-adjoint with spectrum $\sigma \left({\stackrel{⌣}{A}}_{h}\right)\subset \left[0,1\right]$ ( , p. 38).
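The algebra behind the reformulation (4), namely ${\left(I+\alpha {A}_{h}^{\ast }{A}_{h}\right)}^{-1}={\stackrel{⌣}{A}}_{h}{\left[\alpha I+\left(1-\alpha \right){\stackrel{⌣}{A}}_{h}\right]}^{-1}$, can be checked numerically on a small matrix (the matrix, its size, and the value of $\alpha$ below are our assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
A_h = rng.standard_normal((5, 3))
alpha = 0.3
I = np.eye(3)

breve = np.linalg.inv(I + A_h.T @ A_h)                 # breve{A}_h = (I + A_h^* A_h)^{-1}
lhs = A_h @ np.linalg.inv(I + alpha * A_h.T @ A_h)     # y_Delta operator as in (3)
rhs = A_h @ breve @ np.linalg.inv(alpha * I + (1.0 - alpha) * breve)  # as in (4)
# lhs and rhs agree up to rounding error
```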

To further simplify the presentation, we introduce the family of functions

${T}_{\alpha }\left(t\right)={\left[\alpha +\left(1-\alpha \right)t\right]}^{-1},\alpha >0,t\in \left[0,1\right].$

We then have

${y}_{\Delta }={A}_{h}{\stackrel{⌣}{A}}_{h}{T}_{\alpha }\left({\stackrel{⌣}{A}}_{h}\right){x}_{\delta }.$ (5)

We also denote

${y}_{h,\alpha }={A}_{h}{\stackrel{⌣}{A}}_{h}{T}_{\alpha }\left({\stackrel{⌣}{A}}_{h}\right){x}_{0}.$ (6)

The following lemma will be used in the proof of Theorem 2.2.

Lemma 2.1. Under the stated assumption, we obtain

${A}_{h}{\stackrel{⌣}{A}}_{h}={\stackrel{^}{A}}_{h}{A}_{h},$

where ${\stackrel{^}{A}}_{h}={\left(I+{A}_{h}{A}_{h}^{\ast }\right)}^{-1}$.

Proof. We denote

$G\left({A}_{h}\right)=\left\{\left(x,{A}_{h}x\right):x\in D\left({A}_{h}\right)\right\}$

$VG\left({A}_{h}^{\ast }\right)=\left\{\left(-{A}_{h}^{\ast }y,y\right):y\in D\left({A}_{h}^{\ast }\right)\right\}.$

Since ${A}_{h}$ is a closed densely defined linear operator then $G\left({A}_{h}\right)$ and $VG\left({A}_{h}^{\ast }\right)$ are complementary orthogonal subspaces of the Hilbert space $X×Y$ ( , p. 307). Hence, for any $z\in X$, we have the uniquely determined decomposition

$\left(z,0\right)=\left(x,{A}_{h}x\right)+\left(-{A}_{h}^{\ast }y,y\right),\text{with}\text{\hspace{0.17em}}x\in D\left({A}_{h}\right),y\in D\left({A}_{h}^{\ast }\right).$ (7)

Thus

$z=x-{A}_{h}^{\ast }y,0={A}_{h}x+y.$ (8)

Therefore, $x\in D\left({A}_{h}^{\ast }{A}_{h}\right)$ and $x+{A}_{h}^{\ast }{A}_{h}x=z$. Because of the uniqueness of decomposition (7), x is uniquely determined by z, and so the everywhere defined inverse ${\left(I+{A}_{h}^{\ast }{A}_{h}\right)}^{-1}$ exists.

In a similar way as above, the everywhere defined inverse ${\left(I+{A}_{h}{A}_{h}^{\ast }\right)}^{-1}$ exists. It follows from (8) that

${A}_{h}{\left(I+{A}_{h}^{\ast }{A}_{h}\right)}^{-1}={\left(I+{A}_{h}{A}_{h}^{\ast }\right)}^{-1}{A}_{h},$

that is, ${A}_{h}{\stackrel{⌣}{A}}_{h}={\stackrel{^}{A}}_{h}{A}_{h}$. Moreover, ${\stackrel{⌣}{A}}_{h},{\stackrel{^}{A}}_{h}$ are bounded operators and

$‖{\stackrel{⌣}{A}}_{h}‖\le 1,‖{\stackrel{^}{A}}_{h}‖\le 1.$

( , p. 308).
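The commutation relation of Lemma 2.1 and the norm bounds above can likewise be checked on a random matrix (the sizes and the seed are our assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
A_h = rng.standard_normal((5, 3))

breve = np.linalg.inv(np.eye(3) + A_h.T @ A_h)   # (I + A_h^* A_h)^{-1}
hat = np.linalg.inv(np.eye(5) + A_h @ A_h.T)     # (I + A_h A_h^*)^{-1}
# Lemma 2.1: A_h breve == hat A_h, and both inverses have norm <= 1
```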

Theorem 2.2. If $D\left({A}_{h}{A}_{h}^{\ast }{A}_{h}\right)=D\left(A{A}^{\ast }A\right),\forall h>0$ and ${x}_{0}\in D\left(A{A}^{\ast }A\right)$, and $\alpha =\alpha \left(h,\delta \right)\to 0$, ${\delta }^{2}/\alpha \to 0$ as $h,\delta \to 0$, then $\left\{{y}_{\Delta }\right\}$ converges to $A{x}_{0}$.

Proof. Let $\omega =\left(I+{A}_{h}{A}_{h}^{\ast }\right){A}_{h}{x}_{0}$. Then ${A}_{h}{x}_{0}={\stackrel{^}{A}}_{h}\omega$. Since ${A}_{h}{\stackrel{⌣}{A}}_{h}={\stackrel{^}{A}}_{h}{A}_{h}$ (Lemma 2.1) and ${A}_{h}{x}_{0}={\stackrel{^}{A}}_{h}\omega$, we have

$\begin{array}{c}{y}_{h,\alpha }-{A}_{h}{x}_{0}={A}_{h}\left({\stackrel{⌣}{A}}_{h}-\left[\alpha I+\left(1-\alpha \right){\stackrel{⌣}{A}}_{h}\right]\right){T}_{\alpha }\left({\stackrel{⌣}{A}}_{h}\right){x}_{0}\\ =\alpha {A}_{h}\left({\stackrel{⌣}{A}}_{h}-I\right){T}_{\alpha }\left({\stackrel{⌣}{A}}_{h}\right){x}_{0}=\alpha \left({\stackrel{^}{A}}_{h}-I\right){T}_{\alpha }\left({\stackrel{^}{A}}_{h}\right){\stackrel{^}{A}}_{h}\omega .\end{array}$

Since $‖{T}_{\alpha }\left({\stackrel{^}{A}}_{h}\right){\stackrel{^}{A}}_{h}‖\le 1$ (because $t{T}_{\alpha }\left(t\right)\le 1$ for $t\in \left[0,1\right]$) and $‖{\stackrel{^}{A}}_{h}-I‖\le 2$ for all $h>0$, we obtain

$\underset{\alpha \to 0}{\mathrm{lim}}{y}_{h,\alpha }={A}_{h}{x}_{0},\forall h>0.$

On the other hand we have

$\begin{array}{c}{‖{y}_{\Delta }-{y}_{h,\alpha }‖}^{2}=〈{A}_{h}{\stackrel{⌣}{A}}_{h}{T}_{\alpha }\left({\stackrel{⌣}{A}}_{h}\right)\left({x}_{\delta }-{x}_{0}\right),{A}_{h}{\stackrel{⌣}{A}}_{h}{T}_{\alpha }\left({\stackrel{⌣}{A}}_{h}\right)\left({x}_{\delta }-{x}_{0}\right)〉\\ =〈{A}_{h}^{\ast }{A}_{h}{\stackrel{⌣}{A}}_{h}{T}_{\alpha }\left({\stackrel{⌣}{A}}_{h}\right)\left({x}_{\delta }-{x}_{0}\right),{\stackrel{⌣}{A}}_{h}{T}_{\alpha }\left({\stackrel{⌣}{A}}_{h}\right)\left({x}_{\delta }-{x}_{0}\right)〉\\ =〈\left(I-{\stackrel{⌣}{A}}_{h}\right){T}_{\alpha }\left({\stackrel{⌣}{A}}_{h}\right)\left({x}_{\delta }-{x}_{0}\right),{\stackrel{⌣}{A}}_{h}{T}_{\alpha }\left({\stackrel{⌣}{A}}_{h}\right)\left({x}_{\delta }-{x}_{0}\right)〉\\ \le \frac{{\delta }^{2}}{\alpha },\end{array}$

since $\left(1-t\right)t{T}_{\alpha }{\left(t\right)}^{2}\le \frac{1}{\alpha }$ for all $t\in \left[0,1\right]$ and $‖{x}_{\delta }-{x}_{0}‖\le \delta$.

Hence

$‖{y}_{\Delta }-{y}_{h,\alpha }‖\to 0,\text{\hspace{0.17em}}\text{as}\text{\hspace{0.17em}}\text{ }\alpha \left(h,\delta \right)\to 0,\frac{{\delta }^{2}}{\alpha }\to 0.$

We have

$\begin{array}{c}‖{y}_{\Delta }-A{x}_{0}‖\le ‖{y}_{\Delta }-{y}_{h,\alpha }‖+‖{y}_{h,\alpha }-{A}_{h}{x}_{0}‖+‖{A}_{h}{x}_{0}-A{x}_{0}‖\\ \le ‖{y}_{\Delta }-{y}_{h,\alpha }‖+‖{y}_{h,\alpha }-{A}_{h}{x}_{0}‖+h.\end{array}$ (9)

It follows from (9) that

${y}_{\Delta }\to A{x}_{0},\text{\hspace{0.17em}}\text{as}\text{\hspace{0.17em}}\text{ }h,\delta \to 0.$

The theorem is proved.

We shall call ${y}_{\Delta }$ the approximate value of the operator A at ${x}_{0}$.

3. The Stable Method of Computing Values of Hemi-Continuous Monotone Operators

Let X be a real strictly convex reflexive Banach space whose dual ${X}^{\ast }$ is an E-space. Suppose that $A:X\to {X}^{\ast }$ is a hemi-continuous monotone operator (possibly multi-valued) from X into ${X}^{\ast }$ with domain $D\left(A\right)\subset X$, and let y be a given element of ${X}^{\ast }$. We consider the following three problems:

1) To solve the equation

$Ax=y,$ (1)

2) To solve the variational inequality

$〈Ax-y,x-z〉\ge 0,\forall x\in D\left(A\right),$ (2)

3) To compute values of the operator A at ${x}_{0}$ in X with ${x}_{0}$ given approximately.

These problems are important objects of investigation in the theory of unstable problems. In a number of works, a class of monotone operators was singled out and, as an approximate method, the operator-regularization method was used.

As is known, a solution of (1) is understood to be an element $\stackrel{˜}{x}\in D\left(A\right)$ such that $A\stackrel{˜}{x}=y$ if A is single-valued, and $y\in A\stackrel{˜}{x}$ if A is maximal monotone (possibly multi-valued). If A is an arbitrary monotone operator, we understand a solution of (1) to be an element $\stackrel{¯}{x}\in X$ such that

$〈Ax-y,x-\stackrel{¯}{x}〉\ge 0,\forall x\in D\left(A\right),$ (3)

where $〈Ax-y,x-\stackrel{¯}{x}〉$ denotes the value of the linear functional $Ax-y$ at $x-\stackrel{¯}{x}$.

We shall call $\stackrel{¯}{x}$ a generalized solution of Equation (1). We note that, if A is hemi-continuous and $D\left(A\right)$ is open or everywhere dense in X, or if A is maximal monotone, then a generalized solution $\stackrel{¯}{x}$ coincides with the corresponding solution $\stackrel{˜}{x}$, and (3) is equivalent to the inclusion $y\in A\stackrel{¯}{x}$ .

We now deal with the stable method of computing values of the operator A at ${x}_{0}$ when only the approximations ${A}_{h},{x}_{\delta }$ as in (1.2) are given, where ${A}_{h}$ is also a hemi-continuous monotone operator from X into ${X}^{\ast }$ with domain $D\left({A}_{h}\right)=D\left(A\right)=X$.

We denote the set of values of A at ${x}_{0}$ by

${R}_{{x}_{0}}=\left\{y\in {X}^{\ast }:y\in A{x}_{0}\right\}.$

In ${X}^{\ast }$ we consider the set

${M}_{{x}_{0}}=\left\{y\in {X}^{\ast }|〈Ax-y,x-{x}_{0}〉\ge 0,\forall x\in X\right\},$

and we call ${M}_{{x}_{0}}$ the set of generalized values of A at ${x}_{0}$. It is easy to show that ${R}_{{x}_{0}}\subset {M}_{{x}_{0}}$.
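A one-dimensional example (ours, not from the paper) shows that ${M}_{{x}_{0}}$ can be strictly larger than ${R}_{{x}_{0}}$: take $X=ℝ$, ${x}_{0}=0$, and the monotone operator $Ax=\text{sign}\left(x\right)$ with $\text{sign}\left(0\right)=0$. Then

```latex
% The defining inequality of M_{x_0} at x_0 = 0 reads
\langle Ax - y, x - 0\rangle = (\operatorname{sign}(x) - y)\,x = |x| - yx \ge 0
\quad \forall x \in \mathbb{R},
% which holds precisely when |y| \le 1. Hence
M_{0} = [-1, 1], \qquad R_{0} = \{0\}, \qquad y_{0} = 0,
% and y_0 = 0 is the minimum-norm element of M_0 (cf. Lemma 3.1 below).
```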

Lemma 3.1.  The set ${M}_{{x}_{0}}$ is convex and closed in ${X}^{\ast }$, moreover, there is a unique element ${y}_{0}\in {M}_{{x}_{0}}$ such that

$‖{y}_{0}‖=\underset{y\in {M}_{{x}_{0}}}{\mathrm{min}}‖y‖.$

Under the above hypotheses, there exist the dual mappings

$U:X\to {X}^{\ast },V:{X}^{\ast }\to X,$

which are strictly monotone, single-valued, homogeneous and hemi-continuous, and which satisfy

$VUx=x,\forall x\in X;UVy=y,\forall y\in {X}^{\ast },$

(see    ).

We consider the equation

$\alpha {A}_{h}x+U\left(x-{x}_{\delta }\right)=0,\alpha >0.$ (4)

The following theorem asserts the existence and uniqueness of the generalized solution of (4).

Theorem 3.1. Under the above hypotheses, Equation (4) has a unique generalized solution ${x}_{\Delta }$, for any $\Delta =\left(h,\delta ,\alpha \right)$.

Proof. Let ${\stackrel{˜}{A}}_{h}$ be the maximal monotone extension of ${A}_{h}$ (such an extension exists by virtue of Zorn’s lemma). Then the operator $\alpha {\stackrel{˜}{A}}_{h}x+U\left(x-{x}_{\delta }\right)$ is maximal monotone, and Browder’s theorem implies that the inclusion $0\in \alpha {\stackrel{˜}{A}}_{h}{\stackrel{˜}{x}}_{\Delta }+U\left({\stackrel{˜}{x}}_{\Delta }-{x}_{\delta }\right)$ has a unique solution ${\stackrel{˜}{x}}_{\Delta }$. In view of the preceding remark, it follows that

$〈\alpha {A}_{h}x+U\left(x-{x}_{\delta }\right),x-{\stackrel{˜}{x}}_{\Delta }〉\ge 0,\forall x\in X.$

Thus, ${\stackrel{˜}{x}}_{\Delta }$ coincides with the generalized solution of Equation (4). Therefore, (4) has a unique generalized solution ${x}_{\Delta }={\stackrel{˜}{x}}_{\Delta }$, for any $\Delta =\left(h,\delta ,\alpha \right)$. We now consider the sequence

${y}_{\Delta }=-U\left({x}_{\Delta }-{x}_{\delta }\right)/\alpha .$ (5)

The uniqueness of ${x}_{\Delta }$ implies that ${y}_{\Delta }$ is uniquely determined. It is easy to show that ${y}_{\Delta }\in {\stackrel{˜}{A}}_{h}{x}_{\Delta }$.

${y}_{\Delta }$ is called the approximate value of A at ${x}_{0}$ for the given approximation $\left({A}_{h},{x}_{\delta }\right)$.
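In a Hilbert space identified with its dual, the dual mapping U is the identity, and the scheme (4)-(5) can be sketched in the simplest case $X=ℝ$. The monotone operator $A\left(x\right)={x}^{3}$, the bisection solver, and the choice $\alpha =\sqrt{\delta }$ (which gives $\delta /\alpha =\sqrt{\delta }\to 0$) are all our assumptions, not taken from the paper.

```python
def A(x):
    # a hemi-continuous monotone operator on X = R (illustrative choice)
    return x ** 3

def solve_x_Delta(x_delta, alpha):
    # solve alpha*A(x) + (x - x_delta) = 0 by bisection;
    # the left-hand side is strictly increasing in x, so the root is unique
    lo, hi = -abs(x_delta) - 1.0, abs(x_delta) + 1.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if alpha * A(mid) + (mid - x_delta) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

x0 = 1.5
errs = []
for delta in (1e-2, 1e-4, 1e-6):
    alpha = delta ** 0.5                  # then delta/alpha = sqrt(delta) -> 0
    x_delta = x0 + delta
    xD = solve_x_Delta(x_delta, alpha)
    yD = (x_delta - xD) / alpha           # y_Delta = -U(x_Delta - x_delta)/alpha
    errs.append(abs(yD - A(x0)))
# errs shrinks as delta -> 0: y_Delta approaches A(x0) = x0**3
```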

Theorem 3.2. Under the stated assumptions, if $\alpha \left(h,\delta \right)\to 0$, $\delta /\alpha \to 0$, as $h,\delta \to 0$, then the sequence $\left\{{y}_{\Delta }\right\}$ converges to the generalized value ${y}_{0}\in {M}_{{x}_{0}}$ of the operator A at ${x}_{0}$.

Proof. By applying the dual mapping $V:{X}^{\ast }\to X$ to (5), we obtain

$\alpha V{y}_{\Delta }+\left({x}_{\Delta }-{x}_{\delta }\right)=0.$ (6)

Let ${M}_{{x}_{0}}^{h}$ denote the set of generalized values of ${A}_{h}$ at ${x}_{0}$, i.e.

${M}_{{x}_{0}}^{h}=\left\{{y}_{h}\in {X}^{\ast }|〈{A}_{h}x-{y}_{h},x-{x}_{0}〉\ge 0,\forall x\in X\right\}.$

By using  we obtain ${M}_{{x}_{0}}^{h}\ne \varnothing$. It follows from (6) that

$〈{y}_{\Delta }-{y}_{h},{x}_{\Delta }-{x}_{0}〉+〈{y}_{\Delta }-{y}_{h},{x}_{0}-{x}_{\delta }〉+\alpha 〈{y}_{\Delta }-{y}_{h},V{y}_{\Delta }〉=0,\forall {y}_{h}\in {M}_{{x}_{0}}^{h}.$ (7)

It is easy to show that $\left({x}_{0},{y}_{h}\right)\in gr{\stackrel{˜}{A}}_{h}$ and hence

$〈{y}_{\Delta }-{y}_{h},{x}_{\Delta }-{x}_{0}〉\ge 0.$ (8)

It follows from (7) and (8), that

$〈{y}_{\Delta }-{y}_{h},{x}_{0}-{x}_{\delta }〉+\alpha 〈{y}_{\Delta }-{y}_{h},V{y}_{\Delta }〉\le 0,$

implies

$\alpha {‖V{y}_{\Delta }‖}^{2}-\alpha ‖{y}_{h}‖‖V{y}_{\Delta }‖-‖{y}_{\Delta }-{y}_{h}‖‖{x}_{0}-{x}_{\delta }‖\le 0,$

consequently

$\alpha {‖{y}_{\Delta }‖}^{2}-\left(\alpha ‖{y}_{h}‖+\delta \right)‖{y}_{\Delta }‖-\delta ‖{y}_{h}‖\le 0,\forall {y}_{h}\in {M}_{{x}_{0}}^{h}.$ (9)

It follows from (9), that

$‖{y}_{\Delta }‖\le ‖{y}_{h}‖+2\delta /\alpha ,\forall {y}_{h}\in {M}_{{x}_{0}}^{h}.$

In view of the preceding remark and (1.2), we obtain

$‖{y}_{h}-y‖\le h,\forall y\in {M}_{{x}_{0}},\forall {y}_{h}\in {M}_{{x}_{0}}^{h}.$

Hence,

$‖{y}_{\Delta }‖\le ‖y‖+2\delta /\alpha +h,\forall y\in {M}_{{x}_{0}},$

implies

$‖{y}_{\Delta }‖\le ‖{y}_{0}‖+2\delta /\alpha +h,\forall h,\delta >0.$ (10)

Since ${X}^{\ast }$ is an E-space, it follows from (10) that the sequence $\left\{{y}_{\Delta }\right\}$ converges to ${y}_{0}$ as $\alpha \left(h,\delta \right)\to 0$, $\delta /\alpha \to 0$, $h,\delta \to 0$.

The theorem is proved.

4. Applications

As a simple concrete example of this type of approximation, consider differentiation in ${L}^{2}\left(ℝ\right)$. That is, the operator A is defined on ${H}^{1}\left(ℝ\right)$, the Sobolev space of functions in ${L}^{2}\left(ℝ\right)$ possessing a weak derivative in ${L}^{2}\left(ℝ\right)$, by

$Ax=\frac{\text{d}x}{\text{d}t}.$

For a given data function ${x}_{\delta }\in {L}^{2}\left(ℝ\right)$ and a given data operator ${A}_{h}$ defined on ${H}^{1}\left(ℝ\right)$ by ${A}_{h}x=\frac{\text{d}x}{\text{d}t}$, we have

$‖{x}_{\delta }-{x}_{0}‖\le \delta ,‖{A}_{h}x-Ax‖\le h,\forall x\in {H}^{1}\left(ℝ\right).$ (1)

The stabilized approximate derivative (3) is easily seen (using Fourier transform analysis) to be given by

${y}_{\Delta }\left(s\right)=\underset{-\infty }{\overset{+\infty }{\int }}{\sigma }_{\alpha ,h}\left(s-t\right){x}_{\delta }\left(t\right)\text{d}t,$ (2)

where the kernel ${\sigma }_{\alpha ,h}$ is given by

${\sigma }_{\alpha ,h}\left(t\right)=-\frac{\text{sign}\left(t\right)}{2\alpha }\mathrm{exp}\left(-|t|/\sqrt{\alpha }\right).$ (3)

Then ${y}_{\Delta }\left(s\right)$ in (2) is the approximate value of the operator A at ${x}_{0}$ for this method.
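A direct numerical implementation of (2)-(3) can illustrate the stabilization. The finite grid, the uniform noise model, the truncation of the convolution to a finite window, and the comparison with a raw finite-difference derivative are our assumptions.

```python
import numpy as np

alpha, delta = 1e-3, 1e-3
t = np.linspace(-10.0, 10.0, 20001)              # finite grid standing in for R
dt = t[1] - t[0]
rng = np.random.default_rng(4)
x_delta = np.sin(t) + delta * rng.uniform(-1.0, 1.0, t.size)   # noisy data, x0 = sin

def sigma(u):
    # kernel (3)
    return -np.sign(u) / (2.0 * alpha) * np.exp(-np.abs(u) / np.sqrt(alpha))

u = np.linspace(-0.5, 0.5, 1001)                 # the kernel is negligible beyond this
y = np.convolve(x_delta, sigma(u), mode="same") * dt   # y_Delta(s), formula (2)

interior = slice(1000, t.size - 1000)            # avoid truncation effects at the ends
stab_err = np.max(np.abs(y - np.cos(t))[interior])
raw_err = np.max(np.abs(np.gradient(x_delta, t) - np.cos(t))[interior])
# the stabilized derivative is far more accurate than raw differencing of noisy data
```

Raw finite differencing amplifies the noise by a factor of order $\delta /\text{d}t$, while the convolution with the kernel (3) keeps the error small.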

Cite this paper: Van Kinh, N. (2020) On the Stable Method Computing Values of Unbounded Operators. Open Journal of Optimization, 9, 129-137. doi: 10.4236/ojop.2020.94009.
References

   Tikhonov, A.N. and Arsenin, V.Y. (1978) Solution of Ill-Posed Problems. John Wiley & Sons, Hoboken.

   Morozov, V.A. (1984) Methods for Solving Incorrectly Posed Problems. Springer, Berlin.
https://doi.org/10.1007/978-1-4612-5280-1

   Groetsch, C.W. and Scherzer, O. (1993) The Optimal Order of Convergence for Stable Evaluation of Differential Operators. Electronic Journal of Differential Equations, 3, 1-12.

   Groetsch, C.W. (2006) Stable Approximate Evaluation of Unbounded Operators. Springer, Berlin.
https://doi.org/10.1007/3-540-39942-9

   Van Kinh, N. (2001) On the Stable Method of Computing Values of Unbounded Operators. Proceedings of Science of Quinhon University of Educations, 14, 27-38.
http://www.ictp.trieste.it/pub_off

   Van Kinh, N. (2014) On the Stable Method of Computing Values of Unbounded Operators. Journal Science of Ho Chi Minh City University of Food Industry, 2, 21-30.

   Van Kinh, N., Chuong, N.M. and Gorenflo, R. (1996) Regularization Method for Nonlinear Variational Inequalities. Proceedings of the First National Workshop “Optimization and Control”, Freie Universitat, Berlin.

   Vainberg, M.M. (1972) The Variational Method and the Method of Monotone Operators. Nauka, Moscow.

   Zeidler, E. (1989) Nonlinear Functional Analysis and Its Applications II/B (Nonlinear Monotone Operators). Springer-Verlag, Berlin.

   Al’ber, Y.I. and Ryazantseva, I.P. (1979) Solution of Nonlinear Problems Involving Monotonic Discontinuous Mappings. Differentsial’nye Uravneniya, 15, 31-342.

   Riesz, F. and Sz.-Nagy, B. (1955) Functional Analysis. Ungar, New York.

   Browder, F.E. (1966) On the Unification of the Calculus of Variations and the Theory of Monotone Nonlinear Operators in Banach Spaces. Proceedings of the National Academy of Sciences of the United States of America, 56, 419-425.
https://doi.org/10.1073/pnas.56.2.419

   Browder, F.E. (1966) Existence and Approximation of Solutions of Nonlinear Variational Inequalities. Proceedings of the National Academy of Sciences of the United States of America, 56, 1080-1086.
https://doi.org/10.1073/pnas.56.4.1080

   Browder, F.E. (1968) Nonlinear Maximal Monotone Operators in Banach Spaces. Mathematische Annalen, 175, 89-113.
https://doi.org/10.1007/BF01418765

   Liskovets, O.A. (1983) Regularization of Problems with Monotone Discontinuous Perturbations of Operators. Proceedings of the USSR Academy of Sciences, 272, 30-34.

   Liskovets, O.A. (1983) Solution of the First Kind Operator Equations with Non-Monotone Perturbations. Proceedings of the USSR Academy of Sciences, 272, 101-104.

   Abramov, A. and Gaipova, A.N. (1972) On the Solvability of Certain Equations Containing Monotonic Discontinuous Transformations. USSR Computational Mathematics and Mathematical Physics, 12, 320-324.
https://doi.org/10.1016/0041-5553(72)90191-7

   Lions, J.L. (1972) Methods of Solution of Nonlinear Boundary-Values Problems. Mir, Moscow.

   Rockafellar, R.T. (1970) On the Maximality of Sums of Nonlinear Monotone Operators. Transactions of the American Mathematical Society, 149, 75-88.
https://doi.org/10.1090/S0002-9947-1970-0282272-5
