Jensen Inequality of Bivariate Function in the G-Expectation Framework
Abstract: In the G-expectation framework, Wang [1] first obtained the Jensen inequality for one-dimensional functions. In this paper, under somewhat stronger conditions, we obtain the Jensen inequality for bivariate functions based on Wang’s proof method, and we give some examples to illustrate its application.

1. Introduction

As is well known, expected utility theory has been widely used in mathematical finance, especially in measuring risk preference and risk aversion. However, because the classical mathematical expectation is linear, the von Neumann expected utility cannot accurately measure risk aversion. Hence, economists hoped to find a tool that retains certain properties of the classical expectation while accurately measuring risk aversion. Driven by this problem, Peng [2] introduced a nonlinear expectation, the g-expectation, via backward stochastic differential equations in 1997, and in 2006 Peng [3] [4] [5] introduced a new nonlinear expectation, the G-expectation, through the nonlinear heat equation and established a systematic theoretical framework. Both are sublinear expectations, but compared with the g-expectation, the G-expectation need not be built on a given probability space and is more flexible and general. In the G-expectation framework, Peng obtained the central limit theorem [6] and the law of large numbers [7] under nonlinear expectations.

Since Peng’s pioneering work, many scholars have devoted themselves to the study of related problems and obtained a wealth of results. Bai and Buckdahn [8] gave some applications of G-expectation in risk measurement and studied the problem of optimal risk transfer and the convolution formula under G-expectations. Gao [9] studied the pathwise properties and homeomorphic flows of stochastic differential equations driven by G-Brownian motion. In 2009, Hu and Peng [10] gave the representation theorem of G-expectation, proved the existence of a weakly compact family of probabilities, and studied the paths of G-Brownian motion.

When an expected utility function expresses risk aversion and preference, the Jensen inequality of the mathematical expectation plays an important role. Because of its importance, many scholars have studied the Jensen inequality in different settings. In the g-expectation framework, Li [11] proved the Jensen inequality of g-expectation when the generator $g\left(x\right)$ is convex, concave or piecewise. Jiang [12] gave sufficient and necessary conditions for the Jensen inequality for g-expectation. Moreover, Jiang [13] proved the Jensen inequality of bivariate functions when $g\left(x\right)$ is a sublinear generator. Correspondingly, in the G-expectation framework, Wang [1] studied the Jensen inequality of one-dimensional functions under sufficient and necessary conditions and illustrated its significant application in G-martingale theory. However, we found that the theorems in [1] do not carry over to bivariate functions under the weaker conditions. Thus in this paper, based on Wang’s proof method and under some reasonable conditions, we obtain the Jensen inequality of bivariate functions in the G-expectation framework. Moreover, we use some examples to illustrate its application.

This paper is organized as follows. In Section 2, we present a brief review of the primary concepts under the G-framework, including the definition and some useful properties of G-expectation. Then, we give the basic concept of G-Brownian motion and the computation of $\stackrel{^}{\mathbb{E}}\left[{|{B}_{t}|}^{n}\right]$. In Section 3, we demonstrate the G-Jensen inequality of bivariate functions under the stronger conditions and give some examples.

2. Preliminaries and Notation

In this section, we will give some basic theory about G-expectation and G-Brownian motion. More details can be found in [3] [4] [5]. Let $\Omega$ be a given set and let $H$ be a vector lattice of real functions defined on $\Omega$ containing 1 such that ${X}_{1},\cdots ,{X}_{n}\in H$ implies $\phi \left({X}_{1},\cdots ,{X}_{n}\right)\in H$ for each $\phi \in {C}_{l.Lip}\left({ℝ}^{n}\right)$, where ${C}_{l.Lip}\left({ℝ}^{n}\right)$ denotes the space of locally Lipschitz functions satisfying the condition:

$|\phi \left(x\right)-\phi \left(y\right)|\le C\left(1+{|x|}^{m}+{|y|}^{m}\right)|x-y|,\text{ }\forall x,y\in {ℝ}^{n},$ (1)

for some $C>0,m\in ℕ$ depending on $\phi$. For each $T\in \left[0,\infty \right)$, let

${L}_{ip}\left({F}_{T}\right):=\left\{\phi \left({B}_{{t}_{1}},\cdots ,{B}_{{t}_{n}}\right):0\le {t}_{1},\cdots ,{t}_{n}\le T,\forall \phi \in {C}_{l.Lip}\left({ℝ}^{n}\right),n\in ℕ\right\}$

where ${B}_{t}$ is the canonical process. Let ${L}_{ip}\left(F\right):=\underset{n=1}{\overset{\infty }{\cup }}{L}_{ip}\left({F}_{n}\right)$. For a given $p\ge 1$, we denote by ${L}_{G}^{p}\left(F\right)$ the completion of ${L}_{ip}\left(F\right)$ under the norm ${‖X‖}_{p}:={\left(\stackrel{^}{\mathbb{E}}\left[{|X|}^{p}\right]\right)}^{\frac{1}{p}}$. Then let $G\left(x\right)$ be the monotonic and sublinear function:

$G\left(x\right)=\frac{1}{2}\left({x}^{+}-{\underset{_}{\sigma }}^{2}{x}^{-}\right),\text{ }x\in ℝ,$

where ${x}^{+}=\mathrm{max}\left\{0,x\right\}$, ${x}^{-}={\left(-x\right)}^{+}$.
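As a quick sanity check, the generator above can be coded directly and its monotonicity, sub-additivity, and positive homogeneity verified on sample points. This is only an illustrative sketch; the value ${\underset{_}{\sigma }}^{2}=0.5$ below is our own choice, not from the paper.

```python
# Sketch of the generator G(x) = (x^+ - sigma_lower^2 * x^-) / 2.
# The lower variance bound sigma_lower^2 = 0.5 is an illustrative assumption.
sigma_lower_sq = 0.5

def G(x):
    x_plus = max(x, 0.0)
    x_minus = max(-x, 0.0)
    return 0.5 * (x_plus - sigma_lower_sq * x_minus)

# Monotonicity: x <= y implies G(x) <= G(y)
assert G(-1.0) <= G(0.3) <= G(2.0)
# Sub-additivity: G(x + y) <= G(x) + G(y)
for x, y in [(1.0, -2.0), (-0.5, -0.7), (3.0, 4.0)]:
    assert G(x + y) <= G(x) + G(y) + 1e-12
# Positive homogeneity: G(c*x) = c*G(x) for c >= 0
assert abs(G(2.0 * -3.0) - 2.0 * G(-3.0)) < 1e-12
```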

2.1. G-Expectation and Its Properties

Firstly, we introduce some notations about G-expectations.

Definition 1. [3] A sublinear expectation $\stackrel{^}{\mathbb{E}}$ on $H$ is a functional $\stackrel{^}{\mathbb{E}}:H↦ℝ$ satisfying the following properties: for all $X,Y\in H$, we have

a) Monotonicity: If $X\ge Y$, then $\stackrel{^}{\mathbb{E}}\left[X\right]\ge \stackrel{^}{\mathbb{E}}\left[Y\right]$.

b) Preserving of constants: $\stackrel{^}{\mathbb{E}}\left[c\right]=c,\forall c\in ℝ$.

c) Sub-additivity: $\stackrel{^}{\mathbb{E}}\left[X\right]-\stackrel{^}{\mathbb{E}}\left[Y\right]\le \stackrel{^}{\mathbb{E}}\left[X-Y\right]$.

d) Positive homogeneity: $\stackrel{^}{\mathbb{E}}\left[\lambda X\right]=\lambda \stackrel{^}{\mathbb{E}}\left[X\right],\forall \lambda \ge 0$.

e) Constant translatability: $\stackrel{^}{\mathbb{E}}\left[X+c\right]=\stackrel{^}{\mathbb{E}}\left[X\right]+c,\forall c\in ℝ$.

The triple $\left(\Omega ,\mathcal{H},\stackrel{^}{\mathbb{E}}\right)$ is called a sublinear expectation space. If only c) and d) are satisfied, $\stackrel{^}{\mathbb{E}}$ is called a sublinear functional.

Remark 1. If the inequality in c) is an equality, then $\stackrel{^}{\mathbb{E}}$ is a linear expectation on $H$. Moreover, the sublinear expectation $\stackrel{^}{\mathbb{E}}$ can be represented as the upper expectation of a subset of linear expectations $\left\{{E}_{\theta }:\theta \in \Theta \right\}$, i.e., $\stackrel{^}{\mathbb{E}}\left[X\right]={\mathrm{sup}}_{\theta \in \Theta }{E}_{\theta }\left[X\right]$. In most cases, this subset is treated as an uncertainty model of probabilities $\left\{{P}_{\theta }:\theta \in \Theta \right\}$, and the notion of sublinear expectation provides a robust way to measure a risk loss X.
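The representation in Remark 1 can be illustrated concretely: taking the supremum of linear expectations over a finite family of probability models on a finite sample space yields a functional satisfying properties a)–e). The three probability vectors and the payoffs below are our own illustrative choices.

```python
# A sublinear expectation built as a supremum of linear expectations over a
# finite family of probability models (illustrative example, not from the paper).
# Each model theta is a probability vector over the same 3-point sample space.
models = [
    [0.5, 0.5, 0.0],
    [0.2, 0.3, 0.5],
    [0.1, 0.1, 0.8],
]

def E_hat(X):
    """Sublinear expectation: sup over the family of linear expectations."""
    return max(sum(p * x for p, x in zip(theta, X)) for theta in models)

X = [1.0, -2.0, 3.0]
Y = [0.5, 0.5, -1.0]

# Sub-additivity: E[X] - E[Y] <= E[X - Y]
XmY = [a - b for a, b in zip(X, Y)]
assert E_hat(X) - E_hat(Y) <= E_hat(XmY) + 1e-12
# Constant preserving: E[c] = c
c = 2.0
assert abs(E_hat([c] * 3) - c) < 1e-12
# Constant translatability: E[X + c] = E[X] + c
assert abs(E_hat([x + c for x in X]) - (E_hat(X) + c)) < 1e-12
```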

The following simple properties are very useful in sublinear analysis.

Lemma 1. [3] 1) Let $X,Y\in H$ be such that $\stackrel{^}{\mathbb{E}}\left[Y\right]=-\stackrel{^}{\mathbb{E}}\left[-Y\right]$, then we have

$\stackrel{^}{\mathbb{E}}\left[X+Y\right]=\stackrel{^}{\mathbb{E}}\left[X\right]+\stackrel{^}{\mathbb{E}}\left[Y\right].$

In particular, if $\stackrel{^}{\mathbb{E}}\left[Y\right]=\stackrel{^}{\mathbb{E}}\left[-Y\right]=0$, then $\stackrel{^}{\mathbb{E}}\left[X+Y\right]=\stackrel{^}{\mathbb{E}}\left[X\right]$.

2) According to the property d) of G-expectation, it is easy to deduce that

$\stackrel{^}{\mathbb{E}}\left[\lambda X\right]={\lambda }^{+}\stackrel{^}{\mathbb{E}}\left[X\right]+{\lambda }^{-}\stackrel{^}{\mathbb{E}}\left[-X\right],\text{ }\lambda \in ℝ.$

3) For arbitrary $X,Y\in H$, we have

$|\stackrel{^}{\mathbb{E}}\left[X\right]-\stackrel{^}{\mathbb{E}}\left[Y\right]|\le \stackrel{^}{\mathbb{E}}\left[X-Y\right]\vee \stackrel{^}{\mathbb{E}}\left[Y-X\right]\le \stackrel{^}{\mathbb{E}}\left[|X-Y|\right].$

These properties of G-expectation will often be used in this article; they can simplify our calculations.

Now let us introduce the notion of G-Brownian motion.

Definition 2. [5] A d-dimensional process ${\left({B}_{t}\right)}_{t\ge 0}$ on a sublinear expectation space $\left(\Omega ,\mathcal{H},\stackrel{^}{\mathbb{E}}\right)$ is called a G-Brownian motion if the following properties are satisfied

1) ${B}_{0}\left(\omega \right)=0$ ;

2) For each $t,s\ge 0$, the increment ${B}_{t+s}-{B}_{t}$ and ${B}_{s}$ are identically distributed. For arbitrary $n\in ℕ$ and ${t}_{1},{t}_{2},\cdots ,{t}_{n}\in \left[0,t\right]$, ${B}_{t+s}-{B}_{t}$ is independent of $\left({B}_{{t}_{1}},{B}_{{t}_{2}},\cdots ,{B}_{{t}_{n}}\right)$.

Just like in the classical situation, the increment process ${\left({B}_{t+s}-{B}_{s}\right)}_{t\ge 0}$ is independent of ${F}_{s}$. In fact, it is a new G-Brownian motion since, just as in the classical situation, the increments of B are identically distributed. We now introduce some computation formulas for standard G-Brownian motion.

Lemma 2. [3] For each $n=0,1,2,\cdots$, and $0\le s\le t$, we have

$\begin{array}{l}\stackrel{^}{\mathbb{E}}\left[{B}_{t}-{B}_{s}|{F}_{s}\right]=0\\ \stackrel{^}{\mathbb{E}}\left[{|{B}_{t}-{B}_{s}|}^{n}|{F}_{s}\right]=\stackrel{^}{\mathbb{E}}\left[{|{B}_{t}-{B}_{s}|}^{n}\right]=\frac{1}{\sqrt{2\pi \left(t-s\right)}}{\int }_{-\infty }^{+\infty }{|x|}^{n}{\text{e}}^{-\frac{{x}^{2}}{2\left(t-s\right)}}\text{d}x\end{array}$

Exactly as in classical cases, we have

$\begin{array}{l}\stackrel{^}{\mathbb{E}}\left[{\left({B}_{t}-{B}_{s}\right)}^{2}\right]=t-s,\text{ }\text{ }\text{ }\stackrel{^}{\mathbb{E}}\left[{|{B}_{t}-{B}_{s}|}^{3}\right]=\frac{2\sqrt{2}{\left(t-s\right)}^{3/2}}{\sqrt{\pi }}\\ \stackrel{^}{\mathbb{E}}\left[{\left({B}_{t}-{B}_{s}\right)}^{4}\right]=3{\left(t-s\right)}^{2},\text{ }\stackrel{^}{\mathbb{E}}\left[{|{B}_{t}-{B}_{s}|}^{5}\right]=\frac{8\sqrt{2}{\left(t-s\right)}^{5/2}}{\sqrt{\pi }}.\end{array}$ (2)
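The closed-form moments in (2) can be checked against the Gaussian integral of Lemma 2 by plain numerical integration. The midpoint-rule scheme and the choice $t-s=0.7$ below are our own; this is an illustrative sketch, not high-precision quadrature.

```python
import math

# Numerical check of the absolute-moment formula of Lemma 2:
#   E[|B_t - B_s|^n] = (2*pi*(t-s))^{-1/2} * int |x|^n * exp(-x^2/(2(t-s))) dx,
# evaluated by a midpoint Riemann sum over [-10*sd, 10*sd].
def abs_moment(n, var, half_width=10.0, steps=100_000):
    sd = math.sqrt(var)
    a = -half_width * sd
    dx = 2.0 * half_width * sd / steps
    total = 0.0
    for i in range(steps):
        x = a + (i + 0.5) * dx
        total += abs(x) ** n * math.exp(-x * x / (2.0 * var))
    return total * dx / math.sqrt(2.0 * math.pi * var)

var = 0.7  # plays the role of t - s (illustrative choice)
assert abs(abs_moment(2, var) - var) < 1e-4
assert abs(abs_moment(3, var) - 2 * math.sqrt(2) * var**1.5 / math.sqrt(math.pi)) < 1e-4
assert abs(abs_moment(4, var) - 3 * var**2) < 1e-4
```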

2.2. Bivariate Convex Function

Definition 3. [14] Assume that the bivariate function $f\left(x,y\right)$ is defined on the region D. If for every $\left(x,y\right)\in D$ and every increment $\left(\Delta x,\Delta y\right)$ such that $\left(x+\Delta x,y+\Delta y\right),\left(x-\Delta x,y-\Delta y\right)\in D$, we have

$f\left(x,y\right)-\frac{1}{2}\left[f\left(x+\Delta x,y+\Delta y\right)+f\left(x-\Delta x,y-\Delta y\right)\right]\le 0,$

then we call the bivariate function $f\left(x,y\right)$ a convex function on the region D.

Lemma 3. [14] Assume that the bivariate function $f\left(x,y\right)$ has continuous first partial derivatives on the convex region $D$. Then $f\left(x,y\right)$ is convex if and only if for all $\left({x}_{1},{y}_{1}\right),\left({x}_{2},{y}_{2}\right)\in D$,

$f\left({x}_{2},{y}_{2}\right)\ge f\left({x}_{1},{y}_{1}\right)+{{f}^{\prime }}_{x}\left({x}_{1},{y}_{1}\right)\left({x}_{2}-{x}_{1}\right)+{{f}^{\prime }}_{y}\left({x}_{1},{y}_{1}\right)\left({y}_{2}-{y}_{1}\right)$

Lemma 4. [14] Assume that the bivariate function $f\left(x,y\right)$ has second partial derivatives on the convex region $D$. Then $f\left(x,y\right)$ is convex if and only if the Hessian matrix $\left(\begin{array}{cc}\frac{{\partial }^{2}f}{\partial {x}^{2}}& \frac{{\partial }^{2}f}{\partial x\partial y}\\ \frac{{\partial }^{2}f}{\partial y\partial x}& \frac{{\partial }^{2}f}{\partial {y}^{2}}\end{array}\right)$ is positive semi-definite.
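Lemma 4's criterion is easy to apply in code. Below we check it for $h\left(x,y\right)={x}^{4}+{\text{e}}^{-y}$ (our own choice, which reappears in Example 1 with $m=2$), using the elementary fact that a symmetric 2×2 matrix is positive semi-definite if and only if both diagonal entries and the determinant are nonnegative.

```python
import math

# Checking Lemma 4's Hessian criterion for h(x, y) = x^4 + exp(-y),
# an illustrative convex function (used in Example 1 with m = 2).
def hessian(x, y):
    h_xx = 12.0 * x * x   # second x-derivative of x^4
    h_yy = math.exp(-y)   # second y-derivative of exp(-y)
    h_xy = 0.0            # no mixed term
    return h_xx, h_yy, h_xy

def is_psd_2x2(a, d, b):
    """PSD test for the symmetric matrix [[a, b], [b, d]]."""
    return a >= 0 and d >= 0 and a * d - b * b >= 0

for x in (-2.0, 0.0, 1.5):
    for y in (-1.0, 0.0, 3.0):
        a, d, b = hessian(x, y)
        assert is_psd_2x2(a, d, b)  # so h is convex on R^2 by Lemma 4
```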

3. Demonstrations

Using Wang’s proof method, we can easily obtain the following theorem.

Theorem 1. Assume that the function $h\left(x,y\right):ℝ×ℝ↦ℝ$ has second partial derivatives and satisfies the inequality:

$h\left(\stackrel{^}{\mathbb{E}}\left[X\right],\stackrel{^}{\mathbb{E}}\left[Y\right]\right)\le \stackrel{^}{\mathbb{E}}\left[h\left(X,Y\right)\right],$ (3)

where $X,Y\in {L}_{G}^{1}\left(F\right)$, $h\left(X,Y\right)\in {L}_{G}^{1}\left(F\right)$. Then $h\left(x,y\right)$ is a viscosity subsolution of the following equation:

$-G\left(\frac{{\partial }^{2}h}{\partial {x}^{2}}\left(x,y\right){b}^{2}+\frac{{\partial }^{2}h}{\partial {y}^{2}}\left(x,y\right){d}^{2}+2\frac{{\partial }^{2}h}{\partial x\partial y}\left(x,y\right)bd\right)=0,\text{\hspace{0.17em}}\text{\hspace{0.17em}}\left(b,d\right)\in {ℝ}^{2}.$ (4)

Based on the Theorem 1, we obtain the following Jensen inequality of bivariate function.

Theorem 2. Assume that the function $h\left(x,y\right):ℝ×ℝ↦ℝ$ has second partial derivatives and is non-increasing with respect to at least one variable. Then the following two conditions are equivalent:

1) The function h is convex;

2) The Jensen inequality based on G-expectation holds:

$h\left(\stackrel{^}{\mathbb{E}}\left[X\right],\stackrel{^}{\mathbb{E}}\left[Y\right]\right)\le \stackrel{^}{\mathbb{E}}\left[h\left(X,Y\right)\right],$ (5)

where $X,Y\in {L}_{G}^{1}\left(F\right)$, $h\left(X,Y\right)\in {L}_{G}^{1}\left(F\right)$.

Proof: 1) $⇒$ 2). Suppose first that the convex function $h\left(x,y\right)$ is non-increasing with respect to the variable y. By Lemma 3, for each $\left(X,Y\right)$ and $\left(\stackrel{^}{\mathbb{E}}\left[X\right],\stackrel{^}{\mathbb{E}}\left[Y\right]\right)$, we have

$\begin{array}{c}h\left(X,Y\right)\ge h\left(\stackrel{^}{\mathbb{E}}\left[X\right],\stackrel{^}{\mathbb{E}}\left[Y\right]\right)+{{h}^{\prime }}_{x}\left(\stackrel{^}{\mathbb{E}}\left[X\right],\stackrel{^}{\mathbb{E}}\left[Y\right]\right)\left(X-\stackrel{^}{\mathbb{E}}\left[X\right]\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+{{h}^{\prime }}_{y}\left(\stackrel{^}{\mathbb{E}}\left[X\right],\stackrel{^}{\mathbb{E}}\left[Y\right]\right)\left(Y-\stackrel{^}{\mathbb{E}}\left[Y\right]\right).\end{array}$

Then let $l:={{h}^{\prime }}_{x}\left(\stackrel{^}{\mathbb{E}}\left[X\right],\stackrel{^}{\mathbb{E}}\left[Y\right]\right)$, $k:={{h}^{\prime }}_{y}\left(\stackrel{^}{\mathbb{E}}\left[X\right],\stackrel{^}{\mathbb{E}}\left[Y\right]\right)$. Apparently, $k\le 0$.

Then we have

$\begin{array}{l}\stackrel{^}{\mathbb{E}}\left[h\left(X,Y\right)\right]-h\left(\stackrel{^}{\mathbb{E}}\left[X\right],\stackrel{^}{\mathbb{E}}\left[Y\right]\right)\\ \ge \stackrel{^}{\mathbb{E}}\left[l\left(X-\stackrel{^}{\mathbb{E}}\left[X\right]\right)+k\left(Y-\stackrel{^}{\mathbb{E}}\left[Y\right]\right)\right]=\stackrel{^}{\mathbb{E}}\left[l\left(X-\stackrel{^}{\mathbb{E}}\left[X\right]\right)-{k}^{-}\left(Y-\stackrel{^}{\mathbb{E}}\left[Y\right]\right)\right]\\ \ge \stackrel{^}{\mathbb{E}}\left[l\left(X-\stackrel{^}{\mathbb{E}}\left[X\right]\right)\right]-\stackrel{^}{\mathbb{E}}\left[{k}^{-}\left(Y-\stackrel{^}{\mathbb{E}}\left[Y\right]\right)\right]=\stackrel{^}{\mathbb{E}}\left[l\left(X-\stackrel{^}{\mathbb{E}}\left[X\right]\right)\right].\end{array}$

Now we only need to consider $\stackrel{^}{\mathbb{E}}\left[l\left(X-\stackrel{^}{\mathbb{E}}\left[X\right]\right)\right]$. By part 2) of Lemma 1,

$\stackrel{^}{\mathbb{E}}\left[l\left(X-\stackrel{^}{\mathbb{E}}\left[X\right]\right)\right]={l}^{+}\stackrel{^}{\mathbb{E}}\left[X-\stackrel{^}{\mathbb{E}}\left[X\right]\right]+{l}^{-}\stackrel{^}{\mathbb{E}}\left[-\left(X-\stackrel{^}{\mathbb{E}}\left[X\right]\right)\right].$

Since ${l}^{+}\stackrel{^}{\mathbb{E}}\left[X-\stackrel{^}{\mathbb{E}}\left[X\right]\right]=0$ and ${l}^{-}\stackrel{^}{\mathbb{E}}\left[-\left(X-\stackrel{^}{\mathbb{E}}\left[X\right]\right)\right]\ge 0$, we conclude

$\stackrel{^}{\mathbb{E}}\left[h\left(X,Y\right)\right]-h\left(\stackrel{^}{\mathbb{E}}\left[X\right],\stackrel{^}{\mathbb{E}}\left[Y\right]\right)\ge 0.$

2) $⇒$ 1). We argue by contradiction. Suppose the function $h\left(x,y\right)$ is not convex. Then there exist constants $\alpha ,\beta ,p,q$ such that the inequality $\rho \left(x,y\right)\ge h\left(x,y\right)$ fails somewhere on the domain $D=\left\{\left(x,y\right)|\alpha \le x\le \beta ,p\le y\le q\right\}$, where

$\begin{array}{l}\rho \left(x,y\right)=\frac{h\left(\beta ,p\right)-h\left(\alpha ,p\right)}{\beta -\alpha }\left(x-\alpha \right)+\frac{h\left(\alpha ,q\right)-h\left(\alpha ,p\right)}{q-p}\left(y-p\right)+h\left(\alpha ,p\right).\hfill \end{array}$

Define a new function

${\rho }_{\delta }\left(x,y\right)=\rho \left(x,y\right)-\delta \left[\frac{\left(x-\alpha \right)\left(x-\beta \right)+\left(y-p\right)\left(y-q\right)}{2}\right],\text{\hspace{0.17em}}\text{\hspace{0.17em}}\left(x,y\right)\in D.$

Hence there exist a constant ${\delta }_{0}>0$ and a point $\left({x}^{*},{y}^{*}\right)\in D$ such that $h\left({x}^{*},{y}^{*}\right)>{\rho }_{{\delta }_{0}}\left({x}^{*},{y}^{*}\right)$. For this fixed constant ${\delta }_{0}$, we assume the maximum of the function $h\left(x,y\right)-{\rho }_{{\delta }_{0}}$ is achieved at a point $\left(\stackrel{¯}{x},\stackrel{¯}{y}\right)$. Then let $l=h\left(\stackrel{¯}{x},\stackrel{¯}{y}\right)-{\rho }_{{\delta }_{0}}\left(\stackrel{¯}{x},\stackrel{¯}{y}\right)$. We define the following function

${h}_{{\delta }_{0}}\left(x,y\right)={\rho }_{{\delta }_{0}}\left(x,y\right)+l,\text{ }\forall \left(x,y\right)\in D.$

Obviously, $h\le {h}_{{\delta }_{0}}$. According to Theorem 1, $h\left(x,y\right)$ is a viscosity subsolution of Equation (4), which yields, for all $\left(b,d\right)\in {ℝ}^{2}$,

$-G\left(\frac{{\partial }^{2}{h}_{{\delta }_{0}}}{\partial {x}^{2}}\left(x,y\right){b}^{2}+\frac{{\partial }^{2}{h}_{{\delta }_{0}}}{\partial {y}^{2}}\left(x,y\right){d}^{2}+2\frac{{\partial }^{2}{h}_{{\delta }_{0}}}{\partial x\partial y}\left(x,y\right)bd\right)\le 0.$ (6)
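Indeed, since ${h}_{{\delta }_{0}}$ differs from the affine function $\rho$ only by the quadratic correction term and the constant $l$, its second partial derivatives can be computed directly:

$\frac{{\partial }^{2}{h}_{{\delta }_{0}}}{\partial {x}^{2}}\left(x,y\right)=-{\delta }_{0},\text{ }\frac{{\partial }^{2}{h}_{{\delta }_{0}}}{\partial {y}^{2}}\left(x,y\right)=-{\delta }_{0},\text{ }\frac{{\partial }^{2}{h}_{{\delta }_{0}}}{\partial x\partial y}\left(x,y\right)=0.$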

Inequality (6) can therefore be rewritten as follows

$-G\left(-{\delta }_{0}\left({b}^{2}+{d}^{2}\right)\right)\le 0.$

According to the definition of $G\left(x\right)$, this means $\frac{1}{2}{\delta }_{0}\left({b}^{2}+{d}^{2}\right){\underset{_}{\sigma }}^{2}\le 0$. Taking $\left(b,d\right)\ne \left(0,0\right)$, this contradicts ${\delta }_{0}>0$. Therefore, the function $h\left(x,y\right)$ is convex.

Remark 2. Similarly, for an n-variable function, if we suppose that it has second partial derivatives and is non-increasing with respect to at least $n-1$ variables, we can also obtain the corresponding Jensen inequality in the G-expectation framework. We now give some examples to illustrate the application of the Jensen inequality of bivariate functions.

Example 1. Assume ${B}_{t}$ and ${Z}_{s}$ are standard G-Brownian motions. Then $\stackrel{^}{\mathbb{E}}\left[{B}_{t}\right]=\stackrel{^}{\mathbb{E}}\left[{Z}_{s}\right]=0$. Consider the function $h\left(x,y\right)={x}^{2m}+{\text{e}}^{-y}$, $\left(x,y\right)\in {ℝ}^{2}$, $m\in ℕ$.

Obviously, the function $h\left(x,y\right)$ is convex and satisfies $\frac{\partial h}{\partial y}\le 0$ on ${ℝ}^{2}$. According to Theorem 2, we obtain

$\stackrel{^}{\mathbb{E}}\left[{B}_{t}^{2m}+{\text{e}}^{-{Z}_{s}}\right]\ge {\left(\stackrel{^}{\mathbb{E}}\left[{B}_{t}\right]\right)}^{2m}+{\text{e}}^{-\stackrel{^}{\mathbb{E}}\left[{Z}_{s}\right]}=1.$

From this example, we can see that the Jensen inequality of bivariate functions can be used to prove inequalities or to estimate G-expectations. We can also use a bivariate expected utility function to define uncertain preferences based on this Jensen inequality.
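As a hedged numerical illustration of Example 1 (with $m=1$), one can realize a sublinear expectation as a supremum of classical Gaussian expectations over an interval of variances; the variance grid and integration scheme below are our own choices, not the paper's construction. For independent X and Y the expectation of the sum ${X}^{2}+{\text{e}}^{-Y}$ splits into the sum of the two sublinear expectations, so the right-hand side below plays the role of $\stackrel{^}{\mathbb{E}}\left[h\left(X,Y\right)\right]$ in this model.

```python
import math

# Variance-uncertainty model: E_hat[f] = sup over v in [0.25, 1] of E_{N(0,v)}[f].
# The variance grid below is an illustrative discretization.
variances = [0.25 + 0.05 * k for k in range(16)]  # 0.25, 0.30, ..., 1.00

def E_hat(f, n_grid=4000, half_width=10.0):
    """sup over the variance family of E_{N(0,v)}[f] (midpoint Riemann sum)."""
    best = -math.inf
    for v in variances:
        sd = math.sqrt(v)
        a = -half_width * sd
        dx = 2.0 * half_width * sd / n_grid
        total = 0.0
        for i in range(n_grid):
            x = a + (i + 0.5) * dx
            total += f(x) * math.exp(-x * x / (2.0 * v))
        best = max(best, total * dx / math.sqrt(2.0 * math.pi * v))
    return best

h = lambda x, y: x ** 2 + math.exp(-y)  # convex, non-increasing in y

# Jensen: h(E[X], E[Y]) <= E[h(X, Y)]; here E[X] = E[Y] = 0, so the left side is 1.
lhs = h(E_hat(lambda x: x), E_hat(lambda y: y))
rhs = E_hat(lambda x: x ** 2) + E_hat(lambda y: math.exp(-y))  # about 1 + e^{1/2}
assert lhs <= rhs
```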

4. Conclusion

In this work, we suppose that the bivariate function is non-increasing with respect to at least one variable and has second partial derivatives. Under these conditions we obtain the Jensen inequality of bivariate functions in the G-expectation framework, and we give some examples to illustrate its application. As discussed in Section 1, this work focuses on the Jensen inequality of bivariate functions under G-expectation. Our future efforts will focus on proving the Jensen inequality of multivariate functions and exploring the conditions for this inequality.

Cite this paper: Feng, L. (2020) Jensen Inequality of Bivariate Function in the G-Expectation Framework. Journal of Mathematical Finance, 10, 35-41. doi: 10.4236/jmf.2020.101004.
References

[1]   Wang, W. and Chen, Z. (n.d.) Jensen’s Inequality for G-Expectation, Submitted to Comptes Rendus de l'Académie des Sciences Paris Series I.

[2]   Peng, S. (1997) Backward Stochastic Differential Equations and Related g-Expectation. In: El Karoui, N. and Mazliak, L., Eds., Pitman Research Notes in Mathematics Series, Longman, Harlow, 364, 141-159.

[3]   Peng, S. (2007) G-Expectation, G-Brownian Motion and Related Stochastic Calculus of Itô’s Type. Stochastic Analysis and Applications, 2, 541-567.
https://doi.org/10.1007/978-3-540-70847-6_25

[4]   Peng, S. (2008) Multi-Dimensional G-Brownian Motion and Related Stochastic Calculus under G-Expectation. Stochastic Processes and their Applications, 118, 2223-2253.
https://doi.org/10.1016/j.spa.2007.10.015

[5]   Peng, S. (2010) Nonlinear Expectations and Stochastic Calculus under Uncertainty. arXiv: 1002.4546.

[6]   Peng, S. (2008) A New Central Limit Theorem under Sublinear Expectations. arXiv: 0803.2656v1.

[7]   Peng, S. (2007) Law of Large Numbers and Central Limit Theorem under Nonlinear Expectations. arXiv: 0702.358v1.

[8]   Bai, X. and Buckdahn, R. (2009) Inf-Convolution of G-Expectations. arXiv: 0910.5398v1.

[9]   Gao, F. (2009) Pathwise Properties and Homeomorphic Flow for Stochastic Differential Equations Driven by G-Brownian Motion. Stochastic Process and Their Application, 119, 3356-3382. https://doi.org/10.1016/j.spa.2009.05.010

[10]   Hu, M. and Peng, S. (2009) On Representation Theorem of G-Expectations and Paths of G-Browian Motion. Acta Mathematicae Applicatae Sinica, English Series, 25, 539-546. https://doi.org/10.1007/s10255-008-8831-1

[11]   Li, B. (2000) Jensen Inequality of g-Expectation and Its Applications. Journal of Shandong University, 35, 413-417.

[12]   Jiang, L. and Chen, Z. (2004) On Jensen’s Inequality for g-Expectation. Chinese Annals of Mathematics, 25, 401-412. https://doi.org/10.1142/S0252959904000378

[13]   Jiang L. (2003) Jensen’s Inequality of Bivariate Function for g-Expectation. Journal of Shandong University, 38, 13-17.

[14]   Fang, K., Zhu, X. and Liu, H. (2008) The Discriminant Conditions of the Bivariate Convex Function. Pure Mathematics and Applied Mathematics, 24, 97-101.
