Two families of inequalities occupy important places in mathematics. One is the family of inequalities of Cebysev type or Ostrowski type, which is applied mainly in probability theory, mathematical statistics, information theory, numerical integration, and the theory of integral operators. The other is the family of inequalities involving the moments of random variables. In this paper we present our study of the Cebysev type. First we recall the Cebysev functional, which is defined as follows:
$$T(f,g)=\frac{1}{b-a}\int_a^b f(t)g(t)\,dt-\frac{1}{b-a}\int_a^b f(t)\,dt\cdot\frac{1}{b-a}\int_a^b g(t)\,dt,$$
where the two functions $f,g:[a,b]\to\mathbb{R}$ are measurable.
In 1882, Cebysev proved that
$$|T(f,g)|\le\frac{1}{12}(b-a)^2\|f'\|_\infty\|g'\|_\infty,$$
where $f$ and $g$ are differentiable in $(a,b)$ with $\|f'\|_\infty=\sup_{t\in(a,b)}|f'(t)|<\infty$ and $\|g'\|_\infty=\sup_{t\in(a,b)}|g'(t)|<\infty$.
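As a quick numerical illustration (not part of the paper's argument), the Cebysev functional and the 1882 bound can be checked on a hypothetical example, here $f(t)=t^2$ and $g(t)=t^3$ on $[0,1]$, using simple midpoint quadrature:

```python
# Numerical sketch of the Cebysev functional T(f, g) and the classical bound
# |T(f,g)| <= (1/12)(b-a)^2 * sup|f'| * sup|g'|.

def integral(h, a, b, n=100000):
    """Midpoint-rule approximation of the integral of h over [a, b]."""
    dx = (b - a) / n
    return sum(h(a + (i + 0.5) * dx) for i in range(n)) * dx

def cebysev_functional(f, g, a, b):
    """T(f,g) = mean(fg) - mean(f) * mean(g) over [a, b]."""
    mean = lambda h: integral(h, a, b) / (b - a)
    return mean(lambda t: f(t) * g(t)) - mean(f) * mean(g)

a, b = 0.0, 1.0
f = lambda t: t ** 2          # sup |f'| on [0, 1] is 2
g = lambda t: t ** 3          # sup |g'| on [0, 1] is 3
T = cebysev_functional(f, g, a, b)     # exact value is 1/12
bound = (b - a) ** 2 / 12 * 2 * 3      # = 0.5
print(T, bound)
```

Here $T(t^2,t^3)=1/6-(1/3)(1/4)=1/12$, comfortably below the bound $0.5$.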
Over the years, building on the Cebysev inequality, new inequalities of Ostrowski-Grüss type or Ostrowski-Cebysev type have been obtained; see, e.g., the references below. Among these recent works, we especially mention the work of Zhefei He and Mingjin Wang. Following their work, we denote by $X$ a random variable having a certain cumulative distribution function $F$. According to probability theory, the expectation and variance of $X$ are
$$E(X)=\int_{-\infty}^{\infty}t\,dF(t),\qquad \mathrm{Var}(X)=\int_{-\infty}^{\infty}\bigl(t-E(X)\bigr)^{2}\,dF(t).$$
For two random variables $X$ and $Y$, the covariance of $X$ and $Y$ is defined by
$$\mathrm{Cov}(X,Y)=E\bigl[(X-E(X))(Y-E(Y))\bigr]=E(XY)-E(X)E(Y).$$
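The two equivalent forms of the covariance can be checked on a small hypothetical discrete joint distribution (the probability table below is ours, chosen only for illustration):

```python
# Sketch: Cov(X, Y) = E[(X - EX)(Y - EY)] = E[XY] - E[X]E[Y]
# for a discrete joint distribution.

# hypothetical joint probability table: (x, y) -> P(X = x, Y = y)
joint = {(0, 0): 0.2, (0, 1): 0.3, (1, 0): 0.1, (1, 1): 0.4}

EX  = sum(p * x for (x, y), p in joint.items())
EY  = sum(p * y for (x, y), p in joint.items())
EXY = sum(p * x * y for (x, y), p in joint.items())

cov_centered = sum(p * (x - EX) * (y - EY) for (x, y), p in joint.items())
cov_product  = EXY - EX * EY
print(cov_centered, cov_product)   # the two forms agree
```

For this table $E(X)=0.5$, $E(Y)=0.7$, $E(XY)=0.4$, so both forms give $\mathrm{Cov}(X,Y)=0.05$.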
Applying the Lagrange mean value theorem (that is, for any function $F$ continuous on $[a,b]$ and differentiable in $(a,b)$, there exists a number $\eta\in(a,b)$ satisfying
$$F(b)-F(a)=F'(\eta)(b-a)\,),$$
we can get the following two inequalities
Similarly we have
Moreover, as an application of the new inequality, the random variable was assigned a specific distribution, the uniform distribution, which led to the following new inequality:
Obviously, in the proof of inequality (4), the proof of inequality (5) was the key step. The main result of this paper is the following theorem, which generalizes inequality (5); from it we obtain a new inequality for the covariance involving three continuous functions, with the ratios of the derivatives of two of them to that of the third assumed bounded.
2. Main Result
Throughout this section and the next, we assume that $X$ is a random variable having cumulative distribution function $F$. We now give a proof of the following theorem.
Theorem 2.1. Suppose that the three functions are continuous on the closed interval and differentiable in its interior, and that the ratios of two of the derivatives
are bounded there. If $X$ is a random variable which has finite expected value, then we have
In particular, for a suitable choice of the functions, the inequality reduces to (7).
Proof. By the hypotheses, the function is bounded, so the expected value exists. Applying the definition of the variance and the Cauchy mean value theorem, we can get
Applying the Cauchy inequality, one has
Similarly we have
which completes the proof of Theorem 2.1.
3. Some Applications
In this section, we present some applications of inequality (9) based on several probability distributions. First, assume that $X$ has the uniform distribution; we obtain the following consequence.
Theorem 3.1. Suppose that the three functions are continuous on the closed interval and differentiable in its interior, and that the ratios of two of the derivatives
are bounded there. Then
Proof. Let $X$ be a random variable with the uniform distribution, which implies that $X$ has the following probability density function:
By plugging (16) and (17) into (9), we obtain the conclusion of Theorem 3.1.
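The uniform moments that enter this computation are standard: for $X\sim U(a,b)$, $E(X)=(a+b)/2$ and $\mathrm{Var}(X)=(b-a)^2/12$. A minimal numerical sketch (our own check, with hypothetical endpoints $a=2$, $b=5$):

```python
# Check the uniform-distribution moments used in Theorem 3.1:
# X ~ U(a, b)  =>  E(X) = (a + b)/2,  Var(X) = (b - a)^2 / 12.

def integral(h, a, b, n=200000):
    """Midpoint-rule approximation of the integral of h over [a, b]."""
    dx = (b - a) / n
    return sum(h(a + (i + 0.5) * dx) for i in range(n)) * dx

a, b = 2.0, 5.0
pdf = lambda x: 1.0 / (b - a)                     # density of U(a, b)

EX  = integral(lambda x: x * pdf(x), a, b)
Var = integral(lambda x: (x - EX) ** 2 * pdf(x), a, b)
print(EX, Var)   # close to (a+b)/2 = 3.5 and (b-a)^2/12 = 0.75
```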
Next, we can also prove the following theorem when $X$ has the Gamma distribution.
Theorem 3.2. Suppose that the three functions are continuous on the closed interval and differentiable in its interior, and that the ratios of two of the derivatives
are bounded there. Then for any ,
where $\Gamma(\cdot)$ is the Gamma function.
Proof. Since the random variable has the Gamma distribution, its probability density function is as follows:
Here, the parameters , . By direct calculation,
By plugging (20) and (21) into (9), we get the conclusion (18). This completes the proof of Theorem 3.2.
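The Gamma moments can be verified numerically. We assume here the shape/rate parametrization $f(x)=\lambda^{r}x^{r-1}e^{-\lambda x}/\Gamma(r)$ (the paper's parametrization may differ), under which $E(X)=r/\lambda$ and $\mathrm{Var}(X)=r/\lambda^{2}$; the values $r=3$, $\lambda=2$ are hypothetical:

```python
import math

# Check the Gamma moments used in Theorem 3.2, assuming the shape/rate
# density f(x) = lam**r * x**(r-1) * exp(-lam*x) / Gamma(r) on (0, inf):
# then E(X) = r/lam and Var(X) = r/lam**2.

def integral(h, a, b, n=200000):
    """Midpoint-rule approximation of the integral of h over [a, b]."""
    dx = (b - a) / n
    return sum(h(a + (i + 0.5) * dx) for i in range(n)) * dx

r, lam = 3.0, 2.0
pdf = lambda x: lam ** r * x ** (r - 1) * math.exp(-lam * x) / math.gamma(r)

upper = 60.0   # truncation point: the tail mass beyond it is negligible
EX  = integral(lambda x: x * pdf(x), 0.0, upper)
Var = integral(lambda x: (x - EX) ** 2 * pdf(x), 0.0, upper)
print(EX, Var)   # close to r/lam = 1.5 and r/lam**2 = 0.75
```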
Next, we consider the case in which $X$ has the Beta distribution and give the following theorem.
Theorem 3.3. Suppose that the three functions are continuous on the closed interval and differentiable in its interior, and that the ratios of two of the derivatives
are bounded there. Then for any ,
Proof. Since the random variable has the Beta distribution, we write out its probability density function:
Then we can get
Putting (24), (25) and (26) into (9), we get (22).
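The Beta moments entering this step are standard: for $X\sim\mathrm{Beta}(\alpha,\beta)$, $E(X)=\alpha/(\alpha+\beta)$ and $\mathrm{Var}(X)=\alpha\beta/\bigl((\alpha+\beta)^{2}(\alpha+\beta+1)\bigr)$. A numerical sketch with hypothetical parameters $\alpha=2$, $\beta=3$:

```python
import math

# Check the Beta moments used in Theorem 3.3:
# density f(x) = x**(al-1) * (1-x)**(be-1) / B(al, be) on (0, 1),
# E(X) = al/(al+be),  Var(X) = al*be / ((al+be)**2 * (al+be+1)).

def integral(h, a, b, n=200000):
    """Midpoint-rule approximation of the integral of h over [a, b]."""
    dx = (b - a) / n
    return sum(h(a + (i + 0.5) * dx) for i in range(n)) * dx

al, be = 2.0, 3.0
B = math.gamma(al) * math.gamma(be) / math.gamma(al + be)   # Beta function
pdf = lambda x: x ** (al - 1) * (1 - x) ** (be - 1) / B

EX  = integral(lambda x: x * pdf(x), 0.0, 1.0)
Var = integral(lambda x: (x - EX) ** 2 * pdf(x), 0.0, 1.0)
print(EX, Var)   # close to 2/5 = 0.4 and 6/150 = 0.04
```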
Finally, when $X$ has the standard normal distribution, we can also prove the following theorem by a similar method.
Theorem 3.4. Suppose that the three functions are continuous on the closed interval and differentiable in its interior, and that the ratios of two of the derivatives are bounded there. Then for any ,
Proof. Since the random variable has the standard normal distribution, its probability density function is
$$\varphi(x)=\frac{1}{\sqrt{2\pi}}e^{-x^{2}/2},\quad x\in\mathbb{R}.$$
According to the definition of covariance and expectation, we have
Evidently, $E(X^{k})=0$ when $k$ is odd. When $k$ is an even number, by mathematical induction we can conclude that $E(X^{k})=(k-1)!!$. More precisely, the procedure is as follows. When $k=2$,
$$E(X^{2})=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}x^{2}e^{-x^{2}/2}\,dx=1.$$
For a positive even number $k$, suppose that $E(X^{k})=(k-1)!!$. By direct calculation (integration by parts), we obtain
$$E(X^{k+2})=(k+1)E(X^{k})=(k+1)!!,$$
which by induction establishes that $E(X^{k})=(k-1)!!$ when $k$ is even.
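The moment pattern of the standard normal distribution, $E(X^{k})=0$ for odd $k$ and $E(X^{k})=(k-1)!!$ for even $k$, can be checked numerically (our own sketch; the truncation point of the integral is an assumption chosen so the tails are negligible):

```python
import math

# Check the standard-normal moments used in Theorem 3.4:
# E(X^k) = 0 for odd k, and E(X^k) = (k-1)!! for even k.

def integral(h, a, b, n=200000):
    """Midpoint-rule approximation of the integral of h over [a, b]."""
    dx = (b - a) / n
    return sum(h(a + (i + 0.5) * dx) for i in range(n)) * dx

phi = lambda x: math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def moment(k, cut=12.0):
    """E(X^k) for X ~ N(0, 1), truncating the integral to [-cut, cut]."""
    return integral(lambda x: x ** k * phi(x), -cut, cut)

def double_factorial(m):
    """m!! = m * (m-2) * (m-4) * ... ; equals 1 for m <= 0."""
    return 1 if m <= 0 else m * double_factorial(m - 2)

print(moment(3))                        # odd moment vanishes
print(moment(4), double_factorial(3))   # 3!! = 3
print(moment(6), double_factorial(5))   # 5!! = 15
```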
Similarly, we can also prove that for any ,
This equality, combined with (29), (30), and (9), implies conclusion (27) of Theorem 3.4.
4. Conclusion and Future Work
We have presented the proof of the new generalized inequality and given several inequalities as applications, similar to Cebysev-type inequalities. Based on He and Wang's results, the inequality for covariance, as well as the applications of the main result, has been generalized. Moreover, in the applications, in view of the significance of the normal distribution throughout probability theory, we let the random variable have the standard normal distribution. These are all the results we have obtained.
Nevertheless, before starting this work we compared inequality (8) with inequality (2). If we could prove the corresponding inequality with coefficient 1 and assign the random variable a specific distribution, such as the uniform distribution, we would obtain a new proof of the well-known Cebysev integral inequality. Whether the coefficient 2 could be replaced by 1 therefore aroused our keen interest, and we made considerable efforts in this direction. Most importantly, the bound may be sharpened: we found that three enlargements were applied in the proof of inequality (5), so we computed the covariance directly by integration to reduce the number of enlargements, and some partial results were obtained. However, studying this question requires techniques beyond those used here, and regrettably we have not accomplished that task. As for future work, there are two directions worth pursuing: one is how to extend the results from the one-dimensional space R to the multidimensional space Rn, and the other is how to find an approach to the open question described above.
The authors would like to thank the reviewers for their detailed and helpful suggestions for revising this paper. This work was supported by the National Natural Science Foundation of China (Grant no. 11761027), the Natural Science Foundation of Hainan Province (Grant no. 2018CXTD338), the Scientific Research Foundation of Hainan Province Education Bureau (Grant no. Hnky2016-14), and the Educational Reform Foundation of Hainan Province Education Bureau (Grant no. Hnjg2017ZD-13).
 Agarwal, R.P., Barnett, N.S., Cerone, P. and Dragomir, S.S. (2005) A Survey on Some Inequalities for Expectation and Variance. Computational and Applied Mathematics, 49, 439-480.
 Cerone, P. and Dragomir, S.S. (2009) Bounding the Cebysev Functional for the Riemann-Stieltjes Integral via a Beesack Inequality and Applications. Computers & Mathematics with Applications, 58, 1247-1252.
 Dragomir, S.S., Barnett, N.S. and Wang, S. (1999) An Ostrowski Type Inequality for a Random Variable Whose Probability Density Function Belongs to $L_p[a,b]$, $p>1$. Mathematical Inequalities & Applications, 2, 501-508.