A Bayesian Regression Model and Applications


1. Introduction

There has been considerable interest in Bayesian vector regression and its application to various classification and regression problems [1] [2] [3] [4]. The Bayesian approach associates probability distributions with the observed data; prior distributions are converted to posterior distributions through the use of Bayes’ theorem. Let x be an input vector and t a vector of target parameters. In a regression formulation our goal is to define a model $y\left(x;w\right)$ that yields an approximation to the true target t, with the model defined by the parameters w. The model is typically designed using a set of “training” data $D={\left\{{x}_{n},{t}_{n}\right\}}_{n=1}^{N}$. Although we initially consider a finite set D, the goal is for the resulting model $y\left(x;w\right)$ to be applicable to arbitrary $\left(x,t\right)\notin D$ over the anticipated range of t. When developing a regression model one must address the bias-variance tradeoff. A bias is introduced by restricting the form that $y\left(x;w\right)$ may take, while the variance represents the error between the model $y\left(x;w\right)$ and the true target parameters t. Models with minimal bias typically have significant flexibility, and therefore their parameters may vary significantly as a function of the specific training set D employed. To obtain good model generalization, which is connected to the variation of the model parameters with D, one must introduce a bias. The use of a small number of non-zero parameters w often yields a good balance between bias and variance; such models are termed “sparse”. This has led to the development of the relevance vector machine [5].

The rest of this paper is organized as follows. The theory of the vector-regression formulation is presented in Section 2, with application examples provided in Section 3. The work is summarized in Section 4.

2. Sparse Bayesian Vector Regression

2.1. Model Specification

Assume we have available a set of training data $D={\left\{{x}_{n},{t}_{n}\right\}}_{n=1}^{N}$, where ${x}_{n}={\left[{x}_{n}^{\left(1\right)}\,{x}_{n}^{\left(2\right)}\,\cdots \,{x}_{n}^{\left(L\right)}\right]}^{\top}$ and ${t}_{n}={\left[{t}_{n}^{\left(1\right)}\,{t}_{n}^{\left(2\right)}\,\cdots \,{t}_{n}^{\left(M\right)}\right]}^{\top}$. Our objective is to develop a function $y\left(x;w\right)$ that depends on the parameters w. Once $y\left(x;w\right)$ is so designed, it may be used to map an arbitrary x to an approximation of the target parameters t.

The specific vector-regression function $y\left(x;w\right)={\left[{y}^{\left(1\right)}\left(x;w\right)\,{y}^{\left(2\right)}\left(x;w\right)\,\cdots \,{y}^{\left(M\right)}\left(x;w\right)\right]}^{\top}$ employed here is defined as

$y\left(x;w\right)={\displaystyle {\sum}_{i=1}^{N}{w}_{i}{t}_{i}K\left(x,{x}_{i}\right)}+{w}_{0}$ (1)

where ${w}_{0}={\left[{w}_{0}^{\left(1\right)}\,{w}_{0}^{\left(2\right)}\,\cdots \,{w}_{0}^{\left(M\right)}\right]}^{\top}$, and $K\left(x,{x}_{i}\right)$ is a kernel function designed such that $K\left(x,{x}_{i}\right)$ is large if ${x}_{i}\approx x$ and small otherwise. Hence in (1) only those ${x}_{i}\approx x$ are important in defining $y\left(x;w\right)$.
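One common kernel with this property, and the one used in the examples of Section 3, is the radial-basis-function kernel $K\left(x,{x}_{i}\right)=\mathrm{exp}\left(-{\Vert x-{x}_{i}\Vert}^{2}/{r}^{2}\right)$. A minimal Python sketch, where the width r is a user-chosen assumption:

```python
import numpy as np

def rbf_kernel(x, xi, r=1.0):
    """Radial-basis-function kernel: close to 1 when xi is near x,
    decaying toward 0 as the points separate. The width r is an
    illustrative choice, not prescribed by the model."""
    x, xi = np.asarray(x, float), np.asarray(xi, float)
    return np.exp(-np.sum((x - xi) ** 2) / r ** 2)
```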

Let

$w={\left[{w}_{1}\,{w}_{2}\,\cdots \,{w}_{N}\,{w}_{0}^{\left(1\right)}\,{w}_{0}^{\left(2\right)}\,\cdots \,{w}_{0}^{\left(M\right)}\right]}^{\top},$

${\psi}_{i}\left(x\right)={\left[{\varphi}_{i}^{\left(1\right)}\,{\varphi}_{i}^{\left(2\right)}\,\cdots \,{\varphi}_{i}^{\left(M\right)}\right]}^{\top},\ \ i=1,2,\cdots ,N$

with

${\varphi}_{i}^{\left(k\right)}={t}_{i}^{\left(k\right)}K\left(x,{x}_{i}\right),\ \ i=1,2,\cdots ,N;\ k=1,2,\cdots ,M$ (2)

and the $M\times \left(N+M\right)$ matrix

$\Psi \left(x\right)=\left[{\psi}_{1}\left(x\right)\,{\psi}_{2}\left(x\right)\,\cdots \,{\psi}_{N}\left(x\right)\,{I}_{M}\right],$ (3)

where ${I}_{M}$ is the $M\times M$ identity matrix. Then (1) can be expressed in matrix form as

$y\left(x;w\right)=\Psi \left(x\right)w$ (4)
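For illustration, the basis matrix $\Psi \left(x\right)$ of (3)-(4) can be assembled directly from the training data. This sketch assumes the RBF kernel of Section 3 with a user-chosen width r; the function name is our own:

```python
import numpy as np

def make_psi(x, X_train, T_train, r=1.0):
    """Build the M x (N+M) matrix Psi(x) of Eq. (3).

    Column i (i = 1..N) is t_i * K(x, x_i); the last M columns are I_M,
    so that Psi(x) @ w reproduces Eq. (1). RBF width r is an assumption.
    """
    X_train = np.asarray(X_train, float)   # N x L training inputs
    T_train = np.asarray(T_train, float)   # N x M training targets
    N, M = T_train.shape
    # Kernel values K(x, x_i) for all training points at once
    k = np.exp(-np.sum((X_train - np.asarray(x, float)) ** 2, axis=1) / r ** 2)
    return np.hstack([T_train.T * k, np.eye(M)])   # M x (N+M)
```

With $w$ a length-$(N+M)$ vector, `make_psi(x, X, T) @ w` evaluates $y\left(x;w\right)$ of (4).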

Assume that the target arises from the model with additive noise

$t=y\left(x;w\right)+\epsilon =\Psi \left(x\right)w+\epsilon ,$ (5)

where the model error is $\epsilon ={\left[{\epsilon}^{\left(1\right)}\,{\epsilon}^{\left(2\right)}\,\cdots \,{\epsilon}^{\left(M\right)}\right]}^{\top}$ and the ${\epsilon}^{\left(k\right)},k=1,2,\cdots ,M$ are independent samples from a zero-mean Gaussian distribution with variance ${\alpha}_{0}^{-1}$

$p\left({\epsilon}^{\left(k\right)}\right)=N\left({\epsilon}^{\left(k\right)}|0,{\alpha}_{0}^{-1}\right),\text{\hspace{0.17em}}\text{\hspace{0.17em}}k=1,2,\cdots ,M$ (6)

We therefore have

$\begin{array}{c}p\left(t|x,w,{\alpha}_{0}\right)={\left(\frac{2\pi}{{\alpha}_{0}}\right)}^{-\frac{M}{2}}\mathrm{exp}\left(-\frac{{\alpha}_{0}}{2}{\Vert t-\Psi \left(x\right)w\Vert}_{2}^{2}\right)\\ =N\left(t|\Psi \left(x\right)w,{\alpha}_{0}^{-1}{I}_{M}\right)\end{array}$ (7)

We wish to constrain the weights w such that a simple model is favored; this is accomplished by invoking a prior distribution on w that favors most of the weights being zero. In this context, only the most relevant members of the training set $D={\left\{{x}_{n},{t}_{n}\right\}}_{n=1}^{N}$, those with nonzero weights ${w}_{n}$, are ultimately used in the final regression model. This simplicity allows improved regression performance for $\left(x,t\right)\notin D$ [5] [6].

We employ a zero-mean Gaussian prior distribution for w

$p\left(w|{\alpha}_{0},\alpha \right)=N\left(w|{0}_{N+M},{\alpha}_{0}^{-1}{\alpha}^{-1}{I}_{N+M}\right),$ (8)

where ${0}_{N+M}$ is an $\left(N+M\right)$-dimensional zero vector, ${I}_{N+M}$ is the $\left(N+M\right)\times \left(N+M\right)$ identity matrix, and suitable priors over the hyperparameters ${\alpha}_{0}$ and $\alpha $ are Gamma distributions [7]

$p\left({\alpha}_{0}|a,b\right)=\text{Gamma}\left({\alpha}_{0}|a,b\right)$ (9)

$p\left(\alpha |c,d\right)=\text{Gamma}\left(\alpha |c,d\right)$ (10)

where $\text{Gamma}\left({\alpha}_{0}|a,b\right)=\Gamma {\left(a\right)}^{-1}{b}^{a}{\alpha}_{0}^{a-1}{\text{e}}^{-b{\alpha}_{0}}$ with $\Gamma \left(a\right)={\displaystyle {\int}_{0}^{\infty}{t}^{a-1}{\text{e}}^{-t}\text{d}t}$.

The hierarchical prior over w favors a sparse model and the prior over ${\alpha}_{0}$ will be used to favor small model error on the training data D.

2.2. Inference

For the training data $D={\left\{{x}_{n},{t}_{n}\right\}}_{n=1}^{N}$ we introduce the $LN$-dimensional vector

$X={\left[{x}_{1}^{\top}\,{x}_{2}^{\top}\,\cdots \,{x}_{N}^{\top}\right]}^{\top}$

and the $MN$-dimensional vector

$T={\left[{t}_{1}^{\top}\,{t}_{2}^{\top}\,\cdots \,{t}_{N}^{\top}\right]}^{\top}$

and let the $\left(MN\right)\times \left(M+N\right)$ matrix

$\Phi ={\left[{\Phi}_{1}^{\top}\,{\Phi}_{2}^{\top}\,\cdots \,{\Phi}_{N}^{\top}\right]}^{\top}$ with ${\Phi}_{i}=\Psi \left({x}_{i}\right),\ i=1,2,\cdots ,N$,

then by (7), we have

$\begin{array}{c}p\left(T|w,{\alpha}_{0},X\right)={\left(\frac{2\pi}{{\alpha}_{0}}\right)}^{-\frac{MN}{2}}\mathrm{exp}\left(-\frac{{\alpha}_{0}}{2}{\Vert T-\Phi w\Vert}_{2}^{2}\right)\\ =N\left(T|\Phi w,{\alpha}_{0}^{-1}{I}_{MN}\right)\end{array}$ (11)
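The stacked quantities $\Phi $ and $T$ can be assembled in the same manner as $\Psi \left(x\right)$; this sketch again assumes the RBF kernel with a user-chosen width r, and the helper name is our own:

```python
import numpy as np

def stack_design(X_train, T_train, r=1.0):
    """Stack Phi = [Psi(x_1); ...; Psi(x_N)] of shape (M*N) x (N+M),
    and the MN-vector T of stacked targets, as used in Eq. (11).
    RBF width r is an illustrative assumption."""
    X_train = np.asarray(X_train, float)   # N x L
    T_train = np.asarray(T_train, float)   # N x M
    N, M = T_train.shape
    blocks = []
    for xi in X_train:
        # Kernel values K(x_i, x_j) for all training points x_j
        k = np.exp(-np.sum((X_train - xi) ** 2, axis=1) / r ** 2)
        blocks.append(np.hstack([T_train.T * k, np.eye(M)]))  # Phi_i
    Phi = np.vstack(blocks)        # (M*N) x (N+M)
    T = T_train.reshape(-1)        # stacked MN-vector
    return Phi, T
```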

Noting that $p\left(T|{\alpha}_{0},\alpha ,X\right)={\displaystyle \int p\left(T|w,{\alpha}_{0},X\right)p\left(w|{\alpha}_{0},\alpha \right)\text{d}w}$ is a convolution of Gaussians, the posterior distribution over the weights w can be derived as

$p\left(w|{\alpha}_{0},\alpha ,X,T\right)=\frac{p\left(T|w,{\alpha}_{0},X\right)p\left(w|{\alpha}_{0},\alpha \right)}{p\left(T|{\alpha}_{0},\alpha ,X\right)}=N\left(w|\mu ,{\alpha}_{0}^{-1}\Sigma \right)$ (12)

where

$\Sigma ={\left({\Phi}^{\top}\Phi +\alpha {I}_{M+N}\right)}^{-1}={\left({\displaystyle {\sum}_{i=1}^{N}{\Phi}_{i}^{\top}{\Phi}_{i}}+\alpha {I}_{M+N}\right)}^{-1}$ (13)

$\mu =\Sigma {\Phi}^{\top}T=\Sigma {\displaystyle {\sum}_{i=1}^{N}{\Phi}_{i}^{\top}{t}_{i}}$ (14)
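Equations (13) and (14) amount to a ridge-regularized least-squares solve. A minimal sketch (the function name is our own):

```python
import numpy as np

def posterior(Phi, T, alpha):
    """Posterior moments of Eq. (12)-(14).

    Phi: (M*N) x (N+M) stacked design matrix; T: stacked MN-vector of
    targets; alpha: the weight-precision hyperparameter.
    Returns Sigma (Eq. 13) and mu (Eq. 14); the posterior covariance of
    w is alpha_0^{-1} * Sigma.
    """
    D = Phi.shape[1]
    Sigma = np.linalg.inv(Phi.T @ Phi + alpha * np.eye(D))  # Eq. (13)
    mu = Sigma @ (Phi.T @ T)                                # Eq. (14)
    return Sigma, mu
```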

2.3. Hyperparameter Optimization

We determine $\alpha $ in (13) by maximizing $p\left(\alpha |T,X\right)\propto p\left(T|\alpha ,X\right)p\left(\alpha \right)$ with respect to $\alpha $. Equivalently, we may maximize the logarithm of this quantity. In addition, we choose to maximize with respect to $\mathrm{ln}\alpha $, since the hyperpriors are assumed on a logarithmic scale.

Since

$\begin{array}{l}\mathrm{ln}p\left(T|\alpha ,X\right)\\ =\mathrm{ln}{\displaystyle \int p\left(T|w,{\alpha}_{0},X\right)p\left(w|{\alpha}_{0},\alpha \right)p\left({\alpha}_{0}|a,b\right)\text{d}w\text{d}{\alpha}_{0}}\\ =-\frac{1}{2}\left[\mathrm{ln}\left|B\right|+\left(MN+2a\right)\mathrm{ln}\left({T}^{\top}{B}^{-1}T+2b\right)\right]+\text{const}\end{array}$

where $B={I}_{MN}+{\alpha}^{-1}\Phi {\Phi}^{\top}$, and $p\left(\mathrm{ln}\alpha \right)=\alpha p\left(\alpha \right)$, we obtain the objective function

$L\left(\alpha \right)=-\frac{1}{2}\left[\mathrm{ln}\left|B\right|+\left(MN+2a\right)\mathrm{ln}\left({T}^{\top}{B}^{-1}T+2b\right)\right]+c\mathrm{ln}\alpha -d\alpha $ (15)

By the determinant identity [8], we have

$\begin{array}{c}\left|B\right|=\left|{I}_{MN}+{\alpha}^{-1}\Phi {\Phi}^{\top}\right|\\ ={\alpha}^{-\left(M+N\right)}\left|\alpha {I}_{M+N}+{\Phi}^{\top}\Phi \right|\\ ={\alpha}^{-\left(M+N\right)}\left|{\Sigma}^{-1}\right|,\end{array}$

and so

$\mathrm{ln}\left|B\right|=-\left(M+N\right)\mathrm{ln}\alpha +\mathrm{ln}\left|{\Sigma}^{-1}\right|$ (16)

Using the Woodbury formula, we obtain

$\begin{array}{c}{B}^{-1}={\left({I}_{MN}+{\alpha}^{-1}\Phi {\Phi}^{\top}\right)}^{-1}\\ ={I}_{MN}-\Phi {\left(\alpha {I}_{M+N}+{\Phi}^{\top}\Phi \right)}^{-1}{\Phi}^{\top}\\ ={I}_{MN}-\Phi \Sigma {\Phi}^{\top},\end{array}$

thus

${T}^{\top}{B}^{-1}T={T}^{\top}\left(T-\Phi \Sigma {\Phi}^{\top}T\right)$

$={T}^{\top}\left(T-\Phi \mu \right)$ (17)

$={\Vert T\Vert}^{2}-{T}^{\top}\Phi \Sigma {\Phi}^{\top}T$ (18)

Then by (16) and Jacobi’s formula, we have

$\begin{array}{c}\frac{\text{d}\mathrm{ln}\left|B\right|}{\text{d}\mathrm{ln}\alpha}=-\left(M+N\right)+\frac{1}{\left|{\Sigma}^{-1}\right|}\frac{\text{d}\left|{\Sigma}^{-1}\right|}{\text{d}\mathrm{ln}\alpha}\\ =-\left(M+N\right)+tr\left(\Sigma \frac{\text{d}{\Sigma}^{-1}}{\text{d}\mathrm{ln}\alpha}\right)\\ =-\left(M+N\right)+\alpha {\displaystyle \underset{j=1}{\overset{M+N}{\sum}}{\Sigma}_{jj}}\end{array}$ (19)

where ${\Sigma}_{jj}$ is the $j$-th diagonal element of the matrix $\Sigma $.

By (18)

$\begin{array}{c}\frac{\text{d}{T}^{\top}{B}^{-1}T}{\text{d}\mathrm{ln}\alpha}=-\frac{\text{d}{T}^{\top}\Phi \Sigma {\Phi}^{\top}T}{\text{d}\mathrm{ln}\alpha}\\ =-{T}^{\top}\Phi \frac{\text{d}\Sigma}{\text{d}\mathrm{ln}\alpha}{\Phi}^{\top}T\\ ={T}^{\top}\Phi \Sigma \frac{\text{d}{\Sigma}^{-1}}{\text{d}\mathrm{ln}\alpha}\Sigma {\Phi}^{\top}T\\ =\alpha {\Vert \mu \Vert}^{2}\end{array}$ (20)

Using (17), (19) and (20), we have

$\begin{array}{c}\frac{\text{d}L\left(\alpha \right)}{\text{d}\mathrm{ln}\alpha}=\frac{1}{2}\left(M+N-\alpha {\displaystyle \underset{j=1}{\overset{M+N}{\sum}}{\Sigma}_{jj}}\right)-\frac{\left(MN+2a\right)}{2\left({T}^{\top}{B}^{-1}T+2b\right)}\frac{\text{d}{T}^{\top}{B}^{-1}T}{\text{d}\mathrm{ln}\alpha}+c-d\alpha \\ =\frac{1}{2}\left(M+N-\alpha {\displaystyle \underset{j=1}{\overset{M+N}{\sum}}{\Sigma}_{jj}}\right)-\frac{\left(MN+2a\right){\Vert \mu \Vert}^{2}\alpha}{2\left[{T}^{\top}\left(T-\Phi \mu \right)+2b\right]}+c-d\alpha \end{array}$ (21)

Setting (21) to zero and rearranging yields

$\alpha =\frac{M+N+2c}{{\displaystyle {\sum}_{j=1}^{M+N}{\Sigma}_{jj}}+2d+\left(MN+2a\right){\Vert \mu \Vert}^{2}/\left[{T}^{\top}\left(T-\Phi \mu \right)+2b\right]}$ (22)

The algorithm iterates (13), (14) and (22) to update $\Sigma $, $\mu $ and $\alpha $ until convergence.
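This fixed-point iteration can be sketched as follows. The starting value of $\alpha $, the iteration cap, and the convergence tolerance are illustrative assumptions, not prescribed by the derivation:

```python
import numpy as np

def fit_alpha(Phi, T, a=0.05, b=0.05, c=0.05, d=0.05,
              alpha=1.0, iters=100, tol=1e-8):
    """Iterate Eqs. (13), (14) and (22) to a fixed point in alpha.

    Phi: (M*N) x (M+N) stacked design matrix; T: stacked MN-vector of
    targets. Initial alpha, iteration cap, and tolerance are
    illustrative choices.
    """
    MN, D = Phi.shape          # D = M + N
    G = Phi.T @ Phi            # Gram matrix, reused every iteration
    for _ in range(iters):
        Sigma = np.linalg.inv(G + alpha * np.eye(D))   # Eq. (13)
        mu = Sigma @ (Phi.T @ T)                       # Eq. (14)
        alpha_new = (D + 2 * c) / (                    # Eq. (22)
            np.trace(Sigma) + 2 * d
            + (MN + 2 * a) * (mu @ mu) / (T @ (T - Phi @ mu) + 2 * b)
        )
        if abs(alpha_new - alpha) < tol * alpha:
            alpha = alpha_new
            break
        alpha = alpha_new
    return alpha, Sigma, mu
```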

2.4. Making Predictions

Assume ${\alpha}_{MP}$ and ${\alpha}_{0,MP}$ are the maximizing values obtained by maximizing $p\left(\alpha |T,X\right)$ (Section 2.3) and $p\left({\alpha}_{0}|T,X\right)$, respectively, and assume

$p\left({\alpha}_{0},\alpha |X,T\right)\approx \delta \left({\alpha}_{0}-{\alpha}_{0,MP}\right)\delta \left(\alpha -{\alpha}_{MP}\right)$

then

$\begin{array}{l}p\left(t|x,X,T\right)={\displaystyle \int p\left(t|x,w,{\alpha}_{0},\alpha \right)p\left(w,{\alpha}_{0},\alpha |X,T\right)\text{d}w\text{d}{\alpha}_{0}\text{d}\alpha}\\ ={\displaystyle \int p\left(t|x,w,{\alpha}_{0}\right)p\left(w|{\alpha}_{0},\alpha ,X,T\right)p\left({\alpha}_{0},\alpha |X,T\right)\text{d}w\text{d}{\alpha}_{0}\text{d}\alpha}\\ \approx {\displaystyle \int p\left(t|x,w,{\alpha}_{0}\right)p\left(w|{\alpha}_{0},\alpha ,X,T\right)\delta \left({\alpha}_{0}-{\alpha}_{0,MP}\right)\delta \left(\alpha -{\alpha}_{MP}\right)\text{d}w\text{d}{\alpha}_{0}\text{d}\alpha}\\ ={\displaystyle \int p\left(t|x,w,{\alpha}_{0,MP}\right)p\left(w|{\alpha}_{0,MP},{\alpha}_{MP},X,T\right)\text{d}w}\\ =N\left(t|y\left(x;\mu \right),{\alpha}_{0,MP}^{-1}\Omega \right)\end{array}$ (23)

with

$y\left(x;\mu \right)=\Psi \left(x\right)\mu $ (24)

$\Omega ={I}_{M}+\Psi \left(x\right)\Sigma \Psi {\left(x\right)}^{\top}$ (25)
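Given the fitted $\mu $, $\Sigma $ and a point estimate of the noise precision, the predictive moments (24)-(25) can be computed as in this sketch (RBF kernel and width r are assumptions, as above):

```python
import numpy as np

def predict(x, X_train, T_train, mu, Sigma, alpha0_mp, r=1.0):
    """Predictive mean and covariance of Eq. (23)-(25).

    mu, Sigma come from the fitted posterior; alpha0_mp is the
    point estimate of the noise precision. RBF width r is an
    illustrative assumption.
    """
    X_train = np.asarray(X_train, float)
    T_train = np.asarray(T_train, float)
    M = T_train.shape[1]
    # Psi(x) as in Eq. (3): columns t_i * K(x, x_i), then I_M
    k = np.exp(-np.sum((X_train - np.asarray(x, float)) ** 2, axis=1) / r ** 2)
    Psi = np.hstack([T_train.T * k, np.eye(M)])
    mean = Psi @ mu                              # Eq. (24)
    cov = (np.eye(M) + Psi @ Sigma @ Psi.T) / alpha0_mp  # Eq. (23), (25)
    return mean, cov
```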

3. Applications

In the examples we employ a radial-basis-function kernel $K\left(x,{x}_{i}\right)=\mathrm{exp}\left(-{\Vert x-{x}_{i}\Vert}^{2}/{r}^{2}\right)$, and adjust the parameters a, b, c and d by training and testing on the given training data; we finally take $a=b=c=d=0.05$ for all examples in this section. In all figures the horizontal axis is the sample index and the vertical axis is the output.

3.1. Regression: Function Approximation

The model can be used to establish the relation between independent variables and dependent variables of a function.

Example 1 A 2-dimensional vector function of two variables:

${t}_{1}=\text{sinc}\left(\frac{{x}_{1}+{x}_{2}}{4}\right)$

${t}_{2}=-0.5\text{sinc}\left(\frac{{x}_{1}+{x}_{2}}{4}\right)\mathrm{sin}\left(\frac{{x}_{1}{x}_{2}}{20}\right)-0.4$

in domain $\left\{\left({x}_{1},{x}_{2}\right)|-10\le {x}_{1}\le 10,0\le {x}_{2}\le 20\right\}$, where $\text{sinc}\left(x\right)=\mathrm{sin}\left(x\right)/x$.
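These target functions can be evaluated with NumPy; note that `np.sinc` is the normalized sinc, $\mathrm{sin}\left(\pi x\right)/\left(\pi x\right)$, so its argument is rescaled to match the paper’s $\text{sinc}\left(x\right)=\mathrm{sin}\left(x\right)/x$:

```python
import numpy as np

def example1_targets(x1, x2):
    """Targets of Example 1. np.sinc(z) = sin(pi*z)/(pi*z), so
    np.sinc(u/pi) = sin(u)/u, the unnormalized sinc used here."""
    s = np.sinc((x1 + x2) / (4 * np.pi))          # sinc((x1+x2)/4)
    t1 = s
    t2 = -0.5 * s * np.sin(x1 * x2 / 20) - 0.4
    return t1, t2
```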

Figure 1 and Figure 2 illustrate the results. Figure 1 shows learning from 100 noise-free training samples; Figure 2 is based on 100 noisy training samples. The noise is zero-mean Gaussian with standard deviation equal to 5% of the average $\Vert t\Vert $ over the training data. Both cases are tested on 100 examples that are not in the training data.

Example 2 A 3-dimensional vector function of 200 variables, $\left({x}_{1},{x}_{2},\cdots ,{x}_{200}\right)\to \left({t}_{1},{t}_{2},{t}_{3}\right)$:

${t}_{1}={\displaystyle \underset{k=1}{\overset{200}{\sum}}\mathrm{sin}\left({\left({x}_{k}\right)}^{5/7}\right)}+\frac{{x}_{50}}{100}$

${t}_{2}=\frac{{x}_{200}}{800}{t}_{1}+\frac{{x}_{50}}{200}+\mathrm{cos}\left(\frac{{x}_{100}}{5}\right)-10$

${t}_{3}=\text{atan}\left(\frac{{t}_{1}+{t}_{2}}{6}\right)+\frac{{t}_{2}-{t}_{1}}{2}-10$

We choose samples at points ${x}^{n}=\left({x}_{1}^{n},{x}_{2}^{n},\cdots ,{x}_{200}^{n}\right)$ with ${x}_{k}^{n}=k+\left(n-1\right)\pi /4$. The 100 samples at points ${x}^{n}$ with $n=1,3,5,\cdots ,199$ are used as training data, and the 100 samples at points ${x}^{n}$ with $n=2,4,6,\cdots ,200$ are used as testing data.

Figure 3 is from noise-free training samples; Figure 4 is based on noisy training samples. The noise is zero-mean Gaussian with standard deviation equal to 5% of the average $\Vert t\Vert $ over the training data.

3.2. Regression: Inverse Scattering

The model can be used to characterize the connection between the measured scattered-field data x and the underlying target responsible for these fields, characterized by the parameter vector t. The scattering data x may be measured at multiple positions. In the examples the measured data are simulated by a forward model.

Figure 1. Results for the 2-dim vector function with noise-free data: (a) predict on training points; (b) predict on testing points.

Figure 2. Results for the 2-dim vector function with noisy data: (a) predict on training points; (b) predict on testing points.

Figure 3. Results for the 3-dim vector function with noise-free data: (a) predict on training points; (b) predict on testing points.

We consider a homogeneous lossless dielectric target buried in a lossy dielectric half space. The objective is to invert for the parameters of the target. In the examples, the parameter vector t is composed of three real numbers: the depth, the size, and the dielectric constant of the target. For each target there are 100 simulated measurement data. The training data $D={\left\{{x}_{n},{t}_{n}\right\}}_{n=1}^{N}$ is composed of $N=180$ examples, and the testing data is composed of 125 examples that are not in D.

Example 1 We consider a cube target in this example. Figure 5 and Figure 6 illustrate the results. Figure 5 is from noise-free data; Figure 6 is based on noisy data. The noise is zero-mean Gaussian with standard deviation equal to 10% of the average $\Vert x\Vert $ over the training data. The “size” is the width of the cube.

Figure 4. Results for 3-dim vector function with noisy data: (a) predict on training points; (b) predict on testing points.

Figure 5. Results for cube target with noise-free data: (a) predict on training points; (b) predict on testing points.

Figure 6. Results for cube target with noisy data: (a) predict on training points; (b) predict on testing points.

Figure 7. Results for sphere target with noise-free data: (a) predict on training points; (b) predict on testing points.

Figure 8. Results for sphere target with noisy data: (a) predict on training points; (b) predict on testing points.

Example 2 We consider a sphere target in this example. Figure 7 and Figure 8 illustrate the results. Figure 7 is from noise-free data; Figure 8 is based on noisy data. The noise is zero-mean Gaussian with standard deviation equal to 10% of the average $\Vert x\Vert $ over the training data. The “size” is the diameter of the sphere.

We applied the model to two completely different types of problems, and it works well for both. These results indicate that the regression model can be applied to various types of regression problems.

4. Conclusion

A Bayesian vector-regression algorithm has been developed. The model employs a statistical prior that favors a sparse model, for which most of the weights are zero [5]. The model improves the algorithm in [9] and reduces the number of hyperparameters that must be estimated in the algorithm from two to one. The model is not tailored to one specific problem and so can be applied to different regression problems. We have discussed the theoretical development of the model and presented example results for two different applications: one is function approximation, and the other is inverse scattering from dielectric targets buried in a lossy half space. The algorithm has been demonstrated to work well for both applications.

References

[1] Law, T. and Shawe-Taylor, J. (2017) Practical Bayesian Support Vector Regression for Financial Time Series Prediction and Market Condition Change Detection. Quantitative Finance, 17, 1403-1416.

https://doi.org/10.1080/14697688.2016.1267868

[2] Yu, J. (2012) A Bayesian Inference Based Two-Stage Support Vector Regression Framework for Soft Sensor Development in Batch Bioprocesses. Computers & Chemical Engineering, 41, 134-144.

https://doi.org/10.1016/j.compchemeng.2012.03.004

[3] Jacobs, J.P. (2012) Bayesian Support Vector Regression with Automatic Relevance Determination Kernel for Modeling of Antenna Input Characteristics. IEEE Transactions on Antennas and Propagation, 60, 2114-2118.

https://doi.org/10.1109/TAP.2012.2186252

[4] Hans, C. (2009) Bayesian Lasso Regression. Biometrika, 96, 835-845.

https://doi.org/10.1093/biomet/asp047

[5] Tipping, M.E. (2001) Sparse Bayesian Learning and the Relevance Vector Machine. Journal of Machine Learning Research, 1, 211-244.

[6] Scholkopf, B. and Smola, A.J. (2001) Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, Cambridge.

[7] Berger, J.O. (1985) Statistical Decision Theory and Bayesian Analysis. 2nd Edition, Springer, Berlin.

https://doi.org/10.1007/978-1-4757-4286-2

[8] Mardia, K.V., Kent, J.T. and Bibby, J.B. (1979) Multivariate Analysis. Academic Press, New York.

[9] Yu, Y., Krishnapuram, B. and Carin, L. (2004) Inverse Scattering with Sparse Bayesian Vector Regression. Inverse Problems, 20, 217-231.

https://doi.org/10.1088/0266-5611/20/6/S13