Student’s *t* Increments

Author(s)
Daniel T. Cassidy

ABSTRACT

Some moments and limiting properties of independent Student’s *t* increments are studied. Independent Student’s *t* increments are independent draws from not-truncated, truncated, and effectively truncated Student’s *t*-distributions with shape parameters n ≥ 1, and can be used to create random walks. It is found that sample paths created from truncated and effectively truncated Student’s *t*-distributions are continuous. Sample paths for Student’s *t*-distributions with n > 2 are also continuous. Student’s *t* increments should thus be useful in construction of stochastic processes and as noise driving terms in Langevin equations.


KEYWORDS

Student’s *t*-Distribution,
Truncated,
Effectively Truncated,
Cauchy Distribution,
Random Walk,
Sample Paths,
Continuity


Received 18 December 2015; accepted 23 February 2016; published 26 February 2016

1. Introduction: Student’s t Increments

The interest of this paper is independent Student’s t increments. These increments are independent draws from a Student’s t-distribution with support (−∞, ∞), a truncated Student’s t-distribution with support [−bσ, bσ], or an effectively truncated Student’s t-distribution with support (−∞, ∞) but which has a multiplicative envelope that effectively truncates the distribution. Here σ is the scale parameter for the Student’s t-distribution and b is a real constant.

These independent Student’s t increments can be used to generate a random walk such as the Markov sequence x_n = x_{n−1} + t_n, n = 1, 2, ⋯, where the t_n are independent draws from a Student’s t-distribution, a truncated Student’s t-distribution, or an effectively truncated Student’s t-distribution.
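As a minimal sketch of such a walk (function name and seed are mine; NumPy’s `standard_t` stands in for the paper’s t increments, with the scale parameter applied multiplicatively):

```python
import numpy as np

rng = np.random.default_rng(0)

def t_random_walk(n_steps, shape=3.0, scale=1.0, x0=0.0):
    """Markov sequence x_k = x_{k-1} + t_k, where the t_k are
    independent Student's t draws with the given shape parameter."""
    increments = scale * rng.standard_t(shape, size=n_steps)
    return x0 + np.cumsum(increments)

walk = t_random_walk(1000, shape=3.0)
```

Each element of `walk` is the running sum of the increments drawn so far.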

Attention will be restricted to Student’s t-distributions with location parameter zero (i.e., zero mean), scale factor σ, and shape parameter n ≥ 1, which covers the Cauchy distribution, for which n = 1, to the Gaussian or normal distribution, for which n → ∞.

To distinguish between time t and a realization of a random variable that is distributed as a Student’s t-distribution, a bold face **t** will be used with the name of the distribution and a regular face t will represent time. Bold face symbols such as **x** and **x**(t) will represent random variables, and specific realizations of the random variables will be represented in regular face as x and x(t). A stochastic process, which is a family of functions of time, is then **x**(t), whereas **x**(t₁) is a random variable for some constant t₁, and x(t₁) is a number in that both t₁ and the value of the realization are specified.

A Student’s t-distribution with location parameter μ, shape parameter n, and scale parameter σ is given by [1] - [3]

f(x; μ, n, σ) = Γ((n + 1)/2) / [Γ(n/2) √(nπ) σ] · [1 + (x − μ)²/(nσ²)]^(−(n+1)/2) (1)

with −∞ < x < ∞. f(x; μ, n, σ) dx gives the probability that a random draw from the Student’s t-distribution lies in the interval (x, x + dx).

A truncated Student’s t-distribution with location parameter μ, shape parameter n, and scale parameter σ is given by

(2)

(3)

(4)

where the rectangle function Π(x/(2bσ)), which equals 1 if |x| ≤ bσ and equals 0 for |x| > bσ, has been used to truncate the distribution and limit support to [−bσ, bσ].
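One direct way to draw from such a truncated distribution is rejection of out-of-range variates; a sketch under the assumption of zero location (function name and seed are mine):

```python
import numpy as np

rng = np.random.default_rng(1)

def truncated_t_draws(n, shape=1.0, scale=1.0, b=50.0):
    """Draws from a Student's t-distribution truncated to
    [-b*scale, b*scale], by rejecting out-of-range variates."""
    out = np.empty(0)
    while out.size < n:
        x = scale * rng.standard_t(shape, size=2 * n)
        out = np.concatenate([out, x[np.abs(x) <= b * scale]])
    return out[:n]

draws = truncated_t_draws(5000, shape=1.0, b=50.0)
```

For b = 50 and shape 1 (Cauchy) the acceptance rate is (2/π) arctan(50) ≈ 0.99, so the loop terminates quickly.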

A Student’s t-distribution is obtained from a mixture of a normal distribution with a standard deviation that is distributed as inverse chi with support (0, ∞) [4] - [7]. Let a = 1/s, where s is the standard deviation; then a is distributed as chi, χ_n, and

(5)

Using chi as defined above and a normal distribution with zero mean and standard deviation of 1/a, the mixing integral when evaluated from 0 to ∞ yields a Student’s t-distribution

(6)

with a mean of zero, shape parameter n, and scale parameter σ.

The probability that a ≥ q, P(a ≥ q), is needed to properly normalize a truncated chi distribution. A left-truncated chi distribution is zero for values a < q:

(7)

An effectively truncated Student’s t-distribution is the pdf for a mixture of a left-truncated chi and normal distribution:

(8)

For q = 0, Equation (8) is a Student’s t-distribution with shape parameter n and scale parameter σ.

This paper is organized as follows. The development in time of the variance for the sum of independent draws from distributions is reviewed in Section 2. It is shown that truncation of a Student’s t-distribution keeps the moments finite and thus variances add, even if the distributions are not stable under convolution. Gaussian and Cauchy distributions are stable under self-convolution. A Gaussian convolved with a Gaussian yields a Gaussian. Student’s t-distributions other than the n = 1 (Cauchy) and n → ∞ (Gaussian) distributions are not stable under self-convolution. The tails of the self-convolution of Student’s t-distributions are “stable”; only the deep tails retain the characteristic power-law dependence of the original t-distribution [6] [8] [9] . However, the fact that the moments are finite and variances add under convolution allows the time development of the variance to be determined. Examples of smoothing of the characteristic function owing to truncation are given and examples of the moments of distributions are given.

The continuity of sample paths is discussed in Section 3. It is shown that truncated and effectively truncated Student’s t-distributions have continuous sample paths. It is also shown that sample paths created by Student’s t-distributions with n > 2 are continuous. Random walks are shown for independent increments drawn from a uniform distribution, from a normal distribution, and from n = 1 and n = 3 Student’s t-distributions. The sample paths for the different distributions were all simulated from the same sequence of pseudo random numbers. This enables observation of the effects of different shape parameters and truncations on the random walks.

Section 4 is a conclusion.

2. Variances Add under Convolution

Let g and h be zero mean probability density functions (pdf’s) with variances σ_g² and σ_h², and let p be the convolution of g and h:

p(x) = ∫ g(u) h(x − u) du, −∞ < u < ∞ (9)

p is also a zero mean pdf and hence the variance of p is

(10)

where P(s) is the Fourier transform of p(x). From the convolution theorem, P(s) = G(s)H(s), and

(11)

since g and h are zero-mean pdf’s: G(0) = H(0) = 1 and G′(0) = H′(0) = 0. Thus

σ_p² = σ_g² + σ_h² (12)

and variances add under convolution. The argument holds even if the means for g and h are non-zero. The argument also holds for distributions that are stable or not stable under convolution, or for combinations of distributions that might not retain shape under the action of convolution.
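The additivity of variances for sums of independent draws, regardless of stability under convolution, can be checked numerically; a minimal sketch (the particular pair of distributions is my choice):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

# Two different zero-mean distributions; their convolution does not
# retain the shape of either parent.
g = rng.uniform(-1.0, 1.0, n)   # variance 1/3
h = rng.laplace(0.0, 1.0, n)    # variance 2
s = g + h                       # distributed as the convolution of the pdfs

# sample variance of the sum vs the sum of the sample variances
difference = abs(s.var() - (g.var() + h.var()))
```

`difference` is small (of order the sampling error), illustrating Equation (12).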

The Fourier transforms G(s), H(s), and P(s) will exist for pdf’s that are continuous or have finite discontinuities [10], p. 9. The derivatives of the transforms might not exist at some values of s owing to higher-order discontinuities, but truncation of the pdf will smooth the transform and remove the discontinuities. For example, consider G(s) = exp(−2πσ|s|). This distribution in the x domain is a Cauchy distribution. The derivatives in the transform domain do not exist at s = 0. However, provided that T < ∞, derivatives at s = 0 exist for the convolution

(13)

which is the Fourier transform of the truncated distribution,

(14)

where Π(x/(2T)) = 1 if |x| ≤ T and Π(x/(2T)) = 0 for |x| > T.

The convolution of Equation (13) does not appear to have an analytic expression except at s = 0.

An expression for the convolution, Equation (13), can be written for s ≥ 0 as

(15)

from which the derivatives at s = 0 can, with some effort, be calculated:

(16)

(17)

The smoothing power of the convolution of Equation (13) can be observed if the sinc function is replaced by a unit-area rectangle function of width similar to that of the main lobe of the sinc function. Using this approximation for the sinc function, the convolution of Equation (13) becomes

(18)

(19)

and can be evaluated to give

(20)

which is, for finite T, a continuous function of s and for which derivatives exist at s = 0. This stands in stark contrast to the Fourier transform of the not-truncated function (i.e., for T → ∞), which is exp(−2πσ|s|).

Figure 1 shows the effect of convolution on the Fourier transform of a Cauchy distribution. The Cauchy distribution was truncated as indicated in Equation (14) with T = 100; the truncation removes values whose magnitudes exceed T. The probability of an observation with magnitude >50 is 0.002 for the distribution of Equation (14). For a normally distributed random variable with mean μ and standard deviation σ, the probability of an observation more than 3σ from the mean is comparable, approximately 0.003.

Figure 2 shows similar quantities as Figure 1 but with a larger truncation value. The probability of an observation with magnitude >5000 is of order 10⁻⁵ for the Cauchy distribution of Equation (14); in a “normal” world, an event of this relative magnitude would be essentially impossible. Truncation smooths the characteristic function and keeps moments finite.

The variance of an n-fold convolution, p = f₁ * f₂ * ⋯ * f_n, is the sum of the n individual variances. If all the f_i equal a common pdf f, then the variance for the n-fold self-convolution is n σ_f². The pdf for the sum of n independent draws from the same parent distribution that is characterized by a pdf f with variance σ_f² is the n-fold self-convolution of the parent pdf f, and the variance of the sum of the n independent draws is n σ_f². For a process that is the summation of samples that are periodically drawn from a parent population, the variance of the process would be proportional to time.

Figure 1. The effect of convolution (truncation) on the Fourier transform of a Cauchy distribution, for T = 100.

Figure 2. The same quantities as Figure 1, but for a larger truncation value T.

Following Papoulis [11], p. 292, consider a homogeneous and stationary Markov sequence [11], p. 530, x_n = x_{n−1} + t_n, n = 1, 2, ⋯, where the t_n are independent Student’s t increments, i.e., the t_n are independent draws from a Student’s t-distribution. The sequence is homogeneous since the pdf’s for each t_n are independent of n. The sequence is stationary since it is homogeneous and all t_n have the same pdf. Let τ be the time between increments. The total time t taken to acquire the sequence is t = nτ. The mean of x_n is zero and the variance of x_n, σ²(x_n), is

σ²(x_n) = n σ_t² = (t/τ) σ_t² (21)

where σ_t² is the variance of any of the Student’s t increments. Allow n → ∞, which requires τ → 0. The variance will remain finite and non-zero only if σ_t² → cτ as τ → 0, where c is a constant. Thus σ_t² varies linearly with sampling period τ and the variance of the sequence varies linearly with time t.

The linear dependence on time of the variance of the Markov sequence arises not because of Gaussian properties, but because of the assumed independence of samples. The variance of a summation of independent samples is the sum of the variance of each sample, and thus the variance will increase linearly with the number of samples. If the samples are obtained by periodic sampling, then the variance will increase linearly with time.

Papoulis [11], p. 292 writes that the limit as n → ∞, which requires τ → 0, results in a Wiener-Lévy process, which is a stochastic process that is continuous for almost all outcomes. Papoulis then shows that x(t) is a normally distributed random variable. Papoulis assumed n samples were drawn from a binomial distribution and appealed to the DeMoivre-Laplace theorem to obtain a normal distribution in the limit that n → ∞ [11], p. 66.

Not all functions tend to a normal pdf under repeated convolution [10], p. 186. A Cauchy distribution is probably the most noted distribution that does not follow the central limit theorem. Not all Student’s t-distributions tend to a normal distribution. According to Bracewell [10], p. 190, only functions with finite area, finite mean, and finite second moments tend to normal distributions under repeated convolution. For convolution of non-identical functions, Lyapunov’s condition on the ratio of absolute moments to a power of the variance must be satisfied.

The dependence on time of the variance for the Markov sequence can be obtained in a slightly different manner than the approach of Papoulis [11] and in a manner that does not specify the underlying pdf’s. Following Shreve [12] , p. 98, the expectation of the quadratic variation can be calculated

(22)

and the mean-square limit of the variance of the quadratic variation can be used to show convergence.

The variance of Q is E[Q²] − (E[Q])². The fourth central moment, μ₄, is proportional to the variance squared for a Student’s t-distribution (see below) and therefore the ratio μ₄/σ⁴ is a constant. The variance of Q then tends to zero linearly with Δt as Δt → 0. Thus in a mean-square sense, the expectation of the quadratic variation is ct. The variance of the stochastic process increases linearly with time t. For Gaussian increments, c = σ². For Student’s t increments, c is a simple function of the shape parameter, the scale parameter, and the degree and form of truncation of the underlying pdf.
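The convergence of the quadratic variation per step to the increment variance can be illustrated numerically (the shape parameter, clip point, and seed are my choices; clipping stands in for truncation, which keeps the fourth moment finite):

```python
import numpy as np

rng = np.random.default_rng(4)

# Quadratic variation Q of a walk with (clipped) t increments; Q/n
# estimates the increment variance, so Q grows linearly with time.
n = 4096
inc = np.clip(rng.standard_t(5.0, size=n), -50.0, 50.0)
Q = float(np.sum(inc**2))
per_step = Q / n   # parent variance is n/(n-2) = 5/3 for shape 5
```

`per_step` is close to 5/3 ≈ 1.67, the variance of the shape-5 parent distribution.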

As the moments and continuity of a stochastic process are of interest, these topics are covered in the following sections. In the following, it is assumed on the strength of the arguments in this section, and owing to the assumption of independent increments, that the scale factor varies as √Δt. The scale factor for a normal distribution is σ, the standard deviation. For Brownian motion, x(t) is a normally distributed random variable with standard deviation proportional to √t. For Brownian motion the increments are independent, Gaussian random variables.

2.1. Moments for Student’s t-Distributions with Support (−∞, ∞)

The m-th central moment for a Student’s t-distribution with support (−∞, ∞) is given by

μ_m = ∫ (x − μ)^m f(x; μ, n, σ) dx, −∞ < x < ∞ (23)

Closed form expressions for the second, fourth, and sixth central moments are given below, along with the values of the shape parameter n for which the expressions are valid.

The second central moment, which is the variance, is proportional to σ² and is valid for n > 2:

μ₂ = σ² n/(n − 2) (24)

The fourth central moment is proportional to σ⁴ and is valid for n > 4:

μ₄ = 3 σ⁴ n²/[(n − 2)(n − 4)] (25)

The sixth central moment is proportional to σ⁶ and is valid for n > 6:

μ₆ = 15 σ⁶ n³/[(n − 2)(n − 4)(n − 6)] (26)

Not all central moments exist when the region of support for the t-distribution is (−∞, ∞).
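The standard closed forms for the second and fourth central moments of a Student’s t-distribution can be verified by brute-force numerical integration (function names, the grid, and the test case n = 6 are my choices):

```python
import numpy as np
from math import gamma, pi, sqrt

def t_pdf(x, n, sigma=1.0):
    """Student's t pdf with shape n, scale sigma, zero location."""
    c = gamma((n + 1) / 2) / (gamma(n / 2) * sqrt(n * pi) * sigma)
    return c * (1.0 + (x / sigma) ** 2 / n) ** (-(n + 1) / 2)

def central_moment(m, n, sigma=1.0):
    """m-th central moment by Riemann summation on a wide grid."""
    x = np.linspace(-400.0 * sigma, 400.0 * sigma, 1_000_001)
    return float(np.sum(x**m * t_pdf(x, n, sigma)) * (x[1] - x[0]))

# closed forms for n = 6, sigma = 1:
var_closed = 6.0 / (6.0 - 2.0)                            # n/(n-2) = 1.5
mu4_closed = 3.0 * 6.0**2 / ((6.0 - 2.0) * (6.0 - 4.0))   # = 13.5
```

The numerical moments agree with the closed forms to well within the discretization and tail-truncation error of the grid.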

2.2. Moments for Truncated Student’s t-Distributions with Support [−bσ, bσ]

Truncation of Student’s t-distributions keeps the moments finite and defined [6] [7] . As an example, consider a Student’s t-distribution with one degree of freedom, n = 1 (i.e., a Cauchy or Lorentzian distribution), with support [−bσ, bσ], where σ is a scale parameter and b is a number. Provided that b < ∞, central moments for the truncated Cauchy (and for all other truncated Student’s t-distributions with n ≥ 1) exist.

The integrals that define the truncated central moments for the n = 1 Student’s t-distribution are

μ_m = (1/A_b) ∫ x^m [πσ(1 + x²/σ²)]⁻¹ dx, −bσ ≤ x ≤ bσ (27)

A_b = ∫ [πσ(1 + x²/σ²)]⁻¹ dx = (2/π) arctan(b), −bσ ≤ x ≤ bσ (28)

Closed form expressions for the central moments of a truncated n = 1 distribution are given below. As might be expected, μ_m for even m is proportional to σ^m with a constant of proportionality that is a function of b and the order of the moment.

For truncated n = 1 Student’s t-distributions, the second central moment is

μ₂ = σ² [b/arctan(b) − 1] (29)

the fourth central moment is

μ₄ = σ⁴ [b³/3 − b + arctan(b)]/arctan(b) (30)

and the sixth central moment is

μ₆ = σ⁶ [b⁵/5 − b³/3 + b − arctan(b)]/arctan(b) (31)

All of these moments are defined with the single restriction that b < ∞ (i.e., that the distribution is truncated). Since the tails of distributions for n > 1 decrease more rapidly than for an n = 1 distribution, the central moments can be evaluated for all truncated Student’s t-distributions with n ≥ 1. In this sense, the Cauchy distribution is a worst case.
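The closed form for the truncated Cauchy variance can be checked by direct integration of the renormalized, truncated pdf (function name, grid, and the test value b = 50 are my choices):

```python
import numpy as np

def truncated_cauchy_variance(b, sigma=1.0):
    """Closed form for the variance of a Cauchy (n = 1) distribution
    truncated to [-b*sigma, b*sigma]: sigma^2 * (b/arctan(b) - 1)."""
    return sigma**2 * (b / np.arctan(b) - 1.0)

# numerical check: integrate the renormalized truncated pdf directly
b = 50.0
x = np.linspace(-b, b, 1_000_001)
dx = x[1] - x[0]
pdf = 1.0 / (np.pi * (1.0 + x**2))
pdf /= np.sum(pdf) * dx                 # renormalize over [-b, b]
num_var = float(np.sum(x**2 * pdf) * dx)
```

The variance diverges only linearly with b, so even mild truncation gives finite, useful moments.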

3. Continuous Sample Paths

For a Markov process, the sample paths x(t) are continuous functions of t if, for any ε > 0,

lim_{Δt→0} (1/Δt) ∫_{|x−z|>ε} p(x, t + Δt | z, t) dx = 0 (32)

uniformly in z, ε, and t [13], p. 46. p(x, t + Δt | z, t) is the pdf for the process and t is time.

The condition for continuous sample paths, Equation (32), can be written in different forms. For independent, zero mean (z = 0), symmetric pdf’s,

lim_{Δt→0} (2/Δt) ∫_ε^∞ p(x, Δt) dx = 0 (33)

or equivalently, since ∫ p(x, Δt) dx = 1 over −∞ < x < ∞ for a pdf,

lim_{Δt→0} (1/Δt) [1 − ∫_{−ε}^{ε} p(x, Δt) dx] = 0 (34)

Both forms will be used.

A stochastic process that is created as the sums of independent draws from a normal distribution (i.e., Gaussian increments) with variance σ²Δt and mean z has continuous sample paths. For simplicity in notation, assume that z = 0. For this process with pdf given by

p(x, Δt) = (2πσ²Δt)^(−1/2) exp[−x²/(2σ²Δt)] (35)

the limit

(36)

(37)

(38)

equals zero and the sample paths are continuous.

An expansion of Equation (38) about Δt = 0 shows that the dominant term goes as Δt^(−1/2) exp[−ε²/(2σ²Δt)]:

(1/Δt) erfc[ε/(σ√(2Δt))] ≈ [√2 σ/(√π ε)] Δt^(−1/2) exp[−ε²/(2σ²Δt)] (39)

and thus the limiting value as Δt → 0 is zero.

For sample paths that are created as the sums of independent draws from a Student’s t-distribution with n = 1 (i.e., Student’s t increments), which is a Cauchy distribution, the pdf

p(x, Δt) = (1/π) σ√Δt/(σ²Δt + x²) (40)

does not have continuous sample paths. The limit

(41)

(42)

does not equal zero. An expansion about Δt = 0,

(43)

shows that the dominant term is proportional to Δt^(−1/2) and thus the limit is infinity as Δt → 0.
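The opposite behaviour of the Gaussian and Cauchy continuity limits can be seen numerically by evaluating (1/Δt)·P(|x| ≥ ε) for decreasing Δt, with the scale taken as σ√Δt in both cases (ε, σ, and the grid of Δt values are my choices):

```python
from math import erfc, sqrt, atan, pi

eps, sigma = 0.5, 1.0

def gauss_rate(dt):
    # (1/dt) * P(|x| >= eps) for a normal pdf with variance sigma^2 * dt
    return erfc(eps / (sigma * sqrt(2.0 * dt))) / dt

def cauchy_rate(dt):
    # (1/dt) * P(|x| >= eps) for a Cauchy pdf with scale sigma*sqrt(dt)
    return (1.0 - (2.0 / pi) * atan(eps / (sigma * sqrt(dt)))) / dt

rates_g = [gauss_rate(dt) for dt in (1e-2, 1e-3, 1e-4)]
rates_c = [cauchy_rate(dt) for dt in (1e-2, 1e-3, 1e-4)]
```

The Gaussian rates collapse toward zero (continuous paths) while the Cauchy rates grow like Δt^(−1/2) (discontinuous paths).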

Sample paths for both normal distributions and n = 1 Student’s t-distributions (Cauchy) have

lim_{Δt→0} ∫_{−ε}^{ε} p(x, Δt) dx = 1 (44)

as required for consistency [13] , p. 47.

For a process with n = 2 Student’s t-distribution increments, the sample paths are continuous if the limit

(45)

(46)

(47)

is zero. Since the limit is not zero, a process with n = 2 Student’s t-distribution increments does not have continuous paths.

For a process with n = 3 Student’s t-distribution increments, the sample paths are continuous if the limit

(48)

(49)

(50)

is zero.

An expansion about Δt = 0 shows that the dominant term in the condition for continuous sample paths for an n = 3 Student’s t-distribution, Equation (50), is proportional to √Δt as Δt → 0:

(51)

Processes with Student’s t-distribution increments with n > 2 have continuous paths since the limit is zero as Δt → 0. However, the fourth moments for Student’s t-distributions with n ≤ 4 do not exist. Thus it would not be possible to use the mean-square variance of the quadratic variation to prove convergence of the expectation of the quadratic variation to ct. See Equation (22) and associated discussion. The moments exist for truncated and effectively truncated Student’s t-distributions.

3.1. Sample Paths: Truncated Cauchy

Consider a truncated Cauchy with support [−bσ, bσ], with b < ∞.

The variance for a truncated Cauchy with support [−bσ, bσ] is given by Equation (29), since the mean is zero and truncation keeps the integral finite. The truncation need not be severe to obtain useful results; the variance diverges only linearly with b.

The condition for continuity is that the limit

(52)

(53)

equals zero for any ε > 0. The rectangle function, which equals 1 if |x| ≤ bσ and 0 otherwise, has been used to truncate the distribution.

If all of the area of the truncated pdf lies within ±ε, then the limit is zero and a process with truncated Cauchy increments should have a continuous path. However, it is not clear that the limit is zero for any ε > 0. The limit is zero when all the area of the pdf is enclosed by ±ε for any ε > 0. The support of the truncated Cauchy distribution was chosen to scale with the scale factor of the distribution, so that the distribution was truncated to include the same fraction of the area of the not-truncated distribution regardless of the choice of the scale factor. That is, the truncation was chosen such that the value of A_b, which is defined by Equation (28), is independent of the scale factor.

3.2. Sample Paths: Effectively Truncated n = 1 Distribution

The pdf for a mixture of a left-truncated chi distribution for n = 1, χ₁, truncated at a = q, and a normal distribution is [6] , [7]

(54)

The tails of the pdf decrease as a Gaussian for non-zero q, where 1/q is the maximum value of the standard deviation that is included in the mixing integral.

The condition for continuous sample paths for the effectively truncated n = 1 distribution can be written in several equivalent forms:

(55)

which, owing to symmetry in x, is equivalent to

(56)

The equation can be written as

(57)

Consider the inequality

(58)

An analytic expression for the integral of the upper bound of the inequality can be found. The dominant term in a series expansion for

(59)

about Δt = 0, with q held proportional to 1/S where the constant of proportionality is a positive number, is

(60)

and the limit

(61)

is zero for q > 0. The scaling ensures that the truncation scales appropriately with S and thus keeps constant the area in the tails of the pdf that has been truncated.

Since probability is non-negative, i.e., p(x) ≥ 0, and the limit as Δt → 0 of the integral of the upper bound times S is zero, then

(62)

and the sample paths for stochastic processes that are created by summing independent draws from effectively truncated n = 1 Student’s t-distributions (i.e., effectively truncated Cauchy distributions) are continuous. The same reasoning can be applied to all effectively truncated Student’s t-distributions with n ≥ 1, and thus all stochastic processes created by summing independent draws from effectively truncated Student’s t-distributions have continuous sample paths.
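Draws from an effectively truncated distribution can be generated directly from the mixture construction; a sketch under one plausible parameterization of the chi mixing variable (function name, seed, and the scaling a = √(χ²_n/n) are my assumptions, not necessarily the paper’s exact convention):

```python
import numpy as np

rng = np.random.default_rng(7)

def effectively_truncated_t(n_draws, shape=1.0, q=0.025, sigma=1.0):
    """Draw a = sqrt(chi2_shape/shape) conditioned on a >= q (a
    left-truncated chi variate), then return sigma * z / a with
    z ~ N(0, 1). Left truncation of the chi distribution imparts a
    Gaussian envelope on the tails of the resulting t variates."""
    out = np.empty(0)
    while out.size < n_draws:
        a = np.sqrt(rng.chisquare(shape, size=4 * n_draws) / shape)
        a = a[a >= q]                      # left-truncated chi
        z = rng.standard_normal(a.size)
        out = np.concatenate([out, sigma * z / a])
    return out[:n_draws]

draws = effectively_truncated_t(20_000, shape=1.0, q=0.025)
```

Because a ≥ q, each variate is bounded by |z|/q times the scale, so the variance (and all higher moments) of the draws is finite even for the shape-1 (Cauchy) case.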

Figure 3 is composed of random walks wherein the increments for the walks were obtained from a uniform distribution, a normal distribution, and n = 1 distributions. All walks were manufactured from the same sequence of 2048 draws from a uniform distribution. This allows comparison of the walks and demonstrates the moderating influence of truncation and effective truncation on the walks. The parameter b was arbitrarily chosen to equal 50, and q = 0.025 was chosen to match approximately the quadratic variation for the walks for the truncated and effectively truncated distributions. The walks with truncated and effectively truncated increments are more angular than the walk with normal increments. The n = 1 walk shows the occasional large jump. The magnitudes of the jumps are significantly smaller in the truncated and effectively truncated walks. Note that the increments were scaled for presentation of the walks in the figure. See Table 1 and Table 2 for sample and parent statistics of the distributions used to generate the figures.

Figure 3. Random walks for draws from a uniform distribution (red), a normal distribution (black), an n = 1 distribution (blue), a truncated n = 1 distribution (cyan), and an effectively truncated n = 1 distribution (magenta), for b = 50 and q = 0.025. All random walks were derived from the same uniform distribution. To display the data, the increments drawn from the n = 1 distribution were divided by 50 and the increments drawn from the truncated and effectively truncated distributions were divided by 3.

Table 1. Descriptive statistics for 2048 draws from distributions with b = 50 and q = 0.025.

Figure 4 displays the pdf’s on a log-log plot. This figure clearly shows that there is little difference between the truncated and effectively truncated pdf for |x| less than the point of truncation. The tail of the truncated distribution falls off infinitely fast whereas the tails of the effectively truncated distribution fall off at the same rate as a Gaussian pdf. Since the random walk with Gaussian increments has continuous sample paths, one would expect an effectively truncated distribution to have continuous sample paths, as the roll-off of the tails is similar. And since the tails of the truncated distribution roll off faster than a Gaussian, one would expect a truncated walk to have continuous sample paths. The tails of the Cauchy distribution (i.e., a not-truncated n = 1 Student’s t-distribution) do not roll off as fast as a Gaussian. A Cauchy random walk does not have continuous sample paths. Large steps in the Cauchy random walk are obvious in Figure 3.

There is little difference in shape between a truncated Student’s t-distribution and an effectively truncated Student’s t-distribution. From taking limits of the pdf, continuous sample paths were found for the effectively truncated distribution, yet a truncated distribution did not appear to have continuous sample paths (cf. Equation (53) and related discussion). This discrepancy would seem to point to a problem with the condition for continuous sample paths, or with the interpretation of the condition for continuous sample paths.

Table 2. Parent statistics for distributions with b = 50 and q = 0.025.

Figure 4. Plots of a normal distribution (black), an n = 1 distribution (red), a truncated n = 1 distribution (cyan), and an effectively truncated n = 1 distribution (blue) for b = 50 and q = 0.025. Note the scaling on the axes: both the ordinate and the abscissa are logarithmic.

3.3. Sample Paths: Effectively Truncated n = 3 Student’s t-Distribution

The pdf for a mixture of a left-truncated chi distribution for n = 3, χ₃, truncated at a = q, and a normal distribution is [6] [7]

(63)

The left truncation of the chi distribution imparts a multiplicative Gaussian envelope that effectively truncates the underlying t distribution.

Figure 5 displays a normal pdf and n = 3 pdfs on a log-log plot. Note the similarity between the n = 3 distribution, the truncated n = 3 distribution, and the effectively truncated n = 3 distribution for |x| less than the point of truncation. The value of q was chosen to yield approximately the same standard deviation for the effectively truncated distribution as was obtained with the truncated distribution. See Table 1 and Table 2 for sample and parent statistics of the distributions. The effectively truncated n = 3 distribution is just starting to show the same slope in the tail as the normal distribution. This owes to the Gaussian envelope from truncation of the underlying chi distribution in the chi-normal mixture that creates the Student’s t-distribution.

Figure 6 is similar to Figure 3 except increments were drawn from n = 3 Student’s t-distributions and the walks were not scaled. There are five random walks displayed in Figure 6: a walk with uniform increments (red), a walk with normal increments (black), and walks with n = 3 increments (not-truncated, truncated, and effectively truncated). The three n = 3 walks almost perfectly overlap, showing that the walks are almost identical, as one might surmise from Figure 5. The normal walk and the n = 3 sample paths appear to have similar features. Figure 7 displays random walks with uniform increments, with Gaussian increments, and with truncated n = 3 increments. All walks were created from the same 2048 random draws from a uniform distribution. The sample paths for the Gaussian increments are displayed twice; once with a scale factor of 1 and once with a scale factor of 1.693. The multiplicative scale factor of 1.693 is the ratio of standard deviations of the increments for the parent distributions of the normal and truncated n = 3 distributions. The scaled plot is presented to facilitate comparison of the truncated n = 3 and normal sample paths. It is clear that the truncated n = 3 and normal sample paths (for the 2048 time steps displayed) are similar.

Figure 5. Plots of an n = 3 distribution (blue), a truncated n = 3 distribution (magenta), and an effectively truncated n = 3 distribution (cyan). Note the scaling on the axes: both the ordinate and the abscissa are logarithmic. For comparison, a normal distribution (black) and an n = 1 (i.e., a Cauchy) distribution (red) are also plotted.

Figure 6. Random walks for draws from a uniform distribution (red), a normal distribution (black), an n = 3 distribution (blue), a truncated n = 3 distribution (cyan), and an effectively truncated n = 3 distribution (magenta). All random walks were derived from the same uniform distribution. The increments were not scaled for this figure. Note that the three n = 3 walks almost perfectly overlap.

Figure 7. Random walks for draws from a uniform distribution (red), a normal distribution (black), a truncated n = 3 distribution (blue), and a scaled normal distribution (black). The increments for the scaled normal distribution were multiplied by 1.693. All random walks were derived from the same uniform distribution.

All walks shown in Figure 3, Figure 6, and Figure 7 were manufactured from the same sequence of variates drawn from a uniform distribution. This allows comparison of the effect of different distributions (uniform, Gaussian, Cauchy, and n = 3 distributions) and truncation (both truncation by a rectangle function and effective truncation) on the sample paths.

Table 1 lists descriptive statistics for the draws that were used to create the sample paths shown in Figure 3, Figure 6, and Figure 7. Table 2 lists the values found in Table 1, but calculated for the parent distributions. In Table 1 and Table 2, Q is the quadratic variation; the average quadratic variation per step is also listed. There is good correspondence between the values obtained for the sample parameters and the parent parameters. In Table 2, marked entries indicate values that were obtained by a symmetry argument.
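The device of feeding one shared uniform sequence through different inverse CDFs can be sketched directly for the distributions that have closed-form inverse CDFs (the seed, b, and the tiny clipping of the uniforms away from 0 and 1 are my choices):

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(8)
u = rng.uniform(1e-12, 1.0 - 1e-12, size=2048)   # one shared uniform sequence

# Each walk maps the SAME uniforms through a different inverse CDF,
# so differences between the paths reflect only the increment pdfs.
norm_inc = np.array([NormalDist().inv_cdf(v) for v in u])
cauchy_inc = np.tan(np.pi * (u - 0.5))               # Cauchy (n = 1), scale 1
b = 50.0
trunc_inc = np.tan((2.0 * u - 1.0) * np.arctan(b))   # Cauchy truncated at +/- b

walks = {name: np.cumsum(inc) for name, inc in
         [("normal", norm_inc), ("cauchy", cauchy_inc), ("truncated", trunc_inc)]}
```

The truncated-Cauchy inverse CDF follows from F(x) = [arctan(x) + arctan(b)]/[2 arctan(b)], so its increments are bounded by ±b by construction.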

The data in Table 1 and Table 2, and in Figures 3-6, clearly show the effectiveness of truncation and effective truncation.

4. Conclusions

Independent Student’s t increments, from which stochastic processes such as random walks are created, are investigated. Attention is restricted to increments from not-truncated, truncated, and effectively truncated Student’s t-distributions with shape parameters n ≥ 1, which covers a broad range of distributions: from Cauchy distributions, for which n = 1, to normal or Gaussian distributions, for which n → ∞. A Student’s t-distribution can be obtained as a mixture of a chi distribution for the reciprocal of the standard deviation and a normal distribution with that standard deviation. Effectively truncated t-distributions arise from left-truncation of the chi distributions in the mixing integrals. An effectively truncated Student’s t-distribution has a Gaussian envelope that imparts interesting properties to the effectively truncated Student’s t-distribution.

Random walks, specifically Markov sequences x_n = x_{n−1} + t_n, n = 1, 2, ⋯, where the t_n are independent Student’s t increments, are considered. The development in time of the scale parameter of the Student’s t-distributions (not-truncated, truncated, and effectively truncated t-distributions) is investigated. It is found for distributions for which the variance exists that the variance equals ct, where c is a constant and t is time. The variance exists for truncated and effectively truncated Student’s t-distributions, and for Student’s t-distributions with shape parameter n > 2. The development in time of the scale parameter for Student’s t-distributions is consistent with a normal distribution, for which the variance increases linearly with time. A Gaussian (or normal) distribution is stable under convolution; in general, a Student’s t-distribution is not stable under convolution.

The continuity of the sample paths is investigated and it is found that truncated and effectively truncated Student’s t-distributions, and Student’s t-distributions with n > 2, have continuous sample paths. This opens the possibility for modelling with a greater number of distributions.

Gardiner [13], p. 79 defines a Wiener process as

(64)

with the constraints that ⟨ξ(t)⟩ = 0, that W(t) is a continuous function of time t, and that W(t) is a Markov process. The requirement ⟨ξ(t)⟩ = 0 is not restrictive as any non-zero mean value of ξ(t) can be considered to be signal. Gardiner explains that one normally assumes that ξ(t) is Gaussian and that ⟨ξ(t)ξ(t′)⟩ = δ(t − t′). The assumption of Gaussian statistics follows from a desire to have continuous paths for W(t). The white noise property of the Wiener process, i.e., ⟨ξ(t)ξ(t′)⟩ = δ(t − t′), follows not from the assumption of Gaussian statistics for ξ(t) but from the assumption that W(t) is a Markov process. For a Markov process, W(t) is not determined probabilistically by any past values [13], p. 78, and thus ξ(t) and ξ(t′) are independent for all t′ ≠ t.

A random walk process that is constructed from truncated or effectively truncated Student’s t increments with n ≥ 1 is continuous. This process can also be constructed under the Markov assumption, where for simplicity the increments are assumed to be draws from zero mean distributions and to be mutually independent. Student’s t-distributions with n > 2 appear also to give continuous paths, without the need for truncation. Thus it appears that there exist more options than independent Gaussian increments for construction of random walks with continuous sample paths. Given continuous sample paths and second moments that depend linearly on time for random walks with independent, not-truncated, truncated, and effectively truncated Student’s t increments, it seems reasonable to speculate that the diffusion coefficients [13] [14], p. 79, p. 133

(65)

exist and thus it should be possible to model noise in Langevin equations with appropriate t-distributions.
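A minimal Euler-Maruyama sketch of the speculation above, with a truncated Student’s t increment in place of the usual Gaussian noise term (the drift, parameters, and function name are my choices, not a construction from the paper):

```python
import numpy as np

rng = np.random.default_rng(9)

def langevin_path(n_steps, dt=1e-3, theta=1.0, c=1.0, shape=3.0, b=50.0):
    """Euler-Maruyama sketch of dx = -theta*x dt + noise, where the
    noise is a truncated Student's t increment scaled by sqrt(c*dt)
    so that its variance grows linearly in time (Section 2)."""
    raw = np.clip(rng.standard_t(shape, size=n_steps), -b, b)
    raw /= np.sqrt(shape / (shape - 2.0))   # roughly unit variance
    x = np.zeros(n_steps + 1)
    for k in range(n_steps):
        x[k + 1] = x[k] - theta * x[k] * dt + np.sqrt(c * dt) * raw[k]
    return x

path = langevin_path(5000)
```

With Gaussian noise this is the standard Ornstein-Uhlenbeck discretization; here the driving term is heavy-tailed but truncated, so all moments of the increments are finite.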

Acknowledgements

This work was funded by the Natural Sciences and Engineering Research Council of Canada (NSERC).

Cite this paper

Cassidy, D. (2016) Student’s *t* Increments. *Open Journal of Statistics*, **6**, 156-171. doi: 10.4236/ojs.2016.61014.


References

[1] Student (1908) The Probable Error of a Mean. Biometrika, 6, 1-25.

http://dx.doi.org/10.1093/biomet/6.1.1

[2] Zabell, S.L. (2008) On Student’s 1908 Article “The Probable Error of a Mean”. Journal of the American Statistical Association, 103, 1-7.

http://dx.doi.org/10.1198/016214508000000030

[3] Nadarajah, S. (2007) Explicit Expressions for Moments of t Order Statistic. Comptes Rendus Mathematique, 345, 523-526.

http://dx.doi.org/10.1016/j.crma.2007.10.027

[4] Praetz, P.D. (1972) The Distribution of Share Price Changes. The Journal of Business, 45, 49-55.

http://dx.doi.org/10.1086/295425

[5] Gerig, A., Vicente, J. and Fuentes, M. (2009) Model for Non-Gaussian Intraday Stock Returns. Physical Review E, 80, Article ID: 065102.

http://dx.doi.org/10.1103/PhysRevE.80.065102

[6] Cassidy, D.T. (2011) Describing n-Day Returns with Student’s t-Distributions. Physica A, 390, 2794-2802.

http://dx.doi.org/10.1016/j.physa.2011.03.019

[7] Cassidy, D.T. (2012) Effective Truncation of a Student’s t-Distribution by Truncation of the Chi Distribution in a Mixing Integral. Open Journal of Statistics, 2, 519-525.

http://dx.doi.org/10.4236/ojs.2012.25067

[8] Bouchaud, J.-P. and Potters, M. (2003) Theory of Financial Risk and Derivative Pricing. 2nd Edition, Cambridge University Press, Cambridge.

http://dx.doi.org/10.1017/CBO9780511753893

[9] Nadarajah, S. and Dey, D.K. (2005) Convolutions of the T distribution. Computers and Mathematics with Applications, 49, 715-721.

http://dx.doi.org/10.1016/j.camwa.2004.10.032

[10] Bracewell, R.N. (2000) The Fourier Transform and Its Applications. 3rd Edition, McGraw-Hill, New York.

[11] Papoulis, A. (1965) Probability, Random Variables, and Stochastic Processes. McGraw-Hill, New York.

[12] Shreve, S.E. (2004) Stochastic Calculus for Finance II: Continuous Time Models. Springer, New York.

http://dx.doi.org/10.1007/978-1-4757-4296-1

[13] Gardiner, C. (2009) Stochastic Methods: A Handbook for the Natural and Social Sciences. 4th Edition, Springer-Verlag, Berlin.

[14] Lax, M., Cai, W. and Xu, M. (2006) Random Processes in Physics and Finance. Oxford University Press, New York.

http://dx.doi.org/10.1093/acprof:oso/9780198567769.001.0001
