Construction of k-Variate Survival Functions with Emphasis on the Case k = 3
Abstract: The purpose of this paper is to present a universal formula for k-variate survival functions for arbitrary k = 2, 3, ..., given all the univariate marginal survival functions. This universal form of k-variate probability distributions is obtained by means of “dependence functions”, named “joiners” in the text, which determine all the stochastic dependencies between the underlying random variables. However, in order that the presented formula represent a legitimate survival function, some necessary and sufficient conditions for the joiners had to be found. Finding those conditions is, basically, the main task of this paper. This task was successfully performed for the case k = 2, and the main results for the case k = 3 are formulated as Theorem 1 and Theorem 2 in Section 4. Hypothetical conditions valid for the general case k ≥ 4 are also formulated in Section 3 as the (very convincing) Hypothesis. As for the sufficient conditions, for both the k = 3 and k ≥ 4 cases full generality was not achieved, since two restrictions were imposed: firstly, we limit ourselves to the “continuous cases” defined in the text (when the corresponding joint density exists and is continuous), and secondly we consider positive stochastic dependencies only. Nevertheless, the class of k-variate distributions that can be constructed is very wide. The presented method of construction by means of joiners can be considered competitive to the copula methodology. As suggested in the paper, building a common theory of both copulae and joiners appears quite possible; joiners may play the role of tools within the theory of copulae and, vice versa, copulae may, for example, be used for finding proper joiners. Another independent feature of the joiner methodology is the possibility of constructing many new stochastic processes, including stationary and Markovian ones.

1. Introduction

This work continues our previous papers on the analysis and construction of multivariate survival functions, given their univariate marginals.

This subject has been widely developed in the literature from the late 1950s and 1960s until the present day. Many methods of constructing bivariate and multivariate probability distributions were developed more recently; we mention only the conditionally determined bivariate and multivariate distributions. Some similar but distinct methods of constructing models, under the name “parameter dependence method”, can also be found in the literature. As it turned out, many of the multivariate distributions obtained by that method can also be obtained by the “method of triangular transformations” (especially by pseudoaffine and pseudopower transformations).

Many other construction options can be found in the references therein.

The approach developed here and in our previous papers has its origin in the Aalen version of the famous Cox model for stochastic dependence.

However, as a result of further development, we were finally able to formulate a version of the emerging theory that is independent of its Aalen origin.

Thus, in our approach, as presented here and in our previous papers, both the construction and the universal characterization of multivariate models rely on incorporating the so-called “joiners”. We propose to name these joiners “Aalen factors” (but we will use the name “joiner” for short). They are the functions which fully determine all the underlying stochastic dependencies between the considered random variables.

As a result, the k-variate survival functions so obtained take a convenient factored form

$P\left({X}_{1}>{x}_{1},\cdots ,{X}_{k}>{x}_{k}\right)={J}_{1,\cdots ,k}\left({x}_{1},\cdots ,{x}_{k-1},{x}_{k}\right){S}_{1}\left({x}_{1}\right){S}_{2}\left({x}_{2}\right)\cdots {S}_{k}\left({x}_{k}\right)$,

where ${S}_{1}\left({x}_{1}\right),{S}_{2}\left({x}_{2}\right),\cdots ,{S}_{k}\left({x}_{k}\right)$ represent all the univariate marginal survival functions, given in advance, and the “dependence factor” ${J}_{1,\cdots ,k}\left({x}_{1},\cdots ,{x}_{k-1},{x}_{k}\right)$ is the joiner. Clearly, the case where ${J}_{1,\cdots ,k}\left({x}_{1},\cdots ,{x}_{k-1},{x}_{k}\right)=1$ everywhere is equivalent to stochastic independence.

Now, the task of the construction of any k-variate probability distribution can simply be formulated as follows: given all the univariate marginal distributions, represented by the survival functions ${S}_{1}\left({x}_{1}\right),{S}_{2}\left({x}_{2}\right),\cdots ,{S}_{k}\left({x}_{k}\right)$, find a proper joiner ${J}_{1,\cdots ,k}\left({x}_{1},\cdots ,{x}_{k-1},{x}_{k}\right)$ so that the product ${J}_{1,\cdots ,k}\left({x}_{1},\cdots ,{x}_{k-1},{x}_{k}\right){S}_{1}\left({x}_{1}\right){S}_{2}\left({x}_{2}\right)\cdots {S}_{k}\left({x}_{k}\right)$ is a valid k-variate survival function.
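The product form above can be sketched in a few lines. A minimal illustration with two exponential marginals and the trivial joiner J = 1 (all names and numbers below are our illustrative assumptions, not part of the paper):

```python
import math

# Hypothetical example: two exponential marginals (rates l1, l2) and the
# trivial joiner J = 1, which reproduces stochastic independence.
l1, l2 = 0.5, 1.2

def S1(x): return math.exp(-l1 * x)
def S2(x): return math.exp(-l2 * x)

def J(x1, x2):           # trivial joiner: no dependence
    return 1.0

def S(x1, x2):           # candidate bivariate survival function
    return J(x1, x2) * S1(x1) * S2(x2)

# With J = 1 the joint survival function factorizes (independence):
print(S(1.0, 2.0) == S1(1.0) * S2(2.0))   # True
```

A non-trivial joiner then replaces the constant 1.0 while the marginals stay fixed; the rest of the section is devoted to which functions J are admissible.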

This task formulation, in terms of survival functions, resembles the task of finding a proper copula for given univariate cdfs, say ${F}_{1}\left({x}_{1}\right),{F}_{2}\left({x}_{2}\right),\cdots ,{F}_{k}\left({x}_{k}\right)$, in order to obtain a k-variate cdf as a stochastic model for some investigated reality (i.e., “outside of mathematics”) represented by a given set of data.

It then becomes clear that, given “essentially the same” input data ${F}_{1}\left({x}_{1}\right),{F}_{2}\left({x}_{2}\right),\cdots ,{F}_{k}\left({x}_{k}\right)$, the “method of joiners” stands as an alternative (and competitive) method to the copula methodology.

On the other hand, as we point out in Section 2, there is a possibility to formulate a common general theory of both copula and joiner representations of bivariate and, possibly, also k-variate (k ≥ 3) probability distributions.

This possibility follows from the fact that both representations (by copulas and by joiners) are equivalent, as they describe the same probability distribution. Therefore, any copula uniquely determines a corresponding joiner, and any joiner uniquely determines a copula. This fact indicates a way to find more copulas through joiners, as well as more joiners that correspond to known copulas. Moreover, the method of joiners may likely be used to further develop copula theory in higher dimensions.

These remarks only signal the possibility of such a common theory. We intend to develop it in more detail in the near future.

For the moment, we concentrate on the joiner representation and joiner-based methods of model construction.

In the next section we provide a short introduction to the theory of joiners by giving a slightly different formulation of the bivariate case, as compared with our previous works. This case is fundamental to our considerations in Sections 3 and 4, since for the k-variate (k ≥ 3) survival functions we restrict our attention to “bi-dependence” only, which means that only bivariate joiners may differ from 1. This restriction dramatically simplifies the theory of k-variate distributions as compared with the general theory for arbitrary k developed in our earlier work.

In Section 3 we provide only a general scheme of k-variate distributions’ analytical form (for arbitrary k) under the bi-dependence assumption.

As for the k-variate distribution whose joiner (in this case reduced to a product of bivariate joiners that correspond to all, or only some, bivariate marginals) must fit the given k univariate marginals, we only formulate the main result as a Hypothesis.

Even though the Hypothesis is very convincing, we were not able to provide a formal proof. The arguments which made us strongly believe it holds mostly (but not exclusively) follow from the fact that the pattern we presume to hold for arbitrary k does hold for the cases k = 2 and k = 3.

As mentioned, the (hypothetical) conclusion, extending these two cases to all k, is not only based on that analogy, but also on the naturalness of the assertion. The case k = 3 is the subject of Section 4. The main thesis about the 3-dimensional model is formulated and proven there in two theorems.

The general case, i.e., the k-variate distribution for any $k=1,2,\cdots$, actually defines an infinite sequence of random variables ${X}_{1},{X}_{2},\cdots ,{X}_{k},\cdots$ which, as is pointed out at the end of Section 3, satisfies the Kolmogorov consistency condition. Thus, we have actually defined a class of discrete-time (now, k represents “time”) stochastic processes with a variety of interesting special cases.

Clearly, such processes need not be very complicated if we assume that most of the bivariate joiners reduce to 1. Based on this possibility we may construct m-Markovian processes for $m=1,2,\cdots$ (they are Markovian if m = 1). By adopting natural assumptions we may also obtain stationarity of some of the constructed processes. It is an exciting possibility that, having only one bivariate distribution of neighboring random variables, say ${X}_{k-1},{X}_{k}$, we may easily construct stochastic processes that are both stationary and Markovian.

This subject is only touched upon in Section 3; a fuller treatment is beyond the scope of this work.

At the end of this Introduction we must note that, according to our standing assumption that every joiner is at most 1 (i.e., its representation by the function $\Psi \left(\cdot ,\cdot \right)$, defined throughout the text, is nonnegative), we limit ourselves to positive stochastic dependencies. An extension to models also comprising negative dependencies is quite possible, but requires a more complex theory.

2. The Bivariate Case

Before constructing new classes of multivariate and, especially, tri-variate survival functions, in this section we recall and slightly modify the bivariate case and its universal form, referring to our previous papers for more details.

Thus, according to those papers, any bivariate survival function $S\left({x}_{1},{x}_{2}\right)=P\left({X}_{1}>{x}_{1},{X}_{2}>{x}_{2}\right)$ of an arbitrary random vector $\left({X}_{\text{1}},{X}_{\text{2}}\right)$ can be represented as:

$S\left({x}_{1},{x}_{2}\right)={S}_{1}\left({x}_{1}\right){S}_{2}\left({x}_{2}\right)J\left({x}_{1},{x}_{2}\right)$, (1)

where ${S}_{1}\left({x}_{1}\right),{S}_{2}\left({x}_{2}\right)$ are the marginal survival functions of $S\left({x}_{1},{x}_{2}\right)$ and the function $J\left({x}_{1},{x}_{2}\right)$, called the joiner, determines all the stochastic dependence between the random variables ${X}_{\text{1}},{X}_{\text{2}}$.

As was argued in our previous work, form (1) is universal in the sense that every bivariate survival function has the unique representation (1).

Unlike in our previous papers, in this work we restrict our attention to the so-called “continuous” case, which means we adopt the assumption that both hazard rates of the marginals ${S}_{1}\left({x}_{1}\right),{S}_{2}\left({x}_{2}\right)$, say ${\lambda }_{1}\left({x}_{1}\right),{\lambda }_{2}\left({x}_{2}\right)$, exist and are continuous.

Moreover, we assume that any considered joiner $J\left({x}_{1},{x}_{2}\right)$ has a unique representation by a continuous function $\Psi \left({x}_{1},{x}_{2}\right)$ such that:

$J\left({x}_{1},{x}_{2}\right)=\mathrm{exp}\left[-\underset{0}{\overset{{x}_{1}}{\int }}\underset{0}{\overset{{x}_{2}}{\int }}\Psi \left(t,u\right)\text{d}t\text{d}u\right]$ (2)

The above assumptions allow us to represent the bivariate survival function (1) in the following exponential form:

$S\left({x}_{1},{x}_{2}\right)=\mathrm{exp}\left[-\underset{0}{\overset{{x}_{1}}{\int }}{\lambda }_{1}\left(t\right)\text{d}t-\underset{0}{\overset{{x}_{1}}{\int }}\underset{0}{\overset{{x}_{2}}{\int }}\Psi \left(t,u\right)\text{d}t\text{d}u-\underset{0}{\overset{{x}_{2}}{\int }}{\lambda }_{2}\left(u\right)\text{d}u\right]$ (3)
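In the continuous case, the exponential form (3) can be evaluated numerically for given hazard rates and a candidate function Ψ. A minimal sketch using a midpoint rule (the hazard rates and Ψ below are illustrative assumptions, not prescribed by the paper):

```python
import math

def midpoint_1d(f, b, n=400):
    """Midpoint-rule approximation of the integral of f over [0, b]."""
    h = b / n
    return h * sum(f((i + 0.5) * h) for i in range(n))

def midpoint_2d(f, b1, b2, n=80):
    """Midpoint-rule approximation of the double integral over [0,b1]x[0,b2]."""
    h1, h2 = b1 / n, b2 / n
    return h1 * h2 * sum(f((i + 0.5) * h1, (j + 0.5) * h2)
                         for i in range(n) for j in range(n))

# Assumed inputs: one constant and one increasing hazard rate, and a
# candidate dependence function Psi = a * lam1 * lam2 with a = 0.3.
lam1 = lambda t: 0.8
lam2 = lambda u: 1.5 * u
Psi  = lambda t, u: 0.3 * 0.8 * (1.5 * u)

def S(x1, x2):
    """Bivariate survival function in the exponential form (3)."""
    return math.exp(-midpoint_1d(lam1, x1)
                    - midpoint_2d(Psi, x1, x2)
                    - midpoint_1d(lam2, x2))
```

Setting either argument to 0 recovers the corresponding univariate marginal, e.g. S(x1, 0) = exp(−0.8·x1) with the numbers above.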

The problem that occurs at this stage is to find an answer to the following question:

Given a fixed but arbitrary pair of marginals ${S}_{1}\left({x}_{1}\right),{S}_{2}\left({x}_{2}\right)$, represented by the corresponding continuous hazard rates ${\lambda }_{1}\left({x}_{1}\right),{\lambda }_{2}\left({x}_{2}\right)$, what conditions must be satisfied by a continuous function of two nonnegative real variables, say $G\left({x}_{1},{x}_{2}\right)$, so that the product ${S}_{1}\left({x}_{1}\right){S}_{2}\left({x}_{2}\right)G\left({x}_{1},{x}_{2}\right)$ is a legitimate survival function, i.e., $G\left({x}_{1},{x}_{2}\right)=J\left({x}_{1},{x}_{2}\right)$ for a proper joiner $J\left({x}_{1},{x}_{2}\right)$ associated with ${\lambda }_{1}\left({x}_{1}\right),{\lambda }_{2}\left({x}_{2}\right)$?

Necessary conditions, for $G\left({x}_{1},{x}_{2}\right)=J\left({x}_{1},{x}_{2}\right)$, where $J\left({x}_{1},{x}_{2}\right)$ is a proper joiner, are that

$G\left({x}_{1},0\right)=1$, for each ${x}_{\text{1}}$, and $G\left(0,{x}_{2}\right)=1$ for each ${x}_{\text{2}}$, (4)

and consequently $G\left(0,0\right)=1$.

Condition (4) is, however, not sufficient.

Assuming there is an exponential representation of $G\left({x}_{1},{x}_{2}\right)$

$G\left({x}_{1},{x}_{2}\right)=\mathrm{exp}\left[-\underset{0}{\overset{{x}_{1}}{\int }}\underset{0}{\overset{{x}_{2}}{\int }}\Phi \left(t,u\right)\text{d}t\text{d}u\right],$

where $\Phi \left(t,u\right)$ is a continuous function (so that $G\left({x}_{1},{x}_{2}\right)$ automatically satisfies (4)), equality (3) may be rewritten in the form

$R\left({x}_{1},{x}_{2}\right)=\mathrm{exp}\left[-\underset{0}{\overset{{x}_{1}}{\int }}{\lambda }_{1}\left(t\right)\text{d}t-\underset{0}{\overset{{x}_{1}}{\int }}\underset{0}{\overset{{x}_{2}}{\int }}\Phi \left(t,u\right)\text{d}t\text{d}u-\underset{0}{\overset{{x}_{2}}{\int }}{\lambda }_{2}\left(u\right)\text{d}u\right]$ (5)

The question whether, for the marginals given by the hazard rates ${\lambda }_{1}\left({x}_{1}\right)$, ${\lambda }_{2}\left({x}_{2}\right)$, the function $R\left({x}_{1},{x}_{2}\right)$ is a legitimate bivariate survival function (in symbols, whether $R\left({x}_{1},{x}_{2}\right)=S\left({x}_{1},{x}_{2}\right)$ ) reduces to the question whether $\Phi \left(t,u\right)$ determines a proper joiner, i.e., whether $\Phi \left(t,u\right)=\Psi \left(t,u\right)$, where the function $\Psi \left(t,u\right)$ determines a proper joiner $J\left({x}_{1},{x}_{2}\right)$ by (2).

Suppose the function $G\left({x}_{1},{x}_{2}\right)$ satisfies the necessary conditions (4).

Now, the sufficient condition for $\Phi \left(t,u\right)=\Psi \left(t,u\right)$ (for some proper function $\Psi \left(t,u\right)$ as in (2)) is the requirement that the second mixed derivatives ${\partial }^{2}/\partial {x}_{1}\partial {x}_{2}$ and ${\partial }^{2}/\partial {x}_{2}\partial {x}_{1}$ of $R\left({x}_{1},{x}_{2}\right)$, as given by (5), are equal to each other and nonnegative for all ${x}_{\text{1}}$, ${x}_{\text{2}}$.

Obviously, these derivatives exist and are continuous by the previously assumed continuity of the functions ${\lambda }_{1}\left(t\right)$, ${\lambda }_{2}\left(u\right)$ and $\Phi \left(t,u\right)$.

The nonnegativity requirement for ${\partial }^{2}/\partial {x}_{1}\partial {x}_{2}R\left({x}_{1},{x}_{2}\right)$ reflects the simple fact that the joint density of any random vector $\left({X}_{1},{X}_{2}\right)$, if it exists, must be nonnegative.

As follows from the form of (3), the other properties characterizing that density’s antiderivative (the cdf) are satisfied automatically.

Thus, to obtain the sufficient condition for $R\left({x}_{1},{x}_{2}\right)=S\left({x}_{1},{x}_{2}\right)$ it is enough to set

${\partial }^{2}/\partial {x}_{1}\partial {x}_{2}R\left({x}_{1},{x}_{2}\right)\ge 0,$ (6)

where $R\left({x}_{1},{x}_{2}\right)$ is given by (5).

After calculating the second mixed derivative from (5), simplifying the underlying expressions, and moving the terms with negative signs to the right-hand side of the inequality, one obtains (6) in the form of the following integral inequality:

$\left[{\lambda }_{1}\left({x}_{1}\right)+\underset{0}{\overset{{x}_{2}}{\int }}\Phi \left({x}_{1},u\right)\text{d}u\right]\cdot \left[{\lambda }_{2}\left({x}_{2}\right)+\underset{0}{\overset{{x}_{1}}{\int }}\Phi \left(t,{x}_{2}\right)\text{d}t\right]\ge \Phi \left({x}_{1},{x}_{2}\right)$ (7)

Now, the task of finding the bivariate distribution, given the marginals represented by the hazard rates ${\lambda }_{1}\left({x}_{1}\right)$, ${\lambda }_{2}\left({x}_{2}\right)$, reduces to solving integral inequality (7) with respect to the only unknown function $\Phi \left({x}_{1},{x}_{2}\right)$. This means that any solution $\Phi \left({x}_{1},{x}_{2}\right)=\Psi \left({x}_{1},{x}_{2}\right)$ of (7) is functionally dependent on the marginal distributions ${S}_{1}\left({x}_{1}\right)$, ${S}_{2}\left({x}_{2}\right)$, here represented by ${\lambda }_{1}\left({x}_{1}\right)$, ${\lambda }_{2}\left({x}_{2}\right)$. So, in the continuous case, the set of all solutions $\Phi \left({x}_{1},{x}_{2}\right)$ of (7) uniquely determines the set of all bivariate distributions having the same fixed marginals. The functions $\Phi \left({x}_{1},{x}_{2}\right)$ satisfying conditions (4) and (7) will be denoted by the symbol $\Psi \left({x}_{1},{x}_{2}\right)$. Thus, whenever we write $\Psi \left({x}_{1},{x}_{2}\right)$ we mean a function representing a proper joiner, and the corresponding function $S\left({x}_{1},{x}_{2}\right)$, given by (3), will be treated as a legitimate bivariate survival function.

In the case $\Phi \left({x}_{1},{x}_{2}\right)\ge 0$ for all ${x}_{\text{1}},{x}_{\text{2}}$ (the positive stochastic dependence case), if the inequality:

${\lambda }_{1}\left({x}_{1}\right){\lambda }_{2}\left({x}_{2}\right)\ge \Phi \left({x}_{1},{x}_{2}\right)$ (8)

holds, then (7) holds too.

Thus, the conditions $\Phi \left({x}_{1},{x}_{2}\right)\ge 0$ and (8) are sufficient conditions for $\Phi \left({x}_{1},{x}_{2}\right)=\Psi \left({x}_{1},{x}_{2}\right)$, where $\Psi \left({x}_{1},{x}_{2}\right)$ determines a proper bivariate survival function $S\left({x}_{1},{x}_{2}\right)$ by (3).

Notice, however, that condition (8) is not a necessary condition.

Nevertheless, any solution of (8) is a solution of (7) and is very easy to find.
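The implication “(8) implies (7)” can be sanity-checked numerically. The sketch below evaluates both sides of the product-form inequality (the same structure as (17)) on a grid, for illustrative constant hazard rates and a Φ chosen according to (8); all numbers are our assumptions:

```python
import math

# Illustrative constant hazards and a candidate Phi satisfying (8):
lam1 = lambda x: 1.0
lam2 = lambda x: 2.0
a = 0.4
Phi = lambda t, u: a * lam1(t) * lam2(u)     # = 0.8, constant

def integral(f, b, n=200):
    """Midpoint-rule approximation of the integral of f over [0, b]."""
    h = b / n
    return h * sum(f((i + 0.5) * h) for i in range(n))

def lhs_minus_rhs(x1, x2):
    """Left side minus right side of the product-form dependence inequality."""
    left = ((lam1(x1) + integral(lambda u: Phi(x1, u), x2))
            * (lam2(x2) + integral(lambda t: Phi(t, x2), x1)))
    return left - Phi(x1, x2)

grid = [0.2 * i for i in range(16)]          # grid over [0, 3]
ok = all(lhs_minus_rhs(x1, x2) >= 0 for x1 in grid for x2 in grid)
print(ok)   # True
```

Such a grid check is, of course, only a sanity test, not a proof; for this Φ the inequality holds analytically since the left side is at least λ1·λ2 = 2 > 0.8.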

The simplest set of such solutions is obviously given by $\Psi \left({x}_{1},{x}_{2}\right)=a{\lambda }_{1}\left({x}_{1}\right){\lambda }_{2}\left({x}_{2}\right)$, where the constant parameter a (to be statistically estimated) is required to satisfy $0\le a\le 1$.

One then obtains as a special case of (3) the following model:

$S\left({x}_{1},{x}_{2}\right)=\mathrm{exp}\left[-\underset{0}{\overset{{x}_{1}}{\int }}{\lambda }_{1}\left(t\right)\text{d}t-a\underset{0}{\overset{{x}_{1}}{\int }}\underset{0}{\overset{{x}_{2}}{\int }}{\lambda }_{1}\left(t\right){\lambda }_{2}\left(u\right)\text{d}t\text{d}u-\underset{0}{\overset{{x}_{2}}{\int }}{\lambda }_{2}\left(u\right)\text{d}u\right]$ (9)

Model (9), which is related to the first Gumbel bivariate exponential distribution, is the most natural (and, possibly, “canonical”) solution to the problem of finding the joint distribution of the random variables X1, X2, given the margins represented by the hazard rates.

We expect many applications of this model, in line with the common opinion that the simplest models (“but not simpler”) are usually the best reflections of the modeled realities.
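For constant hazard rates (exponential marginals), all three integrals in (9) are closed-form, which makes the model especially easy to work with. A minimal sketch (the rate and parameter values are illustrative):

```python
import math

# Model (9) with constant hazards l1, l2 (exponential marginals): all three
# integrals are then closed-form, giving the Gumbel-type survival function
#   S(x1, x2) = exp(-l1*x1 - a*l1*l2*x1*x2 - l2*x2),   0 <= a <= 1.
l1, l2, a = 0.7, 1.1, 0.5

def S(x1, x2):
    return math.exp(-l1 * x1 - a * l1 * l2 * x1 * x2 - l2 * x2)

# Margins are recovered by setting the other argument to 0:
print(S(2.0, 0.0) == math.exp(-l1 * 2.0))   # True
```

Note that with a > 0 the joiner factor exp(−a·l1·l2·x1·x2) is at most 1, so S(x1, x2) ≤ S1(x1)·S2(x2), consistent with the paper’s convention on joiners.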

Model (9) can be generalized to the following one:

$S\left({x}_{1},{x}_{2}\right)=\mathrm{exp}\left[-\underset{0}{\overset{{x}_{1}}{\int }}{\lambda }_{1}\left(t\right)\text{d}t-ac\left({x}_{1},{x}_{2}\right)\underset{0}{\overset{{x}_{1}}{\int }}\underset{0}{\overset{{x}_{2}}{\int }}{\lambda }_{1}\left(t\right){\lambda }_{2}\left(u\right)\text{d}t\text{d}u-\underset{0}{\overset{{x}_{2}}{\int }}{\lambda }_{2}\left(u\right)\text{d}u\right]$ (10)

where the additional factor in the middle term of the exponent of (10) is any continuous function $c\left({x}_{1},{x}_{2}\right)$ satisfying $0\le c\left({x}_{1},{x}_{2}\right)\le 1$ for all nonnegative ${x}_{\text{1}}$, ${x}_{\text{2}}$. Moreover, again $0\le a\le 1$.

In particular, we propose the following natural model, with $c\left({x}_{1},{x}_{2}\right)=\mathrm{exp}\left[-\gamma {x}_{1}{x}_{2}\right]$:

$S\left({x}_{1},{x}_{2}\right)=\mathrm{exp}\left[-\underset{0}{\overset{{x}_{1}}{\int }}{\lambda }_{1}\left(t\right)\text{d}t-a\mathrm{exp}\left[-\gamma {x}_{1}{x}_{2}\right]\underset{0}{\overset{{x}_{1}}{\int }}\underset{0}{\overset{{x}_{2}}{\int }}{\lambda }_{1}\left(t\right){\lambda }_{2}\left(u\right)\text{d}t\text{d}u-\underset{0}{\overset{{x}_{2}}{\int }}{\lambda }_{2}\left(u\right)\text{d}u\right]$ (11)

where the constant real parameter $\gamma$ (to be estimated) satisfies $\gamma \ge 0$.

The theory of joiner representations (as developed in this work and in our previous papers) is competitive with the theory of copulas.

However, it can be shown that the two theories are equivalent in the sense that there is a one-to-one relationship between joiners and copulas, at least in the bivariate case. Thus, having a joiner one immediately obtains the corresponding unique copula, and vice versa. As is well known, every copula is “good” for any pair of marginal distributions, but, as follows from (7), not every “joiner” fits the marginals given by the hazard rates ${\lambda }_{1}\left({x}_{1}\right),{\lambda }_{2}\left({x}_{2}\right)$. On the other hand, substituting the given marginals into any copula, one obtains back (through that copula) a joiner that fits the substituted marginals. In this way, one can obtain proper joiners from all the copulas that are known.
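The last remark can be made concrete. As a sketch (the FGM copula family and all parameter values are our illustrative choices, not the paper’s): if a copula C is used as a survival copula, then S(x1, x2) = C(S1(x1), S2(x2)), and hence the joiner is J = C(S1, S2)/(S1·S2):

```python
import math

# Illustrative choice: the FGM copula C(u,v) = u*v*(1 + th*(1-u)*(1-v)).
# th is taken negative so that the resulting joiner J <= 1, matching the
# paper's convention for positive dependence (Psi >= 0).
th = -0.6
C = lambda u, v: u * v * (1 + th * (1 - u) * (1 - v))

l1, l2 = 1.0, 2.0
S1 = lambda x: math.exp(-l1 * x)
S2 = lambda x: math.exp(-l2 * x)

def J(x1, x2):
    """Joiner recovered from the copula: J = C(S1,S2)/(S1*S2)."""
    u, v = S1(x1), S2(x2)
    return C(u, v) / (u * v)      # = 1 + th*(1-u)*(1-v)

# J satisfies the boundary conditions (4): J(x1, 0) = J(0, x2) = 1.
print(J(3.0, 0.0), J(0.0, 5.0))   # 1.0 1.0
```

By construction this joiner automatically fits the substituted marginals, illustrating how known copula families generate proper joiners.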

It is important to notice that, in applications to practical problems, it is often easier to find a joiner that reflects the modeled reality than a proper copula. (That “easiness” in finding a proper joiner follows from the fact that models involving joiners are closely related to the Aalen version of the Cox model for stochastic dependencies.)

On the other hand, such a proper joiner determines the corresponding copula, which facilitates the choice of a proper copula. In this sense the joiner approach may be considered an enrichment of the copula methodology. The subject of a possible common theory of copulas and joiners will be treated in our next publication, which is now in preparation.

3. A Class of k-Variate Survival Functions

Before starting a more detailed analysis of tri-variate survival functions, which is our main goal, we first introduce a simplified version of k-variate survival functions for any k ≥ 3. This version turns out to be a special case of the most general k-variate model presented by formula (1) in our previous paper.

The model presented here is much simpler, as it describes only the “pairwise stochastic dependence” defined in our earlier work; in that terminology such distributions are “three-independent”.

As it will be seen, this simplified case still preserves a significant amount of generality.

Consider the following simplified formula for a k-variate survival function of the random vector $\left({X}_{1},\cdots ,{X}_{k}\right)$ :

$\begin{array}{c}S\left({x}_{1},\cdots ,{x}_{k}\right)=P\left({X}_{1}>{x}_{1},\cdots ,{X}_{k}>{x}_{k}\right)\\ ={J}_{1,\cdots ,k}^{\ast }\left({x}_{1},\cdots ,{x}_{k-1},{x}_{k}\right){S}_{1}\left({x}_{1}\right){S}_{2}\left({x}_{2}\right)\cdots {S}_{k}\left({x}_{k}\right)\end{array}$ (12)

As mentioned, this formula may be considered a special case of formula (1) in our previous paper.

Now, however, as we assume pairwise dependence only, the joiner in (12) reduces to the following product of bivariate joiners:

${J}_{1,\cdots ,k}^{\ast }\left({x}_{1},\cdots ,{x}_{k}\right)=\underset{1\le i<j\le k}{\prod }{J}_{i,j}\left({x}_{i},{x}_{j}\right)$ (13)

Comparing (13) to the joiner present in formula (3) of our previous paper, one sees that (13) is obtained from that most general case by setting all joiners that are not bivariate to 1.

Summing up, formula (12) can be rewritten as:

$S\left({x}_{1},\cdots ,{x}_{k}\right)=\left\{\underset{1\le i<j\le k}{\prod }{J}_{i,j}\left({x}_{i},{x}_{j}\right)\right\}{S}_{1}\left({x}_{1}\right){S}_{2}\left({x}_{2}\right)\cdots {S}_{k}\left({x}_{k}\right)$ (14)

where any single bivariate joiner ${J}_{i,j}\left({x}_{i},{x}_{j}\right)$ is assumed to fit well to the corresponding two marginal survival functions ${S}_{i}\left({x}_{i}\right),{S}_{j}\left({x}_{j}\right)$ so that the products ${J}_{i,j}\left({x}_{i},{x}_{j}\right){S}_{i}\left({x}_{i}\right){S}_{j}\left({x}_{j}\right)$ are legitimate bivariate survival functions of the (marginal) random vectors $\left({X}_{i},{X}_{j}\right)$.

Given the conditions ${J}_{r,l}\left(0,{x}_{l}\right)={J}_{r,l}\left({x}_{r},0\right)=1$ for all $r,l$ ( $1\le r<l\le k$ ), observe that by substituting ${x}_{r}={x}_{l}=0$ for all the variables different from the members of a fixed pair $\left\{{x}_{i},{x}_{j}\right\}$, one obtains from (14), as the bivariate marginal, the (well-defined) survival function:

$S\left({x}_{i},{x}_{j}\right)={J}_{i,j}\left({x}_{i},{x}_{j}\right){S}_{i}\left({x}_{i}\right){S}_{j}\left({x}_{j}\right).$ (15)

In the continuous case, all k hazard rates ${\lambda }_{1}\left({x}_{1}\right),\cdots ,{\lambda }_{k}\left({x}_{k}\right)$ exist and are continuous, and every bivariate joiner ${J}_{i,j}\left({x}_{i},{x}_{j}\right)$ has a unique representation by a continuous function, say ${\Psi }_{i,j}\left({x}_{i},{x}_{j}\right)$, so that

${J}_{i,j}\left({x}_{i},{x}_{j}\right)=\mathrm{exp}\left[-\underset{0}{\overset{{x}_{i}}{\int }}\underset{0}{\overset{{x}_{j}}{\int }}{\Psi }_{i,j}\left({u}_{i},{u}_{j}\right)\text{d}{u}_{i}\text{d}{u}_{j}\right].$ (16)

Naturally, for all pairs of hazard rates ${\lambda }_{i}\left({x}_{i}\right),{\lambda }_{j}\left({x}_{j}\right)$ the corresponding functions ${\Psi }_{i,j}\left({x}_{i},{x}_{j}\right)$ are chosen in such a way that, for $1\le i<j\le k$, the following inequalities:

$\left[{\lambda }_{i}\left({x}_{i}\right)+\underset{0}{\overset{{x}_{j}}{\int }}{\Psi }_{i,j}\left({x}_{i},{u}_{j}\right)\text{d}{u}_{j}\right]\cdot \left[{\lambda }_{j}\left({x}_{j}\right)+\underset{0}{\overset{{x}_{i}}{\int }}{\Psi }_{i,j}\left({t}_{i},{x}_{j}\right)\text{d}{t}_{i}\right]\ge {\Psi }_{i,j}\left({x}_{i},{x}_{j}\right),$ (17)

hold for all ${x}_{i},{x}_{j}$, given any fixed pair of indexes i, j.

Inequalities (17) have the same structure as inequality (7).

With the above continuity assumptions, formula (14) can be rewritten in the following exponential form:

$\begin{array}{l}S\left({x}_{1},\cdots ,{x}_{k}\right)\\ =\mathrm{exp}\left[-\underset{0}{\overset{{x}_{1}}{\int }}{\lambda }_{1}\left({t}_{1}\right)\text{d}{t}_{1}-\cdots -\underset{0}{\overset{{x}_{k}}{\int }}{\lambda }_{k}\left({t}_{k}\right)\text{d}{t}_{k}-\underset{1\le i<j\le k}{\sum }\underset{0}{\overset{{x}_{i}}{\int }}\underset{0}{\overset{{x}_{j}}{\int }}{\Psi }_{i,j}\left({u}_{i},{u}_{j}\right)\text{d}{u}_{i}\text{d}{u}_{j}\right]\end{array}$ (18)

Notice that if we again substitute into (18) ${x}_{r}={x}_{l}=0$ for all ${x}_{r}$ and ${x}_{l}$ different from any member of the fixed pair $\left\{{x}_{i},{x}_{j}\right\}$, we obtain (15) in exponential form:

${S}^{\left(2\right)}\left({x}_{i},{x}_{j}\right)=\mathrm{exp}\left[-\underset{0}{\overset{{x}_{i}}{\int }}{\lambda }_{i}\left({t}_{i}\right)\text{d}{t}_{i}-\underset{0}{\overset{{x}_{i}}{\int }}\underset{0}{\overset{{x}_{j}}{\int }}{\Psi }_{i,j}\left({u}_{i},{u}_{j}\right)\text{d}{u}_{i}\text{d}{u}_{j}-\underset{0}{\overset{{x}_{j}}{\int }}{\lambda }_{j}\left({u}_{j}\right)\text{d}{u}_{j}\right]$ (19)

which upon condition (17) is a well-defined bivariate survival function.
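A minimal numerical sketch of the pairwise-dependence form (18): with constant hazard rates and constant Ψ_{i,j} = a_{ij}·λ_i·λ_j all the integrals are closed-form. Here k = 4, and all numbers are illustrative assumptions:

```python
import math

# Pairwise-dependence form (18) for k = 4 with constant hazards and
# constant Psi_{i,j} = a[i,j] * lam[i] * lam[j] (0-based indices here).
lam = [0.5, 1.0, 1.5, 2.0]
a = {(i, j): 0.2 for i in range(4) for j in range(i + 1, 4)}

def S(x):
    """k-variate survival function (18); x = [x1, x2, x3, x4]."""
    expo = sum(lam[i] * x[i] for i in range(4))
    expo += sum(a[i, j] * lam[i] * lam[j] * x[i] * x[j]
                for i in range(4) for j in range(i + 1, 4))
    return math.exp(-expo)

# Setting all but two coordinates to 0 recovers the bivariate marginal (15):
x = [0.0, 1.0, 0.0, 2.0]
print(S(x))
```

With the numbers above, S([0, 1, 0, 2]) involves only λ2, λ4 and the single pairwise term for that pair, illustrating the marginalization property discussed in the Remark below.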

Remark

It is easy to see that if we set any $k-r$ ( $1\le r\le k-1$ ) of the variables ${x}_{1},\cdots ,{x}_{k}$ in (18) to 0, we obtain the r-dimensional marginal survival function of the remaining random variables, say ${X}_{{i}_{1}},\cdots ,{X}_{{i}_{r}}$, which, syntactically, has a form identical to the k-dimensional survival function (18). This observation implies the following:

1) Since the pattern given by (18) is valid for every $k=2,3,\cdots$, we have, in fact, defined (at least theoretically) an infinite sequence of probability distributions.

2) The above observation on the r-dimensional marginals of every k-dimensional distribution (for each $r=1,2,\cdots ,k-1$ ) clearly indicates that the underlying sequence of random variables ${X}_{1},\cdots ,{X}_{k},\cdots$ ( $k=1,2,\cdots$ ) satisfies the Kolmogorov-Daniell consistency requirement. Therefore, a fairly wide class of stochastic processes ${\left\{{X}_{k}\right\}}_{k=1,2,\cdots }$ with discrete “time” k is defined.

3) These stochastic processes simplify significantly, to Markovian ones, if we set all the joiners ${J}_{i,j}\left({x}_{i},{x}_{j}\right)$ in (14) (not necessarily in the continuous case) to 1, or (in the continuous case) all the functions ${\Psi }_{i,j}\left({u}_{i},{u}_{j}\right)$ in (18) to 0, for all pairs $\left(i,j\right)$ such that $j-i>1$. Recall that we only consider pairs with $i<j$.

Thus, under the foregoing assumption, only the “adjacent” random variables, say ${X}_{i-1}$, ${X}_{i}$ are dependent, while, for example, variables ${X}_{i-2}$, ${X}_{i}$ for any $i$ such that $i-2\ge 1$ are independent.

If, however, we additionally let the joiners ${J}_{i-2,i}\left({x}_{i-2},{x}_{i}\right)$ differ from 1, but keep all the joiners ${J}_{i-q,i}\left({x}_{i-q},{x}_{i}\right)=1$ for $q\ge 3$, then we obtain a “bi-Markovian” stochastic process in the sense that the probability distribution of any ${X}_{i}$ depends only on the realizations of the random variables ${X}_{i-1}$ and ${X}_{i-2}$, while it is independent of the realizations of random variables “earlier” than ${X}_{i-2}$.

Quite similarly to the bi-Markovian case, one can define m-Markovian stochastic processes for $m=1,2,\cdots$, so that the probability distribution of any ${X}_{i}$ ( $i>m$ ) depends only on realizations of the random variables ${X}_{i-m},{X}_{i-m+1},\cdots ,{X}_{i-1}$, but does not depend on realizations of any of the random variables ${X}_{1},{X}_{2},\cdots ,{X}_{i-m-1}$.

In the latter case it is enough to set all the joiners ${J}_{s,i}\left({x}_{s},{x}_{i}\right)$ to 1, for $s=1,2,\cdots ,i-m-1$.

Of course, in applications, the choice of joiners (i.e., which ones are to be set to 1) depends on the character of the modeled reality. In any case, there is no need to always rely on Markovian (m = 1) processes.
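The joiner-selection rule described above can be sketched as a small helper (the function name and representation are our own illustration):

```python
# Sketch of the joiner-selection rule for an m-Markovian process: keep the
# bivariate joiner J_{s,i} only when i - s <= m, and set it to 1 otherwise.
def active_pairs(k, m):
    """Index pairs (s, i), 1 <= s < i <= k, whose joiners are kept (!= 1)."""
    return [(s, i) for i in range(2, k + 1)
                   for s in range(max(1, i - m), i)]

# m = 1 keeps only adjacent pairs (the Markovian case):
print(active_pairs(5, 1))   # [(1, 2), (2, 3), (3, 4), (4, 5)]
```

For m = 2 the list additionally contains pairs such as (1, 3) and (2, 4), reproducing the bi-Markovian dependence structure.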

4) Suppose, for the Markovian (m = 1) stochastic processes defined above, that all the underlying hazard rates have the same functional form, i.e.,

$\lambda \left(u\right)={\lambda }_{1}\left(u\right)={\lambda }_{2}\left(u\right)=\cdots ={\lambda }_{k}\left(u\right)=\cdots$.

Also assume one functional form ${J}_{i,j}\left(u,t\right)=J\left(u,t\right)$ for all the bivariate joiners present in defining formula (14).

Then the so-defined Markovian (as well as any m-Markovian) process will be stationary, and for the analysis of, and any further computations regarding, the whole process it is enough to know one bivariate survival function, say

$S\left({x}_{i-1},{x}_{i}\right)=\mathrm{exp}\left[-\underset{0}{\overset{{x}_{i-1}}{\int }}\lambda \left(t\right)\text{d}t-\underset{0}{\overset{{x}_{i-1}}{\int }}\underset{0}{\overset{{x}_{i}}{\int }}\Psi \left(t,u\right)\text{d}t\text{d}u-\underset{0}{\overset{{x}_{i}}{\int }}\lambda \left(u\right)\text{d}u\right],$ (20)

for all $i=2,3,\cdots$.

In more general cases (not necessarily “continuous”) one may consider, instead, the survival function in the form:

${S}^{\left(2\right)}\left({x}_{i-1},{x}_{i}\right)=S\left({x}_{i-1}\right)S\left({x}_{i}\right)J\left({x}_{i-1},{x}_{i}\right)$ for all $i=2,3,\cdots$.

5) As one can see, an interesting new theory of both random vectors and stochastic processes emerges. This is, however, beyond the scope of this work, and we postpone that subject to future research. □

As for (19), we already know that if, given the marginals represented by ${\lambda }_{i}\left({x}_{i}\right)$, ${\lambda }_{j}\left({x}_{j}\right)$, the corresponding joiners ${J}_{i,j}\left({x}_{i},{x}_{j}\right)$ satisfy the conditions ${J}_{i,j}\left(0,{x}_{j}\right)={J}_{i,j}\left({x}_{i},0\right)=1$ for all nonnegative ${x}_{i},{x}_{j}$, and if their representations ${\Psi }_{i,j}\left({x}_{i},{x}_{j}\right)$ also satisfy inequality (17), then (19) represents valid marginal bivariate survival functions for all pairs of indices $\left(i,j\right)$.

Unfortunately, in the general k-variate case (k ≥ 4) we do not yet know sufficient conditions for all the functions ${\Psi }_{i,j}\left({u}_{i},{u}_{j}\right)$ present in (18) such that, given the marginals represented by ${\lambda }_{1}\left({x}_{1}\right),\cdots ,{\lambda }_{k}\left({x}_{k}\right)$, (18) represents a valid k-variate survival function for $k=4,5,\cdots$.

Nevertheless, we have reasons to propose the following hypothesis which, unfortunately, at the moment, we are not able to prove rigorously.

In the general k-variate case, an eventual rigorous proof would possibly require establishing some common pattern comprising the kth mixed derivative ${\partial }^{k}/\partial {x}_{1}\cdots \partial {x}_{k}$ of the right-hand side of (18).

Such a pattern should be valid for any $k=3,4,\cdots$, and finding it is more demanding than computing the kth derivative for any particular, relatively small k.

The hypothesis we presume reads as follows:

Hypothesis: For any given univariate marginals of a random vector $\left({X}_{1},\cdots ,{X}_{k}\right)$ represented by the hazard rates ${\lambda }_{1}\left({x}_{1}\right),\cdots ,{\lambda }_{k}\left({x}_{k}\right)$ consider the following expression:

$\mathrm{exp}\left[-\underset{0}{\overset{{x}_{1}}{\int }}{\lambda }_{1}\left({t}_{1}\right)\text{d}{t}_{1}-\cdots -\underset{0}{\overset{{x}_{k}}{\int }}{\lambda }_{k}\left({t}_{k}\right)\text{d}{t}_{k}-\underset{1\le i<j\le k}{\sum }{c}_{ij}\underset{0}{\overset{{x}_{i}}{\int }}\underset{0}{\overset{{x}_{j}}{\int }}{\Psi }_{i,j}\left({u}_{i},{u}_{j}\right)\text{d}{u}_{i}\text{d}{u}_{j}\right]$ (21)

where for any pair of indexes i, j such that $1\le i<j\le k$ the continuous functions ${\Psi }_{i,j}\left({u}_{i},{u}_{j}\right)$ satisfy

$0\le {\Psi }_{i,j}\left({u}_{i},{u}_{j}\right)\le {\lambda }_{i}\left({u}_{i}\right){\lambda }_{j}\left({u}_{j}\right)$. (22)

Moreover, let the nonnegative real constants ${c}_{ij}$ satisfy:

$\underset{1\le i<j\le k}{\sum }{c}_{ij}\le 1$. (23)

Then expression (21) represents a valid k-dimensional survival function of $\left({X}_{1},\cdots ,{X}_{k}\right)$. □

If the above hypothesis holds, then from (22) one obtains the following specific k-variate model:

$\begin{array}{l}S\left({x}_{1},\cdots ,{x}_{k}\right)\\ =\mathrm{exp}\left[-\underset{0}{\overset{{x}_{1}}{\int }}{\lambda }_{1}\left({t}_{1}\right)\text{d}{t}_{1}-\cdots -\underset{0}{\overset{{x}_{k}}{\int }}{\lambda }_{k}\left({t}_{k}\right)\text{d}{t}_{k}-\underset{1\le i<j\le k}{\sum }{c}_{ij}\underset{0}{\overset{{x}_{i}}{\int }}\underset{0}{\overset{{x}_{j}}{\int }}{\lambda }_{i}\left({u}_{i}\right){\lambda }_{j}\left({u}_{j}\right)\text{d}{u}_{i}\text{d}{u}_{j}\right]\end{array}$ (24)

where the nonnegative constants ${c}_{ij}$ satisfy (23).
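Since the double integral of ${\lambda }_{i}\left({u}_{i}\right){\lambda }_{j}\left({u}_{j}\right)$ separates, model (24) reduces to $S\left({x}_{1},\cdots ,{x}_{k}\right)=\mathrm{exp}\left[-{\sum }_{i}{\Lambda }_{i}\left({x}_{i}\right)-{\sum }_{i<j}{c}_{ij}{\Lambda }_{i}\left({x}_{i}\right){\Lambda }_{j}\left({x}_{j}\right)\right]$, where ${\Lambda }_{i}$ is the ith cumulative hazard. The following sketch evaluates this for k = 3; the exponential marginals (constant rates) and the particular ${c}_{ij}$ values are our own illustrative assumptions:

```python
import math
from itertools import combinations

def survival_k(cum_hazards, c, xs):
    # Model (24) via the separated form: exp[-sum L_i - sum c_ij * L_i * L_j]
    L = [H(x) for H, x in zip(cum_hazards, xs)]
    pairs = combinations(range(len(L)), 2)
    return math.exp(-sum(L) - sum(c[(i, j)] * L[i] * L[j] for i, j in pairs))

rates = [0.5, 1.0, 1.5]                       # assumed exponential-marginal rates
H = [lambda x, r=r: r * x for r in rates]     # cumulative hazards L_i(x) = r_i * x
c = {(0, 1): 0.2, (0, 2): 0.3, (1, 2): 0.4}   # c_12 + c_13 + c_23 = 0.9 <= 1, per (23)
```

Setting all but one coordinate to 0 recovers the corresponding univariate marginal, e.g. $S\left(x,0,0\right)=\mathrm{exp}\left(-0.5x\right)$ here.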

Model (24) is an extension of the bivariate model (9), as well as of the tri-variate model (26) given in the next section.

If the Hypothesis holds, then we can also generalize (24) to the following:

$\begin{array}{c}S\left({x}_{1},\cdots ,{x}_{k}\right)=\mathrm{exp}\left[-\underset{0}{\overset{{x}_{1}}{\int }}{\lambda }_{1}\left({t}_{1}\right)\text{d}{t}_{1}-\cdots -\underset{0}{\overset{{x}_{k}}{\int }}{\lambda }_{k}\left({t}_{k}\right)\text{d}{t}_{k}\\ -\underset{1\le i<j\le k}{\sum }{c}_{ij}\underset{0}{\overset{{x}_{i}}{\int }}\underset{0}{\overset{{x}_{j}}{\int }}{a}_{ij}\left({u}_{i},{u}_{j}\right){\lambda }_{i}\left({u}_{i}\right){\lambda }_{j}\left({u}_{j}\right)\text{d}{u}_{i}\text{d}{u}_{j}\right]\end{array}$ (25)

for any set of continuous functions ${a}_{ij}\left({x}_{i},{x}_{j}\right)$ all satisfying $0\le {a}_{ij}\left({x}_{i},{x}_{j}\right)\le 1$.

For example, one may choose ${a}_{ij}\left({x}_{i},{x}_{j}\right)=\mathrm{exp}\left[-{b}_{ij}{x}_{i}{x}_{j}\right]$ with a given set of real nonnegative constants $\left[{b}_{ij}\right]$.

However, if the hypothesis holds, model (24) seems to be the most natural in the class of k-dimensional survival functions in the continuous case. We expect many applications of (24) in multivariate survival analysis.

Although we do not have at our disposal a formal proof of the considered Hypothesis, an argument that supports it is that it holds, in particular, for k = 2 (Section 2) and for k = 3 (next section). So the truthfulness of the hypothesis (for all k) is, unfortunately, based solely on that analogy. Nonetheless, we are strongly convinced it holds in all cases k ≥ 2. Should an essential difficulty arise in finding a formal proof for dimensions higher than 3, then, in applications only, some statistical arguments for the cases $k=4,5,\cdots$ may possibly be applied. Another way out might be the use of computer algebra systems (CAS) such as MAPLE or MATHEMATICA for the underlying computations.

When possessing a k-variate “model” (k ≥ 4) such as (21) is especially important for a given practical purpose, one might take a (slight) risk and adopt some model (21) together with condition (23), and then try to verify it statistically. This approach may turn out to be useful, at least from a practical point of view. However, such “purely statistical” arguments would not be equivalent to a proper analytical (mathematical) proof.

4. 3-Variate Survival Functions. A Closer Look

4.1. The Continuous “Canonical” Case

Consider the following 3-dimensional version of (24):

$\begin{array}{l}S\left({x}_{1},{x}_{2},{x}_{3}\right)\\ =\mathrm{exp}\left[-\underset{0}{\overset{{x}_{1}}{\int }}{\lambda }_{1}\left({t}_{1}\right)\text{d}{t}_{1}-\underset{0}{\overset{{x}_{2}}{\int }}{\lambda }_{2}\left({t}_{2}\right)\text{d}{t}_{2}-\underset{0}{\overset{{x}_{3}}{\int }}{\lambda }_{3}\left({t}_{3}\right)\text{d}{t}_{3}-{c}_{12}\underset{0}{\overset{{x}_{1}}{\int }}\underset{0}{\overset{{x}_{2}}{\int }}{\lambda }_{1}\left({u}_{1}\right){\lambda }_{2}\left({u}_{2}\right)\text{d}{u}_{1}\text{d}{u}_{2}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{ }\text{ }-{c}_{13}\underset{0}{\overset{{x}_{1}}{\int }}\underset{0}{\overset{{x}_{3}}{\int }}{\lambda }_{1}\left({u}_{1}\right){\lambda }_{3}\left({u}_{3}\right)\text{d}{u}_{1}\text{d}{u}_{3}-{c}_{23}\underset{0}{\overset{{x}_{2}}{\int }}\underset{0}{\overset{{x}_{3}}{\int }}{\lambda }_{2}\left({u}_{2}\right){\lambda }_{3}\left({u}_{3}\right)\text{d}{u}_{2}\text{d}{u}_{3}\right]\end{array}$ (26)

where we assume that all three continuous hazard rates ${\lambda }_{1}\left({x}_{1}\right),{\lambda }_{2}\left({x}_{2}\right),{\lambda }_{3}\left({x}_{3}\right)$ are never zero. This assumption may, possibly, be weakened. We adopt it only for simplification of our calculations.

The question now to be answered is: for which hazard rates, and for which values of the constant parameters ${c}_{12},{c}_{13},{c}_{23}$, does expression (26) represent a valid 3-variate survival function?

The answer to this question, together with the proper restrictions, is formulated as the following:

Theorem 1. Let $\left({X}_{1},{X}_{2},{X}_{3}\right)$ be a random vector whose univariate marginal survival functions are represented by given continuous hazard rates ${\lambda }_{1}\left({x}_{1}\right),{\lambda }_{2}\left({x}_{2}\right),{\lambda }_{3}\left({x}_{3}\right)$, none of which is ever zero.

The function $S\left({x}_{1},{x}_{2},{x}_{3}\right)$ defined by formula (26) is a valid joint survival function of the random vector $\left({X}_{1},{X}_{2},{X}_{3}\right)$ if the nonnegative coefficients ${c}_{ij}$ ( $1\le i<j\le 3$ ) in (26) satisfy: ${c}_{12}+{c}_{13}+{c}_{23}\le 1$.

Proof

First realize that all three candidates ${G}_{ij}\left({x}_{i},{x}_{j}\right)$ for the joiners implicitly present in (26) have the form

${G}_{ij}\left({x}_{i},{x}_{j}\right)=\mathrm{exp}\left[-{c}_{ij}\underset{0}{\overset{{x}_{i}}{\int }}\underset{0}{\overset{{x}_{j}}{\int }}{\lambda }_{i}\left({u}_{i}\right){\lambda }_{j}\left({u}_{j}\right)\text{d}{u}_{i}\text{d}{u}_{j}\right]$ (27)

for all $1\le i<j\le 3$.

This form of the functions ${G}_{ij}\left(\cdot ,\cdot \right)$, whose arguments appear only as the upper limits of the involved integrals, indicates that, for all pairs (i, j), the following necessary conditions for the functions ${G}_{ij}\left({x}_{i},{x}_{j}\right)$ to be legitimate joiners

${G}_{ij}\left(0,{x}_{j}\right)={G}_{ij}\left({x}_{i},0\right)=1$, for all ${x}_{i},{x}_{j}$ (28)

are satisfied by elementary properties of Riemann integrals.

Therefore, we may return to our original notation for the joiners, i.e., ${G}_{ij}\left({x}_{i},{x}_{j}\right)={J}_{ij}\left({x}_{i},{x}_{j}\right)$.

Now consider the sufficient condition for (26) to be a survival function.

Since, in the “continuous case” considered here, the derivative ${\partial }^{3}S\left({x}_{1},{x}_{2},{x}_{3}\right)/\partial {x}_{1}\partial {x}_{2}\partial {x}_{3}$ exists and is continuous, the sufficient condition reduces to the following inequality

$\left(-1\right){\partial }^{3}/\partial {x}_{1}\partial {x}_{2}\partial {x}_{3}S\left({x}_{1},{x}_{2},{x}_{3}\right)\ge 0$, (29)

which must hold for all the triples $\left({x}_{1},{x}_{2},{x}_{3}\right)\in {R}_{+}^{3}$ .

Recall that $S\left({x}_{1},{x}_{2},{x}_{3}\right)$ is determined by the right-hand side of (26).

Now, we need to find proper additional conditions for the hazard rates ${\lambda }_{1}\left({x}_{1}\right),{\lambda }_{2}\left({x}_{2}\right),{\lambda }_{3}\left({x}_{3}\right)$ and for the coefficients ${c}_{\text{12}},{c}_{\text{13}},{c}_{\text{23}}$ in order for inequality (29) to hold.

After calculating the derivative ${\partial }^{3}/\partial {x}_{1}\partial {x}_{2}\partial {x}_{3}\left(\text{ }\right)$ of the right-hand side of (26), we set up an inequality equivalent to (29). Then we simplify it by dividing both sides of the so-obtained inequality by the common (always positive) factor

$\begin{array}{l}{\lambda }_{1}\left({x}_{1}\right){\lambda }_{2}\left({x}_{2}\right){\lambda }_{3}\left({x}_{3}\right)\cdot \mathrm{exp}\left[-\underset{0}{\overset{{x}_{1}}{\int }}{\lambda }_{1}\left({t}_{1}\right)\text{d}{t}_{1}-\underset{0}{\overset{{x}_{2}}{\int }}{\lambda }_{2}\left({t}_{2}\right)\text{d}{t}_{2}-\underset{0}{\overset{{x}_{3}}{\int }}{\lambda }_{3}\left({t}_{3}\right)\text{d}{t}_{3}\\ \text{ }-{c}_{12}\underset{0}{\overset{{x}_{1}}{\int }}\underset{0}{\overset{{x}_{2}}{\int }}{\lambda }_{1}\left({u}_{1}\right){\lambda }_{2}\left({u}_{2}\right)\text{d}{u}_{1}\text{d}{u}_{2}-{c}_{13}\underset{0}{\overset{{x}_{1}}{\int }}\underset{0}{\overset{{x}_{3}}{\int }}{\lambda }_{1}\left({u}_{1}\right){\lambda }_{3}\left({u}_{3}\right)\text{d}{u}_{1}\text{d}{u}_{3}\\ \text{ }-{c}_{23}\underset{0}{\overset{{x}_{2}}{\int }}\underset{0}{\overset{{x}_{3}}{\int }}{\lambda }_{2}\left({u}_{2}\right){\lambda }_{3}\left({u}_{3}\right)\text{d}{u}_{2}\text{d}{u}_{3}\right]\end{array}$

and move all the negative terms to the right-hand side of the inequality.

As a result we obtain an inequality equivalent to (29) as follows:

$\begin{array}{l}\left[1+{c}_{13}\underset{0}{\overset{{x}_{1}}{\int }}{\lambda }_{1}\left({u}_{1}\right)\text{d}{u}_{1}+{c}_{23}\underset{0}{\overset{{x}_{2}}{\int }}{\lambda }_{2}\left({u}_{2}\right)\text{d}{u}_{2}\right]\left[1+{c}_{12}\underset{0}{\overset{{x}_{1}}{\int }}{\lambda }_{1}\left({u}_{1}\right)\text{d}{u}_{1}+{c}_{23}\underset{0}{\overset{{x}_{3}}{\int }}{\lambda }_{3}\left({u}_{3}\right)\text{d}{u}_{3}\right]\\ \left[1+{c}_{12}\underset{0}{\overset{{x}_{2}}{\int }}{\lambda }_{2}\left({u}_{2}\right)\text{d}{u}_{2}+{c}_{13}\underset{0}{\overset{{x}_{3}}{\int }}{\lambda }_{3}\left({u}_{3}\right)\text{d}{u}_{3}\right]\\ \ge {c}_{12}\left[1+{c}_{13}\underset{0}{\overset{{x}_{1}}{\int }}{\lambda }_{1}\left({u}_{1}\right)\text{d}{u}_{1}+{c}_{23}\underset{0}{\overset{{x}_{2}}{\int }}{\lambda }_{2}\left({u}_{2}\right)\text{d}{u}_{2}\right]\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+{c}_{13}\left[1+{c}_{12}\underset{0}{\overset{{x}_{1}}{\int }}{\lambda }_{1}\left({u}_{1}\right)\text{d}{u}_{1}+{c}_{23}\underset{0}{\overset{{x}_{3}}{\int }}{\lambda }_{3}\left({u}_{3}\right)\text{d}{u}_{3}\right]\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+{c}_{23}\left[1+{c}_{12}\underset{0}{\overset{{x}_{2}}{\int }}{\lambda }_{2}\left({u}_{2}\right)\text{d}{u}_{2}+{c}_{13}\underset{0}{\overset{{x}_{3}}{\int }}{\lambda }_{3}\left({u}_{3}\right)\text{d}{u}_{3}\right]\end{array}$ (30)

(At this point, realize that if for some ${x}_{i}\ge 0$ ( $i=1,2,3$ ) one admits ${\lambda }_{i}\left({x}_{i}\right)=0$, then at those ${x}_{i}$ the inequality equivalent to (29), i.e., (30), holds trivially as 0 ≥ 0.)

Inequality (30) reduces to a simple relation upon the following substitutions:

$A={c}_{13}\underset{0}{\overset{{x}_{1}}{\int }}{\lambda }_{1}\left({u}_{1}\right)\text{d}{u}_{1}+{c}_{23}\underset{0}{\overset{{x}_{2}}{\int }}{\lambda }_{2}\left({u}_{2}\right)\text{d}{u}_{2}$

$B={c}_{12}\underset{0}{\overset{{x}_{1}}{\int }}{\lambda }_{1}\left({u}_{1}\right)\text{d}{u}_{1}+{c}_{23}\underset{0}{\overset{{x}_{3}}{\int }}{\lambda }_{3}\left({u}_{3}\right)\text{d}{u}_{3}$

$C={c}_{12}\underset{0}{\overset{{x}_{2}}{\int }}{\lambda }_{2}\left({u}_{2}\right)\text{d}{u}_{2}+{c}_{13}\underset{0}{\overset{{x}_{3}}{\int }}{\lambda }_{3}\left({u}_{3}\right)\text{d}{u}_{3}$

Now (30) can be expressed as:

$\left(1+A\right)\left(1+B\right)\left(1+C\right)\ge {c}_{12}\left(1+A\right)+{c}_{13}\left(1+B\right)+{c}_{23}\left(1+C\right)$ (31)

where all the expressions A, B, C and all the constants ${c}_{12},{c}_{13},{c}_{23}$ are nonnegative. Moreover, A, B, C are increasing functions of the variables ${x}_{1},{x}_{2},{x}_{3}$, and all three equal 0 at $\left({x}_{1},{x}_{2},{x}_{3}\right)=\left(0,0,0\right)$.

Inequality (30) is now easy to handle since it is equivalent to inequality (31). Indeed, since each of the factors $1+A$, $1+B$, $1+C$ is at least 1, the right-hand side of (31) does not exceed $\left({c}_{12}+{c}_{13}+{c}_{23}\right)\mathrm{max}\left\{1+A,1+B,1+C\right\}$, while the left-hand side is at least that maximum. Hence (31), and with it (30), holds whenever the nonnegative constants ${c}_{ij}$ satisfy:

${c}_{12}+{c}_{13}+{c}_{23}\le 1$. (32)
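The sufficiency of (32) for inequality (31) can also be illustrated numerically. The following Monte-Carlo check (ours, an illustration rather than a proof) samples arbitrary nonnegative A, B, C and random nonnegative constants with ${c}_{12}+{c}_{13}+{c}_{23}\le 1$:

```python
import random

def gap(A, B, C, c12, c13, c23):
    # left-hand side minus right-hand side of inequality (31)
    return ((1 + A) * (1 + B) * (1 + C)
            - c12 * (1 + A) - c13 * (1 + B) - c23 * (1 + C))

random.seed(0)
violations = 0
for _ in range(100_000):
    A, B, C = (random.expovariate(1.0) for _ in range(3))
    w = [random.random() for _ in range(3)]
    s = random.random() / sum(w)          # scale so that c12 + c13 + c23 <= 1
    c12, c13, c23 = (s * wi for wi in w)
    if gap(A, B, C, c12, c13, c23) < 0:
        violations += 1
```

No violations occur, in agreement with the argument above.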

A very important conclusion from the above form (31) of (30), and from its set of solutions (given by (32)), is that the validity of both inequalities depends on neither the values nor the functional forms of the three involved continuous hazard rates ${\lambda }_{1}\left({x}_{1}\right),{\lambda }_{2}\left({x}_{2}\right),{\lambda }_{3}\left({x}_{3}\right)$. Unless we admit the possibility that some of the random events ( ${X}_{i}=+\infty$ ) ( $i=1,2,3$ ) occur with positive probability, the only additional assumption on the hazard rates we need to adopt is that, for each $i=1,2,3$,

$\underset{0}{\overset{\infty }{\int }}{\lambda }_{i}\left({u}_{i}\right)\text{d}{u}_{i}=+\infty$.

This ends the proof. □

Thus, in the continuous case, all triples of marginal distributions $\left({S}_{1}\left({x}_{1}\right),{S}_{2}\left({x}_{2}\right),{S}_{3}\left({x}_{3}\right)\right)$, represented by the corresponding hazard rates, “form” their (natural) joint survival function (26) whenever (32) holds.

Recall, the necessary conditions (28) are a direct consequence of joiners’ definition (27) for the continuous case.

Condition (32) is also necessary for (29) [or (30)] to hold. Otherwise, at the point $\left({x}_{1},{x}_{2},{x}_{3}\right)=\left(0,0,0\right)$, or at points such as, for example, $\left(0,{x}_{2},{x}_{3}\right)$, (31) would be violated, and, by a continuity argument, this situation would persist in an open neighborhood of any such point. That would imply (31) fails on a set of positive Lebesgue measure in ${R}_{+}^{3}$.

To close this subsection, notice that by setting any one of the variables ${x}_{1}$, ${x}_{2}$, ${x}_{3}$ to 0, one directly obtains the bivariate survival function of the remaining two.

For example setting ${x}_{\text{3}}=0$ in (26) one obtains (9), with ${c}_{\text{12}}=a$.
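This reduction is easy to verify numerically. In the sketch below, the exponential marginals (constant rates ${r}_{1},{r}_{2},{r}_{3}$) and the particular ${c}_{ij}$ values are our own illustrative assumptions; with constant hazards every integral in (26) has a closed form, and at ${x}_{3}=0$ the model collapses to $\mathrm{exp}\left[-{\Lambda }_{1}-{\Lambda }_{2}-{c}_{12}{\Lambda }_{1}{\Lambda }_{2}\right]$, i.e., to (9) with $a={c}_{12}$:

```python
import math

r1, r2, r3 = 0.5, 1.0, 1.5             # assumed exponential-marginal rates
c12, c13, c23 = 0.2, 0.3, 0.4          # c12 + c13 + c23 = 0.9 <= 1, per (32)

def S3(x1, x2, x3):
    # model (26) with constant hazards; cumulative hazards are r_i * x_i
    return math.exp(-(r1*x1 + r2*x2 + r3*x3)
                    - c12*(r1*x1)*(r2*x2)
                    - c13*(r1*x1)*(r3*x3)
                    - c23*(r2*x2)*(r3*x3))

def S2(x1, x2, a=c12):
    # the bivariate reduction obtained from (26) at x3 = 0
    return math.exp(-(r1*x1 + r2*x2) - a*(r1*x1)*(r2*x2))
```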

4.2. A More General Continuous Case

One of the most general forms of “continuous” 3-dimensional survival functions, which also comprises the case considered in Section 4.1, can be defined as follows:

$\begin{array}{l}{S}^{g}\left({x}_{1},{x}_{2},{x}_{3}\right)\\ =\mathrm{exp}\left[-\underset{0}{\overset{{x}_{1}}{\int }}{\lambda }_{1}\left({t}_{1}\right)\text{d}{t}_{1}-\underset{0}{\overset{{x}_{2}}{\int }}{\lambda }_{2}\left({t}_{2}\right)\text{d}{t}_{2}-\underset{0}{\overset{{x}_{3}}{\int }}{\lambda }_{3}\left({t}_{3}\right)\text{d}{t}_{3}-{c}_{12}\underset{0}{\overset{{x}_{1}}{\int }}\underset{0}{\overset{{x}_{2}}{\int }}{\Psi }_{1,2}\left({u}_{1},{u}_{2}\right)\text{d}{u}_{1}\text{d}{u}_{2}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{ }-{c}_{13}\underset{0}{\overset{{x}_{1}}{\int }}\underset{0}{\overset{{x}_{3}}{\int }}{\Psi }_{1,3}\left({u}_{1},{u}_{3}\right)\text{d}{u}_{1}\text{d}{u}_{3}-{c}_{23}\underset{0}{\overset{{x}_{2}}{\int }}\underset{0}{\overset{{x}_{3}}{\int }}{\Psi }_{2,3}\left({u}_{2},{u}_{3}\right)\text{d}{u}_{2}\text{d}{u}_{3}\right]\end{array}$ (33)

where, for all $1\le i<j\le 3$, ${\Psi }_{ij}\left({u}_{i},{u}_{j}\right)$ are arbitrary continuous functions, and

$0\le {\Psi }_{ij}\left({u}_{i},{u}_{j}\right)\le {\lambda }_{i}\left({u}_{i}\right){\lambda }_{j}\left({u}_{j}\right)$. (34)

Moreover, (32) holds.

We assumed nonnegativity of all the ${\Psi }_{ij}\left({u}_{i},{u}_{j}\right)$, which, as with model (26), restricts the considerations to positive stochastic dependencies only [though, possibly, to all positive ones]. As a matter of fact, that nonnegativity assumption is not necessary; it is adopted only for reasons of simplicity.

We will prove the following theorem:

Theorem 2

Given that (32) and (34) hold, formula (33) defines a class of valid 3-variate survival functions.

Proof. The argumentation is mainly based on the already proven validity of (26) as a survival function. As in the case of (26), we need to check whether the following inequality (similar to (29)) holds:

$\left(-1\right){\partial }^{3}/\partial {x}_{1}\partial {x}_{2}\partial {x}_{3}{S}^{g}\left({x}_{1},{x}_{2},{x}_{3}\right)\ge 0$, (35)

where ${S}^{g}\left({x}_{1},{x}_{2},{x}_{3}\right)$ is given by (33).

After computations similar to those in the previous case, we arrive at the following inequality, equivalent to (35):

$\begin{array}{l}\left[{\lambda }_{1}\left({x}_{1}\right)+{c}_{12}\underset{0}{\overset{{x}_{2}}{\int }}{\Psi }_{1,2}\left({x}_{1},{u}_{2}\right)\text{d}{u}_{2}+{c}_{13}\underset{0}{\overset{{x}_{3}}{\int }}{\Psi }_{1,3}\left({x}_{1},{u}_{3}\right)\text{d}{u}_{3}\right]\\ \left[{\lambda }_{2}\left({x}_{2}\right)+{c}_{12}\underset{0}{\overset{{x}_{1}}{\int }}{\Psi }_{1,2}\left({u}_{1},{x}_{2}\right)\text{d}{u}_{1}+{c}_{23}\underset{0}{\overset{{x}_{3}}{\int }}{\Psi }_{2,3}\left({x}_{2},{u}_{3}\right)\text{d}{u}_{3}\right]\\ \left[{\lambda }_{3}\left({x}_{3}\right)+{c}_{13}\underset{0}{\overset{{x}_{1}}{\int }}{\Psi }_{1,3}\left({u}_{1},{x}_{3}\right)\text{d}{u}_{1}+{c}_{23}\underset{0}{\overset{{x}_{2}}{\int }}{\Psi }_{2,3}\left({u}_{2},{x}_{3}\right)\text{d}{u}_{2}\right]\\ \ge {c}_{23}{\Psi }_{2,3}\left({x}_{2},{x}_{3}\right)\left[{\lambda }_{1}\left({x}_{1}\right)+{c}_{12}\underset{0}{\overset{{x}_{2}}{\int }}{\Psi }_{1,2}\left({x}_{1},{u}_{2}\right)\text{d}{u}_{2}+{c}_{13}\underset{0}{\overset{{x}_{3}}{\int }}{\Psi }_{1,3}\left({x}_{1},{u}_{3}\right)\text{d}{u}_{3}\right]\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+{c}_{13}{\Psi }_{1,3}\left({x}_{1},{x}_{3}\right)\left[{\lambda }_{2}\left({x}_{2}\right)+{c}_{12}\underset{0}{\overset{{x}_{1}}{\int }}{\Psi }_{1,2}\left({u}_{1},{x}_{2}\right)\text{d}{u}_{1}+{c}_{23}\underset{0}{\overset{{x}_{3}}{\int }}{\Psi }_{2,3}\left({x}_{2},{u}_{3}\right)\text{d}{u}_{3}\right]\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+{c}_{12}{\Psi }_{1,2}\left({x}_{1},{x}_{2}\right)\left[{\lambda }_{3}\left({x}_{3}\right)+{c}_{13}\underset{0}{\overset{{x}_{1}}{\int }}{\Psi }_{1,3}\left({u}_{1},{x}_{3}\right)\text{d}{u}_{1}+{c}_{23}\underset{0}{\overset{{x}_{2}}{\int }}{\Psi }_{2,3}\left({u}_{2},{x}_{3}\right)\text{d}{u}_{2}\right]\end{array}$ (36)

Now, all that remains is to prove inequality (36).

Using the assumption of positivity of the functions ${\lambda }_{1}\left({x}_{1}\right),{\lambda }_{2}\left({x}_{2}\right),{\lambda }_{3}\left({x}_{3}\right)$, inequality (36) can be transformed into the following equivalent form:

$\begin{array}{l}{\lambda }_{1}\left({x}_{1}\right)\left[1+\left({c}_{12}\underset{0}{\overset{{x}_{2}}{\int }}{\Psi }_{1,2}\left({x}_{1},{u}_{2}\right)\text{d}{u}_{2}+{c}_{13}\underset{0}{\overset{{x}_{3}}{\int }}{\Psi }_{1,3}\left({x}_{1},{u}_{3}\right)\text{d}{u}_{3}\right)/{\lambda }_{1}\left({x}_{1}\right)\right]\\ {\lambda }_{2}\left({x}_{2}\right)\left[1+\left({c}_{12}\underset{0}{\overset{{x}_{1}}{\int }}{\Psi }_{1,2}\left({u}_{1},{x}_{2}\right)\text{d}{u}_{1}+{c}_{23}\underset{0}{\overset{{x}_{3}}{\int }}{\Psi }_{2,3}\left({x}_{2},{u}_{3}\right)\text{d}{u}_{3}\right)/{\lambda }_{2}\left({x}_{2}\right)\right]\\ {\lambda }_{3}\left({x}_{3}\right)\left[1+\left({c}_{13}\underset{0}{\overset{{x}_{1}}{\int }}{\Psi }_{1,3}\left({u}_{1},{x}_{3}\right)\text{d}{u}_{1}+{c}_{23}\underset{0}{\overset{{x}_{2}}{\int }}{\Psi }_{2,3}\left({u}_{2},{x}_{3}\right)\text{d}{u}_{2}\right)/{\lambda }_{3}\left({x}_{3}\right)\right]\\ \ge {c}_{23}{\Psi }_{2,3}\left({x}_{2},{x}_{3}\right){\lambda }_{1}\left({x}_{1}\right)\left[1+\left({c}_{12}\underset{0}{\overset{{x}_{2}}{\int }}{\Psi }_{1,2}\left({x}_{1},{u}_{2}\right)\text{d}{u}_{2}+{c}_{13}\underset{0}{\overset{{x}_{3}}{\int }}{\Psi }_{1,3}\left({x}_{1},{u}_{3}\right)\text{d}{u}_{3}\right)/{\lambda }_{1}\left({x}_{1}\right)\right]\\ +{c}_{13}{\Psi }_{1,3}\left({x}_{1},{x}_{3}\right){\lambda }_{2}\left({x}_{2}\right)\left[1+\left({c}_{12}\underset{0}{\overset{{x}_{1}}{\int }}{\Psi }_{1,2}\left({u}_{1},{x}_{2}\right)\text{d}{u}_{1}+{c}_{23}\underset{0}{\overset{{x}_{3}}{\int }}{\Psi }_{2,3}\left({x}_{2},{u}_{3}\right)\text{d}{u}_{3}\right)/{\lambda }_{2}\left({x}_{2}\right)\right]\\ +{c}_{12}{\Psi }_{1,2}\left({x}_{1},{x}_{2}\right){\lambda }_{3}\left({x}_{3}\right)\left[1+\left({c}_{13}\underset{0}{\overset{{x}_{1}}{\int }}{\Psi }_{1,3}\left({u}_{1},{x}_{3}\right)\text{d}{u}_{1}+{c}_{23}\underset{0}{\overset{{x}_{2}}{\int }}{\Psi }_{2,3}\left({u}_{2},{x}_{3}\right)\text{d}{u}_{2}\right)/{\lambda }_{3}\left({x}_{3}\right)\right]\end{array}$ (37)

From (34) we have that:

$\begin{array}{l}0\le {\Psi }_{2,3}\left({x}_{2},{x}_{3}\right){\lambda }_{1}\left({x}_{1}\right)/{\lambda }_{1}\left({x}_{1}\right){\lambda }_{2}\left({x}_{2}\right){\lambda }_{3}\left({x}_{3}\right)={b}_{1}\left({x}_{1},{x}_{2},{x}_{3}\right)\le 1,\\ 0\le {\Psi }_{1,3}\left({x}_{1},{x}_{3}\right){\lambda }_{2}\left({x}_{2}\right)/{\lambda }_{1}\left({x}_{1}\right){\lambda }_{2}\left({x}_{2}\right){\lambda }_{3}\left({x}_{3}\right)={b}_{2}\left({x}_{1},{x}_{2},{x}_{3}\right)\le 1,\\ 0\le {\Psi }_{1,2}\left({x}_{1},{x}_{2}\right){\lambda }_{3}\left({x}_{3}\right)/{\lambda }_{1}\left({x}_{1}\right){\lambda }_{2}\left({x}_{2}\right){\lambda }_{3}\left({x}_{3}\right)={b}_{3}\left({x}_{1},{x}_{2},{x}_{3}\right)\le 1.\end{array}$ (38)

Also let us make the following substitutions:

$\begin{array}{l}\left({c}_{12}\underset{0}{\overset{{x}_{2}}{\int }}{\Psi }_{1,2}\left({x}_{1},{u}_{2}\right)\text{d}{u}_{2}+{c}_{13}\underset{0}{\overset{{x}_{3}}{\int }}{\Psi }_{1,3}\left({x}_{1},{u}_{3}\right)\text{d}{u}_{3}\right)/{\lambda }_{1}\left({x}_{1}\right)={A}^{*}\\ \left({c}_{12}\underset{0}{\overset{{x}_{1}}{\int }}{\Psi }_{1,2}\left({u}_{1},{x}_{2}\right)\text{d}{u}_{1}+{c}_{23}\underset{0}{\overset{{x}_{3}}{\int }}{\Psi }_{2,3}\left({x}_{2},{u}_{3}\right)\text{d}{u}_{3}\right)/{\lambda }_{2}\left({x}_{2}\right)={B}^{*}\\ \left({c}_{13}\underset{0}{\overset{{x}_{1}}{\int }}{\Psi }_{1,3}\left({u}_{1},{x}_{3}\right)\text{d}{u}_{1}+{c}_{23}\underset{0}{\overset{{x}_{2}}{\int }}{\Psi }_{2,3}\left({u}_{2},{x}_{3}\right)\text{d}{u}_{2}\right)/{\lambda }_{3}\left({x}_{3}\right)={C}^{*}\end{array}$ (39)

where the nonnegativity of ${A}^{*},{B}^{*},{C}^{*}$ follows from the first inequalities in (34) [for all $1\le i<j\le 3$ ].

Now, upon dividing both sides of (37) by the (always positive) product ${\lambda }_{1}\left({x}_{1}\right){\lambda }_{2}\left({x}_{2}\right){\lambda }_{3}\left({x}_{3}\right)$ and applying substitutions (38) and (39), inequality (37) can be rewritten into the following equivalent form:

$\begin{array}{l}\left(1+{A}^{*}\right)\left(1+{B}^{*}\right)\left(1+{C}^{*}\right)\\ \ge {c}_{23}{b}_{1}\left({x}_{1},{x}_{2},{x}_{3}\right)\left(1+{A}^{*}\right)+{c}_{13}{b}_{2}\left({x}_{1},{x}_{2},{x}_{3}\right)\left(1+{B}^{*}\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{ }\text{ }+{c}_{12}{b}_{3}\left({x}_{1},{x}_{2},{x}_{3}\right)\left(1+{C}^{*}\right)\end{array}$ (40)

The equivalence of (40) with (36) and (37) does not depend on the particular (always nonnegative) values of the expressions ${A}^{*},{B}^{*},{C}^{*}$ given by (39), even though they differ from the expressions A, B, C present in (31); in both inequalities (31) and (40), only the nonnegativity of these quantities is relevant.

Suppose we set ${b}_{1}\left({x}_{1},{x}_{2},{x}_{3}\right)={b}_{2}\left({x}_{1},{x}_{2},{x}_{3}\right)={b}_{3}\left({x}_{1},{x}_{2},{x}_{3}\right)=1$ in (40) for all ${x}_{1},{x}_{2},{x}_{3}$. Then inequality (40) is essentially “the same” as inequality (31), and the common sufficient condition for both to hold is (32), now applied to (36).

Assume that (32) holds. Then the direction of inequality (40) is preserved also when some or all of the ${b}_{i}\left({x}_{1},{x}_{2},{x}_{3}\right)$ satisfy ${b}_{i}\left({x}_{1},{x}_{2},{x}_{3}\right)<1$ ( $i=1,2,3$ ), since decreasing the ${b}_{i}$ can only decrease the right-hand side. Therefore (36), and hence (35), holds, and a sufficient condition for this is the conjunction of (32) and (34). This completes the proof. □
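The final step of the proof can also be illustrated numerically. The following Monte-Carlo check (ours, an illustration rather than a proof) samples nonnegative ${A}^{*},{B}^{*},{C}^{*}$, factors ${b}_{i}\in \left[0,1\right]$, and constants with ${c}_{12}+{c}_{13}+{c}_{23}\le 1$, and confirms that inequality (40) holds:

```python
import random

random.seed(1)
ok = True
for _ in range(100_000):
    A, B, C = (random.expovariate(1.0) for _ in range(3))   # A*, B*, C* >= 0
    b1, b2, b3 = (random.random() for _ in range(3))        # b_i in [0, 1]
    w = [random.random() for _ in range(3)]
    s = random.random() / sum(w)          # enforce c12 + c13 + c23 <= 1
    c12, c13, c23 = (s * wi for wi in w)
    lhs = (1 + A) * (1 + B) * (1 + C)
    rhs = c23 * b1 * (1 + A) + c13 * b2 * (1 + B) + c12 * b3 * (1 + C)
    ok = ok and (lhs >= rhs)
```

Shrinking the ${b}_{i}$ from 1 only shrinks the right-hand side, so no counterexample can occur.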

Thus, as proven above, formula (33), subject to conditions (32) and (34), represents a class of well-defined 3-variate survival functions.

Notice the wide generality of that class of stochastic models, even if it “only” comprises the “continuous” cases.

Notice too that the vast majority of applications of multivariate survival analysis (such as reliability) deal with the “continuous” case.

Scheme (33) also provides a method for constructing a wide variety of tri-variate models for many practical situations. The method relies only on choosing, for all $1\le i<j\le 3$, three proper 2-argument nonnegative functions ${\Psi }_{i,j}\left({x}_{i},{x}_{j}\right)\le {\lambda }_{i}\left({x}_{i}\right){\lambda }_{j}\left({x}_{j}\right)$, given the marginal distributions, represented in the continuous case by the hazard rates ${\lambda }_{1}\left({x}_{1}\right),{\lambda }_{2}\left({x}_{2}\right),{\lambda }_{3}\left({x}_{3}\right)$.

Obviously, the choices of the functions ${\Psi }_{i,j}\left({x}_{i},{x}_{j}\right)$ are dictated by the underlying “physical” structure of the modeled reality (recall that in this paper the word “physical” is meant to have a very general meaning, including biological, social, and financial). For some guidance in choosing proper functions ${\Psi }_{i,j}\left({x}_{i},{x}_{j}\right)$, one may resort to the roots of such models, which lie in the Aalen version of the Cox model. Recall the close relations between our models and the Cox-Aalen approach to stochastic dependence.

To simplify this construction procedure further, we may, starting from the case ${\Psi }_{i,j}\left({x}_{i},{x}_{j}\right)={\lambda }_{i}\left({x}_{i}\right){\lambda }_{j}\left({x}_{j}\right)$, preserve the separation of the variables ${x}_{i},{x}_{j}$ by choosing ${\Psi }_{i,j}\left({x}_{i},{x}_{j}\right)={g}_{i}\left({x}_{i}\right){g}_{j}\left({x}_{j}\right)$ for some $0\le {g}_{i}\left({x}_{i}\right)\le {\lambda }_{i}\left({x}_{i}\right)$ and $0\le {g}_{j}\left({x}_{j}\right)\le {\lambda }_{j}\left({x}_{j}\right)$.
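A sketch of this separable construction follows. The particular choice ${g}_{i}={\theta }_{i}{\lambda }_{i}$ with damping factors $0\le {\theta }_{i}\le 1$ is our own assumption, one convenient way to guarantee $0\le {g}_{i}\le {\lambda }_{i}$ so that (34) holds automatically; the double integrals in (33) then separate into products of scaled cumulative hazards:

```python
import math

def make_survival(lams, thetas, c12, c13, c23, n=300):
    # Builds S^g per scheme (33) in the separable case Psi_ij = g_i * g_j,
    # with g_i = theta_i * lam_i (our illustrative choice).
    def cum(i, x):
        # midpoint quadrature of the cumulative hazard int_0^x lam_i(u) du
        h = x / n
        return sum(lams[i]((k + 0.5) * h) for k in range(n)) * h
    def S(x1, x2, x3):
        L = [cum(i, x) for i, x in enumerate((x1, x2, x3))]
        g = [thetas[i] * L[i] for i in range(3)]   # int_0^x g_i(u) du = theta_i * L_i
        pair = c12 * g[0] * g[1] + c13 * g[0] * g[2] + c23 * g[1] * g[2]
        return math.exp(-sum(L) - pair)
    return S

# Illustrative marginals: constant and increasing hazards, never zero
lams = [lambda u: 0.5, lambda u: 1.0 + u, lambda u: 2.0]
S = make_survival(lams, thetas=[0.5, 0.8, 1.0], c12=0.2, c13=0.3, c23=0.4)
```

Setting two coordinates to 0 again recovers the corresponding univariate marginal.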

That assumption may facilitate the construction while still preserving a significant amount of the generality of the resulting models.

By exploring the different possibilities, one can obtain ten or more versions of scheme (33) as candidate models.

The next step in the modeling procedure is the statistical verification of the adopted analytical models by testing their fit to the given data, after estimating all the underlying parameters [by maximum likelihood, for example]. Then, of course, we choose the model with the best fit to the given data. Another criterion can be the model’s simplicity, sometimes measured by the number of underlying parameters to be estimated.

However, this second, statistical part of the modeling procedure is outside the scope of this work and is left as a set of open problems for future research.

5. Conclusions

The approach, considered here and in , to the problem of constructing k-variate (k ≥ 3) survival functions, given all the univariate marginals, has turned out to be very fruitful. First of all, the form of the k-variate model, for arbitrary k, as obtained in , is universal in the sense that every k-variate survival function is expressible in that way.

In this work we achieved the following:

1) The most general form of the k-variate model, as obtained in , is somewhat complicated, since it describes, by means of all the joiners, every possible stochastic dependence that might be encountered.

In this work, we simplified that model dramatically by assuming that all the r-dimensional (r ≥ 3) joiners equal 1 (a case we named 3-independence). That yields a quite simple, nice, and still realistic form of the k-variate survival functions, as given by formula (14). In the so-defined “continuous” case, this formula takes on the exponential form (21), which is easier for further analysis.

2) Formulas such as (14) and (21) only determine the form of k-variate models and not yet an actual model. To find a legitimate “active” model, one needs to impose proper conditions on all the underlying functions ${\Psi }_{i,j}\left({x}_{i},{x}_{j}\right)$ and the real coefficients ${c}_{ij}$. The conditions for the general case were stated in the Hypothesis in Section 3. A formal proof was not given, although the Hypothesis appears very convincing.

3) The conditions mentioned above were, however, found in Section 4 for the case k = 3 (the corresponding conditions for k = 2 were provided in Section 2). These were formulated as Theorem 1 and Theorem 2. The model found in Theorem 2 is very general and comprises every tri-variate “continuous” model in which the corresponding functions are nonnegative (positive dependence only) and the tri-variate joiner reduces to 1 (the bi-dependence case). The case of a nontrivial tri-variate joiner is postponed to our next publication. The special case of the model in Theorem 2 is given by Theorem 1. This “limit” case of the general one seems very natural, and one may expect many applications of it in practical situations.

4) The underlying statistical problems of parameter estimation and of testing the model’s fit to given data are a very important part of the emerging theory, but we consider them to be outside the scope of our current interest.

Two other aspects of the created theory are the following:

5) A rather close relation of the joiner theory to the copula methodology  is apparent. The joiner theory is not only competitive with the theory of copulas but, as it appears, may become part of a common theory of joiners and copulas. Given a joiner, one can easily find the corresponding unique copula, and every copula determines a corresponding unique joiner. Moreover, the joiner method may substantially advance the copula approach in dimensions higher than 2.

6) As mentioned in the Remark in Section 3, the joiner theory need not be restricted to the investigation of distributions of k-dimensional random vectors only. As pointed out in that Remark, the method can equally well be applied to the construction of stochastic processes with discrete time (the Kolmogorov-Daniels consistency conditions are always satisfied). The so-constructed processes may possess very nice properties such as m-Markovianity (Markovianity when m = 1) and stationarity. The ease of such constructions indicates the significant value of the emerging “joiner theory”.

Cite this paper: Filus, J. and Filus, L. (2020) Construction of k-Variate Survival Functions with Emphasis on the Case k = 3. Applied Mathematics, 11, 678-697. doi: 10.4236/am.2020.117046.
