The Principal Component Transform of Parametrized Functions
Abstract: Many advanced mathematical models of biochemical, biophysical and other processes in systems biology can be described by parametrized systems of nonlinear differential equations. Due to the complexity of these models, the problem of their simplification has become of great importance. In particular, the rather challenging methods of parameter estimation in these models may require such simplifications. The paper offers a practical way of constructing approximations of nonlinearly parametrized functions by linearly parametrized ones. As the idea of such approximations goes back to Principal Component Analysis, we call the corresponding transformation the Principal Component Transform. We show that this transform possesses the best individual fit property, in the sense that the corresponding approximations preserve most information (in some sense) about the original function. It is also demonstrated how one can estimate the error between the given function and its approximations. In addition, we apply the theory of tensor products of compact operators in Hilbert spaces to justify our method for the case of products of parametrized functions. Finally, we provide several examples, which are of relevance for systems biology.

1. Introduction

This study is closely related to applications in the so-called “metamodeling” of differential equations, where a “proper” model of, e.g., a complex biological process is replaced by an approximation which contains “most information” about the model, but which is simpler. In particular, the true parameters of the model are replaced by “latent parameters”, which makes the model linear with respect to the latter and hence enables the usage of (if necessary, partial) least-squares regression. This explains why this idea proved to be efficient in parameter estimation (see e.g.  ). This also justifies the high numerical efficiency of metamodeling, which has been widely used in statistics  , chemometrics  , biochemistry  , genetics    , and infrared spectroscopy  to simplify theoretical and computational analysis of the “true” models.

Let $x=x\left(u,\omega \right)$ be a function, where $u\in U\subset {ℝ}^{N}$ and $\omega \in \Omega$ , $\Omega \subset {ℝ}^{M}$ being a space of parameters, and let $k\in ℕ$ be a given number. The kth Principal Component Transform (PCT) is a specially constructed parametrized function $\text{PCT}\left(x,k\right)\equiv {x}_{k}$ of the form ${x}_{k}=\underset{i=1}{\overset{k}{\sum }}{p}_{i}\left(u\right){t}_{i}\left(\omega \right)$ . The image ${x}_{k}$ is constructed to yield the minimum distance (in some sense) between $x$ and all possible approximations of $x$ of the form $\underset{i=1}{\overset{k}{\sum }}{z}_{i}\left(u\right){y}_{i}\left(\omega \right)$ . The distance is chosen to ensure an efficient way to estimate the deviation of ${x}_{k}$ from $x$ .

Geometrically, the parametrized function $x$ may be regarded as a curve $\omega ↦x\left(\cdot ,\omega \right)$ in a separable Hilbert space. Then ${x}_{k}=\text{PCT}\left(x,k\right)$ can be interpreted as a projection of this curve onto a $k$ -dimensional subspace, which is chosen in such a way that the image ${x}_{k}$ gives a best possible individual fit to $x$ among all $k$ -dimensional subspaces. As we will see in Subsection 3.1, this necessarily leads to nonlinearity of the mapping PCT.

As we will see in Subsection 3.3, discretizing the function $x\left(u,\omega \right)$ and its PCT yields matrices and the projections onto their first $k$ principal components, respectively. This explains our terminology: PCT can be regarded as a functional analog of the principal component analysis (PCA) of matrices. This terminology was suggested by Prof. E. Voit in a private talk with the second author during his seminar lecture in Oslo in 2014.
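This correspondence is easy to reproduce numerically. The hedged sketch below (the kernel ${u}^{\omega }$ and the grid sizes are illustrative choices, not prescribed by the theory) discretizes a parametrized function and recovers its best rank-$k$ approximation from the truncated singular value decomposition:

```python
import numpy as np

# Hedged numerical sketch: discretizing a parametrized function x(u, w)
# gives a matrix whose first k principal components realize the discrete
# counterpart of PCT(x, k).  Kernel and grid sizes are illustrative.
u = np.linspace(0.5, 2.0, 60)            # sample points in U
w = np.linspace(0.0, 1.0, 50)            # sample points in Omega
X = u[:, None] ** w[None, :]             # X[i, j] = x(u_i, w_j) = u_i ** w_j

# Singular value decomposition X = P diag(s) Q^T
P, s, Qt = np.linalg.svd(X, full_matrices=False)

# Truncation to the first k components: the discrete analog of x_k
k = 3
Xk = P[:, :k] @ np.diag(s[:k]) @ Qt[:k, :]

# Spectral-norm error of the best rank-k approximation is sigma_{k+1}
err = np.linalg.norm(X - Xk, 2)
```

Here `err` coincides, up to rounding, with the fourth singular value of the discretized kernel: the matrix form of the best individual fit property discussed in Section 2.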

All the papers cited above concentrate on the efficiency of the metamodeling approach and disregard mathematical properties of PCT and their justification, which are, for instance, quite important for understanding the limitations of the method and describing the exact conditions under which the method is applicable. In particular, the convergence properties of the sequence of metamodels to the original model have not been studied in the available literature. In our paper we try to fill this gap by suggesting a rigorous mathematical approach to PCT and an analysis of its basic properties. More precisely, we demonstrate how the theory of compact operators in separable Hilbert spaces can be used to provide such an analysis.

The paper is organized as follows. In Section 2 we introduce the distance in the space of parametrized functions, formulate the theorem on the best individual fit in terms of PCT of functions (Subsection 2.1) and provide some examples relevant for systems biology (Subsection 2.2). In Section 3 we study mathematical properties of PCT: nonlinearity (Subsection 3.1), continuity (Subsection 3.2) and show relations of PCT and PCA via discretization of functions (Subsections 3.3 and 3.4). In Section 4 we study PCT of products of parametrized functions which are interpreted as elements of the tensor product of two or several Hilbert spaces (Subsection 4.1). We also show that PCT preserves the tensor products and therefore the product of parametrized functions (Subsection 4.2) and give some examples (Subsection 4.3). In Appendix 5 we offer short proofs of some auxiliary results used in the paper: Allahverdiev’s theorem (Subsection 5.1) and some propositions related to tensor products of linear compact operators in Hilbert spaces (Subsection 5.2).

2. The Best Individual Fit Theorem

In this section we define the distance in the space of parametrized functions and describe how best individual fits $\text{PCT}\left(x,k\right)\left(k\in ℕ\right)$ to a given function $x$ can be obtained using the theory of compact operators in Hilbert spaces. We also give some specific examples relevant for systems biology.

2.1. The Distance in the Space of Parametrized Functions

Let $U$ be a compact subset of ${ℝ}^{N}$ and $\Omega$ be a compact subset of ${ℝ}^{M}.$ We consider the separable Hilbert spaces ${L}^{2}\left(U\right)$ and ${L}^{2}\left(\Omega \right)$ with the standard scalar products $\left(\cdot ,\cdot \right)$ and the norms $‖\cdot ‖$ .

Suppose we are given a measurable, square integrable function $x:\text{}\text{ }U×\Omega \to ℝ$ , i.e.

$\underset{U}{\int }\underset{\Omega }{\int }{|x\left(u,\omega \right)|}^{2}\text{d}u\text{d}\omega <\infty$ (1)

The aim is to find a best possible approximation of $x$ in the class ${\mathcal{L}}_{k}$ of all functions of the form ${x}_{k}\left(u,\omega \right)=\underset{i=1}{\overset{k}{\sum }}{z}_{i}\left(u\right){y}_{i}\left(\omega \right)$ , where ${z}_{i}\in {L}^{2}\left(U\right)$ and ${y}_{i}\in {L}^{2}\left(\Omega \right)$ .

To better explain the nature of the topology used here, let us have a look at finite-dimensional Hilbert, i.e. Euclidean, spaces. Let $X=\left[{x}_{ij}\right]$ be an $m×n$ -matrix, for instance, a discretized function $x\left(u,\omega \right)$ where ${x}_{ij}=x\left({u}_{i},{\omega }_{j}\right)$ . In this case, the best approximation ${X}_{k}$ to $X$ in the class of $m×n$ -matrices of rank not greater than $k$ is given by the first $k$ terms in the singular value decomposition of $X$ :

${X}_{k}=\underset{i=1}{\overset{k}{\sum }}{t}_{i}{p}_{i}^{*},$ (2)

where ${t}_{i}=X{p}_{i}$ , ${p}_{i}$ are the normalized eigenvectors of the matrix ${X}^{*}X$ , and ${A}^{*}$ denotes the conjugate (transpose) of a matrix $A$ . In other words,

$\underset{\text{rank}\text{ }Y\le k}{\mathrm{min}}‖X-Y‖=‖X-{X}_{k}‖$ (3)

The matrix norm is defined as $‖Z‖=\underset{‖\alpha ‖\le 1}{\mathrm{sup}}‖Z\alpha ‖$ , where $‖\alpha ‖$ is the Euclidean norm in ${ℝ}^{n}$ .
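For matrices, this supremum is attained at the top right singular vector, so the operator norm equals the largest singular value. The hedged sketch below checks this on a random illustrative matrix, approximating the maximizing direction by power iteration on ${Z}^{*}Z$ (one standard method; not the only one):

```python
import numpy as np

# Hedged sketch: sup_{|a| <= 1} |Z a| equals the largest singular value
# of Z.  The matrix Z is a random illustrative choice.
rng = np.random.default_rng(0)
Z = rng.standard_normal((5, 4))

a = np.ones(4) / 2.0                     # unit-norm start vector
for _ in range(1000):                    # power iteration on Z^T Z
    a = Z.T @ (Z @ a)
    a /= np.linalg.norm(a)

op_norm = np.linalg.norm(Z @ a)          # approximates sup |Z a|
sigma_max = np.linalg.svd(Z, compute_uv=False)[0]
```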

Now we will look at arbitrary real separable Hilbert spaces which are denoted by $H$ and $K$ and which are equipped with the scalar products ${\left(\cdot ,\cdot \right)}_{H}$ and ${\left(\cdot ,\cdot \right)}_{K}$ and the corresponding norms ${‖\cdot ‖}_{H}$ and ${‖\cdot ‖}_{K}$ , respectively. Assume that

$X:\text{ }\text{}H\to K$ is a linear compact operator. Its norm is again defined as $‖X‖=\underset{{‖\alpha ‖}_{H}\le 1}{\mathrm{sup}}{‖X\alpha ‖}_{K}$ .

Put

${\mathcal{L}}_{k}\left(H,K\right)=\left\{Y\text{\hspace{0.17em}}\text{is a linear bounded operator from}\text{\hspace{0.17em}}H\text{\hspace{0.17em}}\text{to}\text{\hspace{0.17em}}K\text{\hspace{0.17em}}\text{such that}\text{\hspace{0.17em}}\text{dim}\left(\text{Im}Y\right)\le k\right\}$ (4)

We want to find an operator ${X}_{k}\in {\mathcal{L}}_{k}\left(H,K\right)$ for which $‖X-{X}_{k}‖\to \mathrm{min}$ . The construction of ${X}_{k}$ is very close to the singular value decomposition of matrices.

Let ${X}^{*}:K\to H$ be the adjoint of $X$ . Then the linear compact operators ${X}^{*}X:H\to H,$ $X{X}^{*}:K\to K$ are self-adjoint and positive semi-definite.

Let ${\sigma }_{1}^{2}\ge {\sigma }_{2}^{2}\ge \cdots \ge {\sigma }_{i}^{2}\ge \cdots \to 0,{\sigma }_{i}>0,\text{}\left(i=1,2,\cdots \right)$ be all positive eigenvalues of the operator ${X}^{*}X$ , the associated normalized eigenvectors being ${p}_{1},{p}_{2},{p}_{3},\cdots \in H$ , respectively:

${X}^{*}X{p}_{i}={\sigma }_{i}^{2}{p}_{i},\text{ }{‖{p}_{i}‖}_{H}=1,\text{ }i\in ℕ$ (5)

It is well-known that ${p}_{i}$ can always be chosen to be orthogonal: ${p}_{i}\perp {p}_{j},i\ne j$ and for any $\alpha \in H$ there is a unique set ${c}_{i}\in ℝ$ , $i\in ℕ$ and a unique ${p}_{0}\in \text{Null}\left({X}^{*}X\right)$ for which $\alpha ={p}_{0}+\underset{i=1}{\overset{\infty }{\sum }}{c}_{i}{p}_{i}$ and, moreover, ${‖\alpha ‖}_{H}^{2}={‖{p}_{0}‖}_{H}^{2}+\underset{i=1}{\overset{\infty }{\sum }}{c}_{i}^{2}.$ Now, the operator $X$ can be represented as

$X\alpha =\underset{i=1}{\overset{\infty }{\sum }}{\left(\alpha ,{p}_{i}\right)}_{H}{t}_{i},$ (6)

where ${t}_{i}=X{p}_{i}$ and the convergence is understood in the sense of the norm in the space $K$ . The truncated version ${X}_{k}\in {\mathcal{L}}_{k}\left(H,K\right)$ of this representation is defined by

${X}_{k}\alpha =\underset{i=1}{\overset{k}{\sum }}{\left(\alpha ,{p}_{i}\right)}_{H}{t}_{i}$ (7)

The following result, a short proof of which is offered in Appendix 5.1, is known as Allahverdiev’s theorem, see e.g. [8, Chapter II, p. 28]:

Theorem 1. For any linear compact operator $X:\text{ }\text{}H\to K$

$\underset{Y\in {\mathcal{L}}_{k}\left(H,K\right)}{\mathrm{min}}‖X-Y‖=‖X-{X}_{k}‖={\sigma }_{k+1}$ (8)

In numerical calculations, functions are usually replaced by their discretizations, which in the case of parametrized functions yields matrices. That is why the distance in the space of the parametrized functions $x\left(u,\omega \right)$ should be consistent with the distance in the space of matrices, so that we can get all the advantages of the finite dimensional singular value decomposition as well as Allahverdiev’s theorem. To define the distance in the space of matrices we have to interpret matrices as linear operators between two Euclidean spaces. Analogously, we have to interpret parametrized functions as operators between suitable Hilbert spaces, and define the distance accordingly.

Let us therefore go back to the spaces ${L}^{2}\left(U\right)$ , ${L}^{2}\left(\Omega \right)$ , where $U$ , as before, is a compact subset of ${ℝ}^{N}$ and $\Omega$ is a compact subset of ${ℝ}^{M}.$ We denote the norm in both spaces as ${‖\cdot ‖}_{{L}^{2}}.$ Consider the integral operator

$\left(X\alpha \right)\left(\omega \right)=\underset{U}{\int }x\left(u,\omega \right)\alpha \left(u\right)\text{d}u$ (9)

Under the assumption of square integrability of the kernel $x\left(u,\omega \right)$ , the operator $X$ is compact and linear from the space ${L}^{2}\left(U\right)$ to the space ${L}^{2}\left(\Omega \right)$ (see e.g.  , Chapter 7, p. 202).

The distance between two square integrable parametrized functions $x$ and ${x}^{\prime }$ can be now defined in the following way:

$\text{dist}\left(x,{x}^{\prime }\right)=‖X-{X}^{\prime }‖,$ (10)

where $X$ is defined in (9) and $\left({X}^{\prime }\alpha \right)\left(\omega \right)=\underset{U}{\int }{x}^{\prime }\left(u,\omega \right)\alpha \left(u\right)\text{d}u.$ The norm of the linear operators acting from ${L}^{2}\left(U\right)$ to ${L}^{2}\left(\Omega \right)$ is defined in the standard way.

Remark 1. Evidently,

${‖X‖}^{2}\le \underset{U}{\int }\underset{\Omega }{\int }{|x\left(u,\omega \right)|}^{2}\text{d}u\text{d}\omega$ (11)

i.e. the operator norm is dominated by the ${L}^{2}$ -norm of the kernel. Therefore, ${L}^{2}$ -convergence of the sequence $\left\{{x}^{\left(n\right)}\right\}$ implies the convergence in the sense of the distance dist.
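The discrete analogue of this remark is the familiar fact that the spectral norm of a matrix never exceeds its Frobenius norm, whose square is the discrete version of the double integral above. A minimal check on an illustrative random kernel matrix:

```python
import numpy as np

# Discrete analogue of Remark 1 (illustrative random kernel matrix):
# the spectral (operator) norm never exceeds the Frobenius norm.
rng = np.random.default_rng(1)
X = rng.standard_normal((30, 20))

spec = np.linalg.norm(X, 2)       # operator norm (largest singular value)
frob = np.linalg.norm(X, 'fro')   # discrete L2 norm of the kernel
```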

Let ${X}^{*}:{L}^{2}\left(\Omega \right)\to {L}^{2}\left(U\right)$ be the adjoint of $X$ , so that

$\left({X}^{*}\beta \right)\left(u\right)=\underset{\Omega }{\int }x\left(u,\omega \right)\beta \left(\omega \right)\text{d}\omega$ (12)

Now, the self-adjoint and positive semi-definite integral operators

${X}^{*}X:{L}^{2}\left(U\right)\to {L}^{2}\left(U\right)\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{ }\text{and}\text{ }\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}X{X}^{*}:{L}^{2}\left(\Omega \right)\to {L}^{2}\left(\Omega \right)$ (13)

can be written as follows:

$\left({X}^{*}X\alpha \right)\left(u\right)=\underset{U}{\int }\gamma \left(u,v\right)\alpha \left(v\right)\text{d}v,\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{ }\text{where}\text{ }\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\gamma \left(u,v\right)=\underset{\Omega }{\int }x\left(u,\omega \right)x\left(v,\omega \right)\text{d}\omega$ (14)

and

$\left(X{X}^{*}\beta \right)\left(\omega \right)=\underset{\Omega }{\int }\delta \left(\omega ,\xi \right)\beta \left(\xi \right)\text{d}\xi ,\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{ }\text{where}\text{ }\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\delta \left(\omega ,\xi \right)=\underset{U}{\int }x\left(u,\omega \right)x\left(u,\xi \right)\text{d}u,$ (15)

respectively. Let, as before,

${\sigma }_{1}^{2}\ge {\sigma }_{2}^{2}\ge \cdots \ge {\sigma }_{i}^{2}\ge \cdots \to 0\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\left(i=1,2,\cdots \right)$ (16)

be all positive eigenvalues of the integral operator (14) associated with its normalized and mutually orthogonal eigenfunctions ${p}_{i}\in {L}^{2}\left(U\right)$ , i.e.

$\left(\Gamma {p}_{i}\right)\left(u\right)=\underset{U}{\int }\gamma \left(u,v\right){p}_{i}\left(v\right)\text{d}v={\sigma }_{i}^{2}{p}_{i}\left(u\right),\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\underset{U}{\int }{p}_{i}\left(u\right){p}_{j}\left(u\right)\text{d}u=\left\{\begin{array}{ll}0\hfill & \left(i\ne j\right)\hfill \\ 1\hfill & \left(i=j\right)\hfill \end{array}$ (17)

From Theorem 1 we immediately obtain the Best Individual Fit Theorem.

Theorem 2. For a given function $x:\text{ }\text{}U×\Omega \to ℝ$ satisfying (1) the best approximation of $x$ in the class ${\mathcal{L}}_{k}$ of all functions of the form $\underset{i=1}{\overset{k}{\sum }}{z}_{i}\left(u\right){y}_{i}\left(\omega \right)$ , where ${z}_{i}\in {L}^{2}\left(U\right)$ and ${y}_{i}\in {L}^{2}\left(\Omega \right)$ , is given by

${x}_{k}\left(u,\omega \right)=\underset{i=1}{\overset{k}{\sum }}{p}_{i}\left(u\right){t}_{i}\left(\omega \right),$ (18)

where ${p}_{i}$ are the normalized, mutually orthogonal eigenfunctions of the operator (14) and ${t}_{i}\left(\omega \right)=\left(X{p}_{i}\right)\left(\omega \right)=\underset{U}{\int }x\left(u,\omega \right){p}_{i}\left(u\right)\text{d}u$ . Moreover, $\text{dist}\left(x,{x}_{k}\right)={\sigma }_{k+1}$ for all natural $k$ .

In other words,

$\text{dist}\left(x,y\right)\ge \text{dist}\left(x,{x}_{k}\right)={\sigma }_{k+1}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{forall}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}y\in {\mathcal{L}}_{k}$ (19)

Remark 2. The functions ${t}_{i}$ have the following properties (which we do not use in this paper):

${t}_{i}\perp {t}_{j}$ for all $i\ne j$ ;

${‖{t}_{i}‖}_{{L}^{2}}={\sigma }_{i}$ for all $i$ ;

$X{X}^{*}{t}_{i}={\sigma }_{i}^{2}{t}_{i}$ for all $i$ .

Definition 1.

• The kth Principal Component Transform (PCT) of the function $x\in {L}^{2}\left(U×\Omega \right)$ is defined as

$\text{PCT}\left(x,k\right)\left(u,\omega \right)={x}_{k}\left(u,\omega \right)=\underset{i=1}{\overset{k}{\sum }}{p}_{i}\left(u\right){t}_{i}\left(\omega \right)$ (20)

• The Full Principal Component Transform of the function $x\in {L}^{2}\left(U×\Omega \right)$ is given by

$\text{PCT}\left(x,\infty \right)\left(u,\omega \right)=\underset{i=1}{\overset{\infty }{\sum }}{p}_{i}\left(u\right){t}_{i}\left(\omega \right)$ (21)

We will also write $\text{PCT}\left(x,\infty \right)\equiv \text{PCT}\left(x\right).$

We remark that none of these transforms is uniquely defined: even if all ${\sigma }_{i}$ are different, we always have a choice between two normalized eigenfunctions $±{p}_{i}$ . However, the distance between $x$ and any ${x}_{k}$ is independent of the version we use. On the other hand, this means that the properties of PCT should be formulated with care.

2.2. Examples of PCT

In this subsection we consider three examples which are of importance in systems biology.

Example 1. Let

$x\left(u,\omega \right)={u}^{\omega }$ (22)

Assume that $u\in \left[a,b\right],\text{\hspace{0.17em}}a,b\in ℝ,a>0,\text{\hspace{0.17em}}\omega \in \left[0,1\right].$ Then, using Formulas (14) and (15), we obtain the following representations of the kernels $\gamma$ and $\delta$

$\gamma \left(u,v\right)=\underset{0}{\overset{1}{\int }}{u}^{\omega }{v}^{\omega }\text{d}\omega =\underset{0}{\overset{1}{\int }}{\left(uv\right)}^{\omega }\text{d}\omega =\frac{uv-1}{\mathrm{ln}\left(uv\right)},$ (23)

$\delta \left(\omega ,\xi \right)=\underset{a}{\overset{b}{\int }}{u}^{\omega }{u}^{\xi }\text{d}u=\underset{a}{\overset{b}{\int }}{u}^{\omega +\xi }\text{d}u=\frac{{b}^{\omega +\xi +1}-{a}^{\omega +\xi +1}}{\omega +\xi +1}$ (24)

Therefore the normalized eigenfunctions ${p}_{i}\left(u\right)$ can be obtained from the equation

$\underset{a}{\overset{b}{\int }}\left(\frac{uv-1}{\mathrm{ln}\left(uv\right)}\right){p}_{i}\left(v\right)\text{d}v={\sigma }_{i}^{2}{p}_{i}\left(u\right)$ (25)

The functions ${t}_{i}\left(\omega \right)=\underset{a}{\overset{b}{\int }}{u}^{\omega }{p}_{i}\left(u\right)\text{d}u$ can be alternatively found from the equations

$\underset{0}{\overset{1}{\int }}\left(\frac{{b}^{\omega +\xi +1}-{a}^{\omega +\xi +1}}{\omega +\xi +1}\right){t}_{i}\left(\xi \right)\text{d}\xi ={\sigma }_{i}^{2}{t}_{i}\left(\omega \right)$ (26)

The parametrized power function ${u}^{\omega }$ is of crucial importance in biochemical systems theory, where $u$ represents the concentration of a metabolite, while $\omega$ stands for the kinetic order. In the case of several metabolites, one gets products of such power functions, which, in turn, are included into the right-hand side of the so-called “synergetic system”, see (e.g.  , Chapter 2, p. 51) and the references therein. The products of parametrized power functions are considered in Section 4.
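The closed-form kernels (23) and (24) are easy to sanity-check numerically. In the hedged sketch below the interval $\left[a,b\right]=\left[0.5,2\right]$ and the evaluation points are illustrative choices:

```python
import numpy as np

# Midpoint-rule check of the closed-form kernels (23) and (24) for
# x(u, w) = u**w.  Interval and evaluation points are illustrative.
a, b = 0.5, 2.0
n = 20000

# gamma(u, v) = int_0^1 (u v)^w dw = (u v - 1) / ln(u v)
u, v = 1.3, 0.7
wgrid = (np.arange(n) + 0.5) / n                     # midpoints of [0, 1]
gamma_quad = np.mean((u * v) ** wgrid)
gamma_closed = (u * v - 1.0) / np.log(u * v)

# delta(w, xi) = int_a^b s^(w+xi) ds = (b^(w+xi+1) - a^(w+xi+1)) / (w+xi+1)
wq, xi = 0.4, 0.9
sgrid = a + (np.arange(n) + 0.5) * (b - a) / n       # midpoints of [a, b]
delta_quad = np.mean(sgrid ** (wq + xi)) * (b - a)
delta_closed = (b ** (wq + xi + 1) - a ** (wq + xi + 1)) / (wq + xi + 1)
```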

Example 2. Consider the function

$x\left(u,\omega \right)={e}^{-\omega |u|}$ (27)

Assume that $u\in \left[-c,c\right],\text{ }\text{}c\in ℝ\text{ },c>0,\text{ }\text{}\omega \in \left[a,b\right],\text{ }\text{}a,b\in ℝ,\text{ }\text{}a>0.$ Then, using Formulas (14) and (15), we obtain the following representations of the kernels $\gamma$ and $\delta$

$\gamma \left(u,v\right)=\underset{a}{\overset{b}{\int }}{e}^{-\omega |u|}{e}^{-\omega |v|}\text{d}\omega =\underset{a}{\overset{b}{\int }}{e}^{-\omega \left(|u|+|v|\right)}\text{d}\omega =\frac{1}{|u|+|v|}\left({e}^{-a\left(|u|+|v|\right)}-{e}^{-b\left(|u|+|v|\right)}\right),$ (28)

$\delta \left(\omega ,\xi \right)=\underset{-c}{\overset{c}{\int }}{e}^{-\omega |u|}{e}^{-\xi |u|}\text{d}u=\underset{-c}{\overset{c}{\int }}{e}^{-|u|\left(\omega +\xi \right)}\text{d}u$ (29)

We denote for simplicity

$F\left(s,\omega ,\xi \right)=\underset{0}{\overset{s}{\int }}{e}^{-|u|\left(\omega +\xi \right)}\text{d}u=\left\{\begin{array}{l}\frac{1}{\omega +\xi }\left({e}^{s\left(\omega +\xi \right)}-1\right)\text{\hspace{0.17em}}\text{for}\text{\hspace{0.17em}}s<0\\ \frac{1}{\omega +\xi }\left(1-{e}^{-s\left(\omega +\xi \right)}\right)\text{\hspace{0.17em}}\text{for}\text{\hspace{0.17em}}s\ge 0\end{array}$ (30)

and get

$\delta \left(\omega ,\xi \right)=F\left(c,\omega ,\xi \right)-F\left(-c,\omega ,\xi \right)$ (31)

Therefore the normalized eigenfunctions ${p}_{i}\left(u\right)$ can be obtained from the equation

$\underset{-c}{\overset{c}{\int }}\frac{1}{|u|+|v|}\left({e}^{-a\left(|u|+|v|\right)}-{e}^{-b\left(|u|+|v|\right)}\right){p}_{i}\left(v\right)\text{d}v={\sigma }_{i}^{2}{p}_{i}\left(u\right)$ (32)

The functions ${t}_{i}\left(\omega \right)=\underset{-c}{\overset{c}{\int }}{e}^{-\omega |u|}{p}_{i}\left(u\right)\text{d}u$ can also be obtained from the equations

$\underset{a}{\overset{b}{\int }}\left(F\left(c,\omega ,\xi \right)-F\left(-c,\omega ,\xi \right)\right){t}_{i}\left(\xi \right)\text{d}\xi ={\sigma }_{i}^{2}{t}_{i}\left(\omega \right)$ (33)

The function ${e}^{-\omega |u|}$ is often used in neural field models, where it serves as the simplest example of the so-called “connectivity functions” describing the interactions between neurons, see e.g.  and the references therein.
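Evaluating the integral in (29) in closed form gives $\delta \left(\omega ,\xi \right)=\frac{2}{\omega +\xi }\left(1-{e}^{-c\left(\omega +\xi \right)}\right)$, which the following hedged sketch verifies numerically (the values of $c$ , $\omega$ , $\xi$ are illustrative):

```python
import numpy as np

# Midpoint-rule check of the closed form of delta(w, xi) for
# x(u, w) = exp(-w |u|).  Values of c, w, xi are illustrative.
c, w, xi = 1.5, 0.8, 1.1
n = 200000

ugrid = -c + (np.arange(n) + 0.5) * (2 * c / n)      # midpoints of [-c, c]
delta_quad = np.sum(np.exp(-np.abs(ugrid) * (w + xi))) * (2 * c / n)
delta_closed = 2.0 * (1.0 - np.exp(-c * (w + xi))) / (w + xi)
```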

Example 3. Consider the Hill function

$x\left(u,\omega \right)=\frac{{u}^{q}}{{u}^{q}+{\theta }^{q}}$ (34)

Assume that $u\in \left[a,b\right],\text{\hspace{0.17em}}\text{}a,b\in ℝ,\text{ }\text{}a>0$ , $q\in \left[{q}_{0},{q}_{m}\right],\text{}\text{\hspace{0.17em}}{q}_{0},{q}_{m}\in ℝ,\text{}\text{ }{q}_{0}>0$ , $\theta \in \left[{\theta }_{0},{\theta }_{m}\right],\text{\hspace{0.17em}}\text{}{\theta }_{0},{\theta }_{m}\in ℝ,\text{ }\text{}{\theta }_{0}>0.$ Putting $\omega =\left(q,\theta \right)$ and $\xi =\left({q}^{\prime },{\theta }^{\prime }\right)$ we obtain

$\gamma \left(u,v\right)=\underset{{q}_{0}}{\overset{{q}_{m}}{\int }}\underset{{\theta }_{0}}{\overset{{\theta }_{m}}{\int }}\frac{{u}^{q}}{{u}^{q}+{\theta }^{q}}\frac{{v}^{q}}{{v}^{q}+{\theta }^{q}}\text{d}q\text{d}\theta$ (35)

and

$\delta \left(\omega ,\xi \right)=\underset{a}{\overset{b}{\int }}\frac{{u}^{q}}{{u}^{q}+{\theta }^{q}}\frac{{u}^{{q}^{\prime }}}{{u}^{{q}^{\prime }}+{{\theta }^{\prime }}^{{q}^{\prime }}}\text{d}u$ (36)

The Hill function plays a central role in the theory of gene regulatory networks, where it stands for the gene activation function, $u$ being the gene concentration and $\theta$ the activation threshold, see e.g.  and the references therein.
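For the Hill kernel no convenient closed form of (35) seems available, so in practice the integrals are evaluated by quadrature. The hedged sketch below (parameter ranges are illustrative) approximates $\gamma$ by a midpoint rule and checks the symmetry $\gamma \left(u,v\right)=\gamma \left(v,u\right)$ required of the kernel of the self-adjoint operator ${X}^{*}X$ :

```python
import numpy as np

# Midpoint-rule quadrature of gamma(u, v) in (35) for the Hill kernel.
# Parameter ranges and evaluation points are illustrative choices.
q0, qm, t0, tm = 0.5, 4.0, 0.5, 2.0
nq = nt = 400
q = q0 + (np.arange(nq) + 0.5) * (qm - q0) / nq
theta = t0 + (np.arange(nt) + 0.5) * (tm - t0) / nt
Q, T = np.meshgrid(q, theta, indexing="ij")

def hill(u):
    # Hill function u^q / (u^q + theta^q) evaluated on the (q, theta) grid
    return u ** Q / (u ** Q + T ** Q)

def gamma(u, v):
    cell = (qm - q0) * (tm - t0) / (nq * nt)
    return np.sum(hill(u) * hill(v)) * cell

g_uv = gamma(1.2, 0.8)
g_vu = gamma(0.8, 1.2)
```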

3. Some Properties of PCT

The Principal Component Transform $\text{PCT}\left(x,k\right)$ is not uniquely defined. That is why we will use a special notation when comparing PCT of different functions, namely, we will write $\text{PCT}\left(x,k\right)\stackrel{˙}{=}\text{PCT}\left(y,k\right)$ if there exist coinciding versions of PCT of $x$ and $y$ .

3.1. PCT Is Homogeneous, But Not Additive

Theorem 3.

1. $\text{PCT}\left(cx,k\right)\stackrel{˙}{=}c\text{PCT}\left(x,k\right)$ for any $c\in ℝ$ and $k\in ℕ.$

2. In general, $\text{PCT}\left({x}^{\left(1\right)}+{x}^{\left(2\right)},k\right)$ is different from $\text{PCT}\left({x}^{\left(1\right)},k\right)+\text{PCT}\left({x}^{\left(2\right)},k\right).$

Proof.

1. The case $c=0$ is trivial. We assume therefore that $c\ne 0$ . Let $\left(X\alpha \right)\left(\omega \right)=\underset{U}{\int }x\left(u,\omega \right)\alpha \left(u\right)\text{d}u$ and $\text{PCT}\left(x\right)\left(u,\omega \right)=\underset{i=1}{\overset{\infty }{\sum }}{p}_{i}\left(u\right){t}_{i}\left(\omega \right)$ , see (21). By definition, ${p}_{i}$ are normalized, mutually orthogonal eigenfunctions of the operator ${X}^{*}X$ and ${t}_{i}=X{p}_{i}$ . Let ${X}_{c}\left(\alpha \right)\equiv X\left(c\alpha \right)$ . Then

${X}_{c}^{*}{X}_{c}{p}_{i}=c{X}^{*}\left(cX{p}_{i}\right)={c}^{2}{X}^{*}X{p}_{i}={c}^{2}{\sigma }_{i}^{2}{p}_{i},$ (37)

so that ${p}_{i}$ are the same for ${X}_{c}$ and $X$ . On the other hand, ${X}_{c}\left({p}_{i}\right)=X\left(c{p}_{i}\right)=cX\left({p}_{i}\right)=c{t}_{i}$ and

$\text{PCT}\left(cx,k\right)\left(u,\omega \right)=\underset{i=1}{\overset{k}{\sum }}{p}_{i}\left(u\right)c{t}_{i}\left(\omega \right)\stackrel{˙}{=}c\text{PCT}\left(x,k\right)\left(u,\omega \right)$ (38)

2. Before constructing an example illustrating the nonlinearity of PCT, we remark that this statement, in its more precise formulation, says that there are no versions of $\text{PCT}\left({x}^{\left(1\right)}+{x}^{\left(2\right)},k\right)$ , $\text{PCT}\left({x}^{\left(1\right)},k\right)$ , $\text{PCT}\left({x}^{\left(2\right)},k\right)$ for which $\text{PCT}\left({x}^{\left(1\right)}+{x}^{\left(2\right)},k\right)=\text{PCT}\left({x}^{\left(1\right)},k\right)+\text{PCT}\left({x}^{\left(2\right)},k\right).$

Let $U=\Omega =\left[0,1\right]$ and the functions ${r}_{\tau }:\left[0,1\right]\to ℝ\text{}\left(\tau =1,2\right)$ satisfy

$\underset{0}{\overset{1}{\int }}{r}_{\tau }^{2}\left(u\right)\text{d}u=1\text{​}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{ }\text{and}\text{ }\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\underset{0}{\overset{1}{\int }}{r}_{1}\left(u\right){r}_{2}\left(u\right)\text{d}u=0$ (39)

We put

$\begin{array}{l}\left({X}^{\left(1\right)}\alpha \right)\left(\omega \right)=2{r}_{1}\left(\omega \right)\underset{0}{\overset{1}{\int }}{r}_{1}\left(u\right)\alpha \left(u\right)du+{r}_{2}\left(\omega \right)\underset{0}{\overset{1}{\int }}{r}_{2}\left(u\right)\alpha \left(u\right)\text{d}u,\hfill \\ \left({X}^{\left(2\right)}\alpha \right)\left(\omega \right)={r}_{1}\left(\omega \right)\underset{0}{\overset{1}{\int }}\left(2{r}_{1}\left(u\right)+{r}_{2}\left(u\right)\right)\alpha \left(u\right)\text{d}u+{r}_{2}\left(\omega \right)\underset{0}{\overset{1}{\int }}\left({r}_{1}\left(u\right)+{r}_{2}\left(u\right)\right)\alpha \left(u\right)\text{d}u.\hfill \end{array}$ (40)

To calculate PCT we observe that both operators have a 2-dimensional image in ${L}^{2}\left(\Omega \right)$ . Using the representation $\alpha \left(u\right)={c}_{1}{r}_{1}\left(u\right)+{c}_{2}{r}_{2}\left(u\right)+\stackrel{^}{\alpha }\left(u\right)$ where $\stackrel{^}{\alpha }\perp {r}_{\tau }\text{}\left(\tau =1,2\right)$ we reduce the operators ${X}^{\left(1\right)}$ and ${X}^{\left(2\right)}$ to the matrices

$A=\left[\begin{array}{cc}2& 0\\ 0& 1\end{array}\right]\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{ }\text{and}\text{ }\text{\hspace{0.17em}}\text{\hspace{0.17em}}B=\left[\begin{array}{cc}2& 1\\ 1& 2\end{array}\right],\text{ }\text{ }\text{respectively}\text{ },$

so that

${X}^{\left(1\right)}\alpha =\left({r}_{1}\text{ }{r}_{2}\right)A{\left({c}_{1}\text{ }{c}_{2}\right)}^{*}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{ }\text{and}\text{ }\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}{X}^{\left(2\right)}\alpha =\left({r}_{1}\text{ }{r}_{2}\right)B{\left({c}_{1}\text{ }{c}_{2}\right)}^{*},$ (41)

where $\left(a,b\right)$ and ${\left(a,b\right)}^{*}$ are row and column vectors, respectively.

Since the matrices $A$ and $B$ are symmetric, we have ${A}^{*}A={A}^{2}$ and ${B}^{*}B={B}^{2}$ . The first eigenpairs of ${A}^{2}$ and ${B}^{2}$ are $\left(4,{\left(1\text{ }0\right)}^{*}\right)$ and $\left(9,{\left(1\text{ }1\right)}^{*}\right)$ , respectively. Therefore the best rank 1 approximations of $A$ and $B$ are

${A}_{1}=\left[\begin{array}{cc}2& 0\\ 0& 0\end{array}\right]\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{ }\text{and}\text{ }\text{\hspace{0.17em}}\text{\hspace{0.17em}}{B}_{1}=\left[\begin{array}{cc}1.5& 1.5\\ 1.5& 1.5\end{array}\right],\text{ }\text{ }\text{ }\text{respectively}\text{ },$

so that $\text{PCT}\left({x}^{\left(1\right)},1\right)\left(u,\omega \right)=2{r}_{1}\left(u\right){r}_{1}\left(\omega \right)$ and $\text{PCT}\left({x}^{\left(2\right)},1\right)\left(u,\omega \right)=1.5\left({r}_{1}\left(u\right)+{r}_{2}\left(u\right)\right)\left({r}_{1}\left(\omega \right)+{r}_{2}\left(\omega \right)\right),$ both of which correspond to operators with a 1-dimensional image. However, their sum

$3.5{r}_{1}\left(u\right){r}_{1}\left(\omega \right)+1.5{r}_{1}\left(u\right){r}_{2}\left(\omega \right)+1.5{r}_{2}\left(u\right){r}_{1}\left(\omega \right)+1.5{r}_{2}\left(u\right){r}_{2}\left(\omega \right)$ (42)

has a 2-dimensional image, as its representation in the basis $\left\{{r}_{1},{r}_{2}\right\}$ is given by the non-singular matrix $\left[\begin{array}{cc}3.5& 1.5\\ 1.5& 1.5\end{array}\right]$ . Therefore $\text{PCT}\left({x}^{\left(1\right)},1\right)+\text{PCT}\left({x}^{\left(2\right)},1\right)$ cannot coincide with any version of $\text{PCT}\left({x}^{\left(1\right)}+{x}^{\left(2\right)},1\right)$ .
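The matrix computations in this counterexample can be replayed directly: since $\left\{{r}_{1},{r}_{2}\right\}$ is orthonormal, only the coordinate matrices $A$ and $B$ matter, and PCT with $k=1$ becomes the best rank 1 approximation:

```python
import numpy as np

# Coordinate replay of the counterexample: A and B represent X^(1), X^(2)
# in the orthonormal basis {r_1, r_2}; rank-1 truncation plays PCT(., 1).
A = np.array([[2.0, 0.0], [0.0, 1.0]])
B = np.array([[2.0, 1.0], [1.0, 2.0]])

def rank1(M):
    # best rank-1 approximation via the truncated SVD
    U, s, Vt = np.linalg.svd(M)
    return s[0] * np.outer(U[:, 0], Vt[0, :])

A1 = rank1(A)                                  # [[2, 0], [0, 0]]
B1 = rank1(B)                                  # [[1.5, 1.5], [1.5, 1.5]]
rank_of_sum = np.linalg.matrix_rank(A1 + B1)   # the sum has full rank 2
```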

3.2. PCT Is Continuous

Let us consider a sequence of parametrized, square integrable functions ${x}^{\left(n\right)}:\text{ }U×\Omega \to ℝ$ .

Theorem 4. Let $k\in ℕ$ and $\text{dist}\text{ }\text{ }\left({x}^{\left(n\right)},x\right)\to 0\text{\hspace{0.17em}}\left(n\to \infty \right)$ for some parametrized, square integrable functions ${x}^{\left(n\right)},x:\text{ }U×\Omega \to ℝ$ . Then for any version ${x}_{k}=\text{PCT}\left(x,k\right)$ there are versions ${x}_{k}^{\left(n\right)}=\text{PCT}\left({x}^{\left(n\right)},k\right)$ such that

$\text{ }\text{dist}\text{ }\text{ }\left({x}_{k}^{\left(n\right)},{x}_{k}\right)\to 0,\text{ }n\to \infty$ (43)

Proof. Let $H={L}^{2}\left(U\right)$ , $K={L}^{2}\left(\Omega \right)$ . We define the compact linear integral operators ${X}^{\left(n\right)},\text{ }X:\text{ }H\to K$ using the kernels ${x}^{\left(n\right)}$ and $x$ , respectively. By the definition of dist we immediately get that $‖{X}^{\left(n\right)}-X‖\to 0,\text{ }n\to \infty .$

Let ${p}_{i},\text{}i=1,\cdots ,k$ be the normalized, mutually orthogonal eigenfunctions of the operator ${X}^{*}X$ corresponding to its first $k$ eigenvalues ${\sigma }_{1}^{2}\ge {\sigma }_{2}^{2}\ge \cdots \ge {\sigma }_{k}^{2}$ . Since ${X}^{\left(n\right)}$ converges to the operator $X$ in norm, we can always choose a sequence of the eigenfunctions ${p}_{i}^{\left(n\right)}$ such that

${‖{p}_{i}^{\left(n\right)}-{p}_{i}‖}_{H}\to 0,\text{ }n\to \infty ,\text{\hspace{0.17em}}i=1,\cdots ,k$ (44)

In this case

${t}_{i}^{\left(n\right)}={X}^{\left(n\right)}{p}_{i}^{\left(n\right)}\to {t}_{i}=X{p}_{i},\text{ }n\to \infty ,\text{\hspace{0.17em}}i=1,\cdots ,k$ (45)

Therefore $‖{X}_{k}^{\left(n\right)}-{X}_{k}‖\to 0,\text{ }n\to \infty ,$ which implies

$\text{dist}\text{ }\text{ }\left({x}_{k}^{\left(n\right)},{x}_{k}\right)\to 0,\text{ }n\to \infty$ (46)

The above theorem can be reformulated in terms of robustness of PCT.

Corollary 1. Let $x:\text{ }\text{}U×\Omega \to ℝ$ be a parametrized, square integrable function and $k\in ℕ$ . Then for every $\epsilon >0$ there is a $\delta >0$ such that for every parametrized, square integrable function ${x}^{\prime }:\text{ }\text{}U×\Omega \to ℝ$ the following holds true:

$\text{dist}\text{ }\text{ }\left({x}^{\prime },x\right)<\delta \text{\hspace{0.17em}}⇒\text{}\text{ }\text{dist}\text{ }\left(\text{PCT}\left({x}^{\prime },k\right),\text{PCT}\left(x,k\right)\right)<\epsilon$ (47)

for some suitable versions of PCT.
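In the discretized setting this robustness is easy to observe: a small perturbation of the kernel matrix changes its rank-$k$ truncation only slightly, provided ${\sigma }_{k}>{\sigma }_{k+1}$ . A hedged sketch (the kernel, the perturbation size and the grids are illustrative):

```python
import numpy as np

# Hedged robustness sketch: perturb the discretized kernel u**w by a
# matrix of spectral norm 1e-6 and compare the rank-2 truncations.
def trunc(M, k):
    # rank-k truncation via the SVD (discrete PCT(., k))
    U, s, Vt = np.linalg.svd(M)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

rng = np.random.default_rng(2)
u = np.linspace(0.5, 2.0, 40)
w = np.linspace(0.0, 1.0, 40)
X = u[:, None] ** w[None, :]

E = rng.standard_normal(X.shape)
E *= 1e-6 / np.linalg.norm(E, 2)             # ||E|| = 1e-6

diff = np.linalg.norm(trunc(X + E, 2) - trunc(X, 2), 2)
```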

3.3. Discretization of Functions

In the papers   , which are aimed at applying the metamodeling approach to gene regulatory networks, the approximations of the parametrized sigmoidal functions are performed numerically by using discretization and the SVD of the resulting matrices. The continuity of PCT, proved in the previous subsection, can now be used to justify this analysis and, in particular, the results on the number of principal components $k$ ensuring the prescribed precision.

In this subsection we suppose that all functions are continuous, which is sufficient for most applications. The general case is, however, unproblematic as well if we slightly adjust the approximation procedure.

Let $x$ be a continuous function on a compact set $D\subset {ℝ}^{N+M},\text{\hspace{0.17em}}D=U×\Omega ;$ we write $s=\left(u,\omega \right)$ for the points of $D$ .

For all $n\in ℕ,$ $D$ is divided into $n$ measurable subsets ${D}_{i}^{\left(n\right)}$ :

$D=\underset{i=1}{\overset{n}{\cup }}{D}_{i}^{\left(n\right)}$ (48)

We define the sequence of functions ${x}^{\left(n\right)}\left(s\right)$ as follows:

${x}^{\left(n\right)}\left(s\right)=x\left({s}_{i}^{\left(n\right)}\right),\text{\hspace{0.17em}}s\in {D}_{i}^{\left(n\right)},$ (49)

where ${s}_{i}^{\left(n\right)}$ is an arbitrary point in ${D}_{i}^{\left(n\right)}.$

Lemma 1. Let $x$ be a continuous function on $D$ . Then

$\text{dist}\text{ }\text{ }\left({x}^{\left(n\right)},x\right)\to 0,\text{ }n\to \infty$ (50)

provided that $\text{ }\underset{1\le i\le n}{\mathrm{max}}\text{diam}\text{ }\text{ }{D}_{i}^{\left(n\right)}\to 0$ as $n\to \infty$ .

Proof. The function $x$ is continuous on the compact set $D$ , therefore $x\left(s\right)$ is uniformly continuous on $D$ . Then for all $\epsilon >0$ there is $\delta >0$ such that

$|s-{s}^{\prime }|<\delta \text{\hspace{0.17em}}\text{\hspace{0.17em}}⇒\text{\hspace{0.17em}}\text{\hspace{0.17em}}|x\left(s\right)-x\left({s}^{\prime }\right)|<\epsilon$ (51)

On the other hand, there is a number $N$ for which $\underset{1\le i\le n}{\mathrm{max}}\text{diam}\text{ }\text{ }{D}_{i}^{\left(n\right)}<\delta$ as long as $n>N$ . Let $s$ be an arbitrary point from $D$ . Then for any $n$ there is ${D}_{i}^{\left(n\right)}$ such that $s\in {D}_{i}^{\left(n\right)}$ . Taking now an arbitrary $n>N$ we obtain

$|{x}^{\left(n\right)}\left(s\right)-x\left(s\right)|=|x\left({s}_{i}^{\left(n\right)}\right)-x\left(s\right)|<\epsilon ,$ (52)

so that $\text{dist}\text{ }\text{ }\left({x}^{\left(n\right)},x\right)\le C\epsilon$ , where ${C}^{2}$ is the Lebesgue measure of the set $D$ .

Hence $\text{dist}\text{ }\text{ }\left({x}^{\left(n\right)},x\right)\to 0,\text{ }n\to \infty .$
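Lemma 1 is easy to see in action numerically. The following sketch (the test function $x\left(s\right)=\mathrm{sin}\left(2\text{π}s\right)$ on $D=\left[0,1\right]$ and the grid sizes are our own choices, not from the paper) builds the piecewise-constant approximations ${x}^{\left(n\right)}$ with ${s}_{i}^{\left(n\right)}$ the left endpoints and watches $\text{dist}\left({x}^{\left(n\right)},x\right)$ shrink as $\mathrm{max}\text{diam}\text{ }{D}_{i}^{\left(n\right)}=1/n\to 0$ :

```python
import numpy as np

def l2_dist(f, g, grid):
    """Approximate the L2 distance of two functions on a fine uniform grid."""
    h = grid[1] - grid[0]
    return np.sqrt(np.sum((f(grid) - g(grid)) ** 2) * h)

def piecewise_constant(f, n):
    """Return x^(n): constant on each of the n equal subintervals of [0, 1],
    taking the value of f at the left endpoint s_i^(n) of each subinterval."""
    nodes = np.linspace(0.0, 1.0, n, endpoint=False)  # left endpoints
    values = f(nodes)
    return lambda s: values[np.minimum((s * n).astype(int), n - 1)]

x = lambda s: np.sin(2 * np.pi * s)
grid = np.linspace(0.0, 1.0, 100001)
dists = [l2_dist(x, piecewise_constant(x, n), grid) for n in (10, 100, 1000)]
# dist(x^(n), x) decreases as the mesh of the partition goes to zero
```

Refining the partition by a factor of 10 shrinks the distance roughly by the same factor here, since the error of a piecewise-constant approximation of a smooth function is of first order in the mesh size.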

Corollary 2. Let $k\in ℕ$ and $x:\text{ }\text{}U×\Omega \to ℝ$ be a parametrized, continuous function, and let $\left\{{x}^{\left(n\right)}\right\}$ be a sequence of discrete approximations satisfying the assumptions of Lemma 1. Then for any version ${x}_{k}=\text{PCT}\left(x,k\right)$ there are versions ${x}_{k}^{\left(n\right)}=\text{PCT}\left({x}^{\left(n\right)},k\right)$ such that $\text{dist}\text{ }\text{ }\left({x}_{k}^{\left(n\right)},{x}_{k}\right)\to 0,\text{ }n\to \infty .$

Finally, we observe that if ${D}_{i}^{\left(n\right)}$ are defined as ${U}_{j}^{\left(n\right)}×{\Omega }_{l}^{\left(n\right)}$ , where for any $n$ $\left\{{U}_{j}^{\left(n\right)}\right\}$ and $\left\{{\Omega }_{l}^{\left(n\right)}\right\}$ are measurable partitions of $U$ and $\Omega$ , respectively, and

$\text{ }i=\left(j,l\right)$ , then the PCT of the discrete functions ${x}^{\left(n\right)}$ coincides with the $k$ -truncated SVD of the matrix $\left[{x}^{\left(n\right)}\left({s}_{\left(j,l\right)}\right)\right]$ . In the next subsection we provide an example of such approximation stemming from the biochemical systems theory.
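On a product partition the discrete PCT is thus nothing but a truncated SVD of the sample matrix. A minimal sketch (the function and grid names are ours; the matrix layout follows Equation (53), with rows indexed by the parameter values):

```python
import numpy as np

def discrete_pct(x, u_grid, w_grid, k):
    """k-truncated SVD of the sample matrix X[j, l] = x(u_l, w_j).

    x must accept broadcast arrays; rows of X correspond to the parameter
    values w_j, columns to the points u_l. Returns the rank-k approximation
    and all singular values (for error estimation via sigma_{k+1}/sigma_1).
    """
    X = x(u_grid[None, :], w_grid[:, None])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    X_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
    return X_k, s
```

For instance, `discrete_pct(lambda u, w: u**w, ...)` on the grids of the next subsection reproduces the rank-4 approximation of the power function, and the operator-norm error of the truncation equals the first discarded singular value.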

3.4. Examples of Discrete Approximations

In this subsection we study the parametrized power function $x\left(u,\omega \right)={u}^{\omega }$ defined on the interval $\left[{u}_{1},{u}_{n}\right],\text{\hspace{0.17em}}{u}_{1},{u}_{n}\in ℝ,{u}_{1}>0$ with the parameter values $\omega \in \left[{\omega }_{1},{\omega }_{m}\right].$ To approximate this function we construct a matrix $\stackrel{˜}{X}$ as follows: we divide $\left[{u}_{1},{u}_{n}\right]$ into $n-1$ parts: ${u}_{1}<{u}_{2}<\cdots <{u}_{n},$ and, similarly, divide the interval $\left[{\omega }_{1},{\omega }_{m}\right]$ into $m-1$ parts: ${\omega }_{1}<{\omega }_{2}<\cdots <{\omega }_{m}.$ Every entry of the matrix $\stackrel{˜}{X}$ is given by a value ${u}_{i}^{{\omega }_{j}}\text{}\left(1\le i\le n,\text{\hspace{0.17em}}1\le j\le m\right)$ :

$\stackrel{˜}{X}=\left[\begin{array}{cccc}{u}_{1}^{{\omega }_{1}}& {u}_{2}^{{\omega }_{1}}& ...& {u}_{n}^{{\omega }_{1}}\\ {u}_{1}^{{\omega }_{2}}& {u}_{2}^{{\omega }_{2}}& ...& {u}_{n}^{{\omega }_{2}}\\ ...& ...& ...& ...\\ {u}_{1}^{{\omega }_{m}}& {u}_{2}^{{\omega }_{m}}& ...& {u}_{n}^{{\omega }_{m}}\end{array}\right]$ (53)

The corresponding discretization of $\text{PCT}\text{ }\left(x,k\right)$ will be then given by the matrix

$\underset{i=1}{\overset{k}{\sum }}{\stackrel{˜}{t}}_{i}{\stackrel{˜}{p}}_{i}^{*},\text{\hspace{0.17em}}\text{\hspace{0.17em}}{\stackrel{˜}{t}}_{i}\in ℝ,\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}{\stackrel{˜}{p}}_{i}\in {ℝ}^{n}$ (54)

The vectors ${\stackrel{˜}{p}}_{i}$ and ${\stackrel{˜}{t}}_{i}$ can be obtained from the singular value decompo- sition of the matrix $\stackrel{˜}{X}$

$\stackrel{˜}{X}={U}_{m×m}{S}_{m×n}{P}_{n×n}^{*},$ (55)

where the rows of the scores matrix $T=US$ consist of the numbers ${\stackrel{˜}{t}}_{i}$ and the columns of the loadings matrix $P$ are the vectors ${\stackrel{˜}{p}}_{i}$ . As an example, let us consider the case $k=4$ , $\left[{u}_{1},{u}_{n}\right]=\left[0.5,1.5\right]$ , $\left[{\omega }_{1},{\omega }_{m}\right]=\left[-1,2\right]$ , $n=m=50$ . Then

$\stackrel{˜}{X}=\left[\begin{array}{cccc}{u}_{1}^{{\omega }_{1}}& {u}_{2}^{{\omega }_{1}}& ...& {u}_{50}^{{\omega }_{1}}\\ {u}_{1}^{{\omega }_{2}}& {u}_{2}^{{\omega }_{2}}& ...& {u}_{50}^{{\omega }_{2}}\\ ...& ...& ...& ...\\ {u}_{1}^{{\omega }_{50}}& {u}_{2}^{{\omega }_{50}}& ...& {u}_{50}^{{\omega }_{50}}\end{array}\right],\text{\hspace{0.17em}}\text{\hspace{0.17em}}T=\left[\begin{array}{cccc}{t}_{11}& {t}_{12}& {t}_{13}& {t}_{14}\\ {t}_{21}& {t}_{22}& {t}_{23}& {t}_{24}\\ ...& ...& ...& ...\\ {t}_{m1}& {t}_{m2}& {t}_{m3}& {t}_{m4}\end{array}\right],\text{\hspace{0.17em}}\text{\hspace{0.17em}}P=\left[\begin{array}{cccc}{p}_{11}& {p}_{12}& ...& {p}_{1n}\\ {p}_{21}& {p}_{22}& ...& {p}_{2n}\\ {p}_{31}& {p}_{32}& ...& {p}_{3n}\\ {p}_{41}& {p}_{42}& ...& {p}_{4n}\end{array}\right],$ (56)

so that the Expression (54) becomes

${t}_{1}{p}_{1}^{*}+{t}_{2}{p}_{2}^{*}+{t}_{3}{p}_{3}^{*}+{t}_{4}{p}_{4}^{*}$ (57)

Assume now that $\omega =0.5$ . This value corresponds to row $s$ in the matrix $T$ . We find the number $s$ as follows:

$s\approx m\frac{\omega -{\omega }_{1}}{{\omega }_{m}-{\omega }_{1}}=50\frac{0.5-\left(-1\right)}{2-\left(-1\right)}=25$ (58)

This yields

${t}_{1}={t}_{s1}=-7.0579,\text{\hspace{0.17em}}\text{\hspace{0.17em}}{t}_{2}={t}_{s2}=-0.0089,\text{\hspace{0.17em}}\text{\hspace{0.17em}}{t}_{3}={t}_{s3}=0.2400,\text{\hspace{0.17em}}\text{\hspace{0.17em}}{t}_{4}={t}_{s4}=0.0016$

and hence

${u}^{0.5}\approx -7.0579{p}_{1}^{*}\left(u\right)-0.0089{p}_{2}^{*}\left(u\right)+0.2400{p}_{3}^{*}\left(u\right)+0.0016{p}_{4}^{*}\left(u\right)$ (59)

where ${p}_{i}^{*}\left(u\right)\in {ℝ}^{50},\text{\hspace{0.17em}}i=1,2,3,4,$ are the transposes of the first four columns of the loadings matrix $P$ , see Figure 1.
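The computation above is easy to reproduce. In the sketch below the grids are the ones stated in the example; the signs and exact values of the individual scores ${t}_{i}$ and loadings ${p}_{i}$ depend on the SVD implementation, so we only check the reconstructed function, not the coefficients:

```python
import numpy as np

u = np.linspace(0.5, 1.5, 50)
w = np.linspace(-1.0, 2.0, 50)
X = u[None, :] ** w[:, None]          # the matrix (53), rows indexed by omega

U, s, Pt = np.linalg.svd(X, full_matrices=False)
T = U * s                              # scores matrix T = U S

j = np.argmin(np.abs(w - 0.5))         # row closest to omega = 0.5
approx = T[j, :4] @ Pt[:4, :]          # t_1 p_1^* + ... + t_4 p_4^*, cf. (59)
max_err = np.max(np.abs(approx - u ** 0.5))
```

The residual `max_err` combines the rank-4 truncation error with the offset of the nearest grid value of $\omega$ from 0.5; both are small on this grid.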

Figure 1 depicts the power function ${u}^{\omega }$ vs. its PCT with 4 components, $u\in \left[0.5,1.5\right],\text{\hspace{0.17em}}\omega \in \left[-1,2\right]$ (the error is estimated as $\frac{{\sigma }_{5}}{{\sigma }_{1}}=0.0001$ ), and the Hill function $\frac{{u}^{1/q}}{{u}^{1/q}+{\theta }^{1/q}}$ vs. its PCT with 12 components, $u\in \left[1,3.5\right],\text{\hspace{0.17em}}q\in \left[0.05,10\right],\text{\hspace{0.17em}}\theta \in \left[0.01,5\right]$ (the error is estimated as $\frac{{\sigma }_{13}}{{\sigma }_{1}}=0.0013$ ). Figure 2 depicts the cumulative normal distribution function $\frac{1}{2}\left(1+\text{erf}\left(\frac{u-\mu }{\theta \sqrt{2}}\right)\right)$ vs. its PCT with 27 components, $u\in \left[-2,2\right],\text{\hspace{0.17em}}\mu \in \left[0.01,0.99\right],\text{\hspace{0.17em}}\theta \in \left[0.1,0.7\right]$ (the error is estimated as $\frac{{\sigma }_{28}}{{\sigma }_{1}}=0.0019$ ), and the normal density function $x\left(u\right)=\frac{1}{\sqrt{2{\theta }^{2}\text{π}}}\text{\hspace{0.17em}}{\text{e}}^{-\frac{{\left(u-\mu \right)}^{2}}{2{\theta }^{2}}}$ vs. its PCT with 25 components, $u\in \left[-2.5,1.5\right],\text{\hspace{0.17em}}\mu \in \left[-1.5,0.5\right],\text{\hspace{0.17em}}\theta \in \left[0.1,1\right]$ (the error is estimated as $\frac{{\sigma }_{26}}{{\sigma }_{1}}=0.0029$ ).

(a) (b)

Figure 1. (a) The power function and its PCT; (b) The Hill function and its PCT.

(a) (b)

Figure 2. (a) The cumulative normal distribution function and its PCT; (b) The normal distribution function and its PCT.

4. PCT of Products of Functions

To calculate the PCT of products of parametrized functions we apply the theory of tensor products of Hilbert spaces and compact operators. Appendix 5.2 contains all the details needed in this section.

Below we use the following notation (where $\tau =1,2$ ):

${U}_{\tau }\subset {ℝ}^{N}$ , ${\Omega }_{\tau }\subset {ℝ}^{M}$ are compact sets;

$U={U}_{1}×{U}_{2}$ , $\Omega ={\Omega }_{1}×{\Omega }_{2}$ ;

${H}_{\tau }={L}^{2}\left({U}_{\tau }\right)$ , ${K}_{\tau }={L}^{2}\left({\Omega }_{\tau }\right)$ , $H={L}^{2}\left(U\right)$ , $K={L}^{2}\left(\Omega \right)$ ;

${x}^{\left(\tau \right)}\left({u}_{\tau },{\omega }_{\tau }\right)$ , ${u}_{\tau }\in {U}_{\tau }$ , ${\omega }_{\tau }\in {\Omega }_{\tau }$ are square integrable functions and $x\left(u,\omega \right)={x}^{\left(1\right)}\left({u}_{1},{\omega }_{1}\right)\text{ }{x}^{\left(2\right)}\left({u}_{2},{\omega }_{2}\right);$

$\left({X}^{\left(\tau \right)}{h}_{\tau }\right)\left({\omega }_{\tau }\right)=\underset{{U}_{\tau }}{\int }{x}^{\left(\tau \right)}\left({u}_{\tau },{\omega }_{\tau }\right){h}_{\tau }\left({u}_{\tau }\right)\text{d}{u}_{\tau }$ so that ${X}^{\left(\tau \right)}:\text{}\text{ }{H}_{\tau }\to {K}_{\tau }$ ;

$\left(Xh\right)\left(\omega \right)=\underset{U}{\int }x\left(u,\omega \right)h\left(u\right)\text{d}u$ so that $X:\text{ }\text{}H\to K$ .

4.1. Products of Parametrized Functions

Theorem 5. In the above notation:

$H={H}_{1}\otimes {H}_{2}$ , $K={K}_{1}\otimes {K}_{2}$

$X={X}^{\left(1\right)}\otimes {X}^{\left(2\right)}$

Proof. We use the definition of the tensor product from Appendix 5.2.

Let ${H}_{\tau }={L}^{2}\left({U}_{\tau }\right)$ have an orthonormal basis $\left\{{e}_{1}^{\left(\tau \right)},{e}_{2}^{\left(\tau \right)},\cdots ,{e}_{i}^{\left(\tau \right)},\cdots \right\},$ so that any ${h}_{\tau }\in {H}_{\tau }$ can be represented as

${h}_{\tau }=\underset{i=1}{\overset{\infty }{\sum }}{c}_{i}^{\left(\tau \right)}{e}_{i}^{\left(\tau \right)}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\left(\tau =1,2\right),$ (60)

where $\underset{i=1}{\overset{\infty }{\sum }}{|{c}_{i}^{\left(\tau \right)}|}^{2}<\infty .$

We prove now that the set $E\equiv \left\{{e}_{i}^{\left(1\right)}{e}_{j}^{\left(2\right)},\text{\hspace{0.17em}}i,j\in ℕ\right\}$ is an orthonormal basis in the space $H={L}^{2}\left(U\right)$ . Its orthonormality follows directly from its definition. It remains therefore to check that the set of all linear combinations of the elements of $E$ is dense in $H$ . Indeed, the set of continuous functions on $U$ is dense in $H$ , and so, by the Stone-Weierstrass theorem, is the set $P$ of polynomials $P\left(u\right)$ on $U$ . On the other hand, the set $\stackrel{^}{P}$ of polynomials of the form ${P}^{\left(1\right)}\left({u}_{1}\right){P}^{\left(2\right)}\left({u}_{2}\right)$ spans the set $P$ and, finally, the set $E$ spans the set $\stackrel{^}{P}$ . Thus, $E$ spans $H$ and we have proved that any $h\in H$ can be represented as the ${L}^{2}$ -convergent series

$h=\underset{i,j=1}{\overset{\infty }{\sum }}{c}_{ij}{e}_{i}^{\left(1\right)}{e}_{j}^{\left(2\right)}$ (61)

for some set $\left\{{c}_{ij}\right\}$ satisfying

$\underset{i,j=1}{\overset{\infty }{\sum }}{c}_{ij}^{2}<\infty$ (62)

Defining

$\left({e}_{i}^{\left(1\right)}\otimes {e}_{j}^{\left(2\right)}\right)\left(u\right)\equiv {e}_{i}^{\left(1\right)}\left({u}_{1}\right){e}_{j}^{\left(2\right)}\left({u}_{2}\right)$ (63)

and comparing the Representation (61) with the Formula (94) proves the equality $H={H}_{1}\otimes {H}_{2}$ . The equality $K={K}_{1}\otimes {K}_{2}$ can be checked similarly.

Let us now prove the last formula of the theorem. First of all, we remark that the Definition (63) implies

${g}_{1}\left({\omega }_{1}\right){g}_{2}\left({\omega }_{2}\right)=\left({g}_{1}\otimes {g}_{2}\right)\left(\omega \right)$ (64)

for any ${g}_{\tau }\in {K}_{\tau },\text{}\tau =1,2$ .

By the assumptions on the kernels, the operators in this equality are linear and bounded. Therefore, it is sufficient to check the equality for $h={h}_{1}\otimes {h}_{2}$ (see Appendix 5.2).

$\begin{array}{c}\left(Xh\right)\left(\omega \right)=\underset{U}{\int }x\left(u,\omega \right)h\left(u\right)\text{d}u\\ =\underset{{U}_{1}×{U}_{2}}{\int }{x}^{\left(1\right)}\left({u}_{1},{\omega }_{1}\right){x}^{\left(2\right)}\left({u}_{2},{\omega }_{2}\right){h}_{1}\left({u}_{1}\right){h}_{2}\left({u}_{2}\right)\text{d}{u}_{1}\text{d}{u}_{2}\\ =\underset{{U}_{1}}{\int }{x}^{\left(1\right)}\left({u}_{1},{\omega }_{1}\right){h}_{1}\left({u}_{1}\right)\text{d}{u}_{1}\underset{{U}_{2}}{\int }{x}^{\left(2\right)}\left({u}_{2},{\omega }_{2}\right){h}_{2}\left({u}_{2}\right)\text{d}{u}_{2}\\ =\left({X}^{\left(1\right)}{h}_{1}\right)\left({\omega }_{1}\right)\left({X}^{\left(2\right)}{h}_{2}\right)\left({\omega }_{2}\right)=\left(\left({X}^{\left(1\right)}{h}_{1}\right)\otimes \left({X}^{\left(2\right)}{h}_{2}\right)\right)\left(\omega \right)\end{array}$ (65)

due to (64). Hence $Xh=X\left({h}_{1}\otimes {h}_{2}\right)=\left({X}^{\left(1\right)}{h}_{1}\right)\otimes \left({X}^{\left(2\right)}{h}_{2}\right)$ . Comparing this formula with the Definition (100) completes the proof of the theorem.
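A discrete analogue of Theorem 5 can be checked directly: sampling a product kernel on a product grid yields the Kronecker product of the factor matrices. The toy grids and factor functions below are our own choices:

```python
import numpy as np

# Factor grids for (u_1, w_1) and (u_2, w_2)
u1 = np.linspace(0.5, 1.5, 4); w1 = np.linspace(-1.0, 1.0, 3)
u2 = np.linspace(1.0, 2.0, 5); w2 = np.linspace(0.1, 0.9, 2)

X1 = u1[None, :] ** w1[:, None]                     # x1(u1, w1) = u1**w1
X2 = np.exp(-w2[:, None] * np.abs(u2)[None, :])     # x2(u2, w2) = exp(-w2*|u2|)

# Sample x(u, w) = x1 * x2 with rows ordered by (w1, w2), columns by (u1, u2)
X = np.einsum('ij,kl->ikjl', X1, X2).reshape(3 * 2, 4 * 5)
# X coincides with the Kronecker product of the factor matrices
```

With this row/column ordering, `X` equals `np.kron(X1, X2)`, which is the matrix counterpart of $X={X}^{\left(1\right)}\otimes {X}^{\left(2\right)}$ .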

4.2. PCT Preserves Tensor Products

The main theoretical result of this subsection is the following theorem:

Theorem 6.

$\text{PCT}\text{ }\text{ }\left({X}^{\left(1\right)}\otimes {X}^{\left(2\right)}\right)\stackrel{˙}{=}\text{PCT}\text{ }\text{ }\left({X}^{\left(1\right)}\right)\otimes \text{ }\text{PCT}\text{ }\text{ }\left({X}^{\left(2\right)}\right)$ (66)

Proof. For $\tau =1,2$ we have by definition

$\text{PCT}\text{ }\left({X}^{\left(\tau \right)}\right)\alpha =\underset{i=1}{\overset{\infty }{\sum }}\left(\alpha ,{p}_{i}^{\left(\tau \right)}\right){t}_{i}^{\left(\tau \right)},$ (67)

where ${p}_{i}^{\left(\tau \right)}$ are normalized, mutually orthogonal eigenfunctions of the operator ${\left({X}^{\left(\tau \right)}\right)}^{*}{X}^{\left(\tau \right)}$ corresponding to the eigenvalues ${\left({\sigma }_{i}^{\left(\tau \right)}\right)}^{2}$ , and ${t}_{i}^{\left(\tau \right)}={X}^{\left(\tau \right)}{p}_{i}^{\left(\tau \right)}$ .

Put $X={X}^{\left(1\right)}\otimes {X}^{\left(2\right)}$ and ${p}_{ij}={p}_{i}^{\left(1\right)}\otimes {p}_{j}^{\left(2\right)}$ . Using the properties of the tensor product listed in Appendix 5.2 we obtain

$\begin{array}{c}\left({X}^{*}X\right){p}_{ij}={\left({X}^{\left(1\right)}\otimes {X}^{\left(2\right)}\right)}^{*}\left({X}^{\left(1\right)}\otimes {X}^{\left(2\right)}\right)\left({p}_{i}^{\left(1\right)}\otimes {p}_{j}^{\left(2\right)}\right)\\ =\left({\left({X}^{\left(1\right)}\right)}^{*}{X}^{\left(1\right)}\right)\otimes \left({\left({X}^{\left(2\right)}\right)}^{*}{X}^{\left(2\right)}\right)\left({p}_{i}^{\left(1\right)}\otimes {p}_{j}^{\left(2\right)}\right)\\ =\left(\left({\left({X}^{\left(1\right)}\right)}^{*}{X}^{\left(1\right)}\right){p}_{i}^{\left(1\right)}\right)\otimes \left(\left({\left({X}^{\left(2\right)}\right)}^{*}{X}^{\left(2\right)}\right){p}_{j}^{\left(2\right)}\right)\\ =\left({\left({\sigma }_{i}^{\left(1\right)}\right)}^{2}{p}_{i}^{\left(1\right)}\right)\otimes \left({\left({\sigma }_{j}^{\left(2\right)}\right)}^{2}{p}_{j}^{\left(2\right)}\right)={\left({\sigma }_{i}^{\left(1\right)}{\sigma }_{j}^{\left(2\right)}\right)}^{2}{p}_{ij},\end{array}$ (68)

where

$\begin{array}{c}\left({p}_{ij},{p}_{lm}\right)=\left(\left({p}_{i}^{\left(1\right)}\otimes {p}_{j}^{\left(2\right)}\right),\left({p}_{l}^{\left(1\right)}\otimes {p}_{m}^{\left(2\right)}\right)\right)=\left({p}_{i}^{\left(1\right)},{p}_{l}^{\left(1\right)}\right)\left({p}_{j}^{\left(2\right)},{p}_{m}^{\left(2\right)}\right)\\ =1\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{ }\text{if}\text{ }\text{\hspace{0.17em}}\text{\hspace{0.17em}}i=l,j=m\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{ }\text{and}\text{ }\text{}=0\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{ }\text{otherwise}\text{ }\end{array}$ (69)

This proves that ${p}_{ij}$ are normalized, mutually orthogonal eigenvectors of the operator ${X}^{*}X$ corresponding to the eigenvalues ${\left({\sigma }_{i}^{\left(1\right)}{\sigma }_{j}^{\left(2\right)}\right)}^{2}$ .

On the other hand,

$\begin{array}{l}X{p}_{ij}=\left({X}^{\left(1\right)}\otimes {X}^{\left(2\right)}\right)\left({p}_{i}^{\left(1\right)}\otimes {p}_{j}^{\left(2\right)}\right)\\ =\left({X}^{\left(1\right)}{p}_{i}^{\left(1\right)}\right)\otimes \left({X}^{\left(2\right)}{p}_{j}^{\left(2\right)}\right)={t}_{i}^{\left(1\right)}\otimes {t}_{j}^{\left(2\right)}\equiv {t}_{ij}\end{array}$ (70)

Therefore,

$\begin{array}{c}\left(\text{PCT}\text{ }\text{ }\left({X}^{\left(1\right)}\otimes {X}^{\left(2\right)}\right)\right)\left({\alpha }_{1}\otimes {\alpha }_{2}\right)\stackrel{˙}{=}\underset{i=1}{\overset{\infty }{\sum }}\underset{j=1}{\overset{\infty }{\sum }}\left({p}_{ij},{\alpha }_{1}\otimes {\alpha }_{2}\right){t}_{ij}\\ =\left(\underset{i=1}{\overset{\infty }{\sum }}\left({p}_{i}^{\left(1\right)},{\alpha }_{1}\right){t}_{i}^{\left(1\right)}\right)\otimes \left(\underset{j=1}{\overset{\infty }{\sum }}\left({p}_{j}^{\left(2\right)},{\alpha }_{2}\right){t}_{j}^{\left(2\right)}\right)\\ \stackrel{˙}{=}\text{ }\text{PCT}\text{ }\text{ }\left({X}^{\left(1\right)}\right)\otimes \text{ }\text{PCT}\text{ }\text{ }\left({X}^{\left(2\right)}\right),\end{array}$ (71)

which proves the theorem. $\square$

Remark 3. Theorem 6 is only valid for the full PCT. For the truncated versions the identity does not hold in general, as the decreasing order of the singular values ${\sigma }_{ij}={\sigma }_{i}^{\left(1\right)}{\sigma }_{j}^{\left(2\right)}$ depends on the magnitudes of the factors ${\sigma }_{i}^{\left(1\right)}$ and ${\sigma }_{j}^{\left(2\right)}$ .
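A finite-dimensional illustration of Theorem 6 and Remark 3 (toy matrices of our own choosing): the singular values of a Kronecker product are exactly the pairwise products of the factors' singular values, but sorting them in decreasing order interleaves the two families, which is why only the full PCT factorizes:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))        # plays the role of X^(1)
B = rng.standard_normal((5, 2))        # plays the role of X^(2)

# Spectrum of the tensor (Kronecker) product ...
s_kron = np.linalg.svd(np.kron(A, B), compute_uv=False)
# ... versus all pairwise products sigma_i^(1) * sigma_j^(2), sorted decreasingly
s_prod = np.sort(np.outer(np.linalg.svd(A, compute_uv=False),
                          np.linalg.svd(B, compute_uv=False)).ravel())[::-1]
```

The two spectra agree; a $k$ -truncation of `np.kron(A, B)`, however, need not be a Kronecker product of truncations of `A` and `B`, since the leading products may mix indices from both factors.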

4.3. Examples of Products of Parametrized Functions

In this subsection we describe the kernels of the integral operators related to the products of parametrized functions introduced above. These examples are of importance in systems biology.

Example 1. Consider the following function

$x\left({u}_{1},{u}_{2},{\omega }_{1},{\omega }_{2}\right)={u}_{1}^{{\omega }_{1}}{u}_{2}^{{\omega }_{2}},\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}{u}_{1},{u}_{2}\in U,\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}{\omega }_{1},{\omega }_{2}\in \Omega$ (72)

Assume that $U=\left[a,b\right],\text{}a,b\in ℝ,\text{}a>0,\text{}\Omega =\left[0,1\right].$ Then, using Formulas (14) and (15), we obtain the following representations of the kernels $\gamma$ and $\delta$

$\begin{array}{c}\gamma \left({u}_{1},{u}_{2},{v}_{1},{v}_{2}\right)=\underset{\Omega }{\iint }{u}_{1}^{{\omega }_{1}}{u}_{2}^{{\omega }_{2}}{v}_{1}^{{\omega }_{1}}{v}_{2}^{{\omega }_{2}}\text{d}{\omega }_{1}\text{d}{\omega }_{2}\\ =\underset{0}{\overset{1}{\int }}\underset{0}{\overset{1}{\int }}{\left({u}_{1}{v}_{1}\right)}^{{\omega }_{1}}{\left({u}_{2}{v}_{2}\right)}^{{\omega }_{2}}\text{d}{\omega }_{1}\text{d}{\omega }_{2}\\ =\underset{0}{\overset{1}{\int }}{\left({u}_{2}{v}_{2}\right)}^{{\omega }_{2}}\text{d}{\omega }_{2}\underset{0}{\overset{1}{\int }}{\left({u}_{1}{v}_{1}\right)}^{{\omega }_{1}}\text{d}{\omega }_{1}\\ =\frac{{u}_{1}{v}_{1}-1}{\mathrm{ln}\left({u}_{1}{v}_{1}\right)}\cdot \frac{{u}_{2}{v}_{2}-1}{\mathrm{ln}\left({u}_{2}{v}_{2}\right)},\end{array}$

$\begin{array}{c}\delta \left({\omega }_{1},{\omega }_{2},{\xi }_{1},{\xi }_{2}\right)=\underset{U}{\iint }{u}_{1}^{{\omega }_{1}}{u}_{2}^{{\omega }_{2}}{u}_{1}^{{\xi }_{1}}{u}_{2}^{{\xi }_{2}}\text{d}{u}_{1}\text{d}{u}_{2}\\ =\underset{U}{\iint }{u}_{1}^{{\omega }_{1}+{\xi }_{1}}{u}_{2}^{{\omega }_{2}+{\xi }_{2}}\text{d}{u}_{1}\text{d}{u}_{2}\\ =\underset{a}{\overset{b}{\int }}{u}_{1}^{{\omega }_{1}+{\xi }_{1}}\text{d}{u}_{1}\underset{a}{\overset{b}{\int }}{u}_{2}^{{\omega }_{2}+{\xi }_{2}}\text{d}{u}_{2}\\ =\frac{{b}^{{\omega }_{1}+{\xi }_{1}+1}-{a}^{{\omega }_{1}+{\xi }_{1}+1}}{{\omega }_{1}+{\xi }_{1}+1}\cdot \frac{{b}^{{\omega }_{2}+{\xi }_{2}+1}-{a}^{{\omega }_{2}+{\xi }_{2}+1}}{{\omega }_{2}+{\xi }_{2}+1}\end{array}$
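The closed form of $\gamma$ in Example 1 is easy to sanity-check against a direct quadrature. The sample points and the grid size below are our own choices; the check assumes ${u}_{\tau }{v}_{\tau }\ne 1$ so that the logarithm in the denominator is nonzero:

```python
import numpy as np

def gamma_analytic(u1, u2, v1, v2):
    """Closed form: (u1*v1 - 1)/ln(u1*v1) * (u2*v2 - 1)/ln(u2*v2)."""
    f = lambda a: (a - 1.0) / np.log(a)
    return f(u1 * v1) * f(u2 * v2)

def gamma_numeric(u1, u2, v1, v2, n=2001):
    """Trapezoidal quadrature of the double integral over Omega = [0,1]^2,
    split into the product of two one-dimensional integrals."""
    w = np.linspace(0.0, 1.0, n)
    def integral(base):                 # int_0^1 base**w dw
        y = base ** w
        return float(np.sum((y[:-1] + y[1:]) * np.diff(w)) / 2.0)
    return integral(u1 * v1) * integral(u2 * v2)
```

On a 2001-point grid the trapezoidal rule matches the analytic kernel to well below the precision relevant for the SVD computations above.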

Example 2. Consider the function

$x\left({u}_{1},{u}_{2},{\omega }_{1},{\omega }_{2}\right)={e}^{-{\omega }_{1}|{u}_{1}|}\cdot {e}^{-{\omega }_{2}|{u}_{2}|},\text{}{u}_{1},{u}_{2}\in U,\text{}{\omega }_{1},{\omega }_{2}\in \Omega$ (73)

Assume that $U=\left[-c,c\right],\text{}c\in ℝ,\text{}c>0,\text{}\Omega =\left[a,b\right],\text{}a,b\in ℝ,\text{}a>0.$ Then, using Formulas (14) and (15), we obtain the following representations of the kernels $\gamma$ and $\delta$

$\begin{array}{c}\gamma \left({u}_{1},{u}_{2},{v}_{1},{v}_{2}\right)=\underset{\Omega }{\iint }{e}^{-{\omega }_{1}|{u}_{1}|}{e}^{-{\omega }_{2}|{u}_{2}|}{e}^{-{\omega }_{1}|{v}_{1}|}{e}^{-{\omega }_{2}|{v}_{2}|}\text{d}{\omega }_{1}\text{d}{\omega }_{2}\\ =\underset{\Omega }{\iint }{e}^{-{\omega }_{1}\left(|{u}_{1}|+|{v}_{1}|\right)}{e}^{-{\omega }_{2}\left(|{u}_{2}|+|{v}_{2}|\right)}\text{d}{\omega }_{1}\text{d}{\omega }_{2}\\ =\underset{a}{\overset{b}{\int }}{e}^{-{\omega }_{1}\left(|{u}_{1}|+|{v}_{1}|\right)}\text{d}{\omega }_{1}\underset{a}{\overset{b}{\int }}{e}^{-{\omega }_{2}\left(|{u}_{2}|+|{v}_{2}|\right)}\text{d}{\omega }_{2}\\ =\frac{{e}^{-a\left(|{u}_{1}|+|{v}_{1}|\right)}-{e}^{-b\left(|{u}_{1}|+|{v}_{1}|\right)}}{|{u}_{1}|+|{v}_{1}|}\cdot \frac{{e}^{-a\left(|{u}_{2}|+|{v}_{2}|\right)}-{e}^{-b\left(|{u}_{2}|+|{v}_{2}|\right)}}{|{u}_{2}|+|{v}_{2}|},\end{array}$

$\begin{array}{c}\delta \left({\omega }_{1},{\omega }_{2},{\xi }_{1},{\xi }_{2}\right)=\underset{U}{\iint }{e}^{-{\omega }_{1}|{u}_{1}|}{e}^{-{\omega }_{2}|{u}_{2}|}{e}^{-{\xi }_{1}|{u}_{1}|}{e}^{-{\xi }_{2}|{u}_{2}|}\text{d}{u}_{1}\text{d}{u}_{2}\\ =\underset{U}{\iint }{e}^{-|{u}_{1}|\left({\omega }_{1}+{\xi }_{1}\right)}{e}^{-|{u}_{2}|\left({\omega }_{2}+{\xi }_{2}\right)}\text{d}{u}_{1}\text{d}{u}_{2}\\ =\underset{-c}{\overset{c}{\int }}{e}^{-|{u}_{1}|\left({\omega }_{1}+{\xi }_{1}\right)}\text{d}{u}_{1}\underset{-c}{\overset{c}{\int }}{e}^{-|{u}_{2}|\left({\omega }_{2}+{\xi }_{2}\right)}\text{d}{u}_{2}\\ =\frac{2\left(1-{e}^{-c\left({\omega }_{1}+{\xi }_{1}\right)}\right)}{{\omega }_{1}+{\xi }_{1}}\cdot \frac{2\left(1-{e}^{-c\left({\omega }_{2}+{\xi }_{2}\right)}\right)}{{\omega }_{2}+{\xi }_{2}}\end{array}$
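Each one-dimensional factor of $\delta$ in Example 2 has the form $\underset{-c}{\overset{c}{\int }}{e}^{-|u|s}\text{d}u=\frac{2\left(1-{e}^{-cs}\right)}{s}$ with $s=\omega +\xi >0$ ; a quick quadrature check (the values of $c$ and $s$ are our own) confirms this:

```python
import numpy as np

def factor_analytic(c, s):
    """int_{-c}^{c} exp(-|u| s) du = 2 (1 - exp(-c s)) / s for s > 0."""
    return 2.0 * (1.0 - np.exp(-c * s)) / s

def factor_numeric(c, s, n=200001):
    """Trapezoidal quadrature on a fine grid; n is odd so the kink at u = 0
    falls exactly on a grid point."""
    u = np.linspace(-c, c, n)
    y = np.exp(-np.abs(u) * s)
    return float(np.sum((y[:-1] + y[1:]) * np.diff(u)) / 2.0)
```

The full kernel $\delta$ is then the product of two such factors, one for $\left({\omega }_{1},{\xi }_{1}\right)$ and one for $\left({\omega }_{2},{\xi }_{2}\right)$ .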

Example 3. For the Hill function we obtain

$x\left({u}_{1},{u}_{2},{\omega }_{1},{\omega }_{2}\right)=\frac{{u}_{1}^{{q}_{1}}}{{u}_{1}^{{q}_{1}}+{\theta }_{1}^{{q}_{1}}}\frac{{u}_{2}^{{q}_{2}}}{{u}_{2}^{{q}_{2}}+{\theta }_{2}^{{q}_{2}}}$ (74)

Assume that

${u}_{i}\in U,\text{\hspace{0.17em}}U=\left[a,b\right],\text{\hspace{0.17em}}a,b\in ℝ,\text{\hspace{0.17em}}a>0,$

$\text{\hspace{0.17em}}{\omega }_{i}=\left({q}_{i},{\theta }_{i}\right),\text{\hspace{0.17em}}\text{}{q}_{i}\in \left[{q}_{0},{q}_{m}\right],\text{\hspace{0.17em}}{q}_{0},{q}_{m}\in ℝ,\text{\hspace{0.17em}}\text{}{q}_{0}>0,\text{\hspace{0.17em}}$

${\theta }_{i}\in \left[{\theta }_{0},{\theta }_{m}\right],\text{\hspace{0.17em}}{\theta }_{0},{\theta }_{m}\in ℝ,\text{\hspace{0.17em}}\text{}{\theta }_{0}>0,\text{}\text{\hspace{0.17em}}i=1,2.$

Put $\Omega =\left[{q}_{0},{q}_{m}\right]×\left[{\theta }_{0},{\theta }_{m}\right]$ and ${\xi }_{i}=\left({{q}^{\prime }}_{i},{{\theta }^{\prime }}_{i}\right),\text{\hspace{0.17em}}i=1,2.$ Then, using Formulas (14) and (15), we obtain the following representations of the kernels $\gamma$ and $\delta$

$\gamma \left({u}_{1},{u}_{2},{v}_{1},{v}_{2}\right)=\underset{\Omega }{\int }\underset{\Omega }{\int }\frac{{u}_{1}^{{q}_{1}}}{{u}_{1}^{{q}_{1}}+{\theta }_{1}^{{q}_{1}}}\frac{{u}_{2}^{{q}_{2}}}{{u}_{2}^{{q}_{2}}+{\theta }_{2}^{{q}_{2}}}\frac{{v}_{1}^{{q}_{1}}}{{v}_{1}^{{q}_{1}}+{\theta }_{1}^{{q}_{1}}}\frac{{v}_{2}^{{q}_{2}}}{{v}_{2}^{{q}_{2}}+{\theta }_{2}^{{q}_{2}}}\text{d}{\omega }_{1}\text{d}{\omega }_{2},$ (75)

$\delta \left({\omega }_{1},{\omega }_{2},{\xi }_{1},{\xi }_{2}\right)=\underset{U}{\iint }\frac{{u}_{1}^{{q}_{1}}}{{u}_{1}^{{q}_{1}}+{\theta }_{1}^{{q}_{1}}}\frac{{u}_{2}^{{q}_{2}}}{{u}_{2}^{{q}_{2}}+{\theta }_{2}^{{q}_{2}}}\frac{{u}_{1}^{{{q}^{\prime }}_{1}}}{{u}_{1}^{{{q}^{\prime }}_{1}}+{{\theta }^{\prime }}_{1}^{{{q}^{\prime }}_{1}}}\frac{{u}_{2}^{{{q}^{\prime }}_{2}}}{{u}_{2}^{{{q}^{\prime }}_{2}}+{{\theta }^{\prime }}_{2}^{{{q}^{\prime }}_{2}}}\text{d}{u}_{1}\text{d}{u}_{2}$ (76)

Remark 4. The eigenfunctions of the integral operators with kernels that are products of parametrized functions are, according to Appendix 5.2, also products of the respective eigenfunctions of the factors.

5. Conclusions

The main results of the paper can be summarized as follows. We defined a distance in the space of parametrized functions. We defined the $k$ -th Principal Component Transform (PCT) and the Full Principal Component Transform of functions $x\in {L}^{2}\left(U×\Omega \right)$ . The $k$ -th PCT is the best approximation of the given function, i.e. it minimizes $\text{dist}\left(\cdot ,\cdot \right)$ . We proved that if a sequence of functions ${x}^{\left(n\right)}\left(s\right)$ converges to the continuous function $x\left(s\right)$ , then the sequence of the PCTs of ${x}^{\left(n\right)}\left(s\right)$ converges to the PCT of $x\left(s\right)$ . Some further properties of PCT were considered. These results can also serve as a theoretical background for the design of metamodels. Using the theory of tensor products of Hilbert spaces and compact operators, we calculated the PCT of products of functions. Finally, we provided several examples of discrete approximations and of products of parametrized functions.

We emphasize that our study is motivated by systems biology. In future work we aim to investigate the problem of “sloppiness” in nonlinear models  and to create an effective parameter estimation method for the “S-systems” (  , Chapter 2, p. 51).

Acknowledgements

The work of the second author has been partially supported by the Norwegian Research Council, grant 239070.

Appendix

1. Allahverdiev’s theorem

Let $H$ and $K$ be two real separable Hilbert spaces, equipped with the scalar products ${\left(\cdot ,\cdot \right)}_{H}$ and ${\left(\cdot ,\cdot \right)}_{K}$ and the corresponding norms ${‖\cdot ‖}_{H}$ and ${‖\cdot ‖}_{K}$ , respectively. Assume that $X:\text{ }H\to K$ is a linear compact operator. Its norm is

defined as $‖X‖=\underset{{‖\alpha ‖}_{H}\le 1}{\mathrm{sup}}{‖X\alpha ‖}_{K}$ .

Put

${\mathcal{L}}_{k}\left(H,K\right)=\left\{Y:\text{\hspace{0.17em}}Y\text{\hspace{0.17em}}\text{is a linear bounded operator from}\text{\hspace{0.17em}}H\text{\hspace{0.17em}}\text{to}\text{\hspace{0.17em}}K\text{\hspace{0.17em}}\text{such that}\text{\hspace{0.17em}}\text{dim}\left(\text{Im}Y\right)\le k\right\}.$

We want to find an operator ${X}_{k}\in {\mathcal{L}}_{k}\left(H,K\right)$ for which $‖X-{X}_{k}‖$ is minimal. This construction is closely related to the finite-dimensional singular value decomposition.

Assume that ${X}^{*}:\text{ }K\to H$ is the adjoint of $X$ . Then the linear compact operators ${X}^{*}X:H\to H,$ $X{X}^{*}:K\to K$ are self-adjoint and positive semi-definite. Let ${\sigma }_{1}^{2}\ge {\sigma }_{2}^{2}\ge {\sigma }_{3}^{2}\ge \cdots \to 0$ , ${\sigma }_{i}>0$ be all positive eigenvalues of the operator ${X}^{*}X$ , the associated normalized eigenvectors being ${p}_{1},{p}_{2},{p}_{3},\cdots \in H$ , respectively:

${X}^{*}X{p}_{i}={\sigma }_{i}^{2}{p}_{i},\text{ }‖{p}_{i}‖=1,\text{ }i\in ℕ.$ (77)

It is well-known that ${p}_{i}$ can always be chosen to be orthogonal: ${p}_{i}\perp {p}_{j},i\ne j.$ By the Hilbert-Schmidt theorem, for any $\alpha \in H$ there is a

unique set ${c}_{i}\in ℝ$ , $i\in ℕ$ and a unique ${p}_{0}\in \text{Null}\left({X}^{*}X\right)$ for which $\alpha ={p}_{0}+\underset{i=1}{\overset{\infty }{\sum }}{c}_{i}{p}_{i}$ and, moreover, ${‖\alpha ‖}_{H}^{2}={‖{p}_{0}‖}_{H}^{2}+\underset{i=1}{\overset{\infty }{\sum }}{c}_{i}^{2}.$ Thus, the operator $X$ can be represented as

$X\alpha =\underset{i=1}{\overset{\infty }{\sum }}{\left(\alpha ,{p}_{i}\right)}_{H}{t}_{i}=\underset{i=1}{\overset{\infty }{\sum }}{c}_{i}{t}_{i},$ (78)

where ${t}_{i}=X{p}_{i}$ , and the convergence is understood in the sense of the norm in the space $K$ . We define the linear bounded operators ${X}_{k}\in {\mathcal{L}}_{k}\left(H,K\right)$ by

${X}_{k}\alpha =\underset{i=1}{\overset{k}{\sum }}{\left(\alpha ,{p}_{i}\right)}_{H}{t}_{i}=\underset{i=1}{\overset{k}{\sum }}{c}_{i}{t}_{i}$ (79)

The following result is known as Allahverdiev’s theorem, see e.g. :

Proposition 7. For any linear compact operator $X:\text{ }H\to K$

$\underset{Y\in {\mathcal{L}}_{k}\left(H,K\right)}{\mathrm{min}}‖X-Y‖=‖X-{X}_{k}‖={\sigma }_{k+1}$ (80)

Proof. First of all, we prove that $‖X-{X}_{k}‖={\sigma }_{k+1}$ . By definition,

${‖X-{X}_{k}‖}^{2}=\underset{{‖\alpha ‖}_{H}\le 1}{\mathrm{sup}}{‖\left(X-{X}_{k}\right)\alpha ‖}_{K}^{2}$ (81)

From (79) and (78) we get

$\left(X-{X}_{k}\right)\alpha =X\alpha -{X}_{k}\alpha =\underset{i=1}{\overset{\infty }{\sum }}{c}_{i}{t}_{i}-\underset{i=1}{\overset{k}{\sum }}{c}_{i}{t}_{i}=\underset{i=k+1}{\overset{\infty }{\sum }}{c}_{i}{t}_{i}$ (82)

We calculate the norm of $X-{X}_{k}$ using (81), (82):

${‖X-{X}_{k}‖}^{2}=\underset{{‖\alpha ‖}_{H}\le 1}{\mathrm{sup}}{‖\underset{i=k+1}{\overset{\infty }{\sum }}{c}_{i}{t}_{i}‖}_{K}^{2}=\underset{{‖\alpha ‖}_{H}\le 1}{\mathrm{sup}}\underset{i=k+1}{\overset{\infty }{\sum }}{c}_{i}^{2}{‖{t}_{i}‖}_{K}^{2}=\underset{{‖\alpha ‖}_{H}\le 1}{\mathrm{sup}}\underset{i=k+1}{\overset{\infty }{\sum }}{c}_{i}^{2}{\sigma }_{i}^{2},$ (83)

because

$\begin{array}{c}{‖{t}_{i}‖}_{K}^{2}={\left({t}_{i},{t}_{i}\right)}_{K}={\left(X{p}_{i},X{p}_{i}\right)}_{K}={\left({X}^{*}X{p}_{i},{p}_{i}\right)}_{H}\\ ={\left({\sigma }_{i}^{2}{p}_{i},{p}_{i}\right)}_{H}={\sigma }_{i}^{2}{\left({p}_{i},{p}_{i}\right)}_{H}={\sigma }_{i}^{2}{‖{p}_{i}‖}_{H}^{2}={\sigma }_{i}^{2}\end{array}$ (84)

and

${\left({t}_{i},{t}_{j}\right)}_{K}={\left(X{p}_{i},X{p}_{j}\right)}_{K}={\left({X}^{*}X{p}_{i},{p}_{j}\right)}_{H}={\left({\sigma }_{i}^{2}{p}_{i},{p}_{j}\right)}_{H}=0\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{ }\text{if}\text{ }\text{\hspace{0.17em}}\text{\hspace{0.17em}}i\ne j$ (85)

As $\alpha ={p}_{0}+\underset{i=1}{\overset{\infty }{\sum }}{c}_{i}{p}_{i}$ , ${p}_{0}\perp {p}_{i}$ ( $i\in ℕ$ ) and ${‖\alpha ‖}_{H}^{2}={‖{p}_{0}‖}^{2}+\underset{i=1}{\overset{\infty }{\sum }}{c}_{i}^{2}\le 1$ , we obtain $\underset{i=1}{\overset{\infty }{\sum }}{c}_{i}^{2}\le 1$ . As ${\sigma }_{k+1}\ge {\sigma }_{i}$ for all $i\ge k+1$ ,

$\underset{i=k+1}{\overset{\infty }{\sum }}{c}_{i}^{2}{\sigma }_{i}^{2}\le {\sigma }_{k+1}^{2},$ (86)

with equality if ${c}_{k+1}=1,{c}_{k+2}={c}_{k+3}=\cdots =0$ and ${p}_{0}=0$ .

Hence,

$‖X-{X}_{k}‖={\sigma }_{k+1}$ (87)

Secondly, we prove that

$‖X-Y‖\ge ‖X-{X}_{k}‖\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{ }\text{for}\text{\hspace{0.17em}}\text{all}\text{ }\text{\hspace{0.17em}}\text{\hspace{0.17em}}Y\in {\mathcal{L}}_{k}\left(H,K\right)$ (88)

Let ${y}_{1},\cdots ,{y}_{k}$ be a basis in $\text{Im}\text{ }Y$ . Then there exist ${z}_{1},\cdots ,{z}_{k}\in H$ such that

$Y\alpha =\underset{i=1}{\overset{k}{\sum }}{\left(\alpha ,{z}_{i}\right)}_{H}{y}_{i}$ (89)

We want to prove that

$\text{span}{\left\{{z}_{1},\cdots ,{z}_{k}\right\}}^{\perp }\cap \text{span}\left\{{p}_{1},\cdots ,{p}_{k+1}\right\}\ne \left\{0\right\}$ (90)

If $\alpha \in \text{span}{\left\{{z}_{1},\cdots ,{z}_{k}\right\}}^{\perp },$ then $Y\alpha =0.$

If $\alpha \in \text{span}\left\{{p}_{1},\cdots ,{p}_{k+1}\right\},$ then $\alpha ={\alpha }_{1}{p}_{1}+\cdots +{\alpha }_{k+1}{p}_{k+1},{\alpha }_{i}\in ℝ,1\le i\le k+1.$

Therefore

$\operatorname{span}\{z_1,\cdots,z_k\}^{\perp}\cap\operatorname{span}\{p_1,\cdots,p_{k+1}\}\ne\{0\} \;\Leftrightarrow\; \text{the system}$

$\alpha_1(p_1,z_i)+\cdots+\alpha_{k+1}(p_{k+1},z_i)=0, \quad 1\le i\le k, \quad \text{has non-trivial solutions.}$ (91)

This homogeneous system has $k+1$ unknowns and $k$ equations, so that there is $\alpha =\underset{i=1}{\overset{k+1}{\sum }}{c}_{i}{p}_{i}$ such that $\underset{i=1}{\overset{k+1}{\sum }}{c}_{i}^{2}=1$ and $Y\alpha =0$ . Therefore

${‖X-Y‖}^{2}\ge {‖\left(X-Y\right)\alpha ‖}_{K}^{2}={‖\underset{i=1}{\overset{k+1}{\sum }}{c}_{i}{t}_{i}‖}_{K}^{2}=\underset{i=1}{\overset{k+1}{\sum }}{c}_{i}^{2}{‖{t}_{i}‖}_{K}^{2}\ge {\sigma }_{k+1}^{2},$ (92)

as $\|t_i\|_K=\sigma_i\ge\sigma_{k+1}=\|t_{k+1}\|_K$ for $1\le i\le k+1$ and $\sum_{i=1}^{k+1}c_i^2=1.$

$\square$
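The theorem just proved is the operator analogue of the classical low-rank approximation result for matrices: truncating the singular value decomposition after $k$ terms gives the best rank-$k$ approximation in the operator norm, with error $\sigma_{k+1}$. The following NumPy sketch checks this numerically in finite dimensions, where a generic matrix stands in for the compact operator $X$; the random rank-$k$ competitors illustrate (but of course do not prove) the optimality claim (88).

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 8, 6, 2

# A generic matrix plays the role of the compact operator X.
X = rng.standard_normal((m, n))
U, s, Vt = np.linalg.svd(X, full_matrices=False)

# The rank-k truncation X_k keeps the k leading singular triplets.
Xk = (U[:, :k] * s[:k]) @ Vt[:k, :]

# ||X - X_k|| (spectral norm) equals the (k+1)-th singular value, as in (87).
err = np.linalg.norm(X - Xk, ord=2)
assert np.isclose(err, s[k])

# No rank-k competitor does better, as in (88): test a few random ones.
for _ in range(100):
    Y = rng.standard_normal((m, k)) @ rng.standard_normal((k, n))
    assert np.linalg.norm(X - Y, ord=2) >= err - 1e-9
```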

2. Tensor product of operators in Hilbert spaces

Let ${H}_{1},{H}_{2}$ and ${K}_{1},{K}_{2}$ be real separable Hilbert spaces, where

${H}_{1}$ has an orthonormal basis $\left\{{e}_{1}^{\left(1\right)},{e}_{2}^{\left(1\right)},\cdots ,{e}_{i}^{\left(1\right)},\cdots \right\}.$

${H}_{2}$ has an orthonormal basis $\left\{{e}_{1}^{\left(2\right)},{e}_{2}^{\left(2\right)},\cdots ,{e}_{j}^{\left(2\right)},\cdots \right\}.$

${K}_{1}$ has an orthonormal basis $\left\{{\stackrel{^}{e}}_{1}^{\left(1\right)},{\stackrel{^}{e}}_{2}^{\left(1\right)},\cdots ,{\stackrel{^}{e}}_{i}^{\left(1\right)},\cdots \right\}.$

${K}_{2}$ has an orthonormal basis $\left\{{\stackrel{^}{e}}_{1}^{\left(2\right)},{\stackrel{^}{e}}_{2}^{\left(2\right)},\cdots ,{\stackrel{^}{e}}_{j}^{\left(2\right)},\cdots \right\}.$

Let

${h}_{\tau }=\underset{i=1}{\overset{\infty }{\sum }}{c}_{i}^{\left(\tau \right)}{e}_{i}^{\left(\tau \right)},\text{ }{c}_{i}^{\left(\tau \right)}\in ℝ,\text{\hspace{0.17em}}\text{\hspace{0.17em}}\tau =1,2$ (93)

Now, we define the tensor product $H={H}_{1}\otimes {H}_{2}$ of the spaces ${H}_{1}$ and ${H}_{2}$ as the real separable Hilbert space, which has the basis ${e}_{ij}$ consisting of all ordered pairs $\left({e}_{i}^{\left(1\right)},{e}_{j}^{\left(2\right)}\right)$ , and we put ${e}_{ij}\equiv {e}_{i}^{\left(1\right)}\otimes {e}_{j}^{\left(2\right)}.$ By definition, any $h\in H$ can be uniquely represented as

$h=\underset{i,j=1}{\overset{\infty }{\sum }}{c}_{ij}{e}_{ij},\text{ }\underset{i,j=1}{\overset{\infty }{\sum }}{c}_{ij}^{2}<\infty$ (94)

Definition 2. The scalar product $\left(\cdot ,\cdot \right)$ in $H$ is defined as

$\left(g,h\right)=\underset{i,j=1}{\overset{\infty }{\sum }}{c}_{ij}{d}_{ij},$ (95)

where $g=\underset{i,j=1}{\overset{\infty }{\sum }}{c}_{ij}{e}_{ij}\in H,\text{\hspace{0.17em}}h=\underset{i,j=1}{\overset{\infty }{\sum }}{d}_{ij}{e}_{ij}\in H$ .

Evidently, the set $\{e_i^{(1)}\otimes e_j^{(2)}\}$ is an orthonormal basis of the space $H_1\otimes H_2$ and therefore

${‖h‖}^{2}=\underset{i,j=1}{\overset{\infty }{\sum }}{|\left(h,{e}_{ij}\right)|}^{2}=\underset{i,j=1}{\overset{\infty }{\sum }}{|{c}_{ij}|}^{2}$ (96)

is the norm on $H$ . The series

$\underset{i,j=1}{\overset{\infty }{\sum }}{c}_{ij}{e}_{ij},\text{ }{c}_{ij}\in ℝ$

converges in this norm. It is also straightforward to check that

$‖{h}_{1}\otimes {h}_{2}‖={‖{h}_{1}‖}_{{H}_{1}}{‖{h}_{2}‖}_{{H}_{2}}$ (97)

for all ${h}_{1}\in {H}_{1}$ , ${h}_{2}\in {H}_{2}$ .
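In finite dimensions the coefficients of an elementary tensor $h_1\otimes h_2$ in the basis $e_i^{(1)}\otimes e_j^{(2)}$ are given by the Kronecker product of the coefficient vectors, so property (97) can be checked directly with NumPy; this is a numerical sketch, not part of the proof.

```python
import numpy as np

rng = np.random.default_rng(1)
# Finite-dimensional stand-ins for h1 in H1 and h2 in H2.
h1 = rng.standard_normal(5)
h2 = rng.standard_normal(7)

# np.kron(h1, h2) lists the coefficients of h1 ⊗ h2 in the basis e_i ⊗ e_j.
h = np.kron(h1, h2)

# ||h1 ⊗ h2|| = ||h1|| ||h2||, as in Equation (97).
assert np.isclose(np.linalg.norm(h),
                  np.linalg.norm(h1) * np.linalg.norm(h2))
```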

Let us consider two compact linear operators

${X}^{\left(1\right)}:{H}_{1}\to {K}_{1},\text{ }{X}^{\left(2\right)}:{H}_{2}\to {K}_{2}$ (98)

For all $h_1\in H_1$ and $h_2\in H_2$ written as in (93), we have

${X}^{\left(1\right)}{h}_{1}=\underset{i=1}{\overset{\infty }{\sum }}{c}_{i}^{\left(1\right)}{X}^{\left(1\right)}{e}_{i}^{\left(1\right)},\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}{X}^{\left(2\right)}{h}_{2}=\underset{i=1}{\overset{\infty }{\sum }}{c}_{i}^{\left(2\right)}{X}^{\left(2\right)}{e}_{i}^{\left(2\right)}$ (99)

We define the tensor product ${X}^{\left(1\right)}\otimes {X}^{\left(2\right)}:{H}_{1}\otimes {H}_{2}\to {K}_{1}\otimes {K}_{2}$ of ${X}^{\left(1\right)}$ and ${X}^{\left(2\right)}$ as

$Xh=\underset{i,j=1}{\overset{\infty }{\sum }}{c}_{ij}\left({X}^{\left(1\right)}{e}_{i}^{\left(1\right)}\otimes {X}^{\left(2\right)}{e}_{j}^{\left(2\right)}\right),$ (100)

where $h\in H$ is given by (94).

Proposition 8. If $X^{(1)}:H_1\to K_1$ and $X^{(2)}:H_2\to K_2$ are linear compact operators, then so is the operator $X^{(1)}\otimes X^{(2)}:H_1\otimes H_2\to K_1\otimes K_2$.

Proof. Linearity of $X\equiv {X}^{\left(1\right)}\otimes {X}^{\left(2\right)}$ follows directly from the definition. Taking an arbitrary $h\in H$ satisfying (94) we obtain

$\begin{array}{c}{‖Xh‖}^{2}={‖\underset{i,j=1}{\overset{\infty }{\sum }}\left({c}_{ij}\left({X}^{\left(1\right)}{e}_{i}^{\left(1\right)}\otimes {X}^{\left(2\right)}{e}_{j}^{\left(2\right)}\right)\right)‖}^{2}\le \underset{i,j=1}{\overset{\infty }{\sum }}{|{c}_{ij}|}^{2}{‖{X}^{\left(1\right)}{e}_{i}^{\left(1\right)}\otimes {X}^{\left(2\right)}{e}_{j}^{\left(2\right)}‖}^{2}\\ \le \underset{i,j=1}{\overset{\infty }{\sum }}{|{c}_{ij}|}^{2}{‖{X}^{\left(1\right)}‖}^{2}{‖{X}^{\left(2\right)}‖}^{2}={‖{X}^{\left(1\right)}‖}^{2}{‖{X}^{\left(2\right)}‖}^{2}{‖h‖}^{2}\end{array}$ (101)

Therefore $X$ is bounded, and in particular,

$‖X‖\le ‖{X}^{\left(1\right)}‖‖{X}^{\left(2\right)}‖$ (102)

To prove compactness we choose an arbitrary $\epsilon>0$ and linear bounded finite-dimensional operators $Y^{(\tau)}:H_\tau\to K_\tau$ for which $\|X^{(\tau)}-Y^{(\tau)}\|<\epsilon$ ($\tau=1,2$); such operators exist because $X^{(1)}$ and $X^{(2)}$ are compact.

Evidently,

$\begin{array}{c}{X}^{\left(1\right)}\otimes {X}^{\left(2\right)}-{Y}^{\left(1\right)}\otimes {Y}^{\left(2\right)}=\left({X}^{\left(1\right)}-{Y}^{\left(1\right)}\right)\otimes \left({X}^{\left(2\right)}-{Y}^{\left(2\right)}\right)\\ \text{}+\left({X}^{\left(1\right)}-{Y}^{\left(1\right)}\right)\otimes {Y}^{\left(2\right)}+{Y}^{\left(1\right)}\otimes \left({X}^{\left(2\right)}-{Y}^{\left(2\right)}\right)\end{array}$ (103)

Using (102) we obtain

$\begin{array}{c}‖{X}^{\left(1\right)}\otimes {X}^{\left(2\right)}\text{​}-\text{​}{Y}^{\left(1\right)}\otimes {Y}^{\left(2\right)}‖\le ‖{X}^{\left(1\right)}\text{​}-\text{​}{Y}^{\left(1\right)}‖‖{X}^{\left(2\right)}\text{​}-\text{​}{Y}^{\left(2\right)}‖\\ \text{}+‖{X}^{\left(1\right)}-\text{​}{Y}^{\left(1\right)}‖‖{Y}^{\left(2\right)}‖+‖{Y}^{\left(1\right)}‖‖{X}^{\left(2\right)}\text{​}-\text{​}{Y}^{\left(2\right)}‖\\ <{\epsilon }^{2}+\epsilon \left(‖{X}^{\left(1\right)}‖+\epsilon \right)+\epsilon \left(‖{X}^{\left(2\right)}‖+\epsilon \right)\end{array}$ (104)

Therefore, the operator $X^{(1)}\otimes X^{(2)}$ can be approximated in norm by finite-dimensional operators of the form $Y^{(1)}\otimes Y^{(2)}$ to arbitrary precision. Thus, $X^{(1)}\otimes X^{(2)}$ is compact.

$\square$
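The matrix of $X^{(1)}\otimes X^{(2)}$ in the product bases is the Kronecker product of the matrices of $X^{(1)}$ and $X^{(2)}$, so the norm bound (102) can be checked numerically. (For matrices with the spectral norm the bound is in fact an equality, since the singular values of a Kronecker product are the pairwise products of the factors' singular values; the proposition above only needs the inequality.)

```python
import numpy as np

rng = np.random.default_rng(2)
X1 = rng.standard_normal((4, 3))   # stands in for X^(1): H1 -> K1
X2 = rng.standard_normal((5, 2))   # stands in for X^(2): H2 -> K2

# The matrix of X^(1) ⊗ X^(2) in the product bases is the Kronecker product.
X = np.kron(X1, X2)

def op_norm(A):
    """Operator (spectral) norm of a matrix."""
    return np.linalg.norm(A, ord=2)

# ||X1 ⊗ X2|| <= ||X1|| ||X2||, as in Equation (102);
# for matrices it holds with equality.
assert op_norm(X) <= op_norm(X1) * op_norm(X2) + 1e-12
assert np.isclose(op_norm(X), op_norm(X1) * op_norm(X2))
```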

Proposition 9. For all linear compact operators ${X}^{\left(1\right)}:{H}_{1}\to {K}_{1}$ and ${X}^{\left(2\right)}:{H}_{2}\to {K}_{2}$ we have

${\left({X}^{\left(1\right)}\otimes {X}^{\left(2\right)}\right)}^{*}={\left({X}^{\left(1\right)}\right)}^{*}\otimes {\left({X}^{\left(2\right)}\right)}^{*}$ (105)

Proof. The set of finite linear combinations of elementary tensors $h_1\otimes h_2$ is dense in $H_1\otimes H_2$, i.e. for every $h\in H_1\otimes H_2$ there is a sequence of such linear combinations which converges to $h$ in the norm. As the operators $X^{(1)}$ and $X^{(2)}$ are linear and bounded, it is sufficient to prove the equality of the proposition for the special case $h=h_1\otimes h_2\in H_1\otimes H_2$, where by definition we have the formula

$\left({X}^{\left(1\right)}\otimes {X}^{\left(2\right)}\right)\left({h}_{1}\otimes {h}_{2}\right)=\left({X}^{\left(1\right)}{h}_{1}\right)\otimes \left({X}^{\left(2\right)}{h}_{2}\right)$ (106)

Let $\alpha=\alpha_1\otimes\alpha_2$ and $\beta=\beta_1\otimes\beta_2$, where $\alpha_1\in H_1$, $\alpha_2\in H_2$, $\beta_1\in K_1$ and $\beta_2\in K_2$. Then

$\begin{array}{c}\left(\left({X}^{\left(1\right)}\otimes {X}^{\left(2\right)}\right)\alpha ,\beta \right)=\left({X}^{\left(1\right)}{\alpha }_{1}\otimes {X}^{\left(2\right)}{\alpha }_{2},{\beta }_{1}\otimes {\beta }_{2}\right)=\left({X}^{\left(1\right)}{\alpha }_{1},{\beta }_{1}\right)\left({X}^{\left(2\right)}{\alpha }_{2},{\beta }_{2}\right)\\ =\left({\alpha }_{1},{\left({X}^{\left(1\right)}\right)}^{*}{\beta }_{1}\right)\left({\alpha }_{2},{\left({X}^{\left(2\right)}\right)}^{*}{\beta }_{2}\right)=\left({\alpha }_{1}\otimes {\alpha }_{2},{\left({X}^{\left(1\right)}\right)}^{*}{\beta }_{1}\otimes {\left({X}^{\left(2\right)}\right)}^{*}{\beta }_{2}\right)\\ =\left(\alpha ,{\left({X}^{\left(1\right)}\right)}^{*}\otimes {\left({X}^{\left(2\right)}\right)}^{*}\beta \right)\end{array}$ (107)

Hence ${\left({X}^{\left(1\right)}\otimes {X}^{\left(2\right)}\right)}^{*}={\left({X}^{\left(1\right)}\right)}^{*}\otimes {\left({X}^{\left(2\right)}\right)}^{*}$ .

$\square$
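For real matrices the adjoint is the transpose, and identity (105) becomes the familiar Kronecker-product rule $(A\otimes B)^{\mathsf T}=A^{\mathsf T}\otimes B^{\mathsf T}$, which the following sketch verifies:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 3))   # stands in for X^(1)
B = rng.standard_normal((2, 5))   # stands in for X^(2)

# For real matrices the adjoint is the transpose, and
# (A ⊗ B)^T = A^T ⊗ B^T mirrors Equation (105).
assert np.allclose(np.kron(A, B).T, np.kron(A.T, B.T))
```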

Proposition 10. If $(\lambda^{(\tau)},q^{(\tau)})$ is an eigenpair of the operator $X^{(\tau)}:H_\tau\to K_\tau$ (with $K_\tau=H_\tau$, $\tau=1,2$), then $(\lambda^{(1)}\lambda^{(2)},q^{(1)}\otimes q^{(2)})$ is an eigenpair of the operator $X^{(1)}\otimes X^{(2)}$.

Proof.

$\begin{array}{c}\left({X}^{\left(1\right)}\otimes {X}^{\left(2\right)}\right)\left({q}^{\left(1\right)}\otimes {q}^{\left(2\right)}\right)=\left({X}^{\left(1\right)}{q}^{\left(1\right)}\right)\otimes \left({X}^{\left(2\right)}{q}^{\left(2\right)}\right)\\ =\left({\lambda }^{\left(1\right)}{q}^{\left(1\right)}\right)\otimes \left({\lambda }^{\left(2\right)}{q}^{\left(2\right)}\right)\\ ={\lambda }^{\left(1\right)}{\lambda }^{\left(2\right)}{q}^{\left(1\right)}\otimes {q}^{\left(2\right)}\end{array}$ (108)

$\square$
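Proposition 10 also has a direct matrix counterpart: the eigenpairs of a Kronecker product are the pairwise products of the factors' eigenvalues with the Kronecker products of their eigenvectors. A NumPy check, using symmetric matrices so that all eigenpairs are real:

```python
import numpy as np

rng = np.random.default_rng(4)
# Symmetric matrices, so the eigenpairs are real and H = K
# as Proposition 10 requires.
A = rng.standard_normal((3, 3)); A = A + A.T
B = rng.standard_normal((4, 4)); B = B + B.T

lamA, QA = np.linalg.eigh(A)
lamB, QB = np.linalg.eigh(B)

# (lam_i * mu_j, q_i ⊗ r_j) is an eigenpair of A ⊗ B, as in (108).
for i in range(3):
    for j in range(4):
        q = np.kron(QA[:, i], QB[:, j])
        assert np.allclose(np.kron(A, B) @ q, lamA[i] * lamB[j] * q)
```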


Cite this paper: Zabrodskii, I. and Ponosov, A. (2017) The Principal Component Transform of Parametrized Functions. Applied Mathematics, 8, 453-475. doi: 10.4236/am.2017.84037.
References

   Tafintseva, V., Tøndel, K., Ponosov, A. and Martens, H. (2014) Global Structure of Sloppiness in a Nonlinear Model. Journal of Chemometrics, 28, 645-655.
https://doi.org/10.1002/cem.2651

   Tøndel, K., Gjuvsland, A., Mage, I. and Martens, H. (2010) Screening Design for Computer Experiments: Metamodelling of a Deterministic Mathematical Model of the Mammalian Circadian Clock. Journal of Chemometrics, 24, 738-747.
https://doi.org/10.1002/cem.1363

   Martens, H., Mage, I., Tøndel, K., Isaeva, J., Gjuvsland, A., Høy, M. and Saebø, S. (2010) Multi-Level Binary Replacement (MBR) Design for Computer Experiments in High-Dimensional Nonlinear Systems. Journal of Chemometrics, 24, 748-756.
https://doi.org/10.1002/cem.1366

   Martens, H., Veflingstad, S.R., Plahte, E., Martens, M., Bertrand, D. and Omholt, S.W. (2009) The Genotype-Phenotype Relationship in Multicellular Pattern-Generating Models—The Neglected Role of Pattern Descriptors. BMC Systems Biology, 3, 87.
http://bmcsystbiol.biomedcentral.com/articles/10.1186/1752-0509-3-87

   Isaeva, J., Saebø, S., Wyller, J.A., Liland, K.H., Faergestad, E.M., Bro, R. and Martens, H. (2010) Using GEMANOVA to Explore the Pattern Generating Properties of the Delta-Notch Model. Journal of Chemometrics, 24, 626-634.
https://doi.org/10.1002/cem.1348

   Isaeva, J., Saebø, S., Wyller, J.A., Wolkenhauer, O. and Martens, H. (2012) Nonlinear Modelling of Curvature by Bi-Linear Metamodeling. Chemometrics and Intelligent Laboratory Systems, 117, 2-12.

   Konevskikh, T., Blümel, R., Lukacs, R., Ponosov, A. and Kohler, A. (2015) Fringes in FTIR Spectroscopy Revisited: Understanding and Modelling Fringes in Infrared Spectroscopy of Thin Films. Analyst, 140, 3969-3980.
https://doi.org/10.1039/C4AN02343A

   Gohberg, I.C. and Krein, M.G. (1969) Introduction to the Theory of Linear Nonselfadjoint Operators in Hilbert Space. American Mathematical Society, Providence.

   Hutson, V., Pym, J.S. and Cloud, M.J. (2005) Applications of Functional Analysis and Operator Theory. Elsevier Science, Amsterdam.

   Voit, E.O. (2000) Computational Analysis of Biochemical Systems. A Practical Guide for Biochemists and Molecular Biologists. Cambridge University Press, Cambridge.

   Burlakov, E., Ponosov, A., Wyller, J. and Zhukovskii, E. (2015) Existence, Uniqueness and Continuous Dependence on Parameters of Solutions to Neural Field Equations. Memoirs on Differential Equations and Mathematical Physics, 65, 35-55.

   Mestl, T., Plahte, E. and Omholt, S.W. (1995) A Mathematical Framework for Describing and Analysing Gene Regulatory Networks. Journal of Theoretical Biology, 176, 291-300.
