This work is an English-language account of results we previously published in Russian journal articles. The work began in 1976, when we were first obliged to study tensor calculus and its applications in depth. Since then we have never stopped working with tensors. One result of this long engagement with tensors was the book “Applied Tensor Calculus”.
In the well-known fundamental literature, the properties of pseudo-Euclidean space are studied and mathematical models of physical processes in it are constructed. However, the question of whether the differential and integral calculus of Euclidean space is applicable in pseudo-Euclidean space is either not considered at all or is considered only within the special cases of the problems studied; in general it remains unexplored. Owing to the great authority of the authors of that literature, this may create the impression that no such problem exists. The research we have carried out gives grounds to assert that the problem of correctly applying the mathematics of Euclidean space in a vector space with a pseudo-Euclidean metric does exist, and that attempts to ignore it can lead to mathematical models that do not adequately describe the processes and phenomena of the material world.
2. Comparative Definitions of Euclidean and Pseudo-Euclidean Spaces
This part of the article contains elements of algebra and mathematical analysis that are explained with sufficient completeness in textbooks. The inclusion of such elementary material is dictated by the need to exhibit clearly the common properties of Euclidean and pseudo-Euclidean spaces, to show the initial distinction between them, to identify the exact stage in the construction of the linear vector space at which this distinction enters and how it influences the further construction of the space, and to study how algebraic forms and the differential calculus behave under it. Even an insignificant ambiguity in any of this could cast doubt on the reliability of the results, and therefore none of what is written below can be excluded, as a partial fact, to reduce the length of the article.
We define the vector space of real vectors of dimension n + 1 and its basis in the traditional way, as presented in textbooks for students. For visual clarity we represent the vectors as ordered sets of n + 1 real numbers, written either as matrix columns or as matrix rows.
We denote the vectors of the affine basis by $\vec{a}_\alpha$. Let us agree that Greek indices take values from 0 to n and Latin indices from 1 to n. The decomposition of a vector over the basis, with summation over a twice-repeated index, can then be written in two ways:

$\vec{x} = x^\alpha \vec{a}_\alpha = x^0 \vec{a}_0 + x^i \vec{a}_i$.
As the basis vectors we take the matrix rows $\vec{a}_0 = (1, 0, \dots, 0)$, $\vec{a}_1 = (0, 1, \dots, 0)$, …, $\vec{a}_n = (0, 0, \dots, 1)$.
In such a basis the elements $x^\alpha$ of any vector $\vec{x}$ are the components of the decomposition of this vector over the basis, i.e. $\vec{x} = x^\alpha \vec{a}_\alpha$.
The connection between points of the vector space and its vectors is established axiomatically: to each pair of points of the vector space, taken in a definite order, there corresponds one and only one vector of this space.
Let us construct the radius vector of the vector space. For this purpose we take the zero vector (all of whose elements are equal to zero) and call it the origin of coordinates, which we agree to denote $O$. Let $M$ be the current point of the vector space. By the axiom on points of the vector space, to the pair of points $O$, $M$ there corresponds a unique vector of this space; denote it $\vec{r}$. We call the set of numbers $x^\alpha$ the affine coordinates of the point $M$, written $M(x^0, x^1, \dots, x^n)$. We call the vector $\vec{r}$ the position vector of the point $M$ and write $\vec{r} = x^\alpha \vec{a}_\alpha$, where $\vec{a}_\alpha$ is the affine basis.
The position vector of the current point of the vector space is a variable vector. It is therefore a vector function of the scalar arguments $x^0, x^1, \dots, x^n$.
Let us take a set of functions $x^\alpha = x^\alpha(u^0, u^1, \dots, u^n)$ of the mutually independent variables $u^\alpha$. We assume that these functions establish a one-to-one correspondence between the affine coordinates $x^\alpha$ and the variables $u^\alpha$. When the variables $x^\alpha$ are replaced by the variables $u^\alpha$ with the aid of these functions, the position vector becomes a function of the variables $u^\alpha$, i.e. $\vec{r} = \vec{r}(u^0, u^1, \dots, u^n)$. We call the variables $u^\alpha$ the curvilinear coordinates of the current point of the space, and the set of functions $x^\alpha(u^0, \dots, u^n)$ a transformation of coordinates.
From here on we use throughout the vector space curvilinear coordinates whose $u^0$ coordinate lines (lines along which only $u^0$ varies) are parallel straight lines with direction vector $\vec{a}_0$, while all other coordinate lines are, generally speaking, curves. In such coordinates the position vector is a function of the form $\vec{r} = \vec{r}(u^0, u^1, \dots, u^n)$.
At each point we introduce a basis $\vec{e}_\alpha$, called the local basis, by the formula

$\vec{e}_\alpha = \dfrac{\partial \vec{r}}{\partial u^\alpha}$. (1)
Note that formulas (1) define the vectors of the local basis in the form of decompositions over the affine basis $\vec{a}_\alpha$. Considering that $\vec{r}$ is a composite function with the intermediate arguments $x^\alpha$ and the independent variables $u^\alpha$, we write these decompositions as

$\vec{e}_\alpha = \dfrac{\partial x^\beta}{\partial u^\alpha}\,\vec{a}_\beta$. (2)
Let us assume that the curvilinear coordinates are locally linear, i.e. such that at any point of the space the local basis can be used as the affine basis of a sufficiently small neighborhood of this point.
In both Euclidean and pseudo-Euclidean spaces the vector norm is defined in the same way: the norm of a vector is the square root of its scalar square, $\|\vec{x}\| = \sqrt{\vec{x}\cdot\vec{x}}$. However, the scalar squares of vectors in Euclidean and pseudo-Euclidean spaces differ, because the scalar products of vectors are defined differently. Consequently the vector norms, i.e. the metrics of the spaces, are different. The differently defined scalar multiplication of vectors is the boundary at which the vector space branches into Euclidean and pseudo-Euclidean.
Let us define the scalar product of vectors of the vector space with a Euclidean metric. We introduce the definition of vector multiplication, i.e. we formulate the law that assigns to each pair of vectors a definite real number, which we denote $\vec{x}\cdot\vec{y}$. The operation of scalar multiplication must satisfy the axioms of commutativity, associativity with a scalar factor, distributivity, and positivity of the scalar square of a vector.
In Euclidean space it is postulated axiomatically that the scalar square of a vector is positive definite:

$\vec{x}\cdot\vec{x} > 0$ when $\vec{x} \neq \vec{0}$, and $\vec{x}\cdot\vec{x} = 0$ when $\vec{x} = \vec{0}$.
The scalar multiplication of vectors becomes fully specified once the law of scalar multiplication is set for the vectors of the affine basis.
For the vector space with affine basis $\vec{a}_\alpha$ and with Euclidean metric we define scalar multiplication by the following rule: the scalar product of any pair of vectors of the affine basis is equal to the sum of the products of their corresponding components. This rule is expressed by the formulas

$\vec{a}_\alpha \cdot \vec{a}_\beta = \delta_{\alpha\beta}$, (3)

where $\delta_{\alpha\beta}$ is the identity matrix (the Kronecker delta).
Multiplying vectors with the use of formulas (3) and applying the axioms of commutativity, associativity and distributivity, we obtain the formula for the scalar product in Euclidean space of vectors given by their coordinates in the affine basis:

$\vec{x}\cdot\vec{y} = x^\alpha y^\alpha$. (4)
From this formula we obtain the scalar square of a vector

$\vec{x}\cdot\vec{x} = x^\alpha x^\alpha$. (5)
From this formula we conclude that the rule (3) of scalar multiplication of the affine basis vectors guarantees that the axiom of positive definiteness of the scalar square holds in Euclidean space.
It also follows from this formula that in Euclidean space the norm of a vector given by its coordinates in the affine basis is defined by the formula

$\|\vec{x}\| = \sqrt{x^\alpha x^\alpha}$.
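The Euclidean rule just stated, together with the resulting scalar square and norm, can be sketched numerically (a minimal Python/NumPy illustration; the data are our own, not the authors'):

```python
import numpy as np

# Euclidean rule (3): the scalar product is the sum of products of
# corresponding components, i.e. the Gram matrix of the affine basis is I.
def dot_euclidean(x, y):
    return float(np.sum(x * y))

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])

print(dot_euclidean(x, y))           # 32.0
print(dot_euclidean(x, x))           # 14.0: scalar square, positive for x != 0
print(np.sqrt(dot_euclidean(x, x)))  # norm, sqrt(14)
```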
The vectors $\vec{e}_\alpha$ determined by formulas (1) are linearly independent, and therefore they form a basis for any choice of the curvilinear (locally linear) coordinates. As is evident from (2), the vectors $\vec{e}_\alpha$ are given in the affine basis. Their scalar products with each other and with themselves can therefore be calculated by formulas (4). The set of these scalar products forms a square matrix of order n + 1, $g_{\alpha\beta} = \vec{e}_\alpha\cdot\vec{e}_\beta$, called the covariant Gram matrix, which represents a covariant tensor of the second rank.
Let us form the scalar product of vectors $\vec{x} = x^\alpha\vec{e}_\alpha$ and $\vec{y} = y^\alpha\vec{e}_\alpha$ (vectors so defined are conventionally called contravariant), using the covariant Gram matrix in the multiplication. As a result we obtain the formula for the scalar product of contravariant vectors in Euclidean space

$\vec{x}\cdot\vec{y} = g_{\alpha\beta}\,x^\alpha y^\beta$. (6)
Let us define another local basis system. Its vectors are conventionally numbered with upper indices, $\vec{e}^\alpha$, and called the mutual (reciprocal) vectors to the vectors $\vec{e}_\alpha$. We require that the scalar products of the vectors of the system $\vec{e}^\alpha$ with the vectors of the system $\vec{e}_\alpha$ satisfy the equality

$\vec{e}^\alpha \cdot \vec{e}_\beta = \delta^\alpha_\beta$, (7)

where $\delta^\alpha_\beta$ is the identity matrix (the Kronecker delta). This equation is conventionally called the mutuality equation of the basis systems, and the basis systems $\vec{e}_\alpha$ and $\vec{e}^\alpha$ are called mutual basis systems.
Vectors given by decomposition over the basis $\vec{e}^\alpha$ are conventionally called covariant.
Let us form the scalar product of a contravariant vector with a covariant one and write it out using the mutuality Equation (7). As a result we obtain the formula for the scalar product in Euclidean space of vectors given by their components in the mutual basis systems:

$\vec{x}\cdot\vec{y} = x^\alpha y_\alpha$. (8)
The scalar products of the basis vectors $\vec{e}^\alpha$ with each other and with themselves form the contravariant Gram matrix $g^{\alpha\beta} = \vec{e}^\alpha\cdot\vec{e}^\beta$, which represents a contravariant tensor of the second rank. Using this matrix in the scalar multiplication of vectors $\vec{x} = x_\alpha\vec{e}^\alpha$, $\vec{y} = y_\alpha\vec{e}^\alpha$, we obtain the formula for the scalar product of covariant vectors in Euclidean space

$\vec{x}\cdot\vec{y} = g^{\alpha\beta}\,x_\alpha y_\beta$. (9)
Formulas (1) give us $\vec{e}_0 = \vec{a}_0$, formulas (3) give us $\vec{a}_0\cdot\vec{a}_0 = 1$, and from Equation (7) we obtain $\vec{e}^0\cdot\vec{e}_0 = 1$. These three equalities give us

$\vec{e}^0\cdot\vec{a}_0 = 1$. (10)
Let us pay attention (this is extremely important for comprehension of the further text) to the fact that the vectors of the mutual basis system do not depend on the accepted rule of scalar multiplication of vectors in Euclidean space. This may seem wrong, since the mutuality Equation (7) is written in terms of scalar products of vectors in Euclidean space. Let us prove the validity of the statement. At any point of the vector space there is an infinite set of local bases. From this infinite set we choose (we do not calculate, we do not use the rule of scalar multiplication: we choose) a local basis that satisfies the mutuality Equation (7). Such a basis cannot be absent from the infinite set. The statement is therefore proved.
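The statement can be illustrated computationally: given a local basis, the reciprocal vectors satisfying the mutuality Equation (7) are found by matrix inversion alone, so no scalar-product rule enters the calculation (a sketch; the numbers are our own example):

```python
import numpy as np

# A local basis at some point, written row-wise in the affine basis
# (an arbitrary invertible example; the specific numbers are ours).
E = np.array([[1.0, 0.0, 0.0],
              [0.5, 2.0, 0.0],
              [0.3, 0.1, 1.5]])

# Reciprocal (mutual) basis: rows F[i] with sum_k F[i, k] * E[j, k] = delta_ij.
# This is plain matrix inversion -- no metric appears anywhere.
F = np.linalg.inv(E).T

print(np.allclose(F @ E.T, np.eye(3)))  # True: the mutuality equation holds
```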
Now we construct the vector space with a pseudo-Euclidean metric. In doing so we keep the linear space, its affine basis $\vec{a}_\alpha$, the curvilinear coordinates $u^\alpha$ and the mutual basis systems exactly as they were accepted in Euclidean space. The ground for this decision is that all the listed elements of the vector space were introduced without the use of the Euclidean scalar multiplication, and therefore the Euclidean scalar multiplication studied above will not be bound to the rule of scalar multiplication that we construct below in pseudo-Euclidean space.
We define the scalar multiplication of the affine basis vectors of the vector space with a pseudo-Euclidean metric by the following rule:

$\vec{a}_0\cdot\vec{a}_0 = -1$,  $\vec{a}_0\cdot\vec{a}_i = 0$,  $\vec{a}_i\cdot\vec{a}_j = \delta_{ij}$. (11)
The vector space with such scalar multiplication of the basis vectors is called a pseudo-Euclidean space of zero index (the minus sign is carried by the basis vector with index 0).
Multiplying vectors $\vec{x}$, $\vec{y}$ with the use of formulas (11) and applying the axioms of commutativity, associativity and distributivity, we obtain the formula for the scalar product in pseudo-Euclidean space of vectors given by their coordinates in the affine basis:

$\vec{x}\cdot\vec{y} = -x^0 y^0 + x^i y^i$.
From this formula we obtain the scalar square of a vector in pseudo-Euclidean space

$\vec{x}\cdot\vec{x} = -(x^0)^2 + x^i x^i$.
This formula allows us to conclude that the rule (11) of scalar multiplication in pseudo-Euclidean space does not lead to a positive definite scalar square. We conclude from it that in pseudo-Euclidean space the scalar square of a nonzero vector can be positive, negative or equal to zero. In pseudo-Euclidean space of zero index the formula

$\|\vec{x}\| = \sqrt{-(x^0)^2 + x^i x^i}$

defines the norm of a vector given by its decomposition in the affine basis $\vec{a}_\alpha$. It follows from this formula that in the vector space of real vectors with a pseudo-Euclidean metric there is a set of nonzero vectors for which the norm is not defined.
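The three possible signs of the pseudo-Euclidean scalar square, and the existence of nonzero vectors with undefined norm, can be seen directly (a sketch assuming the signature diag(-1, 1, 1) expressed by rule (11); the example vectors are ours):

```python
import numpy as np

# Pseudo-Euclidean rule (11): the index-0 summand enters with a minus sign.
def dot_pseudo(x, y):
    return float(-x[0] * y[0] + np.sum(x[1:] * y[1:]))

examples = [np.array([1.0, 2.0, 2.0]),   # scalar square +7: norm defined
            np.array([3.0, 1.0, 1.0]),   # scalar square -7: norm undefined
            np.array([1.0, 1.0, 0.0])]   # scalar square  0: nonzero null vector

for v in examples:
    s = dot_pseudo(v, v)
    print(s, np.sqrt(s) if s >= 0 else "norm undefined")
```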
From formulas (11) we have $\vec{a}_0\cdot\vec{a}_0 = -1$. Taking this into account and using equalities (10), we conclude that in pseudo-Euclidean space of zero index

$\vec{e}^0 = -\vec{a}_0$.
In pseudo-Euclidean space of zero index the scalar products of the other basis vectors with each other and with themselves remain the same as they were obtained in Euclidean space.
Therefore, to obtain the formulas for scalar multiplication in pseudo-Euclidean space of zero index of vectors given by their coordinates in the local basis systems, it is enough to take the summands containing the index 0 in formulas (6), (8) and (9) with a minus sign.
Let us agree that from here on, wherever a double sign appears, the plus sign refers to Euclidean space and the minus sign to pseudo-Euclidean space. With this agreement the formulas for the scalar product of vectors $\vec{x}$, $\vec{y}$ given by their coordinates in the local basis systems, in spaces with Euclidean and with pseudo-Euclidean zero-index metrics, can be written uniformly:

$\vec{x}\cdot\vec{y} = \pm\, x^0 y_0 + x^i y_i$. (12)
3. Multilinear Forms
Lemma 1. At each point of the vector space of real vectors, a multilinear form of any order changes its numerical value when the Euclidean metric in this space is replaced by a pseudo-Euclidean metric of zero index.
Proof. At an arbitrary point of the vector space of real vectors we take tensors of the first rank (vectors) and a tensor of the second rank, and consider the linear and bilinear forms.
By formula (12) we derive the linear form

$\Phi = \vec{a}\cdot\vec{x} = \pm\, a_0 x^0 + a_i x^i$. (13)

The presence here of a summand with different signs for the different metrics (plus for the Euclidean, minus for the pseudo-Euclidean) demonstrates the validity of Lemma 1 for the linear form.
If at a tensor $c_{\alpha\beta}$ of the second rank we mentally discard one of the indices, then, according to the quotient rule of tensor calculus, we obtain a covariant tensor of the first rank, i.e. a vector; denote it $\vec{c}$. By formula (12) we form the convolution of the vector $\vec{c}$ with the vector $\vec{x}$. We write down the result of the convolution and restore the discarded index.
We obtain a vector. Let us form the contraction of this vector with the vector $\vec{y}$.
Substituting here the components of the tensor, we obtain the expanded form of the bilinear form

$\Phi = c_{00}\,x^0 y^0 \pm c_{0j}\,x^0 y^j \pm c_{i0}\,x^i y^0 + c_{ij}\,x^i y^j$. (14)

The presence here of summands with different signs for the different metrics (plus for the Euclidean, minus for the pseudo-Euclidean) demonstrates the validity of Lemma 1 for the bilinear form.
This proof may seem not quite convincing to some readers, since it is built on the quotient rule, whose proof lies outside this text. Let us therefore consider another proof.
The covariant tensor of the second rank is by definition equal to the external product of two covariant tensors of the first rank. The external product of two vectors $\vec{a}$ and $\vec{b}$ is the set of elements of a square matrix of order n + 1 obtained by multiplying each component of $\vec{a}$ by each component of $\vec{b}$, i.e. $c_{\alpha\beta} = a_\alpha b_\beta$. (15) The bilinear form is a polynomial. It can therefore be derived as the product of a polynomial by a polynomial, i.e. as the product of two linear forms.
The linear forms, by formula (12), are

$\Phi_1 = \pm\, a_0 x^0 + a_i x^i$,  $\Phi_2 = \pm\, b_0 y^0 + b_j y^j$.

Multiplying these linear forms, these polynomials, and substituting (15), we derive a formula that exactly repeats formula (14).
It is easy to see that the proofs carried out for the linear (13) and bilinear (14) forms can be continued, with the use of the same concepts and methods, for multilinear forms of any higher order. #
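Lemma 1's claim for the bilinear form can be checked arithmetically: evaluating the form built from an outer product, with the index-0 slots carrying the metric sign (our reading of formulas (13)-(15); the data are our own), gives different values under the two metrics:

```python
import numpy as np

# Evaluate the bilinear form built from the outer product c = a (x) b,
# with every index-0 slot carrying the metric sign, as in formula (14).
def bilinear(c, x, y, sign):
    G = np.diag([sign] + [1.0] * (len(x) - 1))
    return float(x @ (G @ c @ G) @ y)

a, b = np.array([1.0, 2.0, 3.0]), np.array([2.0, 1.0, 1.0])
c = np.outer(a, b)                      # external product (15)
x, y = np.array([1.0, 1.0, 0.0]), np.array([0.0, 1.0, 2.0])

print(bilinear(c, x, y, +1.0))  # Euclidean value: 9.0
print(bilinear(c, x, y, -1.0))  # pseudo-Euclidean value: 3.0 (differs)
```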
4. Derivatives, Differentials and Taylor’s Formula
The objects of our investigation will be the derivatives, differentials and Taylor's formula that were defined in the vector space with Euclidean metric. We will observe how the numerical values of these objects change upon transition to the space with a pseudo-Euclidean metric.
The Christoffel symbols of the second kind are the components of the decomposition of the second partial derivatives of the position vector in the local basis, i.e.

$\dfrac{\partial^2 \vec{r}}{\partial u^\alpha \partial u^\beta} = \Gamma^\gamma_{\alpha\beta}\,\vec{e}_\gamma$.
From this it follows that the Christoffel symbols of the second kind are not bound to the type of metric of the vector space. They depend only on the choice of the curvilinear coordinates.
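The metric independence of the Christoffel symbols of the second kind can be checked symbolically: they are obtained by decomposing the second derivatives of the position vector over the local basis, and no scalar product enters the computation (a sketch for polar-like coordinates $x^1 = u^1\cos u^2$, $x^2 = u^1\sin u^2$; the coordinate choice is ours):

```python
import sympy as sp

u1, u2 = sp.symbols('u1 u2', positive=True)
u = [u1, u2]

# Position vector in affine coordinates for polar-like curvilinear coordinates
r = sp.Matrix([u1 * sp.cos(u2), u1 * sp.sin(u2)])

E = r.jacobian(u)  # columns are the local basis vectors e_1, e_2

# Decompose each second derivative of r over the local basis:
# d^2 r / (du^a du^b) = Gamma^c_ab e_c  -- pure linear algebra, no metric.
Gamma = [[E.solve(sp.diff(r, u[a], u[b])).applyfunc(sp.simplify)
          for b in range(2)] for a in range(2)]

print(Gamma[1][1])  # components (Gamma^1_22, Gamma^2_22) = (-u1, 0)
print(Gamma[0][1])  # components (Gamma^1_12, Gamma^2_12) = (0, 1/u1)
```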
Let $f$ be a real function of the points of the space. We assume that it is continuous and has continuous partial derivatives with respect to all variables up to the necessary order inclusive. We agree to denote its covariant derivatives by a subscript after an asterisk.
Covariant derivatives of higher orders are defined by the following formulas:

$f_{*\alpha} = \dfrac{\partial f}{\partial u^\alpha}$,  $f_{*\alpha\beta} = \dfrac{\partial f_{*\alpha}}{\partial u^\beta} - \Gamma^\gamma_{\alpha\beta}\, f_{*\gamma}$.
The partial derivatives and the Christoffel symbols of the second kind do not depend on the type of metric. Therefore the numerical values of the covariant derivatives of any order at any point of the vector space remain unchanged when the Euclidean metric is replaced by the pseudo-Euclidean one.
Let $M_0$ be a fixed point of the vector space and $M$ the current point of this space in a sufficiently small neighborhood of $M_0$ (a neighborhood in which the local basis at $M_0$ can be considered the affine basis of this neighborhood). Then it is possible to accept that the increment of the position vector upon transition from $M_0$ to $M$ is approximately equal to the differential of the position vector, i.e.

$\Delta\vec{r} \approx d\vec{r} = du^\alpha\,\vec{e}_\alpha$.
From this it follows that the set of differentials $du^\alpha$ represents a contravariant vector.
Lemma 2. At each point of the vector space, the total differentials of any order, and the directional derivatives of any order, of a scalar function change their numerical values when the Euclidean metric in this space is replaced by a pseudo-Euclidean metric of zero index.
Proof. Since the covariant derivatives are covariant tensors of the first and second ranks and $du^\alpha$ is a contravariant vector, the total differentials of the first and second orders

$df = f_{*\alpha}\,du^\alpha$,  $d^2 f = f_{*\alpha\beta}\,du^\alpha du^\beta$

are linear and quadratic forms, and, according to Lemma 1, Lemma 2 holds for them: their numerical values change when the metric changes.
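As an arithmetic illustration of Lemma 2 (with a function and point of our own choosing): in affine coordinates, where the Christoffel symbols vanish, the first differential evaluated with the sign convention of formula (13) takes different values under the two metrics.

```python
import numpy as np

# Example function f = (x0)^2 + x1*x2 (ours); in affine coordinates its
# covariant derivatives are just the partial derivatives.
grad_f = lambda p: np.array([2 * p[0], p[2], p[1]])

p  = np.array([1.0, 2.0, 3.0])   # point
du = np.array([0.1, 0.2, 0.3])   # displacement components du^alpha

# First differential df = f_*alpha du^alpha, with the index-0 summand
# carrying the metric sign, per the authors' formula (13).
vals = {}
for sign, name in ((+1.0, "euclidean"), (-1.0, "pseudo")):
    vals[name] = float((grad_f(p) * np.array([sign, 1.0, 1.0])) @ du)
    print(name, vals[name])   # the two values differ
```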
The norm of the vector $d\vec{r}$ is defined by the formula

$\|d\vec{r}\| = \sqrt{\pm\, du^0 du_0 + du^i du_i}$.

Let us recall that we take the plus sign for the Euclidean metric and the minus sign for the pseudo-Euclidean one.
We write down the derivatives at the point $M_0$ in the direction of the point $M$:

$\dfrac{df}{ds} = \dfrac{f_{*\alpha}\,du^\alpha}{\|d\vec{r}\|}$,  $\dfrac{d^2 f}{ds^2} = \dfrac{f_{*\alpha\beta}\,du^\alpha du^\beta}{\|d\vec{r}\|^2}$.
The presence of different signs in the formula tells us that the directional derivatives in the spaces with the different metrics have different numerical values. It also follows from the formula that in the directions satisfying the inequality

$-\,du^0 du_0 + du^i du_i < 0$

the derivatives are not defined.
Here the proofs were carried out for differentials and directional derivatives of the first and second orders. It is not difficult to see that these proofs can be repeated for any higher order. #
Theorem 1. In the vector space with Euclidean metric, Taylor's formula is an equality to any prescribed accuracy. In the same space with a pseudo-Euclidean metric of zero index, Taylor's formula is not an equality to any order of accuracy.
Proof. Taylor's formula

$f(M) - f(M_0) = df + \tfrac{1}{2}\,d^2 f + o(\|d\vec{r}\|^2)$ (16)

defines the difference of the values of a scalar function of real variables at sufficiently closely located points of the vector space with Euclidean metric, to second order of accuracy.
The left-hand member of this equality is the difference of the values of the function at two mentally fixed points of the vector space. The operation of changing the metric does not change the space, does not change the position of the chosen points in this space, and does not change the function of points in any way. Therefore the left-hand member of Taylor's formula remains unchanged when the metric of the vector space is changed from Euclidean to pseudo-Euclidean.
The right-hand member of Taylor's formula changes its value when the metric is changed, since it is a linear combination of differentials whose values, according to Lemma 2, change upon transition from the Euclidean metric to the pseudo-Euclidean metric of zero index.
Comparison of the behavior of the left-hand and right-hand members of Taylor's formula under a change of metric leads to the conclusion that in pseudo-Euclidean space of zero index Taylor's formula is not an equality, which was to be shown.
Here the proof is carried out for the second order of accuracy. It is clear that it can be carried out for an arbitrarily high order of accuracy. #
Theorem 2. The operations of the differential and integral calculus developed for Euclidean space lose their computational meaning in pseudo-Euclidean space.
Proof. It follows from Theorem 1 that in pseudo-Euclidean space the difference of the values of a function cannot be calculated by means of the apparatus (16) of the differential calculus created for Euclidean space. It follows from this that in pseudo-Euclidean space the theory of difference schemes and, in general, all the finite-difference methods created for Euclidean space cannot be used.
In the vector space with Euclidean metric, the integral over a domain of integration of any finite-dimensional measure is the limit of the integral sum, i.e. of the sum of differentials of an antiderivative of the integrand. The differentials, according to Lemma 2, change their numerical values when the metric changes. Therefore the values of integrals in pseudo-Euclidean space will differ from their values in Euclidean space. It follows from this that the integral calculus created for Euclidean space is not suitable for numerical methods in pseudo-Euclidean space. #
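The claim of Theorem 1 can be illustrated with a direct numerical check, under our reading of the authors' sign convention (the function and points are our own): the ordinary Taylor right-hand side reproduces the increment of a quadratic function exactly, while the right-hand side with the index-0 summands sign-flipped does not.

```python
import numpy as np

# Example scalar function (our choice), its gradient and constant Hessian;
# in affine coordinates the covariant derivatives reduce to these.
f = lambda p: p[0] ** 2 + p[1] * p[2]
grad = lambda p: np.array([2 * p[0], p[2], p[1]])
hess = np.array([[2.0, 0.0, 0.0],
                 [0.0, 0.0, 1.0],
                 [0.0, 1.0, 0.0]])

p0 = np.array([1.0, 2.0, 3.0])
du = np.array([0.01, 0.02, 0.03])
actual = f(p0 + du) - f(p0)          # left-hand member of (16)

errs = {}
for sign, name in ((+1.0, "euclidean"), (-1.0, "pseudo")):
    s = np.array([sign, 1.0, 1.0])   # metric sign on the index-0 slot
    rhs = (grad(p0) * s) @ du + 0.5 * du @ ((s[:, None] * hess) * s) @ du
    errs[name] = abs(rhs - actual)
    print(name, errs[name])          # euclidean: ~0 (exact for quadratic f)
```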
It has been established that the values of multilinear forms, of the derivatives of a scalar function and of its differentials in a space with a pseudo-Euclidean metric differ from their values in the same space with a Euclidean metric. Taylor's formula, which in Euclidean space is the equality expressing the increment of a scalar function through the differentials of this function, is not an equality in pseudo-Euclidean space.
The investigations carried out lead to the conclusion that the differential and integral calculus developed for the space with a Euclidean metric is not suitable in the space with a pseudo-Euclidean metric.
Since the differential and integral calculus of real functions of real variables is a constituent part of applied mathematics, one may conclude that the possibility of adequate mathematical modeling of actual natural phenomena and of processes of human activity by the methods of applied mathematics is lost in pseudo-Euclidean space.
The results of this work may be taken into account in theoretical and experimental studies that use conclusions of the special theory of relativity, in particular in the construction of mathematical models of the phenomena of gravitation, cosmology, elementary particle physics and the propagation of electromagnetic waves.