Based on his own and earlier experiments, H. Spencer proposed what he called the principle of life, whose essence is the permanent transition of a homogeneous system into a heterogeneous one. This idea was in fact used by the authors of the subsequently arisen synergetics (  , pp. 314-315). However, none of the above-mentioned ordered systems that originate from chaos evolves; the dynamic system in the Belousov-Zhabotinsky reaction oscillates with no further evolution. It is obvious that each molecule of an ordered system that originates in this way is separated from protein and nucleotide molecules by millions of stages of further evolution. Suffice it to say that the possible number of DNA isomers of the colon bacillus has been shown to be 10^1,000,000 (  , p. 21). This number is unimaginably great; for comparison, the number of atoms in the visible Universe is no more than 10^100. However, except for a single isomer, the other isomers can hardly provide the full vitality of an organism. Thus, Spencer's idea, which prompted other scientists to create synergetics, in fact gave no justification for the subsequent extrapolation.
However, Spencer's idea was not merely borrowed by the authors of synergetics; the theoretical physico-mathematical foundation of synergetics began to be intensively created as long as 50 years ago. That foundation rests on the idea that an increase of entropy is considered in physics to be a general index of destruction and degradation, while, on the contrary, a decrease of entropy is an index of ordering, organization, and even self-organization. This point of view became so widespread that it even turned into a cultural phenomenon.
In this connection, consider the fact that the cornerstone of thermodynamics is a consequence of its second law, according to which the entropy of an isolated system always increases, from which it is concluded that the chaotization of the system is also amplified. However, the following examples show that in reality this assumption is not supported. Thus, in the case of a three-phase ice-water-steam mixture, depending on the initial parameters of the mixture components (mass, temperature, and pressure), the mixture can pass into a vapor state (chaotize), while otherwise it can turn into an ice crystal (become ordered). This example is valid for all three-phase systems. Another example, on the scale of the Cosmos, was proposed by the French scientist A. Ducrocq. His idea can be explained by considering two gas clouds on the assumption that external influences on a gas cloud do not play a significant role on the scale of the cosmic (not physical) vacuum. If gravitation fails to overcome the kinetic energy of the particles, the cloud dissipates (chaotizes) simultaneously with an increase in entropy. If this energy is overcome, the cloud is compressed to form celestial bodies (ordering), also with an increase in entropy, because the gravitational forces under compression supply the gas molecules with additional kinetic (thermal) energy  . Thus, both for laboratory systems and on the scale of the whole Cosmos, entropy yields two-valued, unpredictable results, which makes it unsuitable for describing real processes of organization and self-organization.
As a general conclusion, the use of the existing entropy concept and function leads to conceptual errors in the sciences that apply them. From this a natural conclusion follows: synergetics can hardly claim the status of a scientific discipline.
1Since different sources mean by "potentials" either the potentials themselves or the difference between them, here and below, for convenience of exposition, potentials will be understood as their difference; any other case will be noted specially.
To analyze what the existing concept of entropy represents, we first introduce the following designations: T is the temperature (heat potential); Pi is the potential of one of the i types of work carried out by the system; xi is the coordinate of the system, which depends on the value of the potential Pi (or, more precisely, of the potential difference)1. For example, if in an isolated system there is an electric potential difference PV = V1 − V2, the current flows from the larger to the smaller potential, equalizing the charge distribution q (the coordinate). Similar processes occur for the potential of mechanical forces Pf, whose coordinate xf is the path; for the pressure potential Pp, which changes the gas volume xp; and for the chemical potential, which changes its coordinate, the mass of the reaction products; etc. (  , pp. 17-18, 22, 23). The product Pidxi is the elementary work performed during this equalization with a certain efficiency and irreversibility, taking into account, naturally, that in a real working process the partial alignment of potentials is accompanied by energy dissipation.
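The elementary-work bookkeeping described above (work as the product of a potential by a change of its coordinate, accumulated while the potential difference equalizes) can be illustrated numerically. The sketch below is our own illustration with hypothetical values; only the P·dx accounting follows the text.

```python
# Illustrative sketch: elementary work dA = P * dx performed while a
# potential difference P equalizes its coordinate x (here, a charge q).
# The decay factor and step size are hypothetical, not from the source.

def work_during_equalization(p_initial, steps):
    """Sum the elementary work P*dq while the potential decays toward zero."""
    total_work = 0.0
    p = p_initial
    dq = 1.0e-3                  # small charge transferred per step (coordinate change)
    for _ in range(steps):
        total_work += p * dq     # elementary work dA = P * dx
        p *= 0.99                # potential difference decays as charges equalize
    return total_work

print(round(work_during_equalization(10.0, 100), 4))
```

The potential difference never equalizes instantaneously: the work extracted is bounded by the initial store, the rest being lost to dissipation, as the text notes.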
We begin with the fact that the aforesaid does not exhaust the paradoxes of entropy. Entropy is distinguished from the coordinates of work processes by the ambiguity of its physical meaning, since entropy is not measured or observed in actual experience; it can only be calculated via other, observable quantities (  , p. 19). It is no coincidence that John von Neumann stated, with a candor permissible for recognized classics: "No one knows what entropy really is" (  , pp. 153-163). Nevertheless, the expression for entropy is used to substantiate the second law of thermodynamics in terms of proving the increase of entropy during heat exchange between two bodies isolated from the external environment, in the following form: δQ1 = δQ2, from which T1dS1 = T2dS2 and, since T1 > T2, dS2 > dS1. That is actually not a specific trait of entropy; rather, it is a feature of any energy exchange, since if P1dx1 = P2dx2 and P1 > P2, then dx2 > dx1. Thus, heat exchange differs from other types of energy exchange not mathematically but physically, since unlike the various work processes it occurs at the molecular level.
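The parallel drawn here can be written out explicitly; the following is our reconstruction of the presumably intended relations, consistent with the inline formulas above:

```latex
\delta Q_1 = \delta Q_2 \;\Rightarrow\; T_1\,dS_1 = T_2\,dS_2,
\qquad T_1 > T_2 \;\Rightarrow\; dS_2 > dS_1,
```

and, for an arbitrary potential P with coordinate x,

```latex
P_1\,dx_1 = P_2\,dx_2,
\qquad P_1 > P_2 \;\Rightarrow\; dx_2 > dx_1,
```

so the growth of the coordinate at the lower potential is a feature of any energy exchange, not of heat alone.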
2. Entropy: Meaning and Expression
In order to find the physical meaning of entropy as the quantity capturing the essence of the second law of thermodynamics (the law of the degradation of energy), we retrace the original derivation of its formula (  , pp. 62-64), starting from the definition of the efficiency coefficient η for the cycle of a Carnot heat engine (Figure 1):
where Q is the heat received by the gas from the heater at temperature T, and Q0 is the heat taken from the gas by the cooler at temperature T0.
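Equation (1) itself is not reproduced in the text available to us; for a Carnot cycle it presumably has the standard form

```latex
\eta \;=\; \frac{Q - Q_0}{Q} \;=\; \frac{T - T_0}{T}.
```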
From Equation (1) we obtain the quantity ω, i.e., the part of the heat energy that irreversibly loses potential, its potential becoming less than the initial one; then:
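Expressions (2)-(4) are likewise missing from the text; from Equation (1) they presumably read

```latex
\omega \;=\; Q - \eta Q \;=\; Q\,\frac{T_0}{T},
```

so that, for the reversible cycle,

```latex
\frac{Q}{T} \;=\; \frac{Q_0}{T_0},
```

i.e., the equality of the reduced heats received and given up in the cycle. This is our reconstruction, to be checked against the original.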
The result of the transformation of expression (4) is called the reduced heat. Note that the physical meaning of Equations (3) and (4) can be lost as a result of purely mathematical transformations (  , p. 66), as is confirmed below. Next, the arbitrary cycle of a heat engine (Figure 2), divided into equal segments by parallel adiabats to form Carnot microcycles, is considered (  , pp. 65-68).
A comparison is then assumed between two paths with the same refrigerator, whose working processes run along the top and bottom curves. It is clear that the change of
Figure 1. The Carnot cycle.
Figure 2. Scheme of an arbitrary cycle divided into Carnot microcycles.
entropy in both cases is the same. Equation (5) gives the sum of the reduced heats, which is called the entropy S and, according to Equation (4), is as follows:
Therefore, it can be concluded that the entropy S does not depend on the path of the process and is thus a function of state.
It is further proposed to compare two cycles with the same refrigerator, in which the heating of the working substance proceeds along the upper and lower curves, respectively.
Thus, expression (6) arises, as a consequence of Figure 2 and expression (4), as a sum of equal quantities:
These findings pose several questions at once. The first of these: what is the relationship between the entropy S and the internal energy U, which are both supposedly functions of state? We have not met even the posing of this rather obvious question in the fairly extensive scientific and philosophical literature devoted to the theoretical issues of thermodynamics, for example in  , whose various sections the author devotes to reviewing and analyzing these works.
Let us try to find the answer to this question. We begin with the work A, which is measured by the value:
However, the work A, which in each of the two cases is measured by the area under the curve, is greater for the upper curve. Operating with average temperatures, we find from Equation (2) that in the first case the irreversibility ω is smaller, since the average temperature along the upper curve of the microcycles is greater than that along the lower curve. However, the value of the entropy, which remains unchanged in both cases, does not reflect this fact. Thus, entropy fails to meet two basic requirements: first, the requirement of the second law of thermodynamics that it describe the fact and measure of irreversibility; and second, the requirement on a coordinate that it change simultaneously with the change of its potential (here, T), by analogy with the coordinates of work processes (  , pp. 17-18, 22-23).
These facts raise the necessary question: where was the error made in the derivation of the classical expression for entropy? The answer follows from the analysis of the Carnot cycle (Figure 1) and expression (2): the efficiency, and consequently the irreversibility ω of the process, depends on its path. The same applies to each microcycle in Figure 2. Accordingly, the sum over the microcycles, i.e., the value of integral (5), will differ depending on the path of each microcycle, and the corresponding sums of reduced heats should give different values of integral (7).
It follows that in the derivation of (4) a mistake was made, as a result of which the classical expression for entropy inadequately reflects the essence of the second law of thermodynamics. To correct it, we replace expression (4) with expression (8). In other words:
where the indices i and 0i refer to the microcycles carried out along the upper and lower curves, respectively. Therefore, for any curve lying below the upper curve and above the lower one between points 1 and 2 in Figure 2, instead of expression (6) we obtain the expression
From (8) and (9) it follows that the entropy S, like heat and work, depends on the path of the process; the obvious conclusion is that the only function of state is the internal energy U.
The next question is what constitutes the coordinate of the heat potential T. To find this coordinate, the following consideration is used. As is known, the distinction of heat from work is that work can be directly transformed into any other type of work (mechanical, electrical, etc.), whereas heat, because of the molecular nature of its energy, can be converted into work only through an intermediary (a working substance), i.e., through the expression of the heat microprocess at the macrolevel (  , pp. 60-61). For example, assume that in a Carnot cycle the function of the working fluid is performed by a gas. Having received heat from the heater (the first working substance), the gas (the second working substance) can perform work only via a third working substance, for example, a piston that brings an engine into action or a thermocouple in which an electric current arises. From the above reasoning and the energy conservation law, retaining the former conception of entropy for the new expression of the coordinate of the thermal potential and adding the index n to the entropy in order to avoid confusion, we obtain the following:
where k is the amount of work performed by all the working substances as a result of receiving the heat from the heater.
Substituting the value of T from (10) into (2) we obtain by elementary transformations
Whence it follows that an increase of the new entropy leads to an increase of the irreversibility.
Thus, the entropy acquires a real physical meaning, directly reflecting the meaning of the second law of thermodynamics, and is expressed in plain, experimentally determined coordinates. From (9) and (10) we see that T and Sn are inversely related, whence it follows that an increase of entropy leads to an increase of the irreversibility. This means, according to (1), that with increasing temperature T the efficiency η will grow and, by (2), the irreversibility ω will decrease. Therefore, in the design of heat engines (steam engines, internal combustion engines, nuclear reactors, etc.), raising the heater temperature to reduce the irreversibility is widely used. In this case, the calculated heater temperature is limited only by the properties of the structural materials.
The implementation of the second law of thermodynamics and, as a consequence, the increase of entropy reflect the tendency of potentials toward zero, which allows us to express the second law of thermodynamics in a simple and clear way, both for the statistical expression and for real systems, as
The total measure of the irreversibility of any process is expressed as follows:
Thus, the function that adequately reflects the essence of the second law of thermodynamics is the irreversibility ω, while the new entropy function Sn serves as the coordinate of the thermal potential T, simply and adequately reflecting the essence of heat exchange and real work processes.
As for the statistical expression of the second law of thermodynamics, its value lies, first of all, in the fact that it revealed the cause of process irreversibility: chaotic particle motion leading to the equalization of potentials. At the same time, the mechanical transfer of the statistical entropy expression (15) to the analysis of real systems, without considering the specific interactions between their elements (and, as shown below, the conceptual connections in the case of living systems and the devices that simulate them), leads to fundamentally wrong conclusions. Now let us try to express the statistical form of the second law of thermodynamics in a form similar to expression (13) for the first law, in order to better demonstrate at least the external similarity of these fundamental laws of physics and, at the same time, to better reflect the dynamics of transient processes.
The statistical expression for the second law of thermodynamics is as follows:
where W is the thermodynamic probability, i.e., the number of microstates that implement a given macrostate, taking into account that:
where the indices P and HP denote the equilibrium and nonequilibrium states, respectively. Thus, expression (14) can be written in the following form, similar to that of expression (12):
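The statement that the equilibrium macrostate has the greatest thermodynamic probability W can be checked with a standard textbook model, not taken from the source: N gas particles distributed between two halves of a box, with W counted as the binomial coefficient.

```python
import math

# Illustrative sketch (our construction): thermodynamic probability W as the
# number of microstates of N particles split between two box halves. The
# equilibrium macrostate (even split) has the largest W, so the statistical
# entropy S = k*ln(W) grows as the system relaxes toward equilibrium.

K_B = 1.380649e-23   # Boltzmann constant, J/K

def microstates(n_total, n_left):
    """W = C(N, n_left): ways to place n_left of N particles in the left half."""
    return math.comb(n_total, n_left)

def entropy(w):
    return K_B * math.log(w)

N = 100
w_noneq = microstates(N, 10)   # nonequilibrium macrostate: 10 of 100 on the left
w_eq = microstates(N, 50)      # equilibrium macrostate: even split

print(w_eq > w_noneq)                    # equilibrium has the most microstates
print(entropy(w_eq) > entropy(w_noneq))  # so its entropy is the largest
```

This illustrates the statistical reading of irreversibility discussed above: chaotic motion drives the system toward the macrostate with the largest W.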
3. The Second Law of Thermodynamics and Biology
In our works  -  , the apparent paradox of the physical side of life is successively resolved on the basis of the presented approach. The essence of the paradox is that, according to the present understanding of entropy, plants, on receiving heat from the Sun, increase their entropy while simultaneously improving their own organization and, via trophic chains, the organization of the entire biosphere. Following the suggestion of N.A. Umov (1901), many scientists have tried to explain this contradiction: from the perspective of a third law of thermodynamics (opposite to the second law) specific to life alone (  , pp. 108-109); from the perspective of the openness of systems (L. von Bertalanffy, 1953)  ; from the perspective of the negentropy principle of information (L. Brillouin, 1956) (  , pp. 200-201); and from the perspective of synergetics (H. Haken, 1969) (  , p. 12). On the basis of these works, entire schools (such as those of M. Eigen   and I. Prigogine  ) arose.
However, it seems that the solution to the problem of the physical meaning of life does not depend directly on life's entropic characteristics. As was shown by P.K. Anokhin and N.A. Bernstein, the special quality of life lies in the ability of organisms to make adequate, anticipatory reactions to various effects (sound, smell, light flashes, etc.) before direct contact with the object that causes them (prey, predator, water, mating partner, etc.). The ability to react in advance gives living things great advantages over other types of objects: it allows organisms to escape negative effects and to actively seek conditions that provide for the existence of their own species. However, no one raises the question: what physical conditions must organisms satisfy in order to react in such a way? There are only three such conditions.
1) The thermodynamic condition: the organism must possess potential energy, which must be kept without dispersion for a rather long time in order to provide the work needed for the organism to survive. This condition is satisfied by metastable states, which are widespread in nature; in these states, energy of a high potential is protected from equalization by a potential barrier. A hydroelectric dam is the simplest example of such a barrier: the water before the dam has a store of potential energy as compared to the water behind it. All the diversity of stable isotopes in the periodic table, and the diversity of the world itself, exist due to potential barriers. Otherwise, all the elements of the periodic table would converge toward its center through the synthesis of light elements (a hydrogen explosion) and the decay of heavy elements (a nuclear explosion). Here, multimillion-degree temperatures, or the necessity of bombardment by particles that destroy the structure of the atomic nucleus to provide element-to-element transitions, play the role of potential barriers. In the case of an organism, this energy is stored in basic substances (fat, carbohydrate, and protein); in the case of bacteria, it can be stored in inorganic materials (sulphur, arsenic, iron compounds, etc.).
2) The information condition: an organism must have substances and structures that regulate the release of energy in response to signal-information (a weak but specific energy impulse). In the hydroelectric-dam example, this information is the action of the mechanism that moves the gate preventing water from flowing from the upper level to the lower one. This condition is satisfied by organic and mineral catalysts (supply-line switches, triggers, etc.) that can be brought into, or removed from, contact with the substance via a weak action (for example, a mechanical one), or activated by adding a coenzyme to an enzyme. Such structures (we call them straightors, from the English word "straight") possess the ability to change the state of the potential barrier. On receiving a signal, they can decrease (in the limit, eliminate) or, on the contrary, restore the potential barrier of the metastable state, i.e., they can control the process of energy release. Straightor reactions are highly selective; for example, heating accelerates a great number of chemical reactions, whereas a catalyst accelerates one reaction or a group of reactions.
3) The transformation condition: an organism must possess substances and structures that transform the released energy of a high potential into work on maintaining the organism. Examples are an enzyme molecule, since it performs not only a catalytic function but sometimes also determines (together with the membrane) the direction of the reaction; the kinematic part of a lathe, which transforms engine rotation into certain work; and the osteoligamentous apparatus of the organism, which transforms muscle actions into a great variety of combinations (motions). The property of transforming energy when interacting with other bodies is peculiar to all material objects. In the hydroelectric-dam example, such a transformer is the hydroturbine-electrogenerator block.
A structure that satisfies the above three conditions is a signal element or, in abbreviated form, a siel. The concept of the siel determines the basic concepts that underlie theoretical biology and sciences that describe devices that perform functions that are peculiar to organisms (information theory, cybernetics, general system theory, etc.).
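The three conditions that define a siel can be sketched as a minimal model. The class below is our own illustration, not the authors' formalism; the names, the efficiency value, and the signals are all hypothetical.

```python
# Illustrative sketch of a "siel": (1) stored potential energy behind a
# barrier, (2) a straightor that lowers the barrier only for a specific
# signal, (3) a transformer converting the released energy into work.
# All names and numbers are hypothetical.

class Siel:
    def __init__(self, stored_energy, trigger_signal, efficiency=0.4):
        self.stored_energy = stored_energy    # condition 1: metastable energy store
        self.trigger_signal = trigger_signal  # condition 2: signal selectivity
        self.efficiency = efficiency          # condition 3: transformation losses

    def react(self, signal):
        """A weak, specific signal releases a much larger stored energy."""
        if signal != self.trigger_signal:
            return 0.0                        # barrier stays up: no reaction
        released, self.stored_energy = self.stored_energy, 0.0
        return released * self.efficiency     # work delivered by the transformer

s = Siel(stored_energy=1000.0, trigger_signal="smell")
print(s.react("light"))   # wrong signal: barrier intact -> 0.0
print(s.react("smell"))   # specific signal releases the store -> 400.0
```

Note the asymmetry central to the text: the signal carries negligible energy, while the reaction it triggers is powered entirely by the organism's own store.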
Knowledge. The siel is an elementary structure that "knows" which signal must be reacted to, and how.
Meaning. The siel structure involves not only the physical aspect of the reaction, which is specifically focused on a local operation, but also an elementary semantic aspect. The functions of individual siels are composed of trillions of elementary meanings, functions of organs are composed of siel functions, and functions of the entire organism are composed of the functions of organs.
Control. The siel is an elementary control structure, wherein the small energy of signal-information controls more powerful energy flows.
Program. A program is a structure capable of generating signals for a given organism or automaton under the action of an energy flow. For example, a text is a program that generates information under the action of a light flow; a magnetic tape is a program that generates information under the action of a magnetic flow; etc. A program, like signal-information, is a relative concept. Specifically, a stationary landscape that generates signals for a man and many animals is not a program for a frog, which can see only moving objects. Examples of programs include DNA, RNA, magnetic tape, and laser disks.
The world, in relation to an information device, only contains programs. For example, a geological excavation that shows a particular stage of geological evolution is a program. An energy flow (sun or electric light) is required in order to transform it into signal information. Only then can a geologist obtain signals from which information on the content of the excavation is extracted. It is obvious that real information will always be of a relative character in contrast to Brillouin’s misconception of absolute information (  , pp. 200-201). Indeed, a text will be a source of information for a man rather than an animal; a geological excavation will be the source of rather greater information for a specialist than for a simple viewer; etc.
Organization. It is obvious that organized systems are systems whose existence is provided due to signal elements (siel) involved in systems, i.e., organisms, automata, computers, robots, and their combinations.
Self-organization. In terms of strength, a signal can be many orders of magnitude smaller than the energy of the reaction that occurs on receiving it. In other words, the organism itself provides the reaction with energy and performs its own transformations associated with this reaction, i.e., self-organization actually takes place. Alternatively, by convention, self-organization is the set of required transformations in an organism (or automatic device) associated with the necessity to react not to external actions but to information from the internal programs of the organism (or automatic device), for example, hunger, exploratory needs, and so on.
Determination of the life phenomenon. Organisms must be active in order to provide food, growth, breeding, exploration of new territories, etc. Thus, the ability to react to information is a necessary condition of life, while activity is a sufficient condition. Hence, life is defined as an active signal (informational) form of a system’s existence.
Order (orderliness). Orderliness and organization are radically different concepts. Understanding the essence of organization provides a specific (not intuitive) appreciation of the difference between them. The definition of orderliness given by von Neumann is as follows: the smaller the amount of information required to describe a system, the greater the orderliness of the system. For example, if an infinite number of points in the plane are distributed randomly, then an infinite amount of information is required to describe their positions; indeed, the coordinates of each point must be written out. If the arrangement is regular (for example, a straight line), the notation will be y = kx. Hence, in the case when the system can be described mathematically, the smaller the number of symbols required to express its orderliness, the higher the orderliness of the system. For example, the simplicity of the laws of Space (Newton's law of gravitation, the Coulomb interaction, and Einstein's laws) suggests the high orderliness of Space. The rule of the minimum number of symbols, which characterizes the degree of orderliness, holds as well for a textual description (for example, an algorithm), since the simpler the algorithm, the higher the order of its implementation.
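Von Neumann's criterion (fewer symbols of description, more order) can be approximated in practice by a compressor, since compressed size is a rough proxy for description length. The sketch below is our illustration; the source gives no algorithm.

```python
import random
import zlib

# Illustrative sketch of von Neumann's orderliness criterion: a regular point
# set (points on the line y = 2x, fully described by one rule) should need a
# far shorter description than a random point set of the same size. We use
# zlib-compressed size as a crude stand-in for description length.

random.seed(0)

# "Ordered" system: 1000 points on the line y = 2x.
ordered = ",".join(f"{x},{2 * x}" for x in range(1000)).encode()

# "Disordered" system: 1000 random points in the same coordinate range.
chaotic = ",".join(
    f"{random.randint(0, 2000)},{random.randint(0, 2000)}" for _ in range(1000)
).encode()

len_ordered = len(zlib.compress(ordered))
len_chaotic = len(zlib.compress(chaotic))
print(len_ordered < len_chaotic)   # the ordered set needs a shorter description
```

A general-purpose compressor only bounds the true description length from above (the exact minimum, Kolmogorov complexity, is uncomputable), but the qualitative contrast between rule-governed and random data comes through clearly.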
The set of the above concepts gives a unified operational basis for sciences on life as well as devices that simulate life and allows one to perform calculations and mathematical analysis of corresponding structures  -  .
It may appear that the calculation of complex systems based on siels, which number in the millions, is of little promise. However, there are regularities that organize siels into certain associations; for example, a linear sequence of siels regulated by the accumulation of the end product An (in chemistry, this is called retroinhibition, or feedback inhibition). In this sequence, the constancy of the concentration of An is maintained by the inhibitory action of this concentration on the first straightor in the chain, f1:
Conceptually, an automated transfer line operates according to the same scheme: the work of the line stops when there are enough parts in storage. Then each pair fi-Ai is a siel, where Ai is the energy source in the metastable state. Thus, the number of siels (n − 1) can be calculated in the structure of the next hierarchical level. In  -  , it was shown that this structure, as well as structures of higher orders, is repeated regularly at various levels (from the molecular to the organismic and the national). This fact, together with the rapidly growing power of computers, makes this approach workable for analyzing complex systems.
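The self-stabilizing behavior of the retroinhibition chain described above can be checked with a toy simulation. All rate constants below are hypothetical; only the feedback structure (the end product An throttling the first straightor f1) follows the text.

```python
# Illustrative simulation of a feedback-inhibited chain: the end product A_n
# inhibits the first step f_1, so its concentration settles at a steady value
# regardless of the starting point. Rate constants are hypothetical.

def simulate(steps=5000, dt=0.01, k_in=1.0, k_out=0.5, k_inhib=2.0):
    a_n = 0.0                                       # end-product concentration
    for _ in range(steps):
        production = k_in / (1.0 + k_inhib * a_n)   # f_1 throttled by A_n
        a_n += dt * (production - k_out * a_n)      # consumption drains A_n
    return a_n

final = simulate()
late = simulate(steps=10000)
print(abs(final - late) < 1e-6)   # the concentration has reached a steady state
```

Doubling the simulated time changes nothing: the inhibition loop holds An at its steady state, which is exactly the constancy-maintaining behavior the text attributes to such chains.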
For example, the recommendation of specialists on some question, before being adopted as a decision by senior management, must be approved by the head of their team, undergo a series of approvals, and then return to the first link, with an indication of its sufficiency or of the need for improvement.
Thus, the set of the above concepts gives a unified operational basis for biophysics, theoretical biology, information theory, cybernetics, and general systems theory and allows one to perform calculations and mathematical analysis of the corresponding structures. In addition, the ability of computers to perform intellectual operations despite the difference in the elements that compose them (electronic tubes, ferrites, semiconductors, cryotrons, etc.) suggests that the same idea of life can be implemented in different materials; this makes the search in Space for life that is identical to Earth life in its material unpromising.
Now let us consider another fundamental position of synergetics, the so-called "negentropy principle of information" developed by the physicist L. Brillouin. The fallacy of this principle is revealed at the very beginning of its derivation. To begin with, information in information theory is understood as signals entering an automatic device and, in the simplest case, reducing the initial number of digits (characters) from the set P0 to some single digit (character), i.e., to P1 = 1, to be transmitted via a communication channel. Thus, in the case of equal probability of each character, the value of the information is measured as
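The measure described above is the standard Hartley/Shannon form for equiprobable alternatives, I = log2(P0/P1); the source's own equation is not reproduced in the text, so the sketch below assumes this standard form.

```python
import math

# Information gained when P0 equally likely alternatives are narrowed down
# to P1 (in the simplest case P1 = 1, a single received character):
#   I = log2(P0 / P1)  bits.

def information_bits(p0, p1=1):
    """Bits gained when p0 equiprobable alternatives shrink to p1."""
    return math.log2(p0 / p1)

print(information_bits(32))      # 32 equiprobable symbols -> 5.0 bits
print(information_bits(256, 2))  # 256 alternatives narrowed to 2 -> 7.0 bits
```

Note that this measure counts selections among alternatives only; it says nothing about the meaning of the selected symbol, which is the crux of the criticism that follows.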
Brillouin called this conclusion the "negentropy principle of information." Thus, a bridge was allegedly built between information theory, on the one hand, and thermodynamics, on the other, although the first span of that bridge had already been destroyed by the introduction of the meaningless "free information." This conclusion has been widely used in biophysics, theoretical biology, information theory, cybernetics, systems theory, synergetics, and so on. For example, one of the founders of synergetics, Haken, wrote that "Shannon information is closely related to statistical entropy as introduced by Boltzmann" (  , p. 12). This led to the emergence of a large scientific direction holding this view, which has generated dozens of monographs and a huge number of scientific articles, including works by M. Eigen   and I. Prigogine  , and has filled the entire Cosmos with "information"  .
At the same time, it is obvious that, on top of everything else, L. Brillouin's conclusion, like Boltzmann's, refers to a structureless ideal gas and does not apply to organisms, whose properties are determined by their specific structures: namely, individual siels and the higher-level structures built on their basis, which allow organisms to select, from the vast range of disturbances coming from the internal and external environment, the signal-information needed for survival and reproduction.
4. Program for the Evolution of the Cosmos
The problem of the evolution of the Universe and of life within it remains not only the most urgent scientific problem but also a worldview problem. Cosmology speaks of the emergence and evolution of celestial bodies in terms of the fluctuational emergence of gravitational centers and the random collisions of the formations arising on their basis. Darwinism, and the synthetic theory of biological evolution (STE) that arose on its basis, are built on the ideas of random chemical reactions that led to the emergence of the simplest organisms, followed by their further evolution through random mutations and selection. According to Prigogine, "Our universe has followed a path involving a succession of bifurcations. While other universes may have followed other paths, we are fortunate (?!-M. Sh.) that ours has led to life, culture, and the arts" (cited from  , p. 198). At the same time, I. Prigogine overlooks the fact that this "fortune" has been smiling on us for almost four billion years since the emergence of life on the Earth. During this time, the Earth's climate has changed several times: periods of global warming alternated with ice ages; the Moon, moving away from the Earth, changed the tilt of the Earth's axis and its rotation speed; the composition of the terrestrial atmosphere changed; continents drifted; volcanoes erupted; huge meteorites fell on the Earth; and so on.
Yet the evolution of life proceeded steadily, and at each subsequent stage ever more advanced forms arose. For example, it is believed that the fall of a huge meteorite to the Earth about 66 million years ago led to the death of the dinosaurs and thereby created favorable conditions for the evolution of mammals and, as a result, for the appearance of modern man.
That is why it seems that René Thom, the French mathematician and philosopher, quite reasonably argued against Prigogine when he asked, “Why is the word ‘accident’ better than ‘fate,’ ‘fortune,’ or ‘God’s will’?” Moreover, for Thom fluctuations can only act as a factor that triggers the process of self-organization but does not determine it (  , p. 143). We consider two extreme cases of bifurcations that confirm Thom’s thesis. The simplest of them is the example of a supercooled liquid: virtually any bifurcation in the form of even a weak influence on the liquid can lead to nothing other than its crystallization. Similarly, in another, much more complicated case, a bifurcation consisting in the penetration of the ideas of the Western world order that provide high standards of living can only be realized if the cultural level of the population is sufficiently high.
In order to resolve the arising contradictions, let us turn to some known facts and the conclusions drawn from them. Josh McDowell wrote: “In his book Science Speaks Peter Stoner calculated… the probabilities of the prophecies concerning Samaria, Gaza and Ashkelon, Jericho, Palestine, Moab and Ammon, Edom, and Babylon.” He also emphasizes that no human being has ever made prophecies that approach those under consideration and that were fulfilled with such accuracy. The time interval between these prophecies and their fulfillment is so great that even the harshest critic cannot argue that they were made after the predicted events (  , p. 255). It should be noted that, as historical studies show, prophecies about Memphis and Thebes, the ancient capitals of Egypt, about Assyria and its capital, Nineveh, etc. were also fulfilled (  , pp. 249-309). Let us recall one of the ancient predictions given in the Bible that miraculously came true in our time, before the eyes of the older generation. The Lord God says that when the Jews “shalt return unto the LORD thy God, and shalt obey His voice, then” “if any of thine (people of Israel, M. Sh.) be driven out unto the outmost parts of heaven, from thence will the LORD thy God gather thee, and from thence will he fetch thee… And the LORD thy God will bring thee into the land that thy fathers possessed and thou shalt possess it; and he will do thee good, and multiply thee above thy fathers. … And the LORD thy God will put all these curses upon thine enemies, and on them that hate thee, that persecuted thee” (Deuteronomy 30:2, 4, 5, 7). This prophecy was given more than a millennium and a half before the Jews were deprived of their homeland after being defeated by Rome in 70 and 135 AD and were driven “unto the outmost parts of heaven,” and almost 3500 years before the state of Israel was founded.
Purely scientific research also confirms the existence of prophecies. For instance, eminent American physicists began to study the problems of psychic practice and performed numerous experiments to verify the existence of the phenomenon of telepathy. They reliably established not only the reality of telepathy but also the fact that a psychic guessed the content of a transmission a few minutes, and sometimes even a few days, before a random number generator chose the number of the experiment for him to guess (  , p. 80). Similar results were obtained by Professor V. Kaznacheev (see  ). Many thousands of predictions that have been verified by the scientific community were made by such famous prophets as Edgar Cayce, Jeane Dixon, Vanga, and others  .
However, behind the evident fact of the prediction of even an individual fate lies a vast array of events on a cosmic scale. This conclusion follows from the observation that every person’s fate is connected by millions of threads with the fates of other people, both known and unknown to him. Our destiny can be affected by a chance meeting, by the election of a political figure in our country or even abroad, by decisions made by administrators, statesmen, and physicians, and, via information flows, by myriads of other events. In turn, the behavior of all people depends on changes in natural conditions: people are always physically affected by the season and the weather. Moreover, not only the well-being of individuals but also the course of historical processes is largely determined by the variable pattern of the appearance of sunspots, as was shown by A.L. Chizhevskii  . In turn, the state of the Sun is influenced by our Galaxy, and the latter depends on the state of the Metagalaxy. Hence it follows that the contents of the prophecies implicitly took into account the entire dynamics of life and of cosmic processes, including the dynamics of changes in dark energy and dark matter, which were still unknown to science at that time, as well as the impacts of other, not yet discovered, factors. Consequently, if the future was predicted to even one person, this implies that the prophecy implicitly took into account the entire (not yet realized) dynamics of changes in the Cosmos over the period from the moment of the prophecy to the moment of its fulfillment. This points to the existence of a Program for the evolution of the Cosmos, written down minutely and in as much detail as the destiny of a single man; it is from this Program that a psychic somehow reads the fate of a man, a country, or humanity, or learns about upcoming natural disasters.
However, hardly any reasonable person would believe that a Program can exist without a Programmer. Indeed, the Bible says much the same: “… tell us what the future holds, so we may know that you are gods” (Is. 41:23). It must be remembered, however, that everything new is well-forgotten old, and the existence of the Program was known to Jewish Cabbalists, Indian yogis, Muslim theologians, Taoists, and other mystics  . Implicitly, Christianity also expresses a judgment about the existence of such a Program. Speaking about the program, physicist Y.I. Kulakov recalled the words of the Apostle and Evangelist John: “In the beginning was the Word, and the Word was with God, and the Word was God. The same was in the beginning with God. All things were made by him; and without him was not anything made that was made” (John 1:1)  . In this regard, we should note that the word (Greek “logos”), according to Heraclitus of Ephesus, means “universal law” or “the foundation of the world”  and can be understood as “the law” or “the program” according to which the world is created  . However, if a Program for the entire Cosmos exists, it can only be developed by a Person endowed with a wish, a will, and the capacity for its implementation, i.e., by Him whom we call God  . Thus, based on these facts, we can address the issue of the primacy of Mind or Matter.
Naturally, this raises the question of man’s free will. Kulakov writes: “A law carries the idea of necessity. A program, by contrast, has an element of freedom. Different programs can exist within the same laws… In contrast to a law, a program can be changed and even destroyed” (  , p. 148). Of course, the question may arise: how can the desires and actions of different people, with their free will, be reconciled with a Program? However, if a good manager of a company implementing a certain program is able to cope with this problem, should it be an overwhelming task for our Lord, the master of space and time, with whom “a day is like a thousand years, and a thousand years are like a day” (2 Peter 3:8)? At the same time, one of the main instruments for both a top manager and the Higher Reason is encouragement or punishment for the respective actions. According to Daniil Andreev, “some key events of major processes reside in the future seemingly firmly as predetermined points. However, they are extremely few, and even those can assume different forms and various degrees of desired completeness when they are implemented in history” (  , p. 512). Hence it follows that the Program leaves the path to the key points to the free choice of different people and minds; in fact, the extent of this freedom is greater the higher their intelligence. But what about the cases of detailed predestination, as in the examples presented above? In these cases it can be assumed that either those were the key points of the Program, or the detailed prophecies were prompted by some higher reason that sought to show nonbelievers that not everything is determined by their own will.
Examples of the interaction between laws and mind are provided by the most complex technical devices, such as the American Pioneer and Voyager probes that for decades have been plying the Cosmos and transmitting information to the Earth from a distance of billions of miles. They contain complex mechanisms and self-adjusting programs running on the basis of the laws of physics. At the same time, their work is periodically controlled by the human mind that created them and directs their actions in accordance with its own program. This is all the more true of the Cosmos created by the Higher Reason: it is in fact an absolutely perfect giant mechanism, self-regulated by laws that provide for the implementation of a Program and only occasionally needing the attention of this Reason, which allows scientists to investigate the laws of the Cosmos while ignoring, for the time being, the presence of Reason in it. This was realized, for example, by the eminent astrophysicist S. Hawking who, calling himself a positivist, said that “one is left with an origin of the universe that is apparently beyond the scope of science” (  , p. 87). Or: “general relativity predicted that time would come to an end inside a black hole… However, both the beginning and the end of time would be places where the equations of general relativity could not be defined. Thus the theory could not predict what should emerge from the big bang. Some saw this as an indication of God’s freedom to start the universe off in any manner (emphasis added, M. Sh.)” (  , p. 32). Everything new is well-forgotten old, and we may turn back to Sir Isaac Newton, who wrote that one could know God from the study of nature and the study of history (  , p. 67).
5. What Can the “Program” Approach Give to Science?
This approach allows, in particular, a new way of posing the problem of biological evolution. To begin with, it is known that protein-encoding genes comprise only 1.1%-1.4% of the human genome (  , p. 86). Based on this fact, the famous researchers F. Crick and J. Watson, who are recognized as the authors of the greatest discovery of the 20th century, readily believed that 98.6% of the genome is “genetic garbage.” This viewpoint was based on the conclusion of the theory of modern evolutionary synthesis (Darwinism) that the remaining DNA is a chaos of random mutations accumulated on the way from protozoa to man and no longer used in his body. However, further research gradually revealed that an ever increasing part of the DNA was highly significant genetically. The orderly rather than chaotic nature of the evolutionary path is confirmed by the results of the following research: “Some 120 scientists, all specialists, prepared 30 chapters in a monumental work of over 800 pages to present the fossil record for plants and animals divided into about 2500 groups… Each major form or kind of plant and animal is shown to have a separate and distinct history from all the other forms or kinds! Groups of both plants and animals appear suddenly in the fossil record… Whales, bats, horses, primates, elephants, hares, squirrels, etc., all are as distinct at their first appearance as they are now. There is not a trace of a common ancestor, much less a link with any reptile, the supposed progenitor” (  , p. 34). However, according to V.I. Vernadskii and, later, A.A. Lyubishchev and the holists, it was biocenoses rather than individual species that should have appeared at once, because a species (except for several protozoans) cannot survive alone (  , p. 298).
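The arithmetic behind the “genetic garbage” claim can be made explicit. The short sketch below assumes the commonly cited human genome size of about 3.1 billion base pairs (a figure not stated in the text itself) and applies the 1.1%-1.4% coding share quoted above:

```python
# Back-of-envelope check of the protein-coding share of the human genome.
# ASSUMPTION: genome size ~3.1e9 base pairs (commonly cited estimate,
# not stated in the text above).
GENOME_BP = 3.1e9
CODING_LOW, CODING_HIGH = 0.011, 0.014   # the 1.1%-1.4% coding share

coding_bp_low = GENOME_BP * CODING_LOW    # base pairs that encode protein
coding_bp_high = GENOME_BP * CODING_HIGH
noncoding_share = 1.0 - CODING_HIGH       # the part once dismissed as "junk"

print(f"protein-coding DNA: {coding_bp_low / 1e6:.0f}-{coding_bp_high / 1e6:.0f} Mbp")
print(f"non-coding share: {noncoding_share:.1%}")
# -> protein-coding DNA: 34-43 Mbp
# -> non-coding share: 98.6%
```

That is, only a few tens of megabases code for protein, while the remaining roughly 98.6% is the portion that was once written off as “junk.”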
Similar ideas based on paleontological facts were also expressed by Darwin’s contemporaries, to whom he repeatedly replied along the lines that “If numerous species that belong to the same genera or families really came into life at once, this fact would be fatal to the theory of evolution through natural selection…” (  , pp. 326, 329, 331).
To address this issue, we consider the problem of biological evolution, beginning with the simultaneous emergence of new biocenoses. These facts immediately pose several questions. If life forms do not originate from other life forms, as is assumed in the modern evolutionary synthesis, what is a new biocenosis produced from, and how does it emerge? Why does it begin to form shortly before changes in geoclimatic conditions? Why do more sophisticated types of species appear at the top of each subsequent biocenosis? The following assumption can answer these questions. If biocenoses appeared at once (i.e., over a time that is short compared with the period of their existence, the Cambrian explosion being an example  ), this implies that there were numerous acts of creation. Moreover, if more sophisticated types of species appeared at the top of each subsequent stage of evolution, this implies that the creators of evolution (“Then God said, ‘Let us make mankind’” (Gen. 1:26)), who do not possess omniscience (“However, about that day or hour no one knows, not even the angels in heaven, nor the Son, but the Father alone” (Matthew 24:36)), learn and improve their creativity based on the experience of earlier stages. There are many other facts that support this assumption (preadaptation, prophetic phases, trends that sometimes even go beyond reasonable limits, etc.).
These ideas formed the basis of our work and allowed us to construct a new synthetic scientific-religious hypothesis of biological evolution that is fundamentally different both from the religious-fundamentalist and from the traditional scientific (modern evolutionary synthesis) understanding  . On the basis of this hypothesis, we proposed a new approach to genome sequencing that consists in developing its underlying ideas (  , pp. 235-242). This was prompted by the fact that, according to various recent data, 70 to 90% of the genes in the human genome have roles that are still unclear, even though the genome has been studied by numerous research teams. At the same time, each new discovery suggests that each newly studied gene performs a specific function. Obviously, the idea of a programmed design of increasingly complex genomes could point researchers towards a much more rational way of sequencing. This assumption is supported by the similarity (long ago noticed by scientists) of the genomes from E to E (from elephants to E. coli). From this standpoint, sequencing should be started from the simplest organisms, whose genomes are thousands of times shorter, in which there are no introns (i.e., non-coding sections of a gene), and in which the percentage of genes encoding proteins increases to 80% and more (  , p. 86). This makes the sequencing of such a genome simpler by many orders of magnitude than that of the human one. It follows that the study of the simplest genomes would greatly facilitate clarification of ideas related not only to the synthesis of protein but also to the general purpose of other genes that have much in common with them. The next step could be to study the role of the simplest intron inclusions as they appear in the genomes of simple organisms, etc.
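The scale argument for starting with the simplest organisms can be sketched numerically. The genome sizes below (3.1 Gbp for the human genome, about 0.58 Mbp for Mycoplasma genitalium, one of the smallest known genomes) are standard published estimates introduced here for illustration; they are not figures from the text:

```python
# Scale comparison behind the "simplest first" sequencing proposal.
# ASSUMPTIONS: standard published genome-size estimates, used only
# for illustration -- they are not taken from the text above.
HUMAN_BP = 3.1e9        # human genome, base pairs
MYCOPLASMA_BP = 5.8e5   # Mycoplasma genitalium, among the smallest genomes

length_ratio = HUMAN_BP / MYCOPLASMA_BP   # "thousands of times shorter"
coding_density_gain = 0.80 / 0.012        # 80% coding vs ~1.2% in humans

print(f"human genome is ~{length_ratio:.0f}x longer")
print(f"coding density in the simplest genomes is ~{coding_density_gain:.0f}x higher")
# -> human genome is ~5345x longer
# -> coding density in the simplest genomes is ~67x higher
```

The two factors compound: a genome several thousand times shorter, almost all of which codes for protein, is a far more tractable object for clarifying the principles of genome organization.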
After the principles (ideas) that underlie the structure of the simplest genomes have been clarified, the transition to more complex ones would involve fewer unsolved mysteries, which would greatly facilitate the study of their specifics, etc. The proposed method of sequencing would allow one to identify the general principles of evolution written in the genome, not only for species but also for biocenoses and the biosphere as a whole.
Something similar is typical of attempts to study the functioning of the human brain. Comparison shows that, while the genome consists of about three billion functional units, the human brain consists of one hundred billion functional units, i.e., neuron cells. Moreover, if we take into account that every neuron receives and processes information from several thousand input dendrites and feeds the result to the output axon, which in turn transmits it to the dendrites of other cells, we obtain a structure whose complexity is by no means inferior to that of the genome. Furthermore, if we take into account that, unlike the relatively static genome, the connections of neurons are constantly changing, the resulting combinatorial space of interactions exceeds any imaginable value. According to the neurophysiologist K.V. Anokhin, scientists managed, using the most powerful computers, to simulate the behavior of a nematode worm, whose brain contains about 300 neurons communicating through 6000 contacts; further work faces almost insurmountable difficulties. The scientific foundation for the field of brain research is the modern evolutionary synthesis. However, the viewpoint presented above could yield an utterly realistic approach based on the common program ideas laid down by evolution in the control systems of the human brain and of the simplest unicellular organisms, such as bacteria or mycoplasma. This approach appears all the more justified because it took evolution about half of the time of the world’s existence, about two billion years, to create unicellular organisms. This prompts a strategy of concentrating attention on the principles inherent in the control systems of the simplest organisms and, after decoding them, moving gradually to more and more complex unicellular and multicellular organisms, up to the ultimate decryption of the structure and functioning of the human brain.
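The gap between the simulated nematode brain and the human brain can be put in rough numbers. The sketch below uses the figures given in the text (one hundred billion neurons, "several thousand" inputs per neuron, and a nematode brain of about 300 neurons with 6000 contacts); taking "several thousand" as a round 10,000 is an assumption made here for the estimate:

```python
# Rough size comparison of the two "control systems" discussed above.
# Figures follow the text: 1e11 neurons, "several thousand" inputs each
# (taken here as a round 1e4 -- an assumption), and a nematode brain of
# ~300 neurons with ~6000 contacts.
HUMAN_NEURONS = 1e11
SYNAPSES_PER_NEURON = 1e4        # assumed round value for "several thousand"
NEMATODE_NEURONS = 300
NEMATODE_CONTACTS = 6000

human_connections = HUMAN_NEURONS * SYNAPSES_PER_NEURON
ratio = human_connections / NEMATODE_CONTACTS

print(f"human brain: ~{human_connections:.0e} connections")
print(f"that is ~{ratio:.1e} times the simulated nematode connectome")
# -> human brain: ~1e+15 connections
# -> that is ~1.7e+11 times the simulated nematode connectome
```

Even before accounting for the constantly changing connections, the static connection count alone is some eleven orders of magnitude beyond the largest system simulated so far, which is why a direct assault on the human brain appears hopeless without first decoding the simplest control systems.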
Thus, the author suggests that the information and ideas presented in this work will make it possible to radically change the use of the second law of thermodynamics, to eliminate a number of incorrect directions of its application both in thermodynamics and in several areas of theoretical physics, theoretical biology, and the sciences modeling the behavior of living organisms, and to lay the foundations of a new worldview concerning the Universe, its evolution, and the place of man in it.