Thermodynamics of Life

Biology is currently plagued by several fossil concepts that may be responsible for the current stagnation in medicine. Through a careful screening of the origins of thermodynamics, the following fossil concepts have been identified: the assumption that heat is a form of energy, the assimilation of entropy to disorder, the assimilation of death to states of maximum entropy, the assimilation of ATP to the energy currency of living cells, the non-recognition of entropy as a state function of the whole universe, the belief that free energies are another kind of energy, self-reference in the definition of life, ignorance of basic principles of quantum physics and more particularly of the importance of intrinsic spin, confusion between three different forms of reversibility, and the non-recognition that irreversibility is at the heart of living systems. After stowing these concepts in the cabinet of useless and nasty notions, a fresh new look is proposed, showing how life is deep-rooted, through the entropy concept, in quantum physics on the one hand and in cosmology on the other hand. This suggests that life is not an emergent property of matter, but rather that it has always been a fundamental property of a universe filled with particles and fields. It is further proposed to dismiss the first law (energy = heat + work) and the third law (entropy decreases to zero at zero kelvin) of thermodynamics, retaining only Boltzmann's clear definition of entropy in terms of the multiplicity of microstates Ω, S = kB·ln Ω, and the second law in its most general form, applicable to any kind of macrostate: ΔSuniv ≥ 0. On this ground, clear definitions are proposed for life/death, healthiness/illness and for thermodynamic coupling.
The whole unfolding of life in the universe (Big Bang → Light → Hydrogen → Stars → Atoms → Water → Planets → Metabolism → Lipids → RNAs → Viruses → Ribosome → Proteins → Bacteria → Eukaryotes → Sex → Plants → Animals → Humans → Computers → Internet) may then be interpreted as a simple consequence of a single principle: ΔSuniv ≥ 0. We thus strongly urge biologists and physicians to change and adapt their ideas and vocabulary to the proposed reformulation, for a better understanding of what life is and, as a consequence, for better health for living beings.


INTRODUCTION
Some time ago, it was argued that scientific knowledge has generated, during its rapid expansion, a certain number of conceptual fossils.1 Among the identified fossils are: Newton's three laws, action at a distance in physics, the existence of several forms of energy, space 'full of nothing' but endowed with properties, hysteresis curves in ferromagnetism, and entropy as a measure of disorder. It is worth noting that such fossils exist because they are vestiges of ways of thinking that are no longer adapted to modern scientific knowledge. The trouble is that these fossils are still alive and well in the world of scientific teaching, and that they are the first creatures met by young students learning mechanics, electromagnetism, thermodynamics, chemistry and biology. Sprinkled with the dust accumulated over eons, fossils still haunt nostalgic scientific minds writing publications or books. The field most plagued with fossil thinking is obviously biology and, by extension, medicine. Conversely, the field least contaminated by fossils is physics, owing to the occurrence of two great revolutions: general relativity and quantum mechanics. Fossils also spontaneously contaminate thermodynamics and chemistry, but as soon as such scientists become acquainted with quantum physics, the contamination disappears quickly. The trouble is that biologists and physicians are hardly trained in quantum physics and thus have minimal chance of stowing their fossils in the cabinet of useless concepts.
This is very unfortunate, as biology and medicine have to deal with the life phenomenon, a hassle not encountered in chemistry or physics. However, thermodynamics is a way of thinking shared by physics and chemistry on the one hand and by biology and medicine on the other hand. So there is a good chance that, by focusing on thermodynamics, biologists and physicians may be able to make their own revolution and cast a firm, non-fossil bridge over the chasm separating inert from living matter (see figure 1 in reference [2]). In the following, we will address the problem starting from first principles, with the aim of obtaining a clear picture of how life appeared on Earth without any violation of the second law. It is our feeling that some conceptual fossils that ought to be exorcized currently hinder useful progress in biology and medicine. By stowing these fossils in their right place, one may hope to initiate the same kind of revolution that affected chemistry and physics at the dawn of the twentieth century. The basic aim here is not to introduce totally new, yet unknown concepts, but rather to reinterpret ancient ones in the light of quantum theory and at the scale of the whole universe. In the newly proposed paradigm, life should no longer be perceived as a highly improbable event, but rather as an inexorable consequence of the universe's birth some 14 billion years ago.

LIFE AND DEATH
One of the biggest fossils that plagues thermodynamics is the assimilation of entropy with disorder.
Every scientist, even the most brilliant one, may be tempted to use such a misconception either in teaching or in research. In fact, the misconception arises as soon as Boltzmann's relationship S = kB·ln Ω is not recognized as one of the most fundamental principles ruling the universe's evolution, starting from inert matter and ending up in living matter and consciousness. Having an unclear idea of what is lurking behind the Greek letter Ω is the main hurdle that prevents a good understanding of what entropy is. Being ignorant of the real nature of entropy triggers a quasi-automatic switch of attention towards a closely related concept: energy E.
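Boltzmann's relationship can be made concrete with a toy computation. The sketch below (plain Python, purely illustrative values) evaluates S = kB·ln Ω for a hypothetical system of N two-state particles, where the multiplicity of the macrostate "k particles in the upper state" is the binomial coefficient C(N, k):

```python
from math import comb, log

k_B = 1.380649e-23  # Boltzmann constant, J/K

def entropy(omega: int) -> float:
    """Boltzmann entropy S = k_B * ln(Omega) of a macrostate of multiplicity Omega."""
    return k_B * log(omega)

# Toy system: N two-state particles; the macrostate "k particles up"
# has multiplicity Omega = C(N, k).
N = 100
entropies = {k: entropy(comb(N, k)) for k in range(N + 1)}

# The macrostate of largest multiplicity (k = N/2) has the largest entropy;
# a macrostate realized by a single microstate (Omega = 1) has zero entropy.
k_max = max(entropies, key=entropies.get)
print(k_max)          # 50
print(entropies[0])   # 0.0
```

Nothing here appeals to "disorder": entropy is simply a count of how many microstates are compatible with a given macrostate.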
A good starting point is to spend some time on a crucial question: what is life? One of the most obvious answers was provided by the Greek philosopher Aristotle some 2,400 years ago, who noticed that spontaneous motion is an essential attribute of any living thing. And as soon as motion is identified with life, the following logical inference arises: "Quo citior motus, eo magis motus", stating that the faster a motion, the more of a motion it is. This innate property of motion then enters into deep resonance with the fact that the more life does, the more life it is.3 Then came the great Sir Isaac Newton, showing that motion may be changed by applying forces (vis impressa) that could be viewed either as a temporal gradient, f = dp/dt, of the amount of motion p = m·v (where v is the velocity of a given mass m), or as a spatial gradient, f = -dE/dr, of a potential energy E. Later, correcting Descartes's misconception of the amount of motion, Gottfried Leibniz introduced his vis viva, meaning "living force", which was not a force at all but rather kinetic energy (E = p²/2m). With the hope of divorcing from Aristotle's duality between actuality (observed motion) and potentiality (virtual motion), it was finally decided to consider a single unifying theoretical concept, energy, that has a single manifestation in time (kinetic energy related to mass) and many manifestations in space (potential energies related to abstract fields derived from the presence of masses or electrical charges).
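The two equivalent views of force recalled above, f = dp/dt and f = -dE/dr, together with the vis viva E = p²/2m, can be checked numerically. The following sketch uses an arbitrary one-dimensional harmonic potential; all numbers are illustrative, not tied to any real system:

```python
# Check f = -dE/dr for a one-dimensional harmonic potential E(r) = k r^2 / 2,
# whose exact force is f = -k r (all values illustrative).
k = 4.0   # spring constant
m = 2.0   # mass
r = 1.5   # position

def E(x: float) -> float:
    return 0.5 * k * x**2

h = 1e-6
f_spatial = -(E(r + h) - E(r - h)) / (2 * h)   # central-difference gradient
f_exact = -k * r                               # -6.0

print(abs(f_spatial - f_exact) < 1e-6)  # True

# Leibniz's vis viva: kinetic energy expressed through momentum, E = p^2 / (2m)
p = 6.0
print(p**2 / (2 * m))  # 9.0
```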
Consequently, with energy responsible for motion and with motion being an obvious attribute of life, an obvious connection between energy and life could be established; it is still perpetuated in modern biological thinking, where every event is analyzed in terms of available energy supposed to be stored in the "high-energy" part of a molecule named adenosine triphosphate (ATP). The thesis defended here is that such a view is just a highly fossilized dogma preventing us from really understanding what life is, and one of its most deadly manifestations: cancer. The trouble with such a dogma is that quite great minds have been obliged to engage in incredible intellectual contortions to explain what life is. A most prominent contribution was obviously Erwin Schrödinger's introduction into biological thinking of a totally new, crazy concept nicknamed negentropy.4 Schrödinger's reasoning is that, entropy being disorder, a living organism, an obviously ordered thing, avoids decay by eating, drinking and breathing, that is to say through the existence of a metabolism. Being an expert in physics, he knew perfectly well that any calorie is worth as much as any other calorie and that the overall energy content of an organism is stationary, as is its material content. Here is an exact quote on how he was finally led to introduce this new concept: Everything that is going on in Nature means an increase in the entropy of the part of the world where it is going on. Thus, a living organism continually increases its entropy and thus tends to approach the dangerous state of maximum entropy, which is death. It can only keep aloof from it, i.e. alive, by continually drawing from its environment negative entropy, which is something very positive, as we shall immediately see. What an organism feeds upon is negative entropy.
Two parts of this argument are slippery. The first misconception is that the second law of thermodynamics, stating that entropy is doomed to always increase in time, does not concern a part of the world but the universe taken as a whole. The second misconception is to associate death with a state of maximum entropy. This is just utterly wrong, since assuming that life is motion means that a state of maximum entropy is also a state of maximum motion. To keep coherence with the association of motion and life, one should state that death, i.e. the absence of motion, is better associated with the crystalline state observed close to a temperature of 0 K, corresponding to a state of null entropy (Nernst's theorem). Accordingly, every physician knows that just after death the body undergoes a transition from a gel state of high entropy towards a fully rigid state of lower entropy, named rigor mortis.
Subsequent decomposition, corresponding to an increase in entropy with liquefaction and gas escape, should be attributed to an intense activity of microorganisms that use the dead corpse as a source of food. One may also, by using suitable chemical compounds, inhibit such microbial activity. In that case, the dead body increases its rigidity until reaching the mummy state, where crystallinity becomes so high and entropy so low that the dead body can remain unaltered, with full exquisite structural details, during thousands of years for humans and during millions of years for animal fossils.
Obviously, Schrödinger, being an expert in theoretical physics with absolutely no experience in medicine, cannot be blamed for the second mistake, made by trying to associate death with states of maximum entropy. The need for a sign reversal in entropy is in fact a logical conclusion of such a wrong initial assumption. But, despite distilling fundamentally wrong ideas into biology, Schrödinger's little book has been greatly influential, inspiring a number of pioneers of molecular biology who took for granted that the origin of life is the same thing as the origin of replication. However, for scientists thinking that metabolism was more central to life than replication, Schrödinger's book was just a sword cutting through water. Quoting for instance Linus Pauling, Nobel Prize in Chemistry (1954): When I first read this book, over 40 years ago, I was disappointed. It was, and still is, my opinion that Schrödinger made no contribution to our understanding of life.5 Concerning Max Ferdinand Perutz, Nobel Prize in Chemistry (1962):

Sadly, however, a close study of his book and of the related literature has shown me that what was true in his book was not original, and most of what was original was known not to be true even when the book was written (…).
The apparent contradictions between life and the statistical laws of physics can be resolved by invoking a science largely ignored by Schrödinger. That science is chemistry.6 Finally, for the theoretical physicist Freeman H. Dyson, Henri Poincaré Prize (2012): Schrödinger's account of existing knowledge is borrowed from his friend Max Delbrück, and his conjectured answers to the questions that he raised were indeed mostly wrong. Schrödinger was woefully ignorant of chemistry, and in his isolated situation in Ireland he knew little about the new world of bacteriophage genetics that Delbrück had explored after emigrating to the United States.7 In fact, Schrödinger's view was more oriented towards viruses, which are just replicating molecules, than towards living cells, which can reproduce owing to the existence of a metabolism.
Alas, Schrödinger was the recipient of the Matteucci Medal (1927), the Nobel Prize in Physics (1933) and the Max Planck Medal (1937). At this level of honors, everything you say is taken as golden words, even when these words are expressed in a domain very far from your field of expertise. A striking example of the paralyzing effect of Schrödinger's two mistakes is provided by this passage of Szent-Györgyi's little book on water and cancer (chapter IV, p. 40):3 The more life does, the more life it is; the more negative entropy is liberated, the more can be retained of it. Life supports life, function builds structure, and structure produces function. Once the function ceases, the structure collapses; it maintains itself by working. A good working order is thus the more stable state. The better the working order, the greater its stability and probability. In inanimate systems the most stable state is at the minimum of free energy and maximum of entropy. This is 'physical stability'. In living systems the opposite is true. The greatest stability is at the maximum of free energy and minimum of entropy, which corresponds to the best working order. This is 'biological stability'.
Again, the man writing these words was recipient of the Nobel Prize in Physiology and Medicine (1937) and of the Albert-Lasker Prize (1954).
The first of these statements is a pretty good example of a circular argument, that is, an argument that assumes the conclusion as one of its premises. Such statements should be systematically avoided, owing to their inevitable evolution towards vicious circles, chains of events in which the response to one difficulty creates a new problem that aggravates the original difficulty. The difficulty of making progress in medicine nowadays may be directly related to this first circular argument, where life is defined as being life. The last statement is just the consequence of Schrödinger's initial mistake. Here we are facing a wrong argument, as anybody well trained in thermodynamics knows that a state of maximum free energy is always unstable, i.e. never stable. Concerning the last sentence, it is again worth stressing that in thermodynamics the state of minimum entropy is the crystalline state, a state where no kinetic energy is available to perform work. If undisturbed, a crystal will remain a crystal for eternity, with absolutely no tendency to perform any kind of work, as it corresponds to a state of minimum potential energy. Here we face the reverse situation, where an expert in biology with very little training in physics uses his scientific authority to talk outside his domain of expertise. The pity is that Szent-Györgyi was on the right track by associating water and metabolism, but he was also paralyzed by Schrödinger's wrong ideas about entropy.

ORIGIN OF LIFE
A first obvious point is the failure of modern biology to clearly explain how life appeared on Earth. Nowadays, it is obvious that acetyl-coenzyme A, deriving from pyruvate decarboxylation, is the universal food of any kind of living cell. However, such a statement may be wrong, as it has been demonstrated that pyruvate may be engaged in a purely abiotic cycle where citrate is replaced by 4-hydroxy-2-keto-glutarate (HKG).8 As this HKG-based cycle is able to run without the help of enzymes, consuming pyruvate, glyoxylate and hydrogen peroxide H2O2 instead of dioxygen O2, it is a good candidate for a very primitive way of unrestricted proliferation.
Yet another tacit major assumption of biology is that adenosine triphosphate (ATP) is the universal energy carrier of any living entity. However, it has recently been demonstrated that ATP has the properties of a biological hydrotrope, through its ability to solubilize hydrophobic molecules in aqueous solutions.9 Its main role would thus be to prevent the formation of harmful protein aggregates, as well as to act, at millimolar concentrations, as a powerful remover of previously formed aggregates.
It has long been pointed out by the Nobel Prize winner Albert Szent-Györgyi that water should be considered as the web of life and that bioenergetics is but a special aspect of water chemistry.3 Moreover, in a quite remarkable insight, Szent-Györgyi could foresee that during anaerobic life a pool of H's was constantly on tap, with sufficient food to fill the pool and almost no limit to proliferation. When O2 appeared as a waste product of photosynthetic activity, it became possible to turn off the tap of the H-pool during the so-called Great Oxidation Event (GOE), opening the way to differentiation and thus to the building of complex multicellular organisms. However, when the cell divides, it has to break down its bulky oxidative mechanism and revert to the more archaic use of the H-pool.
The best way to get a reasonable scenario for the appearance of life on Earth is here to trust mathematicians rather than biologists. Indeed, biologists are concerned with present-day life and, following Schrödinger's book, have taken for granted that the duplicative aspect of life is primary and the metabolic aspect secondary. Such a polarization towards the idea that metabolism is governed by gene expression being obvious for a modern cell, the right question is to wonder whether the reverse order (i.e. metabolism controlling gene expression) was not the rule in the past.7 As there is a fierce debate in biology about what the right order was at the very beginning, the best way is to get a clue from mathematics. This is because the very notion of time is meaningless in mathematics, with no dependency on precise material configurations, contrary to living cells, which are made of matter and subjected to the arrow of time. Moreover, mathematicians have created computers, which are not precisely alive but nevertheless share with living cells the ability to deal with information.
It is a well-known fact that automata were invented and developed by John von Neumann, the man who gave quantum physics its mathematical foundations. In developing computers, von Neumann understood that any automaton should have two essential components: first, hardware for processing information; second, software for embodying information into instructions. Transposed to a living cell, von Neumann's mandatory dualism points to proteins (metabolism) as hardware and nucleic acids (replication) as software. Could we now imagine what would be the behavior of hardware without software? Such a situation is encountered as soon as a computer enters an endless loop. Such an automaton is doomed to crunch numbers independently for as long as it is powered. For bacteria, this is unlimited growth, while for multicellular organisms we have cancer. Now let us reverse the problem by asking what would be the behavior of software without hardware. Here again, we have an answer for both automata and living cells: viruses. The fact that the same term has been chosen for a thing made of inert matter (the computer) as well as for a living thing (the cell) comes from the fact that the material configuration embodying information does not matter. Of course, viruses are obligatory parasites that need a cooperative host equipped with hardware in order to undergo replication. And from such a viewpoint a clear order emerges: metabolism first, replication second. As such a conclusion is suggested by the study of computers, it should be seriously considered as a fundamental truth for all systems implicated in information processing.
The whole scenario for the appearance of life on Earth is now clarified and may be summarized by a series of successive events, each one requiring the presence of its predecessor to be able to generate its successor: Big Bang → Light → Hydrogen → Stars → Atoms → Water → Planets → Metabolism → Lipids → RNAs → Viruses → Ribosome → Proteins → Bacteria → Eukaryotes → Sex → Plants → Animals → Humans → Computers → Internet → ?
The first events, from Big Bang to Planets, are taken from physics (cosmology and quantum mechanics) and will not be discussed in detail here. Please note, however, that according to this fundamental life-development scenario, hydrogen should not be considered as an atom, but rather as a combination of two elementary particles (proton and electron) generated by the Big Bang, which produced quarks for building nucleons and leptons for building atoms after association with nucleons. This separation is important for stressing that hydrogen should be considered as a universal "fuel" in our universe, not only for stars (proton eaters) but also for living cells (proton plus electron eaters). Following nucleosynthesis in stars leading to supernova explosions, the synthesized atomic nuclei were dispersed within the universe to form atoms and molecules on cool bodies. Among all the possible atomic combinations, we have chosen to highlight water, H2O, as this substance has always been associated with the occurrence of life. From a purely statistical viewpoint, there is in fact no other possible choice: ordering chemical elements by decreasing cosmic abundance, we get the following order: H, He, O, Ne, N, C, Si, Mg, Fe, S, Ar, Al, Ca, Na, Ni, P, Cl, K.10 Ignoring helium (He), a closed-shell unreactive atom, the most abundant nucleus prone to accept protons and electrons to form a neutral combination is oxygen. Consequently, if we admit that life is a fundamental attribute of the universe, it logically follows that its material expression as a movement should involve hydrogen, oxygen and their low-temperature marriage: water. Then, to control these natural motions, life also needs structures, and from the cosmic abundance sequence the next three recruited nuclei should be nitrogen, carbon and sulfur, as neon is, like helium, a closed-shell unreactive atom.
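The statistical argument above is easy to reproduce. The sketch below takes only the cosmic-abundance ordering quoted in the text (ref. 10), removes the closed-shell noble gases, and recovers oxygen as the most abundant bonding partner for hydrogen, followed by nitrogen and carbon; this is a minimal illustration assuming nothing beyond that ordering:

```python
# Elements ordered by decreasing cosmic abundance, in the order quoted in the
# text (ref. 10); the numerical abundances themselves are not needed here.
by_abundance = ["H", "He", "O", "Ne", "N", "C", "Si", "Mg", "Fe", "S",
                "Ar", "Al", "Ca", "Na", "Ni", "P", "Cl", "K"]

# Closed-shell noble gases cannot form ordinary chemical bonds.
noble = {"He", "Ne", "Ar"}

reactive = [el for el in by_abundance if el not in noble]

# Most abundant reactive partner for hydrogen is oxygen (hence water);
# the next recruits in abundance order are N and C.
print(reactive[:6])   # ['H', 'O', 'N', 'C', 'Si', 'Mg']
```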
Consequently, the following gases should, for purely statistical reasons, be important for the manifestation of life: water = (H2, H2O, O2, O3) and structure = (NH3, CH4, C2H2, C2H4, N2, NO, CO, CO2, HCN, H2CO, NCO, HNCO, H2S, COS).
Besides these gaseous combinations, oxygen, the most abundant element after hydrogen, would also combine with silicon, sodium, potassium, magnesium, calcium, aluminum and carbon, leading to important crust minerals such as silico-aluminates. Accordingly, nitrogen heterocycles are commonly found in carbonaceous chondrites, highly porous meteorites rich in carbon and water.11 After Earth's accretion, and following the great deluge that filled the oceans, one may also consider the alteration of apatite Ca5[PO4]3(OH) by water and carbon dioxide, assisted by the intense ultraviolet radiation coming from the Sun. The basic building blocks of ribonucleotides, [(1')(A,G,C,U)-Ribose-(5')CH2-O-PO2-(OH)]⊝, may then be further assembled into RNAs, with the help of some H-pool and, most probably, clays (silico-aluminates). Obviously, one may also use the intense energy provided by cosmic rays to create the 20 building blocks of an organic hardware, at the surface of meteoric materials for instance (Table 1):

Reduced amino acids: {(n-p) CO + CO2 + p COS + q HCN} + m H2 + γ = C(n+q+1)H(2(m-n)+q)N(q)O2S(p) + n H2O

Oxidized amino acids: {n CO + 2 CO2 + q HCN} + m H2 + γ = C(n+q+2)H(2(m-n-2)+q)N(q)O2 + (n+2) H2O

For the existence of left-handed amino acids and the virtual exclusion of their right-handed forms, one may invoke the asymmetric distribution of neutrinos emitted by a supernova.12 Further condensation to form polypeptides has probably occurred within the van der Waals gap of clay minerals, thanks to carbonyl sulfide for instance.13 Clays or iron-sulfur bubbles (see reference 7 for details concerning plausible scenarios and references) would be necessary to protect these fragile polymers from the intense ultraviolet radiation emitted by the Sun.
Obviously, in the absence of nitrogen-containing gases, one may also envision the synthesis of fatty acids, for instance at the mouth of black smokers, where the reducing power of the magma meets water (see reference 8 for a more detailed story):

{(n-1) CO + CO2} + (2n H2 + magma) = CnH2n+2O2 + (n-1) H2O

Such fatty acids would allow the formation of oily little bags holding inside their cavity a more or less random collection of organic molecules. Such proto-cells would concentrate organic matter and, after becoming too big, would be cut in half, producing two daughters inheriting the chemical machinery in a statistical way.
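As a sanity check, the stoichiometry of the black-smoker equation above can be verified by simple atom bookkeeping. This sketch treats the magma as supplying reducing power only, contributing no atoms, which is how the equation balances as written:

```python
from collections import Counter

def atoms(counts: dict, coeff: int) -> Counter:
    """Scale an atom-count dict by a stoichiometric coefficient."""
    return Counter({el: coeff * k for el, k in counts.items()})

def balance(n: int) -> bool:
    """True if (n-1) CO + CO2 + 2n H2 -> CnH(2n+2)O2 + (n-1) H2O balances."""
    left = atoms({"C": 1, "O": 1}, n - 1) \
         + atoms({"C": 1, "O": 2}, 1) \
         + atoms({"H": 2}, 2 * n)
    right = atoms({"C": n, "H": 2 * n + 2, "O": 2}, 1) \
          + atoms({"H": 2, "O": 1}, n - 1)
    return left == right

# The equation balances for any chain length n >= 2
print(all(balance(n) for n in range(2, 20)))   # True
```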
At this stage, the oily bags would be confronted with the problem of keeping their large load of organic matter soluble in water. A crucial step would thus be the selection of ATP as a powerful hydrotrope.9 This is because ATP becomes essentially a ribonucleotide after removal of two phosphate groups. So, if RNAs could be formed from AMP within these oily bags, the creation of ATP under low-water-activity conditions is not unlikely. But RNA is a self-replicating molecule that could be transferred from bag to bag, carrying at each transfer deterministic genetic information instead of the statistical whole chemical machinery. Having a clear scenario for the appearance of life on Earth, the only obscure point that remains to be clarified is the physical nature of a primitive metabolism.

During the eighteenth century, the nature of heat was a deep question, related to the question of how to improve steam engines to get the maximum efficiency from a given amount of fuel. A decisive step was made in 1784 by the French scientists Antoine Laurent de Lavoisier and Pierre Simon de Laplace with the invention of an ingenious ice calorimeter measuring the amount of heat emitted during combustion and respiration. By measuring the oxygen consumed during respiration, it was thus proven that combustion and respiration are one and the same and that the amount changes depending on human activities: exercise, eating, fasting, and sitting in a warm or cold room.14 However, Benjamin Thompson, Count Rumford, in a famous experiment made in 1798, showed that the heat generated in the process of boring cannon was a definite, measurable quantity which did not diminish as long as the experiment was continued. It thus followed that the source of the heat generated by friction in these experiments appeared evidently to be inexhaustible.15 For Rumford, it was obvious that the only thing that could be produced without any limit from mechanical work was motion, meaning that heat should indeed be a form of motion.
But at that time heat was not perceived as motion but rather as a kind of immaterial fluid, named caloric, that could be exchanged between material bodies depending on their thermal state as measured by their respective temperatures. In 1824, it even became possible to forge a physical unit, the calorie, defined as the amount of heat necessary to change the temperature of 1 gram of water from 14.5 to 15.5 °C under atmospheric pressure.
That same year, the French engineer Sadi Carnot made a decisive contribution with the happy idea of a reversible engine, one able to turn the shaft backwards, delivering the same work w back to the engine and the same heat q back to the high-temperature reservoir.16 He was thus the first to perceive that no heat engine could be more efficient than a reversible engine operating between two temperatures t2 (cold reservoir) < t1 (hot source). Accordingly, if Carnot's principle were wrong, it would be possible to build machines that would run forever, delivering an infinite amount of work without any expenditure of fuel (perpetual motion machines of the second kind).
One of the big advantages of reversible heat engines is that they are universal devices, working independently of the working substance (not necessarily steam) or of the mode of operation (the internal machinery does not matter). However, Carnot could not give a quantitative criterion for reversibility, and his decisive contribution was in fact completely ignored. In 1840, Dr. Julius Robert von Mayer, a German physician serving as surgeon on a Dutch East India vessel cruising in the tropics, observed that the venous blood of sailors seemed redder than the venous blood usually observed in temperate climates.17 Mayer reached the conclusion that the cause must be the lesser amount of oxidation required to keep up the body temperature in the tropics, suggesting that the body was a thermal machine dependent on outside forces for its capacity to act. Such a revolutionary idea was, however, completely ignored by physicists until 1847, when another German physician, Hermann von Helmholtz, was independently led to the idea of energy conservation. Meanwhile, in England, James Prescott Joule was going from one experimental demonstration to another, suggesting the existence of a universal mechanical equivalent of heat. In 1845, after several years of hard experimentation in his kitchen, Joule was finally supported by William Thomson (later Lord Kelvin) for a definitive establishment of the law of conservation of energy.
It was only after the recognition of a mechanical equivalent of heat by Joule and Kelvin that the reversible efficiency er was established to be a universal function of the temperatures.18 Introducing his universal temperature scale, independent of the properties of any particular substance, Kelvin could show in 1854 that the efficiency e of a real heat engine should obey the inequality e = 1 - q1/q'2 ≤ er = 1 - T2/T1. Here er is Carnot's universal reversible efficiency, q1 is the heat received by the cold reservoir and q'2 = -q2 is the heat discharged from the hot source, with equality if and only if the engine is reversible.
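Numerically, the inequality can be illustrated with a hypothetical engine; the temperatures and heats below are made-up values for demonstration only:

```python
# Reversible (Carnot) efficiency between a hot source at T1 and a cold
# reservoir at T2 (absolute temperatures), and Kelvin's inequality
# e = 1 - q1/q'2 <= e_r for any real engine.
def carnot_efficiency(T_hot: float, T_cold: float) -> float:
    return 1.0 - T_cold / T_hot

T1, T2 = 600.0, 300.0           # kelvin (illustrative)
e_r = carnot_efficiency(T1, T2)
print(e_r)                       # 0.5

# A hypothetical real engine drawing q'2 = 1000 J from the hot source and
# dumping q1 = 600 J into the cold reservoir:
q2_prime, q1 = 1000.0, 600.0
e_real = 1.0 - q1 / q2_prime
print(e_real <= e_r)             # True: no engine beats the reversible one
```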

ENERGY AND SPIN
At this stage (1854), we meet another fossil concept, stating that heat should be a form of energy. The wrongness of such an idea may easily be demonstrated by the fact that heat can be created at will from friction, whereas mechanical energy cannot be created or destroyed. It follows that enunciating the first law of thermodynamics as ΔEint = q + w, where Eint is the total internal energy, q heat and w mechanical work, is evil science.19 Adding two quantities measured with the same physical unit (the joule) but of different natures explains why the structure of thermodynamics appears strange and confusing relative to other fields of physics, where such an error is never made. It is thus time to dive into quantum theory, a science where, contrary to thermodynamics, energy has a clear definition: the eigenvalue of an ab initio Hamiltonian operator acting on a Hilbert space spanned by the eigenvectors of the Hamiltonian operator (Heisenberg's representation). Accordingly, at this level of theory, to each system composed of N positively charged nuclei associated with N negatively charged electrons corresponds a characteristic discrete energy spectrum {εn} indexed by an integer n called a quantum number. And here a very strange thing occurs: instead of putting the N electrons into the ground state ε1 of lowest energy, in order to retrieve the lowest possible energy, the electrons occupy not only the ground-state level but also other, higher energy levels, up to a maximum value (nmax). The rule governing the filling of these high-energy levels follows from a property called "spin", taking the value one-half for protons, neutrons and electrons.
Accordingly, as electrons are not classical particles but quantum entities ruled by a wave function, they obey Pauli's exclusion principle, stating that a non-degenerate energy level ε_n cannot hold more than 2 electrons: one spin 'up' (eigenvalue +1/2) and the other spin 'down' (eigenvalue −1/2). For highly symmetric molecules, it may happen that two or more energy levels are degenerate, that is to say that a number m of quantum states share the same eigenvalue. In such a case, Hund's rule states that the configuration displaying the lowest energy, called the "ground state", is the one having the maximum intrinsic spin as well as the maximum angular momentum. The energy spectrum {ε_n} associated with any combination of nuclei and electrons is nowadays readily obtained from scratch by solving Schrödinger's equation under various sets of approximations. Thus, filling each energy level with ν_n electrons (ν_n = 2, 1 or 0) starting from the most negative energy value, the total molecular energy when all nuclei are at their equilibrium positions may be written: 20

E_molec = Σ_n ν_n·ε_n

For a stable molecule, all filled levels (ν_n = 2) should be of low energy (ε_n < 0), while all empty levels (ν_n = 0) should be of high energy (ε_n > 0), meaning that E_molec becomes more and more negative as the total number of electrons increases. When ε_n < 0 (bonding state), there is good screening by the negatively charged electrons of the highly repulsive nuclei-nuclei interaction. In such a bonding state, nuclei are engaged in a chemical bond with a bond-order contribution of 1. Conversely, when ε_n > 0 (anti-bonding state), there is bad screening of the positively charged nuclei by the electrons, leading to their separation, and consequently the bond-order contribution is counted as −1. By summing these contributions over all occupied states, a total bond order is obtained that is usually 1 (single bond), 2 (double bond) or 3 (triple bond).
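The aufbau filling just described can be sketched in a few lines of code. This is a minimal sketch: the level energies below are purely hypothetical, and degeneracy and Hund's rule are deliberately ignored.

```python
def fill_levels(levels, n_electrons):
    """Fill non-degenerate levels (taken in order of increasing energy) with
    at most 2 electrons each (Pauli), returning occupancies nu_n in {2, 1, 0},
    aligned with the sorted spectrum."""
    occupancies = []
    remaining = n_electrons
    for _ in sorted(levels):
        nu = min(2, remaining)
        occupancies.append(nu)
        remaining -= nu
    if remaining:
        raise ValueError("not enough levels for all electrons")
    return occupancies

def molecular_energy(levels, occupancies):
    """E_molec = sum over n of nu_n * eps_n, over the sorted spectrum."""
    return sum(nu * eps for nu, eps in zip(occupancies, sorted(levels)))

# Hypothetical spectrum (eV): two bonding levels, one anti-bonding level.
eps = [-13.6, -5.0, 3.2]
occ = fill_levels(eps, 4)            # -> [2, 2, 0]: anti-bonding level empty
e_molec = molecular_energy(eps, occ)  # approximately -37.2 eV
```

Adding more electrons would force occupation of the positive-energy level, making E_molec less negative, in line with the bond-order argument above.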
If the bond order is zero, it is impossible to make chemical bonds, a situation encountered with neutral inert gases such as helium, neon and argon, which exist only in a monoatomic state. Moreover, as electrons repel each other, removing one electron to form a cation has a stabilizing effect on the energy levels, whose energies become more negative. Similarly, adding an electron to form an anion has an overall destabilizing effect on the energy levels, whose energies become less negative.
Having an energy-level diagram in hand and electrons obeying Pauli's exclusion principle, two essential energy levels ruling chemical reactivity should be considered (called frontier orbitals). These two levels are the HOMO (acronym for highest occupied molecular orbital), which fixes the spin state, and the LUMO (acronym for lowest unoccupied molecular orbital), the first empty level located just above the HOMO. Now, a first general rule states that the larger the HOMO-LUMO gap, the higher the chemical stability. This rule has the immediate consequence that the lower the HOMO-LUMO gap, the more reactive and unstable the species. These rules explain why a radical, having only a SOMO with both HOMO and LUMO character, i.e. a zero HOMO-LUMO gap, belongs to the class of the most unstable and reactive species. And as radicals can be very dangerous for other non-radical molecules, their role in a living cell is always twofold depending on concentration. At low concentration and high water activity, radicals act as redox signaling messengers with important regulatory functions leading to the so-called positive physiological stress or eustress. 21 At high concentration and low water activity, the same radicals may be responsible for deleterious effects on DNA, polyunsaturated fatty acids (PUFAs) and proteins, leading to the so-called negative physiological stress or distress. Such a stress-response hormesis is now well documented, meaning that radical scavengers may act either as protective agents or as poisons and should be used with extreme care. Moreover, as terms such as ROS, RNS and antioxidants are quite vague, it is very difficult to forecast what the effects of redox-active species will be.
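These frontier-orbital rules can be illustrated with a minimal sketch (the level energies, in eV, are hypothetical, and degeneracy is again ignored):

```python
def frontier_orbitals(levels, n_electrons):
    """Return (HOMO, LUMO, gap) for non-degenerate levels filled 2 per level.
    For an odd electron count the half-filled top level is a SOMO: it plays
    both HOMO and LUMO roles and the gap is zero."""
    levels = sorted(levels)
    full, leftover = divmod(n_electrons, 2)
    if leftover:                      # odd count -> radical with a SOMO
        somo = levels[full]
        return somo, somo, 0.0
    homo, lumo = levels[full - 1], levels[full]
    return homo, lumo, lumo - homo

spectrum = [-12.0, -8.0, 2.0, 5.0]
# Hypothetical closed-shell species: large gap, hence chemically stable.
print(frontier_orbitals(spectrum, 4))   # (-8.0, 2.0, 10.0)
# Hypothetical radical (odd electron count): zero HOMO-LUMO gap (SOMO).
print(frontier_orbitals(spectrum, 5))   # (2.0, 2.0, 0.0)
```

The zero gap returned for the odd-electron case is exactly why radicals sit among the most reactive species.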
It is also the HOMO-LUMO frontier orbitals that allow deciding whether a molecule should be considered as an acid/oxidant or as a base/reductant. Accordingly, to behave as an acid or oxidant, a molecule should be able to accept electrons and thus needs a LUMO of negative energy. Reciprocally, to behave as a base or reductant, a molecule should be able to give electrons and thus needs a HOMO of positive energy. Within such a frame, any chemical transformation involves a HOMO on one reactant (the base or reductant) interacting with a LUMO on another reactant (the acid or oxidant). Depending on the relative energy order of these frontier orbitals, all chemical reactions may be grouped into just two classes: i) Acid-base reactions, when the HOMO of the base has a lower energy than the LUMO of the acid. Such reactions are easily recognized, as in such cases the oxidation numbers of all atoms remain the same before and after the reaction. In aqueous solutions, acid-base interactions usually involve transfer of a proton H⊕.
ii) Redox reactions when the LUMO of the oxidant has a lower energy than the HOMO of the reductant. In such a case some oxidation numbers are doomed to change before and after the reaction through exchanges of one or two electrons.
It is also worth noticing that according to Noether's theorem, the covariance of the equations of motion under a continuous transformation with n parameters implies the existence of n conserved quantities, or constants of motion, i.e., conservation laws. 22 More precisely, for each infinitesimal generator of a given continuous Lie group associated with a variable r, there exists a momentum p that remains constant in time and a relativity principle for the variable r. For instance, the physical laws of mechanics and electromagnetism are known to be covariant under Poincaré's symmetry group ISO(3,1), having 10 infinitesimal generators. Then, for any infinitesimal translation in time (r = t), the associated conserved momentum is energy (p = E), with arbitrariness in the origin of time. Likewise, for any infinitesimal translation in space (r = x, y or z), linear momenta (p = m·v_x, m·v_y and m·v_z) are conserved, with arbitrariness in the origin of space. Moreover, for any infinitesimal boost in the speed of the center of mass (r = v_x^CM, v_y^CM or v_z^CM), the coordinates of the center of mass at t = 0 (p = x_CM°, y_CM° and z_CM°) are conserved, with arbitrariness in the absolute speed of the center of mass. Finally, for any infinitesimal rotation in space (Euler's angles r = α, β, γ), there is conservation of angular momenta (p = L_α, L_β and L_γ), with arbitrariness in the orientation of space. Consequently, at the mechanical level, although the coordinates and velocities of the constituent parts of an isolated mechanical system may change with time, the sum of all the kinetic and potential energies of all the constituent parts (total energy) is a constant of the motion and has a fixed value E (Noether's theorem).
Another point following from Noether's theorem is that spin is basically an intrinsic angular momentum that should, like mechanical energy, never change even if molecules are engaged in chemical transformations. This second conservation property gives rise to the so-called Wigner-Witmer correlation rules that determine the tendency of a reacting system to conserve spin angular momentum. 23 These Wigner-Witmer correlation rules (see Table 2) are of the utmost importance because, if they are not satisfied for a given reaction, the reaction will occur, in case of small spin-orbit coupling, only at a very slow rate without a catalyst. This is why you may perfectly well mix hydrogen and oxygen in stoichiometric proportions without any violent reaction, even though hydrogen is one of the strongest reductants and oxygen one of the best oxidants, just after fluorine. This potentially highly exothermic reaction cannot occur without sparks, heat or light, simply because it is spin-forbidden (see below). It is the HOMO frontier orbital that allows predicting what the spin of a molecule will be, with three main possibilities.
i) The number of electrons is even and the HOMO is non-degenerate. In such a case, the total spin of the molecule is zero, corresponding to a singlet spectroscopic state (S = 0). The water molecule is a good example of such a possibility. In fact, most stable molecules fall into this first category.
ii) The number of electrons is odd and the HOMO is again non-degenerate. In such a case, the species is called a radical, having a total spin of one half corresponding to a doublet spectroscopic state; the HOMO then becomes a SOMO, an acronym for singly occupied molecular orbital. The hydroxyl radical HO• is a good example of this second possibility. Most radicals are highly unstable and are responsible for many deadly chain reactions leading to explosions.

Table 2. The Wigner-Witmer spin correlation rules. If S_A is the spin of reactant A and S_B the spin of reactant B, a reaction will be spin-allowed if the total spin of the products is included in the series: |S_A + S_B|, |S_A + S_B − 1|, |S_A + S_B − 2|, …, |S_A − S_B|.

Reactant A (spin S_A) | Reactant B (spin S_B) | Total allowed spin: |S_A + S_B|, |S_A + S_B − 1|, …, |S_A − S_B|
iii) The HOMO is degenerate, meaning that the molecule will exist in several spin states depending on the number of electrons left as well as on the total number of degenerate energy levels. Dioxygen O2 is a typical example of such a situation, with two spin states, S = 0 (singlet spectroscopic state) and S = 1 (triplet spectroscopic state), linked to a doubly degenerate SOMO. Owing to Hund's rule, the state of lowest energy is the triplet, noted with the spin multiplicity (2S+1) as a superscript before the formula: ³O₂.
As dihydrogen H2 and water H2O are singlet-state molecules, the direct oxidation of hydrogen by oxygen (total spin S = 0 + 1 = 1) is thus spin-forbidden (final state: water with spin S = 0) and cannot spontaneously occur.
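The Wigner-Witmer series of Table 2, and the spin-forbidden character of the direct H2 + O2 reaction, can be checked mechanically. This is a sketch that only enumerates the allowed total spins; spin-orbit coupling is ignored.

```python
from fractions import Fraction

def allowed_total_spins(s_a, s_b):
    """Wigner-Witmer series: |S_A+S_B|, |S_A+S_B-1|, ..., |S_A-S_B|."""
    s_a, s_b = Fraction(s_a), Fraction(s_b)
    s, low = s_a + s_b, abs(s_a - s_b)
    spins = []
    while s >= low:
        spins.append(s)
        s -= 1
    return spins

def spin_allowed(s_a, s_b, s_products):
    """True if the product spin appears in the Wigner-Witmer series."""
    return Fraction(s_products) in allowed_total_spins(s_a, s_b)

# H2 (singlet, S=0) + triplet ground-state O2 (S=1): allowed total spin is {1},
# but water is a singlet (S=0), so direct combustion is spin-forbidden.
print(spin_allowed(0, 1, 0))   # False
# With singlet oxygen (S=0) the same reaction would be spin-allowed.
print(spin_allowed(0, 0, 0))   # True
```

Using Fraction keeps half-integer spins (e.g. two doublet radicals, S = 1/2 each, giving allowed totals {1, 0}) exact.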

INTERNAL ENERGY, HEAT AND WORK
It is crucial to realize that there is absolutely no room for such a thing called heat at a microscopic level (atoms and molecules). Accordingly, while there are quantum operators for position in space, energy, linear and angular momenta, with associated conservation laws arising from Noether's theorem, it is not possible to define quantum operators for heat and time. Consequently, there is no reason for heat to be a conserved entity, in full agreement with Count Rumford's cannon-boring experiments. Similarly, as there is no quantum operator for time, the origin of time cannot remain undetermined and arbitrary as soon as heat exchanges become allowed. Heat and the arrow of time (irreversibility) are thus two deeply entangled notions, rendering meaningless the assimilation of heat with a particular form of energy. Heat is in fact a concept alien to energy, and as metabolism is a concept akin to heat, it logically follows that metabolism and life are concepts alien to energy. Moreover, adding heat and work in order to retrieve a conserved total internal-energy state function, as usually done in expressing the first law of thermodynamics, should, as already stressed, be avoided. It follows that adding the label "internal" to the word "energy" means something else that ought to be further clarified and discussed.
A perplexing thing is obviously that the new concept of internal energy shares with mechanical energy the same physical unit (joule, J) despite being of a fundamentally different nature. In fact, the slipping from mechanical energy to internal energy is the consequence of considering not a single quantum entity, but rather a huge number (typically 10^24) of indistinguishable quantum entities. This means switching from the microscopic world of atoms and molecules to the macroscopic world of substances, with the imperative need of distinguishing between microstates and macrostates. Accordingly, for a system made of N particles, a microstate is the enumeration of 6N numbers specifying the spatial positions (x_i, y_i, z_i) and velocities (v_xi, v_yi, v_zi) of each particle (i = 1,…, N) belonging to the considered system. For the same system, a macrostate is an arbitrary set of n control variables such as: temperature, pressure, electrical potential, chemical potentials, electric field, magnetic field, surface tension, altitude, speed of the center of mass, etc. For a pure neutral substance at rest, without boundaries and not submitted to gravitational, electric or magnetic fields, a macrostate is defined by only 2 variables, temperature and pressure, against 6N for each microstate. Temperature is necessary to know what the highest accessible energy level (n_max) in the {ε_n} energy spectrum will be, putting a constraint on the microstates' velocities (v_xi, v_yi, v_zi), while pressure is necessary to put a constraint on the allowed microstates' positions (x_i, y_i, z_i).
As each particle of a microstate may be found in different excited states {ε_1, ε_2, …, ε_nmax}, one may define the macroscopic total energy, also called internal energy, as: 24

E_int = Σ_i n_i·ε_i

A comparison between the expressions of E_molec and E_int is quite instructive and clearly shows the difference between molecular energy, a concept whose value depends only on occupancy numbers (ν_n = 0, 1 or 2), and internal energy, a statistical concept whose value is fixed by the populations n_i (i = 0, 1, …, +∞) of each accessible energy level ε_i. Now, at the thermodynamic level, it was recognized that if a system is thermally isolated from its surroundings (no exchange of heat, i.e. q = 0) and also mechanically isolated (no work is done, i.e. w = 0), then the function E_int of its thermodynamic state does not change. That is one fundamental property that the mechanical energy E and the internal energy E_int have in common. The second is that if the mechanical system is not isolated, its total energy E is not a constant of the motion, but can change, and does so by an amount equal to the work done on the system: ∆E = w. Likewise, in thermodynamics, if a system remains thermally insulated (q = 0) but is mechanically coupled to its environment, which does work w on it, then its internal energy E_int changes by an amount equal to that work: ∆E_int = w. This coincidence of two such fundamental properties is what led to the hypothesis that the thermodynamic function E_int has something to do with the mechanical energy E, the total of the kinetic and potential energies of the molecules of a system having a huge number of degrees of freedom.
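The statistical character of E_int, as opposed to the fixed occupancies entering E_molec, can be made concrete by computing populations for a hypothetical two-level spectrum. This sketch assumes the canonical (Boltzmann) distribution for the populations n_i.

```python
import math

def internal_energy(levels_eV, temperature_K, n_particles=1.0):
    """E_int = sum_i n_i * eps_i, with Boltzmann populations
    n_i = N * exp(-eps_i / kB*T) / Z. Unlike E_molec, the result
    depends on temperature through the statistical weights."""
    kB = 8.617333262e-5                       # Boltzmann constant, eV/K
    beta = 1.0 / (kB * temperature_K)
    weights = [math.exp(-eps * beta) for eps in levels_eV]
    z = sum(weights)                          # partition function
    return n_particles * sum(eps * w / z for eps, w in zip(levels_eV, weights))

# Hypothetical two-level spectrum: 0 eV and 0.1 eV.
e_cold = internal_energy([0.0, 0.1], 300.0)     # near 0 eV: ground state dominates
e_hot = internal_energy([0.0, 0.1], 30000.0)    # near 0.05 eV: levels almost equally populated
```

Raising the temperature raises n_max in practice: more levels acquire significant populations, and E_int climbs accordingly.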
But a critical assumption, thermal insulation, remains necessary for identifying E with E_int: if the system is not isolated, exchanging heat with its surroundings for instance, then the energy E is no longer a constant of the motion. It is precisely at this point that a divorce occurs between thermodynamic energy and mechanical energy, and one should thus refrain from writing E_int = q + w, something allowed on the ground that q and w share the same physical unit (joule), but nevertheless forbidden on the ground that mechanical energy (work) has an associated quantum operator, whereas there exists no quantum operator associated with heat. Deeply linked with this divorce is the distinction between reversible and irreversible phenomena. This divorce is also the reason why Max Planck, about a hundred years ago, complained about an error "impossible to eradicate" concerning the confusion made by scientists between mechanical, thermodynamic and Carnot reversibility. 25 These three kinds of reversibility may be clarified by considering a system evolving from a state A into another state B. At the level of microstates, reversibility means the reversal of all constituent parts' velocities, to carry the system back to state A along its previously followed path. But, to restore the original state A, a second reversal of all velocities is necessary when each individual part has recovered its initial position. This is the so-called mechanical reversibility. But one may also envision running the system in the opposite direction B → A, restoring only the original macrostate, in terms of temperature and pressure for instance (Carnot's reversibility), and not the original microstate (mechanical reversibility). However, it may happen that the reverse B → A process at the macrostate level is not feasible, owing to supercooling at a phase transition for instance.
Nevertheless, if the original macrostate could be recovered by a succession of states B → C → D → A, without any external changes, then we are facing thermodynamic reversibility.
But nowadays, who cares about all these fundamental distinctions? Confusion between mechanical and thermodynamic reversibility leads immediately to the apparent impossibility of reconciling the second law, claiming the existence in nature of irreversible processes, with the full reversibility of the equations of motion. But if one makes the distinction between a mathematical fact (mechanical reversibility, impossible to realize on a huge number of constituent parts) and what can really be done in a laboratory (thermodynamic reversibility), the apparent paradox disappears.

ENTROPY AND IRREVERSIBILITY
After this digression into quantum physics, showing that heat cannot be a form of energy but something else, we may go back to Kelvin's expression of Carnot's principle. The key point is that this principle is formulated through an inequality, the equality holding only for a reversible transformation. Kelvin did not go one step further by introducing a new state function S such that, for a sum of infinitesimal heat increments dQ along a cycle where the end state coincides with the initial state:

∮ dQ/T ≤ 0, hence for any process A → B: ∫(A→B) dQ/T ≤ S_B − S_A = ∆S

Again, the equal sign applies if and only if the process A → B is reversible. Here, T denotes the temperature of the heat bath with which the system is momentarily in contact to exchange heat, which is not necessarily the temperature of the system. It was the German physicist Rudolf Clausius who was responsible for this crucial step, having coined the name "entropy" for this quantity (meaning "in evolution" through heat), by analogy with the word "energy" (meaning "in action" through work) 26 . One may notice that in such a relationship, the negative of the left-hand side may be interpreted as the entropy gained by the heat reservoirs that constitute, for the system, the "rest of the universe". So, for processes that begin and end in thermal equilibrium, a golden rule for evolution with heat involvement should be:

∆S_univ = ∆S_syst + ∆S_surr ≥ 0

Such an inequality means that only three kinds of processes have to be considered in nature 27 : i) Natural or irreversible process: ∆S_univ > 0. ii) Idealized or reversible process: ∆S_univ = 0. iii) Unnatural or non-spontaneous process: ∆S_univ < 0.
It is worth noting that such a formulation involving the universe, a spherical entity having a diameter of about 880 Ym, is mandatory, as the universe is the only truly closed system, unable to exchange matter, heat or radiation with any surroundings. Consequently, an implicit mandatory act is to split the universe's total entropy change ∆S_univ into a first term ∆S_syst summing all changes occurring in one part of the universe of particular interest, called the "system", and a second term ∆S_surr summing all entropy changes occurring in the remaining part, called the "surroundings". It is worth noting that such a partition is totally arbitrary, as nothing in physics would allow declaring that one given partition is better than another.
But having to deal with the whole universe, whose diameter is 880 Ym, may be a really shocking situation for a meter-sized scientist, and worse for a micrometer-sized bacterium. The only scientist who would probably not have been shocked is the Austrian physicist Ernst Mach, who was convinced that local physical laws are determined by the large-scale structure of the universe. Thus, speaking of the law of inertia, Mach's own words were: When, accordingly, we say that a body preserves unchanged its direction and velocity in space, our assertion is nothing more or less than an abbreviated reference to the entire universe... In point of fact, it was precisely by the consideration of the fixed stars and the rotation of the earth that we arrived at knowledge of the law of inertia as it at present stands, and without these foundations we should never have thought of the explanations here discussed. The consideration of a few isolated points, excluding the rest of the world, is in my judgment inadmissible. 28 It is worth recalling that Mach's book was highly influential in orienting Albert Einstein's thoughts towards the formulation of his theory of general relativity, which requires an ether connecting every mass: Recapitulating, we may say that according to the general theory of relativity space is endowed with physical qualities; in this sense, therefore, there exists an ether. According to the general theory of relativity space without ether is unthinkable; for in such space there not only would be no propagation of light, but also no possibility of existence for standards of space and time (measuring-rods and clocks), nor therefore any space-time intervals in the physical sense. 29 We have put in bold characters some crucial words, such as inadmissible or unthinkable, in the mouths of these two top scientists, both suggesting that there is great danger in believing that isolated masses may exist.
For Mach, the mere fact that two masses mutually interact is the consequence of the existence of the whole universe. Similarly, for Einstein, the same two masses can never be disconnected from the unique ether filling the whole universe.
Such considerations are crucial for biology: it is meaningless to speak of a living cell without speaking of what surrounds it. Similarly, in chemistry, it is the existence of a container that allows speaking of a chemical bond between atoms. Atoms and molecules exist only because they are confined in a small part of the whole universe. A proof that chemical bonds have no existence by themselves is obtained by letting a molecule diffuse into intergalactic space. There the volume is so huge that the molecule will spontaneously dissociate into atoms, and the atoms will further separate into protons, neutrons and electrons, whatever the considerable "attractive forces" holding these particles together on earth. Nuclei, atoms and molecules can manifest themselves only after confinement into a small volume (the nucleus for nucleons, atoms or molecules for electrons). This is precisely why the predominant state of matter in the universe is the plasma state and why any atmosphere around a planet becomes an ionosphere at its interface with intergalactic space. In other words, what we see at a local scale cannot be disconnected from configurations of matter at a much larger scale. Such a fundamental fact of nature is evident not only in classical mechanics (law of inertia) and general relativity (existence of an ether connecting all masses) but also in quantum physics, where it could also be demonstrated that molecular structures have no intrinsic existence 30 . If such implicit subtleties are evident for scientists well acquainted with general relativity or quantum mechanics, they are simply ignored by other scientists not trained in these two disciplines, prone to believe that atoms or molecules have an existence independent of their container. Being ignorant that atoms and molecules are just ideas or conceptual schemes that have no independent reality has led to many paradoxes and confusing situations in science.
In fact, the only real tangible thing is the universe taken as a whole, which constitutes the single and only acceptable reference state for defining fictive entities such as atoms, molecules, cells, planets and galaxies, as lucidly perceived by Ernst Mach. Such a view agrees fully with quantum mechanics, as the only way of having null wave functions is to go to the farthest edge of the universe. Obviously, people trained to consider that matter particles are submitted to local forces may be deeply shocked by such an effect of the configuration of the whole universe on tiny little things such as molecules or cells. But, realizing that forces in fact do not exist, being just the effect of nonlocal fields filling the whole universe, the shocking statement becomes a mere platitude, an obvious consequence of modern ideas about space, time and matter.
Forgetting that the only real thing is the whole universe was responsible, in thermodynamics, for the assimilation of heat with energy. By putting the focus exclusively on energy, which can never change, entropy, the only concept allowing evolution with time, was then assimilated to disorder and chaos. So, one should first realize that heat is not a particular form of energy, but rather the manifestation of an entropy flow. Another crucial point is that entropy is not a measure of disorder but a quantity, like mass, momentum, volume, electrical charge, area or number of particles, that may be exchanged between two systems. So, when system A accepts entropy from system B, temperature T_A increases (heating), volume V_A increases (expansion) and the so-called "bonds" between sub-parts are destroyed, increasing the total number of particles N_A (disaggregation, loss of structure, catabolism in biology). Of course, sub-system B, which has given entropy to A, has decreased its temperature T_B (cooling), occupies a smaller volume (contraction) and has created new "bonds" decreasing its total number of particles N_B (aggregation, creation of structure, anabolism in biology). Most importantly, if the entropy exchange is irreversible, this means that de novo entropy has also been created, whose excess has been released into the universe to which systems A and B belong. At this fundamental level there is no need to bother about energy, because the total sum (including the energy stored in the universe) is the same before and after the exchange of entropy (Noether's theorem). So, the real tangible thing allowing the perception of an arrow of time should be entropy. And here we are not speaking of the entropy content of a sub-system, but of the entropy of the universe, taken as a whole.
FIRST AND SECOND LAWS OF THERMODYNAMICS

However, Clausius's claim for the existence of a thing called entropy has the drawback of putting at the root of thermodynamics two very different laws: the first law emphasizing conservation of something identified with energy ("Die Energie der Welt bleibt constant": the energy of the world remains constant) and the second law introducing entropy, associated with heat, that is doomed to never decrease ("Die Entropie der Welt strebt einem Maximum zu": the entropy of the world tends towards a maximum). Moreover, enunciating the first law as an equivalence between work (a conserved entity) and heat (something that can be created) has the consequence of rendering completely obscure the meaning of entropy, by assigning its attributes to energy, a conserved quantity. As a result, entropy is reduced to a lifeless empty shell with an obscure physical meaning, while heat assumes a schizophrenic double role, that is to say a strange mixture of energy and entropy, instead of being clearly considered as caused by an entropy flow. 19 If one insists on speaking of energy and introducing the first law correctly, the only correct way is to follow the mathematician Constantin Carathéodory, who distinguishes between adiabatic processes (no heat exchanged) and non-adiabatic processes (heat exchanges are allowed) 31 . Next, experiments demonstrate that a given quantity of adiabatic work produces the same change in temperature no matter how the work is produced, whether by friction, by turbulent motion, by compression of a gas, or electrically. Then, because the adiabatic work is independent of the kind of work that is done, it should be equal to the difference between two values of a state function U = E_int, the internal energy, so that the energy change is defined in differential form as dU = δw(adiabatic), where δ is used because work is a path function whose integral equals a state-function difference only for adiabatic changes, in contrast to U.
Consequently, if a change of state is not carried out adiabatically, the work δw is no longer equal to dU, and the numerical difference between dU and δw is attributed to the transfer of a certain amount of heat δq = T·dS (i.e. a transfer of entropy) to or from the surroundings as a result of a difference of temperature across a thermally conductive boundary. As heat is not an exchange of energy but an exchange of entropy, one should refrain from writing δq = dU − δw as usually done, but rather note that dU(non-adiabatic) ≠ δw(adiabatic).
The identification dU = δw(adiabatic) applies in fact only to systems having a constant volume (dV = 0). For systems evolving at constant pressure (dP = 0), the effective work available under adiabatic conditions is reduced by a quantity −P·dV that corresponds to the work done by the system against the applied pressure when the total volume changes by an infinitesimal quantity dV, leading to dU = δw(adiabatic) − P·dV = δw(adiabatic) − d(PV). The second expression stems from the fact that dP = 0, allowing the introduction of a new state function H = U + P·V, named enthalpy, such that dH = δw(adiabatic).
Concerning the second law, the existence of entropy means that a natural representation of internal energy is to consider this entity as a function of three extensive variables, entropy S, volume V and number of particles N:

dU(S, V, N) = T·dS − P·dV + µ·dN

This makes temperature T, pressure P and chemical potential µ appear as the intensive variables conjugate to entropy, volume and number of particles. Now, let us suppose that X is a conserved quantity for a system divided into sub-systems A and B. As X_A + X_B = X_tot is fixed, we should have for any transfer of X between A and B: dX_tot = 0, i.e. dX_A = −dX_B. But we know from Clausius' second law that at equilibrium the total entropy S_univ = S_A + S_B tends to be maximized, meaning that:

dS_univ = dS_A + dS_B ≥ 0

But X could well be the total energy U(S,V,N), meaning that:

dS_univ = (1/T_A − 1/T_B)·dU_A ≥ 0

Consequently, for T_A < T_B, one should have dU_A > 0, stating that heat must flow from the high-temperature sub-system towards the low-temperature one (thermal transfer). But if X stands for the total volume V(U,S,N), we have by the same reasoning:

dS_univ = (P_A/T_A − P_B/T_B)·dV_A ≥ 0

Then, at constant temperature (T_A = T_B) and P_A > P_B, one should have dV_A > 0, stating that volume should flow from the low-pressure sub-system towards the high-pressure one. A last possibility could be that X is the total number of particles N(U,S,V):

dS_univ = (µ_B/T_B − µ_A/T_A)·dN_A ≥ 0

Thus, at constant temperature (T_A = T_B) and µ_A < µ_B, one should have dN_A > 0, stating that transport of particles is required from the high-chemical-potential sub-system towards the low-chemical-potential one (diffusion). It also follows from the above reasoning that if two systems are in thermal, mechanical as well as diffusive equilibrium, the temperatures, pressures and chemical potentials of both systems must be the same everywhere. So, we see that through the idea of maximizing entropy, it has been possible to give a precise definition of the so-called intensive variables T, P, µ as conjugate variables of the three extensive variables of a state function U(S, V, N).
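The entropy-maximization argument can be illustrated numerically for two monoatomic ideal-gas sub-systems exchanging energy at fixed volumes. This is a sketch under stated assumptions: the entropy expression (valid up to additive constants) and the particle numbers are made up for the illustration.

```python
import math

def entropy(u, n):
    """Entropy (in units of kB, up to an additive constant) of a monoatomic
    ideal gas of n particles with internal energy u at fixed volume:
    S/kB = (3/2)*n*ln(u/n), since U = (3/2)*n*kB*T."""
    return 1.5 * n * math.log(u / n)

def total_entropy(u_a, n_a, n_b, u_total):
    """S_univ = S_A + S_B for a fixed total energy U_A + U_B = u_total."""
    return entropy(u_a, n_a) + entropy(u_total - u_a, n_b)

# Sub-system A holds 1 particle, sub-system B holds 4; they share 10 energy
# units. Scan the possible splits and keep the one maximizing S_univ.
n_a, n_b, u_total = 1.0, 4.0, 10.0
best_s, best_u = max((total_entropy(u, n_a, n_b, u_total), u)
                     for u in (i * 0.1 for i in range(1, 100)))
print(best_u)   # -> 2.0, i.e. U_A/N_A = U_B/N_B: equal temperatures
```

At the maximum, U_A = 2 and U_B = 8, so U_A/N_A = U_B/N_B = 2: both sub-systems end at the same temperature, which is exactly where the factor (1/T_A − 1/T_B) vanishes and dS_univ = 0.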
It is worth noticing that no special meaning has been given here to the fact that according to the first law U should be a conserved quantity, because if one has U(S, V, N), it logically follows that one also has S(U, V, N), V(S, U, N) as well as N(S, U, V). In other words, internal energy U, entropy S, volume V and total number of particles N are all good state variables of any system. This means that, staying at the macrostate level, there is no clear reason to favor energy over entropy, volume or number of particles. Accordingly, under extrapolation to the scale of the universe, saying that energy should always be conserved is fully equivalent to the statement that the total volume of the universe should remain the same, or to the statement that it is not allowed to create or destroy particles. Putting emphasis on energy and not on entropy, volume or number of particles is, at this level, just not admissible.
There is also a concern with writing the first law as dU(S,V,N) = T·dS - P·dV + µ·dN, because such an expression cannot tell us what will happen if our system bears a total electric charge Q, another extensive variable not appearing in the definition of U. Accordingly, it would be totally ridiculous to speak of a living cell as a U(S,V,N) system, because without the electrical potentials ψ created by ions there would be no life. Fortunately, in our formulation of what internal energy is, we have complete freedom in defining what the variable X is. Let's for instance assume that X is the electrical charge Q; then all we have to do is to add a new electrical term for defining the internal energy variation, dU(S, V, N, Q) = T·dS - P·dV + µ·dN + ψ·dQ, and it immediately follows that:

dS_univ = (ψ_B - ψ_A)/T·dQ_A ≥ 0

Then, at constant temperature (T_A = T_B = T) and ψ_A < ψ_B, dS_univ ≥ 0 requires dQ_A > 0. This means that positive electrical charge has to flow from the high electrical potential sub-system towards the low electrical potential one with, at equilibrium, the same electrical potential everywhere in the system. Alternatively, one may also say that negative electrical charge has to flow from the low electrical potential sub-system towards the high electrical potential one. But these considerations apply only to a cell with static free electrical charges. What about the displacement of bound charges after application of an electric field E? To take into consideration possible changes in the total dipolar moment D (C·m), we may write dU(S, V, N, Q, D) = T·dS - P·dV + µ·dN + ψ·dQ + E·dD, meaning that:

dS_univ = (E_B - E_A)/T·dD_A ≥ 0

Then, at constant temperature (T_A = T_B = T) and E_A < E_B, dS_univ ≥ 0 requires dD_A > 0. This means that some dipolar moment should flow from the high electric field sub-system towards the low electric field one with, at equilibrium, the same electric field everywhere in the system. But we are still not considering a real living cell, because free charges may also move, generating magnetic fields B.
We are thus also led to consider possible changes in the total magnetic moment M (A·m²), by adding a new variable to the first law, dU(S, V, N, Q, D, M) = T·dS - P·dV + µ·dN + ψ·dQ + E·dD + B·dM, meaning that:

dS_univ = (B_B - B_A)/T·dM_A ≥ 0

Again, at constant temperature (T_A = T_B = T) and B_A < B_B, dS_univ ≥ 0 requires dM_A > 0. This means that magnetic moment is expected to flow from the high magnetic field sub-system towards the low magnetic field one with, at equilibrium, the same magnetic field everywhere.
One may thus begin to understand that the first law of thermodynamics is not really a law, but rather a mere kitchen recipe for dealing with many kinds of perturbations. Suppose for instance that we apply a perturbation that is not thermal, mechanical, chemical, electrical nor magnetic. Then the first "law" stating the conservation of the function U(S, V, N, Q, D, M) will of course be violated, because energy could now flow into a reservoir not explicitly considered in the total internal energy. In other words, the first "law" would have to lose its status of being a fundamental law of nature. In fact, this will never happen, because the first "law" is a clever recipe allowing one to deal with anything you want to deal with. Accordingly, for a living cell it should be obvious that at least one variable is still missing in the U(S, V, N, Q, D, M) state function. Until now, we have not given a single clue about how to distinguish between sub-systems A and B. This is because we are just playing a purely mathematical game with a recipe U(S,…) associated with the maximization of the S parameter. If we want to consider a real system such as a living cell, one has to say something about the area A of the physical interface separating the cell from its surroundings by writing dU(S, V, N, Q, D, M, A) = T·dS - P·dV + µ·dN + ψ·dQ + E·dD + B·dM + σ·dA, where σ is the interfacial tension responsible for changes in area:

dS_univ = (σ_B - σ_A)/T·dA_A ≥ 0

It may then be anticipated that at constant temperature (T_A = T_B = T) and σ_A < σ_B, one should have dA_A > 0. This means that area should flow from the high interfacial tension sub-system towards the low interfacial tension one with, at equilibrium, the same interfacial tension everywhere.
For a real living cell, one may also notice that life has appeared on Earth and that this planet, through its total mass M and radius R, creates a gravitational field g = G·M/R², where G is Newton's universal gravitational constant. As a real living cell is composed of N particles having masses, the total weight W = m·g should be an additional extensive variable for the internal energy, with altitude h as the conjugate intensive one: dU(S, V, N, Q, D, M, A, W) = T·dS - P·dV + µ·dN + ψ·dQ + E·dD + B·dM + σ·dA + h·dW, leading to a new equilibrium condition in the presence of gravity:

dS_univ = (h_B - h_A)/T·dW_A ≥ 0

With the law dS_univ ≥ 0, it may be anticipated that at constant temperature (T_A = T_B = T) and h_A < h_B, one should have dW_A > 0. This means that masses should flow from the high-altitude sub-system towards the low-altitude one with, at equilibrium, the same altitude for all weights.
The advantage of such a formulation of thermodynamics is that, whatever your definition of the macrostate through a "conserved" internal energy U in terms of variables (S, V, N, Q, D, M, A, W,…), evolution is always ruled by a single fundamental law, dS_univ ≥ 0, with the transfer of entropy, volume, particles, electrical charge, dipolar moment, magnetic moment, area or masses ruled by an intensive parameter measuring a kind of "energy concentration" (temperature, pressure, chemical or electrical potential, electric or magnetic field, surface tension, altitude, etc.). The three dots in the above formulations mean "any quantity that doubles when the amount of a given stuff is doubled" for extensive variables and "the corresponding energy concentration associated with a given stuff" for intensive variables. And of course, there exists an infinite number of stuffs with an infinite number of ways of measuring the energy concentration relative to a given stuff. For instance, if you consider that the center of mass of a living cell has a speed v_CM (intensive energy concentration), the associated extensive stuff will be the momentum p_CM of the cell, with dU = … + v_CM·dp_CM.
The quite fuzzy mongrel aspect of energy was indeed well perceived by the French mathematician Henri Poincaré:

In every particular case we clearly see what energy is, and we can give it at least a temporary definition; but it is impossible to find a general definition of it. If we wish to enunciate the principle in all its generality and apply it to the universe, we see it vanish, so to speak, and nothing is left but this: there is something which remains constant. 32
This is why, as far as the life phenomenon is concerned, one should not rely on energy and the first law, but only on the second law, stating that for any kind of evolution a single non-ambiguous and universal criterion should be used: dS_univ ≥ 0. In fact, it should be easy to realize that, as evolution means that there exists a stuff called "time" that always flows from past to future, time and the second law are in fact two different ways of speaking of the same basic stuff of our universe.

ENTROPY AND MACROSTATE MULTIPLICITY
So, among all the possible extensive variables that could be associated with a macrostate, entropy and not energy should be the privileged one, because it is the only one that is allowed to change in a unique direction, defining unambiguously a biological time for any living species. Unfortunately, this logical choice has not been retained by biology, which focuses exclusively on the extensive fuzzy variable: energy. Such a wrong choice is beyond any doubt linked to the fact that modern science was born after the identification of the force concept during the eighteenth century through the birth of Newtonian mechanics. The next logical step was to move during the nineteenth century from forces (M·L·T⁻²) that may appear or disappear to something that could never be created nor destroyed (first law), i.e. energy (M·L²·T⁻²). If this was a quite interesting move for understanding the behavior of inert matter, it was a completely sterile move for a good comprehension of living systems that are doomed to be born, to perpetuate (life) and finally to die. Even if energy and entropy were born the same year (1854) from the study of heat engines, entropy has been perceived from the very beginning as a negative "bad" thing, i.e. a degraded form of energy that is inexorably dispersed through the whole universe and that could never be recovered for performing useful work.
Fortunately, through the advances made in the kinetic theory of gases, it was realized that temperature, the conjugate intensive parameter of entropy, could be associated with the average kinetic energy of a large assembly of tiny particles that could not be cut into smaller pieces through chemical means (atoms). Similarly, pressure, the conjugate intensive parameter of volume, could be associated with the average force per unit area exerted by atoms hitting the walls of a container. This was the birth of statistical physics, which soon led Ludwig Boltzmann to give a microscopic interpretation of the "bad guy" preventing heat engines from working with 100% efficiency: S = k_B×ln Ω. It is worth noting that k_B, the so-called "Boltzmann's constant", was not introduced by Boltzmann himself, but by Max Karl Ernst Ludwig Planck, who was deeply interested in, even obsessed with, the second law of thermodynamics. The constant was introduced together with another fundamental constant, the quantum of action h (also named Planck's constant), for explaining the mathematical form of the black-body radiation spectrum 33 . In this relationship Ω is called the macrostate's multiplicity, that is to say the total number of microstates (positions and velocities of all particles constituting the system) compatible with a given macrostate. Since the logarithm is a monotonic function, the tendency of the multiplicity Ω to increase is the same thing as saying that entropy tends to increase: ∆S_univ ≥ 0. Another advantage of such a formulation is that, considering our two sub-systems A and B, one has Ω_tot = Ω_A×Ω_B and thus S_tot = S_A + S_B, the familiar extensive property of entropy.
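This additivity property is a one-line check (the multiplicities below are illustrative numbers, not computed from any physical model):

```python
import math

k_B = 1.380649e-23  # J/K

def S(omega):
    """Boltzmann entropy S = k_B * ln(Omega)."""
    return k_B * math.log(omega)

# Multiplicities multiply, so entropies add
omega_A, omega_B = 1e30, 5e24
print(math.isclose(S(omega_A * omega_B), S(omega_A) + S(omega_B)))  # True
```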
The power of this new formulation of entropy may be easily demonstrated by considering a system of N distinguishable particles placed in a volume V at temperature T. From quantum physics, we know that it is possible to associate to each particle of mass m a De Broglie thermal wavelength:

Λ = h/√(2π·m·k_B·T)

Consequently, at this temperature each particle occupies a quantum volume v = Λ³, cutting the total volume into Z = V/Λ³ elementary cells. Therefore, there are Ω = Z^N equivalent ways to spread the N distinguishable particles over the Z elementary cells, leading to an entropy:

S = k_B·ln Ω = N·k_B·ln(V/Λ³)

We thus learn from Boltzmann's equation that entropy increases for any increase in the total number of particles N, of the available volume V and of the temperature T. In fact, the above relationship is not quite correct, because quantum physics imposes that atoms and molecules are indistinguishable particles. The computation of the multiplicity Ω in such a case is trickier and the correct result is: 34

S = N·k_B·[ln(V/(N·Λ³)) + 5/2]

Now, for an isochoric process in a closed system characterized by ∆N = ∆V = 0, it comes that ∆S = N·k_B·ln(T_f/T_i)^(3/2), while for an isothermal process (∆N = ∆T = 0) we have ∆S = N·k_B·ln(V_f/V_i). This demonstrates, without any reference to the first law, that the sole knowledge of entropy is sufficient to understand the basic behavior of a system of N particles enclosed in a volume V at temperature T. We may also predict that for an isentropic process (∆S = ∆N = 0), any expansion (∆V > 0) should be associated with a decrease in temperature. Introducing now the first law stating that for a monoatomic ideal gas U = (3/2)·N·k_B·T, the derivation of the ideal gas law is straightforward:

P/T = (∂S/∂V)_U,N = N·k_B/V, i.e. P·V = N·k_B·T

It then follows that for an isobaric process in a closed system (∆P = ∆N = 0), we should have P/(N·k_B) = T/V = constant, meaning that ∆S = N·k_B·ln(V_f/V_i)^(5/2).
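These entropy variations can be verified numerically from the entropy of an indistinguishable ideal gas, S = N·k_B·[ln(V/(N·Λ³)) + 5/2] (the Sackur-Tetrode equation). A minimal sketch, using one mole of helium at 300 K and illustrative volumes:

```python
import math

k_B = 1.380649e-23   # J/K
h = 6.62607015e-34   # J*s

def thermal_wavelength(m, T):
    """De Broglie thermal wavelength: Lambda = h / sqrt(2*pi*m*k_B*T)."""
    return h / math.sqrt(2 * math.pi * m * k_B * T)

def sackur_tetrode(N, V, T, m):
    """S = N*k_B*[ln(V/(N*Lambda^3)) + 5/2] for indistinguishable particles."""
    L = thermal_wavelength(m, T)
    return N * k_B * (math.log(V / (N * L**3)) + 2.5)

m_He = 6.646e-27   # kg, helium atom
N = 6.022e23       # one mole
T = 300.0          # K

# Isothermal doubling of the volume: dS should be exactly N*k_B*ln(2)
S1 = sackur_tetrode(N, 0.025, T, m_He)
S2 = sackur_tetrode(N, 0.050, T, m_He)
print(math.isclose(S2 - S1, N * k_B * math.log(2)))  # True

# Isochoric doubling of the temperature: dS = N*k_B*ln((T_f/T_i)**1.5)
S3 = sackur_tetrode(N, 0.025, 2 * T, m_He)
print(math.isclose(S3 - S1, N * k_B * math.log(2**1.5)))  # True
```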
Considering again an isochoric process, we have V = N·k_B·T/P = constant, meaning that ∆S = N·k_B·[ln(T_f/T_i)^(5/2) - ln(P_f/P_i)], while for an isothermal one, T = P·V/(N·k_B) = constant, leading to ∆S = N·k_B·[ln(V_f/V_i)^(5/2) + ln(P_f/P_i)^(3/2)].
So, through the simple equation S = k_B·ln Ω, many predictions could be made that could all be confirmed by making experiments with gases. Even the second law dS_univ ≥ 0 could be anticipated by considering that if Ω_A is the multiplicity of a macrostate A and Ω_B is the multiplicity of another macrostate B of the same system, the most probable macrostate should be the one displaying the largest multiplicity, i.e. the largest entropy. A microstate might of course be inaccessible because it has the wrong energy. So, from a statistical viewpoint, the second law means that states always evolve from configurations of low probability (small multiplicity) towards configurations of maximum probability (the highest possible multiplicity compatible with the imposed constraints). Again, it is worth noting that concepts such as energy, heat or work, introduced for dealing with heat engines, are completely absent from this formulation. Moreover, associating energy with Hamiltonian or Lagrangian operators or functions is surely quite interesting, but totally useless as far as thermodynamics is concerned.
To reconcile both approaches, one should use a thermostat that fixes the temperature T and thus puts a constraint on the average quadratic speeds of the constituent parts. This allows mechanical energy to fluctuate at a microstate level with no important consequences for the macrostate level. This stems from the fact that fluctuations in the energy are minute compared with the total energy of the thermostat. In such a case, the internal energy U of a system of fixed temperature T may be identified with N times the average single-particle mechanical energy ⟨E⟩ about which the system's mechanical energy fluctuates:

⟨E⟩ = Σ_i ε_i·exp(-β·ε_i)/Z(β), with β = 1/(k_B·T) and Z(β) = Σ_i exp(-β·ε_i)

To know the system's energy levels ε_i we must know its volume V, constraining the spatial positions, and also the total number of molecules N present in the system, for only then is the mechanical system fully defined. The function Z(β) is called the partition function and is a very useful entity linking the accessible energy levels of a system to a macroscopic property, its internal energy U = N×⟨E⟩.
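A minimal sketch of this recipe for a hypothetical molecule with two energy levels (the level spacing and temperature are illustrative assumptions):

```python
import math

k_B = 1.380649e-23  # J/K

def partition_function(levels, T):
    """Z(beta) = sum_i exp(-beta*eps_i), with beta = 1/(k_B*T)."""
    beta = 1.0 / (k_B * T)
    return sum(math.exp(-beta * e) for e in levels)

def mean_energy(levels, T):
    """<E> = sum_i eps_i*exp(-beta*eps_i)/Z: the average about which the
    mechanical energy fluctuates when a thermostat fixes T."""
    beta = 1.0 / (k_B * T)
    Z = partition_function(levels, T)
    return sum(e * math.exp(-beta * e) for e in levels) / Z

# Hypothetical two-level molecule with a 4 zJ gap, at room temperature
levels = [0.0, 4e-21]  # J
T = 300.0              # K
E_avg = mean_energy(levels, T)

# The internal energy of N independent molecules is U = N*<E>
N = 6.022e23
U = N * E_avg
print(0.0 < E_avg < 4e-21)  # True: <E> sits between the two levels
```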

FREE ENERGIES
It also follows from the definition of the partition function that entropy may also be written:

S = U/T + k_B·ln Z^N, i.e. F(T,V,N) = U - T·S = -k_B·T·ln Z^N

This new kind of "energy" corresponds to Helmholtz's free energy, which is defined in macroscopic thermodynamics as the Legendre transform of the internal energy U. From F(T,V,N), another Legendre transform leads to Gibbs' free energy:

G(T,P,N) = F + P·V = H - T·S

In fact, it is possible to derive a more intuitive understanding of what free energies are 35 . Let's consider a set of N molecules able to occupy just two energy levels separated by an energy gap ∆U. To have an equilibrium situation, the number of molecules going from the lower level to the upper level should be at any time equal to the number of molecules going from the upper level to the lower level. According to Boltzmann's law, the fraction f of molecules that can be excited to the upper level owing to a thermal fluctuation at constant volume is f = exp(-∆U/k_BT). Now, from the statistical definition of entropy, S = k_B·ln Ω, where Ω is the multiplicity of a macroscopic state, equilibrium is expected when:

K_eq = (Ω_up/Ω_low)×exp(-∆U/k_BT) = exp(∆S/k_B - ∆U/k_BT)

Here ∆S is the entropy difference between the two states, ∆S = S(up) - S(low), and K_eq the so-called "equilibrium constant", such that ∆F = ∆U - T·∆S = -k_BT·ln K_eq. Similarly, the fraction f of molecules that can be excited to the upper level owing to a thermal fluctuation at constant pressure would be f = exp(-∆H/k_BT), leading, by the same reasoning, to the second kind of free energy, ∆G = ∆H - T·∆S. Consequently, if one is interested in populations, the pertinent functions for isothermal transformations are not the internal energy U or the enthalpy H, but rather the associated free energies F or G, depending on the second constrained parameter: volume for F, pressure for G. But what about the case of non-isothermal transformations?
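The identity between the population ratio and the free energy can be checked numerically. A sketch for a hypothetical two-level system (the energy gap and multiplicities are illustrative assumptions):

```python
import math

k_B = 1.380649e-23  # J/K

def K_eq_boltzmann(dU, omega_low, omega_up, T):
    """Equilibrium constant as Boltzmann factor times multiplicity ratio."""
    return (omega_up / omega_low) * math.exp(-dU / (k_B * T))

def K_eq_helmholtz(dU, omega_low, omega_up, T):
    """Same constant via dF = dU - T*dS, with dS = k_B*ln(omega_up/omega_low)."""
    dS = k_B * math.log(omega_up / omega_low)
    dF = dU - T * dS
    return math.exp(-dF / (k_B * T))

T = 310.0    # K, body temperature
dU = 5e-21   # J, illustrative energy gap

K1 = K_eq_boltzmann(dU, 1, 20, T)
K2 = K_eq_helmholtz(dU, 1, 20, T)
print(math.isclose(K1, K2))  # True: both routes give the same K_eq
```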
It is easy to see by the above reasoning that the pertinent functions for following populations should then be Massieu's function S - U/T at constant volume and Planck's function S - H/T at constant pressure, and no longer ∆F or ∆G, which are clearly defined only at constant temperature.
In fact, the same conclusion could be reached by ignoring microstates and considering a splitting of the whole universe into system and surroundings, separated by an interface that may or may not allow entropy exchanges: 36

dS_univ = dS_syst + dS_surr ≥ 0

As explained above, for micrometer-sized bacteria, the universe and the surroundings (anything that is not inside the lipid double layer) are really colossal (hundreds of yotta-meters in size), and such a global formulation is not at all adapted to the scale of a cell or of a multicellular organism. But, relying on the fact that energy is a form of adiabatic work δW(adiabatic), i.e. a work done with no heat exchange, and that energy cannot be created or destroyed, it is possible to masquerade entropy exchanges with the surroundings as adiabatic work done at a given temperature T:

dS_surr = δW_surr(adiabatic)/T = -δW_syst(adiabatic)/T

Moreover, biological transformations usually occur under the constant pressure provided by the earth's atmosphere and not at constant volume, as living cells may swell or shrink by absorbing or releasing water. Thus, introducing enthalpy as dH = δW_syst(adiabatic), it follows that for any infinitesimal change:

dS_univ = dS_syst - dH_syst/T ≥ 0

It is worth noting that such legitimate transformations have completely eclipsed the original partition between the system and its surroundings, with a complete conjuring away of the two huge systems (universe and surroundings). We thus now have two equivalent terms: the one on the left, dS_univ, referring explicitly to the whole universe and showing the reason for the second law (no possible decrease of S_univ), and the one on the right making reference only to the small sub-system, with a tacit assumption that the variations of entropy and enthalpy observed on the system alone are in fact exactly related to the entropy variations of the whole universe.
In fact, such an assumption is usually simply ignored by most scientists not well acquainted with thermodynamic subtleties, giving the false impression that the entropy of the small sub-system has to increase independently of the entropy of the whole universe, a major pitfall to be avoided. This was, of course, Schrödinger's first fatal error upon writing his little book about what is life. But the error of forgetting that thermodynamics is the science of the whole universe has still more perverse consequences. Accordingly, if the temperature remains constant during the infinitesimal transformation, then dT_syst = 0, allowing one to write:

dS_univ = d(S_syst - H_syst/T) = dψ ≥ 0

This basically means that at constant pressure and temperature the right criterion of spontaneous evolution is not dG = d(H - T·S) ≤ 0, as usually stated in most textbooks, but rather an increase in the so-called Planck's function, dψ = d(S - H/T) ≥ 0. 36 One may of course argue that if temperature is constant, dψ = -d(G/T) = -dG/T ≥ 0, meaning that, as temperature is a positive quantity, dG = -T·dψ ≤ 0. There is also a deep subtlety here, linked to the fact that by writing dG ≤ 0 one tacitly assumes that the system evolves at constant pressure in contact with a thermostat, whereas writing dψ ≥ 0 only assumes constant temperature, whether the system is in contact with a thermostat or not. So, if the criterion dψ ≥ 0 is a special case (dT = 0) of the most general criterion dS_univ ≥ 0, it also appears that the criterion dG ≤ 0 is a special case of dψ ≥ 0 (dT = 0 fixed by a thermostat to ensure that both initial and final states are at thermal equilibrium).
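The sign relation between the two criteria at fixed temperature is a one-liner to verify (the enthalpy and entropy values are illustrative):

```python
import math

def gibbs(H, S, T):
    """Gibbs free energy G = H - T*S."""
    return H - T * S

def planck_psi(H, S, T):
    """Planck's function psi = S - H/T; at fixed T, psi = -G/T."""
    return S - H / T

# Illustrative values: enthalpy (J), entropy (J/K), temperature (K)
H, S, T = 5.0e3, 20.0, 310.0
print(math.isclose(planck_psi(H, S, T), -gibbs(H, S, T) / T))  # True
```

Since T > 0, dψ ≥ 0 and dG ≤ 0 are then the same statement, but only when a thermostat holds T fixed.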
The importance of considering dψ ≥ 0 and not dG ≤ 0 as a criterion for spontaneous evolution at constant temperature and pressure is well illustrated by the temperature dependence of the ionization constant of acetic acid 36 . Measurements show that as the temperature is increased from 0 °C, the degree of ionization first increases, reaching a maximum just below 25 °C, and then decreases with increasing temperature. But the temperature dependence of ∆G° for this ionization shows a monotonic increase with no maximum in the experimental range of temperatures studied. On the other hand, the same temperature dependence of Planck's function ∆ψ° leads to a dome-shaped curve with a maximum around 25 °C. This demonstrates the clear superiority of Planck's function for comparisons of the degree of spontaneity of a given transformation at different temperatures. 36 Consequently, one should really avoid the common error of thinking that by adding the word "free" before the word "energy", one still refers to energy changes. It should rather be realized that "free energies" are in fact entropies, an obvious statement when looking at Planck's function ψ rather than Gibbs' G. In fact, the error of assimilating Gibbs' free energy to energy may be traced back to 1923 in a very popular thermodynamics treatise. 37 Besides forgetting that thermodynamics is a science of the whole universe, there is also the fact that entropy changes ∆S_syst are masqueraded in Gibbs' formulation as energy changes after multiplication by the temperature of the thermostat. Such a manipulation pushes unaware people to the belief that a thermodynamic system tries, upon spontaneous evolution, to minimize its energy, whereas in reality it tries to maximize the entropy of the universe! From this fundamental error follows the wrong idea that changes always proceed from configurations of high energy to configurations of low energy.
In fact, this just cannot be the case, owing to the fact that energy is always conserved, meaning that any energy decrease somewhere must exactly match an energy increase elsewhere. 38

THE SECOND LAW AND THE UNIVERSE
In line with the fact that energy is a conserved quantity that should never be created nor destroyed, it may seem at first sight surprising to see molecules with large negative energies popping out of nothing. In fact, it happens that the decrease in energy is relative to a zero-energy state where a distance equal to the diameter of the whole universe separates the nuclei from their electrons. This raises the interesting question of what may be the total energy of the whole universe. A pertinent answer would of course be that, to have a reasonable chance of meeting, nuclei and electrons should have at least some kinetic energy E_univ that is different from zero and whose exact value does not really matter. Accordingly, when these particles come close enough to interact, their average kinetic energy increases by a certain amount ⟨∆K⟩ = E_univ - ⟨K⟩, due to the trapping of the electrons in the nuclei's Coulomb potential (Heisenberg's uncertainty principle: ∆p·∆x ≥ ħ/2), associated with a decrease in potential energy ⟨∆U⟩ = -2×⟨K⟩ (virial theorem). As total energy should always be conserved, one should have ⟨∆K⟩ + ⟨∆U⟩ = 0 = E_univ - 3×⟨K⟩, i.e. E_univ = 3×⟨K⟩. There is thus absolutely no decrease in total energy when electronic shells appear around nuclei and when chemical bonds between atoms are created, but just a different partition between kinetic and potential contributions, relative to an arbitrary absolute energy content of the whole universe.
But, if the total energy content is the same between an assembly of separated nuclei and electrons dispersed in the universe and the same assembly occupying a quite tiny volume, why should atoms and molecules form? As explained above, the answer is simply that entropy is higher after the formation of atoms and molecules than before. At first sight, it could seem strange to associate an entropy increase with a process leading to a strong decrease in volume. But again, the golden rule is that entropy is allowed to decrease in one small part of the universe (called atoms and molecules), provided that the other parts of this universe have increased their entropy so as to more than compensate the necessary decrease. And one must not forget that entropy may be associated with visible matter (atoms, molecules) as well as invisible matter (neutrinos) or non-matter (photons). Everything that could be counted as particles (photons, neutrinos, electrons, nuclei, atoms, molecules, cells, organisms, etc.) carries a part of entropy. The higher the number of entities, the higher the entropy (see above).
Accordingly, as atoms are created in stars and as stars emit a huge number of invisible neutrinos and photons (with a small number that are "visible") into the intergalactic vacuum, a strong increase in the total entropy of the universe is always associated with the formation of nuclei and atoms. In other words, if the universe is full of atoms it should also be full of neutrinos and photons. This could be checked by a back-of-the-envelope calculation. Let ⟨M⟩ be the average mass of a star (in grams), N_s the total number of stars in a galaxy and N_g the number of galaxies in the universe. The total number of H-atoms should then be n_H = N_g×N_s×⟨M⟩×N_A, where N_A is Avogadro's constant. Taking the mass of the sun, m_0 = 2×10^33 g, as a reference, the stellar and sub-stellar initial mass function (IMF) displays a power-law distribution f(m) = (m/m_0)^(-α), with α = 0.3 (m/m_0 ≤ 0.08), α = 1.3 (0.08 ≤ m/m_0 ≤ 0.5) and α = 2.3 (m/m_0 ≥ 0.5) 39 . The integral of such an IMF being F(m) = (1/(1-α))×(m/m_0)^(1-α), a piecewise integration allows computing an averaged mass ratio ⟨m⟩/m_0 ≈ 5.15. Now, for a galaxy such as the Milky Way, the total amount of visible mass is m/m_0 = 0.42×10^12 40 , leading to an average number of stars N_s ≈ (0.42/5.15)×10^12 ≈ 0.8×10^11. Finally, the current best estimate of the total number of galaxies in the universe is N_g ≈ 2×10^12 41 . So, the total amount of H-atoms in the universe may be estimated as n_H ≈ 2×10^12×0.8×10^11×2×10^33×6×10^23 ≈ 2×10^80. For the total number of photons, we may use the blackbody equation with the temperature of the cosmic microwave background, T(CMB) = 2.726 K 42 , as an estimate of the current density of low-energy photons. Converting Planck's black-body function into the phase-space number density of photons gives:

N(CMB)/V = (2·ζ(3)/π²)×(k_B·T/ħc)³

Here ζ(3) = 1.202057 is Apéry's constant, leading to N(CMB)/V = 411 photons·cm⁻³.
The volume of the universe being 3.5×10^86 cm³, we get a total of 1.44×10^89 low-energy photons liberated owing to the assembly of all atoms and molecules (including those produced on earth) in the universe. For neutrinos, we have a ratio He/H = 0.075, heavier elements being relatively rare. Given that helium has two neutrons, and that creating a neutron also creates a neutrino, we can estimate the total number of neutrinos to be about 3×10^79. This shows that even if neutrinos participate in the overall entropy budget of the universe, photons nevertheless give, as expected, an overwhelming contribution.
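These counts can be reproduced with the inputs given in the text (the galaxy, star and mass numbers are the text's order-of-magnitude estimates, not precise data):

```python
import math

# Hydrogen atoms: N_g * N_s * m_sun(grams) * N_A
N_A = 6.022e23     # Avogadro's constant (atoms per gram of hydrogen)
N_g = 2e12         # galaxies in the universe
N_s = 0.8e11       # stars per galaxy
m_sun = 2e33       # g, reference stellar mass
n_H = N_g * N_s * m_sun * N_A      # ~2e80 hydrogen atoms

# CMB photon density: n = (2*zeta(3)/pi^2) * (k_B*T/(hbar*c))^3
k_B, hbar, c = 1.380649e-23, 1.054571817e-34, 2.99792458e8
zeta3 = 1.202057   # Apery's constant
T_cmb = 2.726      # K
n_photon_m3 = (2 * zeta3 / math.pi**2) * (k_B * T_cmb / (hbar * c))**3
n_photon_cm3 = n_photon_m3 / 1e6   # ~411 photons per cm^3

V_univ = 3.5e86    # cm^3, volume of the universe
n_photons_total = n_photon_cm3 * V_univ  # ~1.44e89 photons

print(round(n_photon_cm3))  # 411
```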
Oblivion of the photons' contribution to the entropy budget of the universe has of course deep consequences in biology, leading to the ridiculous claim that living systems violate the second law of thermodynamics. Another nasty consequence is the idea that the sun is a source of energy. As explained above, energy being by essence a conserved quantity, there are neither sources of energy nor high-energy molecules in the universe. We have shown above that chemical bonding is the consequence of a confinement that redistributes kinetic and potential energies at constant total energy. Concerning life, we have a low-entropy container called the sun pouring high-frequency photons onto the earth. But, as energy should always be conserved and entropy should always increase, the earth must in return pour a high number of low-frequency photons into the intergalactic space. What has happened in stars for creating atomic nuclei and in meteorites for creating molecules also applies to the creation of living cells on earth. Basically, to each reduction of entropy for visible matter corresponds a large increase in entropy carried away by photons. Thus, the earth, by receiving photons from the sun centered on λ = 0.5 µm, creates photons centered on λ = 10 µm that are emitted towards the intergalactic space. As energy is always conserved, one single photon from the sun (at 0.5 µm) generates 10/0.5 = 20 earth photons (at 10 µm), leaving on earth wonderful and highly sophisticated living structures. Of course, the same 20:1 ratio is retrieved by comparing the temperature of the sun's surface computed from Wien's displacement law (T = 5760 K) and that of the earth's surface (T = 288 K = 15 °C), as 5760/288 = 20. The fact that we may here use either wavelengths or temperatures stems from Noether's theorem stating that energy should always be a conserved quantity (first law of thermodynamics).
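The 20:1 bookkeeping can be checked with Wien's displacement law, λ_max·T = b, with b ≈ 2.898×10⁻³ m·K:

```python
# Wien's displacement law: lambda_max * T = b
b = 2.898e-3                  # m*K
T_sun, T_earth = 5760.0, 288.0

lam_sun = b / T_sun           # ~0.5 um, peak of solar emission
lam_earth = b / T_earth       # ~10 um, peak of terrestrial emission

# Energy conservation: one solar photon is re-emitted as ~20 earth photons
print(round(lam_earth / lam_sun))  # 20
print(round(T_sun / T_earth))      # 20
```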
Speaking of energy consumption or energy sources is thus pure nonsense, and biologists should rather refer to food (low entropy source, sun) transformed into biomass (low entropy, living species) and heat or waste (high entropy, climate or pollution) 8 .

BIOLOGY AND THE SECOND LAW
From the very beginning of its introduction by Rudolf Clausius, entropy was considered as a state function taking definite values for equilibrium states. What entropy was for non-equilibrium states was just ignored, as the main focus during the nineteenth century was on the optimization of heat engines. Fortunately, thanks to Boltzmann's equation S = k_B·ln Ω, popularized by Planck and Einstein, we have in hand a generalized definition of entropy applicable to any kind of transformation and clearly defined even for non-equilibrium states 43 . Moreover, such a fundamental equation also helps to clarify what lurks behind the notion of an irreversible phenomenon. Let Ω_initial be the phase volume occupied by all microstates compatible with an initial macrostate. In setting up such a state, the experimenter's apparatus can put the system only in some uncontrolled point in Ω_initial. Then, owing to Liouville's theorem stating the conservation of any phase volume by the equations of motion, the process initial → final cannot be reproducible unless the phase volume Ω_final is large enough to hold all the microstates that could evolve out of Ω_initial. In other words, the requirement that S_final ≥ S_initial (i.e. Ω_final ≥ Ω_initial) is not a mysterious law of nature, but just stems from the need to have a reproducible process 44 . Accordingly, following Boltzmann, Planck and Einstein, any process such that Ω_final ≤ Ω_initial should not be considered as forbidden or impossible, but only as improbable, i.e. not reproducible. This is because the ratio of the numbers of microstates associated with a transformation is given by:

Ω_final/Ω_initial = exp[(S_final - S_initial)/k_B] = exp(∆S/k_B)

As the smallest entropy difference that could be measured in the laboratory is about 1 µJ·K⁻¹, it follows that for a process such that ∆S = S_final - S_initial = -1 µJ·K⁻¹ one has Ω_final ≈ Ω_initial×exp(-10^17).
Under such conditions, the final state appears to be so tiny relative to the initial one that trying to perform the same experiment again and again will always lead to different outcomes. So, it is the mere desire of human beings to study nature using reproducible scientific experiments that imposes the second law. Fundamentally, anything may happen in nature, but as soon as scientists try focusing on regularities or reproducible facts, they cannot escape from the second law.
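The staggering size of the exp(-10^17) factor quoted above follows from dividing the smallest measurable entropy change by Boltzmann's constant:

```python
k_B = 1.380649e-23   # J/K

# Smallest entropy difference measurable in the laboratory (from the text)
dS = -1e-6           # J/K

# Omega_final = Omega_initial * exp(dS/k_B)
exponent = dS / k_B
print(f"{exponent:.2e}")  # about -7.2e16, i.e. exp(-1e17) in order of magnitude
```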
This basically means that perpetual machines of the second kind do exist in nature (we have called them cells), but at the cost of producing non-predictable outcomes (a phenomenon called life). When a scientist pretends that a perpetual motion of the second kind cannot exist, he is right, but then he considers only artificial machines and not living cells. The fundamental keyword characterizing the second law is thus not disorder but reproducibility. In such a case, it follows that S = k_B×ln Ω applies equally well to determining which non-equilibrium states can be reached reproducibly from which others, without any restriction to slow, reversible processes. Returning to the case of equilibrium thermodynamics, these considerations lead us to state the conventional second law in the form: the experimental entropy cannot decrease in a reproducible adiabatic process that starts from a state of complete thermal equilibrium. 43 Now, as far as living systems are concerned, the generalization of the second law to non-equilibrium processes appears to be crucial for explaining how the animal muscle succeeds in performing work from activated molecules with 70% efficiency. 45 Accordingly, believing that the muscle behaves as a heat engine would mean that the maximum attainable work would obey Kelvin's formula for the efficiency, η_max/% = 100×(1 - T_2/T_1), which considers a universal reversible Carnot heat engine operating between an upper temperature T_1 and a lower temperature T_2. According to this formula, considering a muscle (T_1 = 310 K) working at room temperature (T_2 = 300 K), one expects that η_max = 100×(1 - 300/310) ≈ 3%! Worse, as soon as room temperature reaches the temperature of the muscle, efficiency drops to exactly zero… To justify the 70% observed efficiency at room temperature, the temperature of the cold reservoir allowing one to perform mechanical work should be T_2 = 310×(1 - 0.7) = 93 K = -180 °C.
The only correct conclusion to be drawn from these numbers is simply that animal muscle cannot be a heat engine. But considering the same problem starting directly from Boltzmann's equation rather than from Kelvin's, it transpired that: 45 Here, the variable r stands for the non-equilibrium analog of the T2/T1 ratio. Being derived under the most general form of the second law, S(initial) ≤ S(final), without any restriction to equilibrium, this last equation applies to any kind of engine fueled with an energy E1 focused over N1 degrees of freedom of the engine and delivered to a large sink reservoir characterized by an average energy E2 = ½N2×kB×T2. Assuming that the energy E1 is delivered as n quanta of individual energy e = 69 zJ focused on a single vibration mode of the muscle (N1 = 2n) leads to: Of course, if the quanta of energy were focused on two vibration modes instead of a single one, the maximum efficiency would drop: with N1 = 4n, we now have r = 0.118, i.e. ηmax = 63%. Had the available chemical energy spread over ten vibration modes before being transferred, the efficiency would be only 10%. The experimental value being 70%, we have here the proof that muscle is really an amazingly tuned quantum machine and definitely not a heat engine.
Such considerations show how a biological system can be far from equilibrium even when a thermometer bulb registers a "uniform" temperature within the system. This fallacy of thermal equilibrium in the living cell has oriented the whole modern literature of bioenergetics towards Helmholtz's (constant volume) or Gibbs' (constant temperature) "free energies", which apply only when the reaction proceeds so slowly that thermal equilibrium is established at all times. This basically means that heat flows and diffusion fluxes are rapid enough to maintain uniformity. In a living cell, where molecules are not free to diffuse rapidly owing to the presence of membranes (compartmentalization), the best thing to do is thus to rely exclusively on Planck's function, which measures the total entropy discharged into the universe without the constraint of being connected to a thermostat.
With all these clarifications in mind, it should now be clear that non-spontaneous transformations occurring under ambient pressure and characterized by S(final) < S(initial) (non-equilibrium), ∆ψ < 0 (equilibrium without thermostat) or ∆G > 0 (equilibrium with thermostat) may in fact occur either in a reproducible way (∆Suniv ≥ 0) or in a non-reproducible way (∆Suniv < 0). Of course, as far as living systems are concerned, the non-reproducible evolution (∆Suniv < 0) is completely useless for a single isolated cell and is usually encapsulated under different names such as "hazard", "chaos", "chance", "noise", etc. On the other hand, the reproducible evolution (∆Suniv ≥ 0) is strongly valorized under other names such as "necessity", "will", "aim", "determinism", etc. But both fundamentally exist in nature, and if one switches from the cell level to the species level, (∆Suniv < 0) transformations become valorized, taking the name of "complexity", or become the central dogma of biology, "Omnis cellula e cellula", 46 stating that the apparition of a single living cell means that a kind of perpetual motion of the second kind called life is initiated that can never be stopped. And as explained just above, such a statement is the insurance that life, taken as a whole, is a fundamental property of the universe that will always find its way whatever the external conditions. Life could well be a very slow process under unfavorable conditions, but nothing can prevent its manifestation. This would of course be the case if the constraint (∆Suniv ≥ 0) were a real law of nature and not just the consequence of considering exclusively reproducible events. For adding such a constraint means the apparition of an apparent arrow of time, reflecting the mere fact that macrostates with large multiplicities are, for purely statistical reasons, systematically "favored" over macrostates with low multiplicities.
So, it is somehow satisfying to see that the formalism of thermodynamics leads to the same conclusion as general relativity or quantum mechanics: that time fundamentally does not exist. Time is a pertinent attribute only for reproducible processes, and if such a constraint is not applied by a conscious being, everything becomes possible and the mere notion of time evaporates either into nothingness or into endless eternity. Such a conclusion is also coherent with the fact that consciousness should pre-exist time, space and matter. [47][48][49]

THERMAL COUPLING AND THE SECOND LAW

Further clarification is also needed for non-spontaneous reproducible processes that are characterized by S(final) < S(initial) and ∆Suniv ≥ 0. This basically means that a local decrease in entropy is tolerated as long as it is fully compensated by a much bigger increase in the entropy of the whole universe, either through generation of heat or through generation of wastes that could be particles of matter or particles of light (photons). This possibility of releasing entropy either under a material form or under an immaterial form stems from Sackur-Tetrode's equation, underpinning the fact that mass is itself a form of entropy and that entropy depends on the total number of particles created, which could indifferently be fermions (matter) or bosons (interactions). Of course, to accept such an idea, it is mandatory to refer to quantum field theories, where matter particles may be created or annihilated at will and where each interaction between fermions is interpreted as an exchange of bosons. So, to observe non-spontaneous reproducible processes in nature, one may invoke a coupling either with light, as evidenced in photosynthesis, or with other molecules, as evidenced by chemiosmotic processes such as oxidative phosphorylation.
But before considering such thermodynamic coupling in living systems, one may first consider coupling in heat engines. As exposed above, thermodynamics was first developed to find the maximum theoretical efficiency during the conversion of heat q into useful work w. The idea behind a heat engine is to have a source of heat q2 that can be extracted from a heat reservoir at the highest possible temperature T2. If a cold reservoir at temperature T1 < T2 is available, then this temperature difference may be exploited to obtain work w: As realized by Carnot, the equality holds if and only if the engine is reversible. In the latter case, the "wasted energy" q1(Carnot) is delivered as heat to the reservoir at temperature T1. The idea is now not to produce work, but rather to deliver the maximum possible heat to that lower-temperature reservoir. This is the conversion problem faced in every home, where one has heat from a gas, oil, wood or coal flame but wants to heat the house in the most efficient way. Here, we are moving from heat engines to heat pumps. The idea is thus to have an ambient heat reservoir (the outside world) at temperature T0 < T1, to use a perfect Carnot engine to obtain the heat q1(Carnot), and to use the work w available to drive a heat pump between T0 and T1, yielding the additional heat: Applying standard thermodynamics, it thus comes that the maximum attainable heat q1 = q1(Carnot) + q1(pump) and the heat extracted from the outside reservoir q0 are such that: 50 As before, the equality holds if and only if the process is reversible. It is thus easy to see that there is always a net gain (G > 1) as soon as T0 < T1 < T2. This also means that heat may flow spontaneously from room temperature T1 to a higher temperature T2 because there is simultaneously a compensating heat flow to a lower temperature T0.
In such a case, writing -q1 for the heat extracted from the room and -q2 for the heat delivered to the hotter place (T2 > T1), one obtains: This shows that no spontaneous heat transfer is possible if T0 = T1, but as soon as T0 < T1, heat may flow spontaneously from the cold point T1 to the hot point T2 because at the same time more heat is transferred to the cold reservoir. One also sees that the lower T0, the higher the amount of heat flowing from T1 towards T2, even if T1 < T2. This is the basic idea behind any kind of thermodynamic coupling (here with heat engines and pumps): benefiting from a large global entropy flux to invert locally a smaller entropy flux. Such simple thermodynamic considerations help explain how the apparition of life on a planet may start as soon as it becomes cold enough to allow efficient thermal coupling between hot organisms working at temperature T2 ≈ 37 °C drawing heat from a cold surface at T1 ≈ 15 °C (greenhouse effect) in thermal contact with a huge cold reservoir at T0 ≈ -18 °C (planetary equilibrium temperature). It is worth noting that such thermal coupling is purely physical and does not depend on the existence of a metabolism based on chemistry. This of course means that warm life is fed by the earth and not really by the sun, which behaves as a low-entropy source relative to the earth even if it is a high-entropy source relative to an icy intergalactic space. It is in this precise sense that life on earth is intimately and non-mechanically coupled to what happens at the scale of the whole universe, and why thermodynamics is quite a subtle science relative to mechanics or electromagnetism.
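The equations quoted from reference 50 were lost in this version, but the gain of the engine-plus-pump combination follows from the standard Carnot relations: the reversible engine delivers q1(Carnot) = q2×T1/T2 plus work w = q2×(1 - T1/T2), and a reversible pump between T0 and T1 multiplies that work by T1/(T1 - T0). The combined result G = q1/q2 = T1×(T2 - T0)/(T2×(T1 - T0)) is therefore a reconstruction, consistent with the text's claim that G > 1 whenever T0 < T1 < T2; the sketch below evaluates it for the biological temperatures quoted above.

```python
def coupling_gain(t2, t1, t0):
    """Maximum heat delivered at T1 per unit heat drawn at T2 (T2 > T1 > T0),
    for a reversible Carnot engine driving a reversible heat pump.
    NOTE: the closed form q1/q2 = T1*(T2 - T0)/(T2*(T1 - T0)) is reconstructed
    from standard Carnot relations, not copied from the original text."""
    return t1 * (t2 - t0) / (t2 * (t1 - t0))

# Temperatures quoted in the text: organism 310 K, surface 288 K, sky 255 K.
g = coupling_gain(t2=310.0, t1=288.0, t0=255.0)
print(f"gain G = {g:.2f}")  # G > 1: more heat arrives at T1 than was drawn at T2
```

Note that the gain grows without bound as T0 approaches T1 from below, which is the quantitative version of "the lower T0, the higher the amount of heat flowing from T1 towards T2".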

CHEMIOSMOTIC COUPLING AND THE SECOND LAW
What can be done with heat may obviously also be realized through chemistry, as atoms and molecules may be considered as "canned heat". Let us assume that we have at our disposal a chemical reaction able to liberate a given quantity of entropy to the whole universe, ∆ψ > 0. One may then consider that Boltzmann's constant kB behaves as a universal quantum of entropy for evolution, just as Planck's constant h corresponds to a quantum of action for motion. With such a quantum, one may write ∆ψ = N×kB > 0. Let us now assume that we want to perform a non-spontaneous but nevertheless reproducible chemical reaction characterized by ∆ψ' = -N'×kB < 0. The question is how we may benefit from the fact that N > N'. Let η = n×N'/N be the efficiency of the coupling. Here we have to consider that we are dealing with basically irreversible processes (chemical reactions) and that part of the entropy necessarily has to be evacuated as heat. This means that the efficiency can be neither η = 1 (reversible, unrealistic case) nor η = 0 (no coupling at all, as all the entropy is exported as heat). The question is thus to find the optimum value for η (or n). Now, from the thermodynamics of irreversible processes we know that, not very far from equilibrium, there should exist linear relationships between disequilibrium degrees D and the corresponding flows, J = L×D, 51 where L is a phenomenological coefficient that corresponds to conductance for electrical conduction (Ohm's law I = ∆V/R), to the diffusion coefficient for diffusion (Fick's law Jc = -D×dc/dx), to thermal conductivity for heat conduction (Fourier's law Jq = -λ×dT/dx), and to a kinetic constant K for the advancement of a chemical reaction (Prigogine's law JS = -K×∆ψ). Focusing on the chemical case, we should have: Differentiating this relation with respect to n then shows that the optimum efficiency is obtained when n = N/2N', i.e. η = 0.5.
This means that 50% of the available entropy should be used for creating a low-entropy mixture (biomass and wastes) and the remaining 50% evacuated as heat. Such a result is perfectly understandable, as low values of n mean bad coupling and thus large production of heat. Such a situation is kinetically good because the liberated heat promotes a high disequilibrium degree, giving a large flux of entropy. Conversely, high values of n mean good coupling with low heat production. But in such a case the disequilibrium degree is low and the kinetics bad, giving a small entropy flux. A good compromise between speed and efficiency is reached when entropy is equitably shared between creating matter and heat.
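The relation actually differentiated "with respect to n" did not survive in this version of the text. A minimal model consistent with the surrounding description is to assume, following the linear (Prigogine-type) law just quoted, that the entropy flux is proportional to the uncompensated disequilibrium, J(n) = K×(N - n×N'), so that the useful storage rate is P(n) = n×N'×J(n). This assumed form is a hypothetical stand-in, but maximizing it does reproduce the stated optimum n = N/2N', η = 0.5:

```python
# ASSUMED model (the original equation is lost): useful entropy-storage rate
# P(n) = n*Np * K * (N - n*Np), i.e. coupled output times a flux that is
# linear in the remaining disequilibrium, per the quoted Prigogine-type law.

def optimum_n(N, Np, K=1.0, samples=200001):
    """Numerically locate the n in [0, N/Np] maximizing P(n)."""
    best_n, best_p = 0.0, float("-inf")
    for i in range(samples):
        n = (N / Np) * i / (samples - 1)   # scan the admissible range of n
        p = n * Np * K * (N - n * Np)
        if p > best_p:
            best_n, best_p = n, p
    return best_n

N, Np = 100.0, 4.0                 # illustrative values, N > Np
n_opt = optimum_n(N, Np)
eta_opt = n_opt * Np / N
print(f"n_opt = {n_opt:.2f}, eta = {eta_opt:.2f}")  # n = N/(2*Np), eta = 0.5
```

Analytically, dP/dn = N'×K×(N - 2n×N') = 0 gives n = N/(2N') and hence η = n×N'/N = 1/2, matching the text.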
Such considerations allow us, on the most general grounds, to retrieve clear definitions for different states: life with healthiness (η = 0.5), life with catabolic illnesses (η < 0.5) or with anabolic illnesses (η > 0.5), and of course death by combustion (η = 0) or death by accumulation of matter (η = 1).

REFORMING BIOLOGICAL THINKING
It follows from the above analysis that any kind of biological thinking should be centered on the concept of entropy of the whole universe and not on energy. Moreover, the fact that free energies are in fact entropies urges a reform of the vocabulary. This could easily be done by focusing exclusively on Planck's function, ψ = S - H/T = -G/T, which clearly emphasizes its entropic nature while keeping the historical separation between entropic and enthalpic effects. The proposed reform would greatly simplify the subject, as instead of using the counterintuitive condition ∆G ≤ 0 for spontaneous evolution at constant temperature and pressure, one would have ∆ψ ≥ 0, directly in line with the second law. The term energy would then be reserved for discussing molecular properties, where a clear definition as the eigenvalue of a Hamiltonian operator is available. This would have the consequence of making the presentation of the so-called "first law" optional, as for macrostates such a principle is more a recipe associated with the definition of the macrostate than the expression of a fundamental law of nature. Of course, the law of conservation of energy for microstates would keep its fundamental nature, as it is deep-rooted in Noether's theorem and not linked to the empirical definition of what a macrostate is.
Concerning thermodynamic databases compiling Gibbs' free energies of formation for numerous chemical compounds, a simple rescaling, ∆πi° = -∆fG°/T, would be necessary. Here, the symbol ∆πi° should be understood as an "irreversibility potential" measuring the maximum amount of entropy, held by a given substance relative to the elements taken in their standard states, that could be irreversibly transferred from the substance to the whole universe during a chemical transformation. The new convention, already used in a previous paper, 8 would then be that for each transformation there exists a thermodynamically allowed spontaneous irreversible direction (∆πi° > 0) and another direction (∆πi° < 0) that imperatively needs a coupling with another reaction (∆πi'° > -∆πi°) to have (∆πi'° + ∆πi°) > 0. The Gibbs free energies of formation from the elements taken in their standard states that are needed for giving numerical values to irreversibility potentials may be derived either indirectly from calorimetric measurements (∆G = ∆H - T·∆S) or through measurement of redox potentials E (∆G = -n·F·E, with F ≈ 96 500 C·mol^-1 and n the number of electrons involved). Many compilations of such values exist in the literature, such as the NIST-JANAF Thermochemical Tables for molecules, 52 U.S. Geological Survey bulletins for minerals 53 and IUPAC technical reports for radicals. 54 Concerning units, one should obviously stick to the international practice of expressing energy E in joules (J) and entropy S in J·K^-1. However, one joule, being the energy associated with the displacement of a mass m = 1 kg at a speed of v = 1 m·s^-1, is not very convenient for biology, where everything happens with molecules (m ≈ 10^-27 kg) at a nanometer scale (d ≈ 10^-9 m). Fortunately, there exist only six universal constants available for dealing with energy at different scales, such as the one linked to an electric current I: E = 2h×α×I/e.
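The proposed rescaling ∆πi° = -∆fG°/T and the redox route ∆G = -n·F·E can be sketched numerically. The sketch below uses liquid water as an example (∆fG° ≈ -237.1 kJ·mol^-1 at 298.15 K, a standard tabulated value); the per-molecule zJ conversion and the sign convention (π > 0 for the spontaneous direction) follow the text.

```python
# Sketch of the rescaling pi = -DfG/T (per molecule, in zJ/K), with liquid
# water as a worked example. DfG(H2O, l) ~ -237.1 kJ/mol is a standard
# tabulated value, not taken from this paper.

AVOGADRO = 6.02214076e23  # mol^-1

def irreversibility_potential(df_g_kj_mol, t_kelvin):
    """Irreversibility potential pi = -DfG/T, in zJ/K per molecule."""
    df_g_zj = df_g_kj_mol * 1e3 / AVOGADRO * 1e21  # kJ/mol -> zJ per molecule
    return -df_g_zj / t_kelvin

def redox_to_gibbs(n_electrons, e_volts, faraday=96500.0):
    """DfG in kJ/mol from a redox potential: DfG = -n*F*E."""
    return -n_electrons * faraday * e_volts / 1e3

pi_water = irreversibility_potential(-237.1, 298.15)
print(f"pi(H2O, l) = {pi_water:.3f} zJ/K per molecule")  # positive: formation is spontaneous
```

As a consistency check, the hydrogen/oxygen couple (E = 1.23 V, n = 2) gives back ∆G ≈ -237 kJ·mol^-1 through the redox route.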
Now, as far as biology is concerned, two obvious quantities emerge: U ≈ -100 mV, the membrane potential, and T ≈ 310 K, the temperature of the human body, leading to Epot = -0.1×160.2 ≈ -16 zJ and Etemp = ½×310×0.01381 ≈ 2 zJ, with 1 zJ = 10^-21 J. It thus appears that the zeptojoule (zJ) is quite a convenient unit of energy for quantifying biological processes. This seems a much better idea than constantly referring to the energy associated with the irreversible hydrolysis of ATP, which is a free energy and thus an entropy. This explains why, depending on the experimental settings, this "reference" value may lie anywhere between 35 and 70 zJ, depending on the available concentration of magnesium ions. 55 As membrane potential, body temperature and hydrolysis of ATP always amount to a few, or at most tens of, zeptojoules, this sub-multiple of the joule appears to be a very convenient unit. For chemists and physicists who are not acquainted with this unit, we have the following approximate conversion factors: 1 kJ·mol^-1 = 1.66 zJ (chemistry) and 1 eV = 160.2 zJ (physics).
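The conversion factors just quoted follow directly from the fundamental constants; this small sketch recomputes them (and the two biological reference energies) in zeptojoules.

```python
# Zeptojoule conversions quoted in the text (1 zJ = 1e-21 J), recomputed
# from CODATA values of the elementary charge and Boltzmann constant.

E_CHARGE = 1.602176634e-19   # C
K_B      = 1.380649e-23      # J/K
AVOGADRO = 6.02214076e23     # mol^-1

kj_per_mol_in_zj = 1e3 / AVOGADRO * 1e21     # ~1.66 zJ per molecule
ev_in_zj         = E_CHARGE * 1e21           # ~160.2 zJ
e_membrane       = E_CHARGE * 0.100 * 1e21   # |e*U| for U = 100 mV: ~16 zJ
e_thermal        = 0.5 * K_B * 310.0 * 1e21  # (1/2)*kB*T at body temperature: ~2.1 zJ

print(kj_per_mol_in_zj, ev_in_zj, e_membrane, e_thermal)
```

All four values land in the single-digit-to-hundreds zJ range, which is the point of the unit choice.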
It could however happen that the only experimentally available datum is the standard enthalpy of formation ∆fH°. In such a case, one may evaluate the entropy of a species of molecular weight M and spin S at temperature T and external pressure P through the following relationship: Here Ξ is Sackur-Tetrode's constant, taking the value Ξ = -1.1517078 for T0 = 1 K, P0 = 100 kPa and M0 = 1 Da = 1 g·mol^-1, while kB = 0.01380649 zJ·K^-1 is Boltzmann's constant. The partition functions q_rot and q_vib make a zero contribution for monoatomic species. For diatomic species, the entropy will depend on a symmetry number σ = 1 (AB case) or σ = 2 (AA case) and on two spectroscopic constants, Be (rotational constant) and ωe (vibrational constant): If Be and ωe are expressed in cm^-1, we have hc/kB = 1.4388 cm·K. It is worth noting that the vibrational contribution is significant at T = 298.15 K only if ωe < 1000 cm^-1. For polyatomic molecules containing N atoms, contributions from every vibrational mode (3N - 5 modes for a linear molecule and 3N - 6 otherwise) should be added. In such a case, the rotational partition function depends on the three principal moments of inertia I1, I2 and I3: Here the symmetry number is σ = 1 (point groups C1, Ci, Cs or C∞v), σ = 2 (point group D∞h), σ = n (point groups Cn, Cnv or Cnh), σ = 2n (point groups Dn, Dnh or Dnd), σ = n/2 (point group Sn), σ = 12 (point groups T or Td), σ = 24 (point group Oh) or σ = 60 (point group Ih). Knowing the absolute entropy, it is possible to compute an entropy of formation from the elements in their standard states, ∆fS°, and the associated irreversibility potential πi° = ∆fS° - ∆fH°/T.
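The relationship itself did not survive in this version of the text, but for the monoatomic case (where q_rot and q_vib contribute nothing) the standard Sackur-Tetrode form consistent with the quoted constant Ξ = -1.1517078 can be assumed and checked against a tabulated value: the absolute entropy of argon at 298.15 K is known to be about 154.8 J·mol^-1·K^-1.

```python
import math

R  = 8.31446261815324  # gas constant, J/(mol*K)
XI = -1.1517078        # Sackur-Tetrode constant quoted in the text (T0=1 K, P0=100 kPa, M0=1 Da)

def sackur_tetrode_monoatomic(m_da, t_kelvin, p_pa=1e5, spin_multiplicity=1):
    """Absolute entropy (J/mol/K) of an ideal monoatomic gas.
    ASSUMED standard form (the original equation was lost from the text):
    S/R = XI + (3/2)ln(M/M0) + (5/2)ln(T/T0) - ln(P/P0) + ln(2S+1)."""
    s_over_r = (XI + 1.5 * math.log(m_da) + 2.5 * math.log(t_kelvin)
                - math.log(p_pa / 1e5) + math.log(spin_multiplicity))
    return R * s_over_r

s_ar = sackur_tetrode_monoatomic(39.948, 298.15)  # argon, M = 39.948 Da
print(f"S(Ar, 298.15 K) = {s_ar:.1f} J/mol/K")    # tabulated: ~154.8
```

The agreement with the tabulated argon value to better than 0.1% supports the assumed form for the translational part; diatomic and polyatomic species would additionally need the rotational and vibrational terms described above.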
The above considerations apply to species in a gaseous state. For neutral species, the change in irreversibility potential induced by hydration may be evaluated from Henry's constant H°cp according to: This expression is valid for Href = 1 M·atm^-1, meaning that gas solubility and partial pressure are expressed in mol·L^-1 and atmospheres respectively. Henry's constants for numerous gases have been tabulated. 56 For anions and cations, a rough but convenient way of treating hydration is the Born-Mayer equation, needing three parameters: the electrical charge z, a molecular radius r and the relative dielectric constant of the solvent εr: 57 With e^2/4πε0 = 230.71 zJ·nm, T = 298.15 K and εr = 78.4, it comes that πi° = 0.38197×z^2/r(nm). For more accurate values taking into account the structure of the water molecules around the ions, one should rely on molecular-dynamics simulations.

CONCLUSION

Time should now be ripe for replacing the term "bioenergetics" by "biothermodynamics", stressing the fact that energy is a property attached to individual microstates and entropy a property associated with macrostates, i.e. with large (typically 10^24) collections of microstates (multiplicity Ω). This basically means that entropy is meaningless for individual microstates and that energy is likewise meaningless for a given macrostate. In fact, speaking of energy is pertinent only when considering a system made of a single unbreakable entity, whatever its size, that may be atomic (quantum mechanics) or macroscopic (classical mechanics of rigid bodies). In such a case, energy corresponds to the possible eigenvalues of a quantum-mechanical Hamiltonian operator (atoms and molecules) or to the sum of a kinetic contribution proportional to mass times the square of a velocity and a potential contribution that is a function of the square of spatial coordinates (rigid macroscopic bodies).
As soon as one is facing a system made of many similar entities having independent motions, the pertinent variable becomes entropy, energy then being a loose concept whose exact meaning depends on the set of variables controlled by an experimenter for defining a macrostate. This obviously greatly simplifies the presentation of thermodynamics, with just a definition of what entropy is, S = kB×ln Ω, and a single law of evolution, ∆Suniv ≥ 0. By contrast, the standard presentation sticking to history, which uses three different "laws": U = q + w = constant (Kelvin's first law), ∆S ≥ 0 (Clausius' second law) and S = 0 if T → 0 (Nernst's theorem or third law), is full of very subtle pitfalls that have been examined in full detail in this paper.
Accordingly, using Boltzmann's equation, Nernst's theorem becomes a platitude, as by definition Ω ≥ 1, with Ω → 1 when T → 0. Just writing ∆S ≥ 0 without referring to the fact that one is considering the entropy of the whole universe explains Schrödinger's first error. Finally, adding heat q, which is the product of an entropy flux by a thermal potential, and work, which is the product of a force by its displacement, is highly misleading, the only justification being that both quantities share the same physical unit (joules). Thermodynamics is in fact quite a subtle science because it has one foot deep-rooted in quantum mechanics, as the principle ∆Suniv ≥ 0 is just the expression of Heisenberg's uncertainty principle for a large collection of similar objects. And because one has to consider the whole universe, the only physical system really isolated from its surroundings, it has another foot deep-rooted in cosmology, through the Bekenstein-Hawking entropy of a black hole characterized by the surface A of its event horizon: Such relations show that entropy per unit area is the unique physical concept able to weld all known universal constants (c, G, h, kB, e, α and µ0) into just two compact scale-invariant quantities. The first relationship emphasizes the material character of the universe (fermions for building structures), while the second one emphasizes its complementary immaterial character (bosons for transmitting the forces stabilizing structures). The intimate link between entropy and time suggested by the ∆Suniv ≥ 0 constraint for reproducibility is a further indication that life speaks the language of entropy (or its immaterial version, information) and not that of energy. The domain where such a reformulation will bring about conceptual breakthroughs is obviously medicine, as already suggested 58 and further developed in forthcoming papers.
After review of these ideas by anonymous referees, several comments need to be added to this conclusion. Stressing that biology and medicine are currently on a wrong path does not mean that thermodynamics, quantum mechanics and chemistry are free of defects. While there is no doubt that life relies extensively on far-from-equilibrium thermodynamics, one may argue that such thinking applies also to abiotic systems. This implies that living systems stand, in some way, at another level of the thermodynamics of irreversible systems. But it is worth recalling that irreversibility may be treated by two different theories: the linear theory, extensively developed by the Brussels school, and the non-linear theory needed to describe chaotic systems. Again, there is little doubt that the non-linear theory of chaos should be the right way of thinking for a good understanding of living systems. This simply stems from the fact that linear thinking is just a special case of non-linear thinking. But climbing to the non-linear level is no guarantee that we are at the top, because an essential ingredient of life is still missing: consciousness. I will not go further here, because the interplay between consciousness and life has been extensively discussed in previous papers. [2,59,60] This basically means that cleaning up the mess at the nuts-and-bolts level is also needed in quantum mechanics and chemistry.
This last point was pinpointed by one of the referees, and if not properly discussed it may seem that, by focusing on biology and medicine, I am putting the cart before the horse. I fully agree with this view: there is absolutely no guarantee that quantum physics, lying behind entropy, is not badly flawed. Indeed, we know that the entire mystique surrounding quantum physics could easily be avoided. Thus, to justify Planck's blackbody spectrum, the entry point of quanta into physics, we just need the equivalence principle, the assumed absence of a perpetual motion machine in a classical gravitational field, and classical electromagnetic zero-point radiation (see [61] and references therein for more details). It is worth stressing that in this no-quantum derivation we absolutely need the absence of a perpetual motion machine. This basically means that we do not need the quantum mystique at all for stressing the crucial role of entropy. If I have chosen here to favor a quantum flavor of physics, this is just because quantum physics belongs to the current paradigm. But relying on quantum principles is definitely not a prerequisite for an entropy-based reformulation of biological thinking.
One should also be aware that, at the end of the nineteenth century, chemistry was a powerful horse for thinking "quantum". I have even defended elsewhere the idea that chemistry is in fact irreducible to quantum physics. 62 And if this is true, it logically follows that biology should also be irreducible to quantum physics, since both sciences rely extensively on thermodynamics. There is now a convergence towards the idea that scaling symmetry is the missing ingredient of contemporary physics, 63,64 chemistry 65 and biology. 60 The only needed discussion is how entropy deals with scaling symmetry, and it is at this point that information theory enters, as explained elsewhere. 60 It should however be crystal clear that this does not imply that computers should be the next stage of progress, as computers are only able to manipulate information that is devoid of meaning. By contrast, living systems can manipulate entropy fluxes to create information full of meaning. Again, this is because consciousness lies above information, entropy or matter. 59 Computers should then be viewed as mere technical and stupid tools for conscious beings and not as intermediates in the emergence of consciousness from matter. In such a new paradigm, there is even a place for the role of dissolved gases in water. This stems from the fact that information processing in living systems is based on water and not on silicon. This is precisely why there is so much water in any living cell. And water without dissolved gases cannot hold information long enough for it to be processed. Obviously, water with gases should no longer be called water. It should be called interfacial, 66 zoemorphic, 67 morphogenic, 68,69 or EZ-water, 70 or whatever you want, but please don't call it "water".