Saturday, July 17, 2010

Many faces of emergence

The notion of emergence remains one of the intellectual symbols of the ultimate snobs. People who claim to believe in emergence think that they are very cool, progressive, open-minded, and creative thinkers who promote supermodern scientific methods that were unknown to the old, dull scientists.



In reality, emergence has been the bread and butter of science for centuries, long before the term was explicitly used.

Whenever scientists succeeded in reducing a more complex phenomenon, CP, to some more microscopic laws, ML, they showed how reductionism works in practice. The main difference between reductionism and the emergent philosophy is that the reductionists consider ML important whenever it is used to derive CP while emergentists view such a derivation as a reason to spit on ML because the derivations and the understanding are not what they are really after.

Even more striking is the advertised creativity and imaginativeness of the worshipers of emergence, because they are actually some of the most narrow-minded, naive, and unimaginative people on the planet. This point will be the main topic of this essay.

Two myths about the universal character of emergence

When the emergentists say that a complex phenomenon, CP, is emergent, they almost universally make the following two related assumptions:
  1. A configuration of the complex object must correspond to a very large number of possible arrangements of the microscopic building blocks: emergent objects must automatically carry a high entropy (or useless classical microscopic information).
  2. The underlying information has to be encoded in some underlying "bits" or their base-N counterparts and these units of information are carried by some discrete entities that can be localized in ordinary space and time (think of a spin network).
It is not hard to see why these misconceptions are so widespread among the emergentists. It's because these people keep two basic "toy models" of emergence in mind:
  1. Thermodynamic limit of statistical physics.
  2. Emergence of complex objects on a computer where the information is encoded in bits.
Indeed, various material properties may arise from a large conglomerate of atoms. There are many ways in which the atoms may be arranged. The statistical (and physical) properties of such arrangements allow us to calculate the properties of the material. And indeed, typical materials carry a high entropy because the atoms may be arranged in many ways.




And indeed, computers may store information about many things that resemble the real world in binary digits, or bits.

However, the idea that complex phenomena always respect the "lessons" of the two examples above whenever they emerge from more microscopic, fundamental laws is utterly ludicrous. In fact, it's much more likely that they violate both of these principles. Nature knows many more recipes for constructing interesting complex physics out of more elementary laws.

Counterexamples to the myths

Although I will eventually focus on quantum gravity, let me emphasize that these two "templates" of emergence are violated in more ordinary, non-gravitational physics as well. Let us mention two trivial examples:
  1. Superconductors or other materials at low enough temperatures exhibit many interesting emergent phenomena but their entropy can be extremely low.
  2. Hydrogen atoms or other atoms can be found at various discrete energy levels - which can be used to store information - but the number of such levels is not finite and the individual possible levels are not equally likely. The internal state of an atom is not equivalent to a bit or a few bits by any stretch of the imagination. Also, the internal structure of the atom is not described by discrete diagrams but by sophisticated continuous wave functions that solve certain differential equations (see the sketch below).
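To make this second point concrete before returning to the first one, here is a minimal numerical sketch (my own illustration, using only the textbook Bohr formula E_n = -13.6 eV / n^2): the hydrogen levels pile up towards zero energy, there are infinitely many of them, and their thermal populations are nothing like the uniform probabilities of a bit.

```python
import math

RYDBERG_EV = 13.6057  # hydrogen ground-state binding energy, in eV
K_B_EV = 8.6173e-5    # Boltzmann's constant, in eV/K

def energy(n):
    """Bohr energy of the n-th hydrogen level, in eV."""
    return -RYDBERG_EV / n**2

# The levels accumulate just below E = 0: there are infinitely many of them,
# so the internal state of the atom is not "a bit or a few bits".
for n in range(1, 6):
    print(f"n={n}: E = {energy(n):+.3f} eV")

# Relative Boltzmann populations at T = 10,000 K (with the 2 n^2 degeneracy):
# the possible levels are very far from equally likely.
T = 1.0e4
weights = {n: 2 * n**2 * math.exp(-(energy(n) - energy(1)) / (K_B_EV * T))
           for n in range(1, 6)}
total = sum(weights.values())
for n, w in weights.items():
    print(f"n={n}: relative population = {w / total:.2e}")
```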
Concerning the first point, I would like to add that it is "morally true" that complex phenomena arise from some collective behavior of a large number of elementary building blocks. However, mainly because of quantum mechanics, it is simply not true that the composite object always has to have "many microstates" that describe a particular situation.

In particular, the third law of thermodynamics (in its quantum refinement) guarantees that all conventional objects - including complex materials - converge to zero entropy when the absolute temperature is sent to zero kelvins. They simply want to drop to the ground state, the lowest energy level, because there is not enough "thermal chaos" for them to try the ineffective, higher-energy states.

In classical physics, it would also be possible for a crystal to pick a zero-entropy state at a low temperature if you constructed it out of atoms whose potential energies have minima at a finite relative distance.

However, in quantum mechanics, such a situation is much more generic because the quantum task of energy minimization almost always tends to lead to some "preferred wave functions" that represent the optimum balance between various forces. The size of the Hydrogen atom may be estimated by minimizing the total (kinetic plus potential) energy.

The kinetic energy increases with the momentum while the potential energy increases with the size of the atom. However, the (average squared) momentum and the radius can't both be zero because Heisenberg's uncertainty principle requires that the product of their uncertainties can't drop below Planck's constant. So the atom picks a "compromise" value of the radius. The argument above may be used to estimate the Bohr radius; however, a more detailed and accurate calculation of the size of the atom and its properties exists in quantum mechanics.
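A minimal sketch of this estimate, assuming the usual uncertainty-principle shortcut p ~ hbar/r (so that the kinetic energy is hbar^2/(2 m r^2)) - the numerical minimum lands right at the Bohr radius and at the textbook binding energy of -13.6 eV:

```python
import numpy as np

# SI constants
hbar = 1.054571817e-34   # J*s
m_e  = 9.1093837015e-31  # electron mass, kg
e    = 1.602176634e-19   # elementary charge, C
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m

def total_energy(r):
    """Heuristic energy of a hydrogen atom of radius r: the uncertainty
    principle forces p ~ hbar/r, so E ~ hbar^2/(2 m r^2) - e^2/(4 pi eps0 r)."""
    kinetic = hbar**2 / (2 * m_e * r**2)
    potential = -e**2 / (4 * np.pi * eps0 * r)
    return kinetic + potential

radii = np.linspace(1e-11, 3e-10, 100_000)   # scan radii from 0.1 to 3 angstroms
energies = total_energy(radii)
r_best = radii[np.argmin(energies)]

print(f"radius minimizing E:  {r_best:.3e} m")           # ~5.29e-11 m
print(f"textbook Bohr radius: 5.292e-11 m")
print(f"minimum energy: {energies.min() / e:.2f} eV")    # ~ -13.6 eV
```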

While the wave function is nonzero for almost any position of the electron - so it is a combination of "many classical configurations" - the precise ratios (and relative phases) of the amplitudes at all points are actually determined for every energy level. The resulting wave function (e.g. of the Hydrogen ground state) therefore behaves as a single preferred state in the Hilbert space. Its entropy is zero.
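This distinction can be made quantitative. A wave function "spread over many classical configurations" is still a single vector in the Hilbert space, and the von Neumann entropy of any such pure state vanishes - a quick numpy sketch (my own illustration):

```python
import numpy as np

def von_neumann_entropy(rho):
    """S = -Tr(rho ln rho), computed from the eigenvalues of rho."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # use the convention 0 * ln 0 = 0
    return float(-np.sum(evals * np.log(evals)))

# A pure state: a normalized "wave function" over 5 basis points.
psi = np.array([0.1, 0.5, 0.7, 0.4, 0.3])
psi = psi / np.linalg.norm(psi)
rho_pure = np.outer(psi, psi.conj())      # density matrix of the pure state
print(von_neumann_entropy(rho_pure))      # 0.0: one preferred state, zero entropy

# Contrast: the maximally mixed state on the same 5-dimensional space.
rho_mixed = np.eye(5) / 5
print(von_neumann_entropy(rho_mixed))     # ln 5 ~ 1.609
```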

When there is a very large number of microstates that correspond to a macroscopic physical object, it doesn't mean that the object has lots of fascinating emergent properties etc.

Instead, what it means is that the object is hot. And excessively hot objects such as plasma are often not too interesting; they're not good starting points for complicated processes such as life. Later, I will discuss black holes where a very high entropy is associated with a low temperature (that requires a negative heat capacity) but for normal objects, the number of microstates essentially measures "temperature" rather than "intellectual depth". ;-)

The myth of a computerized Nature

Concerning the second point - the "computer template" of emergence - I have discussed the misconception about discrete physics many times on this blog. Most people find it easy to imagine how information can be stored in bits. What they find difficult to imagine is any other way to store it.

But Nature never stores all the information in bits at the fundamental level. Whenever the information seems to be composed out of bits, we are always talking about some man-made "computers" that are designed so that some bits can be localized, all values other than 0/1 are "almost prohibited" by some additional engineering, and only the values 0/1 influence other important processes in the "computer" by design (although there's clearly and universally a lot of other physical information in the system).
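One standard caricature of such engineering - a sketch, not a model of any particular device - is a bistable double-well potential: the two minima play the roles of 0 and 1 while every intermediate value costs energy, which is exactly how the continuum of "other values" gets almost prohibited:

```python
import numpy as np

def double_well(x, a=1.0, b=2.0):
    """V(x) = a x^4 - b x^2: two degenerate minima at x = +/- sqrt(b/(2a)).
    The minima encode the bit values 0 and 1; everything in between costs energy."""
    return a * x**4 - b * x**2

for x in np.linspace(-1.5, 1.5, 7):
    print(f"x = {x:+.2f}   V = {double_well(x):+.3f}")

x_min = np.sqrt(2.0 / 2.0)   # sqrt(b/(2a)) with a=1, b=2
print(f"bit states at x = +/-{x_min:.2f}, barrier height = {-double_well(x_min):.2f}")
```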

At a fundamental enough level, Nature respects laws that are derived from some fundamental principles and some kind of locality - either in spacetime or on the world sheet or another auxiliary space. Locality is always one of the essential constraints that allow us - and Nature - to choose specific laws from many potential candidates. But locality doesn't mean that all the elementary objects have to be point-like. And it surely doesn't imply that they only produce "discrete" shapes in spacetime. In fact, smooth functions (e.g. wave functions) that measure various correlations are omnipresent.

So the information is always carried by some continuous wave functions or wave functionals defined on some spaces - either the real space or more abstract ones. The energy levels may be quantized, the Hilbert space may be re-organized into the energy eigenstate basis, and the systems may therefore carry a finite amount of information, especially if you require that the energy has to be lower than a certain upper bound (because only a finite number of states meet this condition).

However, the relevant Hilbert space, even when it is made finite-dimensional, can never be naturally written as a tensor product of many copies of a two-state "qubit" Hilbert space or any base-N generalization of such a space. The ways in which Nature encodes information are much more diverse than "K bits or qubits", and the richness of the "codes" by which information can be stored and is stored in Nature is Her amazing achievement rather than a defect!
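A toy count illustrates this (my own illustration, assuming hydrogen bound levels with the standard 2n^2 degeneracy, spin included): truncate the spectrum at a maximal principal quantum number and, apart from the trivial two-state truncation, the dimension is never a power of two, so there is no natural way to slice the space into qubits.

```python
# Dimension of the hydrogen Hilbert space truncated at principal quantum
# number n_max: each level n carries 2 n^2 states (spin included).
for n_max in range(1, 9):
    dim = sum(2 * n**2 for n in range(1, n_max + 1))
    is_power_of_two = (dim & (dim - 1)) == 0
    print(f"n_max = {n_max}: dim = {dim:4d}, power of two: {is_power_of_two}")
```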

Just to remind you, Nature stores information into features of states in the Hilbert space (it was thought to be the phase space before quantum mechanics was discovered). The basis of energy eigenstates is arguably the most important and universal method to organize the Hilbert space.

To understand how the information is actually stored in the real world, you have to learn the correct theories of how Nature operates, including their Hamiltonians (or other dynamical laws). You will learn conformal and non-conformal, topological and non-topological, four-dimensional and D-dimensional, commutative and non-commutative theories that are based on point-like or brane-like fundamental objects and may contain many things such as gauge fields, bosons, fermions, Yang-Mills or diffeomorphism symmetries, different topologies of spacetime, and many other things.

You must carefully isolate which theories may correctly describe Nature - by comparing them with the increasingly detailed empirical evidence - and you must understand the Hilbert spaces of these relevant theories to have a reasonable idea of how Nature actually stores the information. Ingenious physicists can "guess" completely new kinds of laws that were not realized as possibilities in the past - but these new laws must still pass the empirical tests before they become part of the physics toolkit.

Alternatively, you may also start with the belief that the information in Nature has to be stored in the very same way as it is on the iPhone. And you may try to force Nature to respect this belief of yours. But once you do so while ignoring the empirical evidence that your theory is dynamically wrong and produces wrong predictions, you are becoming a crackpot. Stupid journalists may still hype your ideas but that doesn't make you less of a crackpot.

Gravity doesn't exist for Erik Verlinde

I am getting to the topic of quantum gravity. Erik Verlinde has preposterously argued that gravity is an entropic force. Many people - but probably not Erik himself - imagine the underlying information as a sequence of bits and they are consequently victims of the second myth ("Nature must be a binary computer").

However, Erik Verlinde shares the first myth with all these people - which include a vast majority of the laymen as well as a vast majority of the workers on loop quantum gravity, spin foams, causal dynamical triangulations, and many other similar approaches to quantum gravity.

All of them believe in the myth that the vacuum - or e.g. a pair of heavy, mutually attracting objects - has to correspond to a large number of microstates because the vacuum - or the force between the two heavy objects - is "emergent".

That's nothing else than the first misconception that I have identified above. While gravity and spacetime are emergent entities, at least in some sense - as string theory and especially pictures such as AdS/CFT, Matrix Theory, and dualities between theories in different spacetimes show - it doesn't follow that they're emerging as a thermodynamic limit of many microstates.

Moshe Rozali was able to formulate this point pretty concisely:
Hi David [Berenstein], 
for my taste the best semantics is that gravity is an emergent phenomenon, in that it is a long distance approximation to something more detailed (rather than being a complete story).

I think pretty much everyone agrees that gravity is emergent in that sense, but the story involving thermodynamics is much more fishy (hard to come up with a version of such claims that is not obviously wrong).

So I think it is important to emphasize the difference between the two: the emergence of approximate structures in some limit is more general than, and should not be confused with, the coarse-graining associated with the transition from statistical mechanics to thermodynamics.
Exactly. Emergence often produces phenomena that approximately follow certain laws. But it doesn't follow that all emergent theories must describe every emergent object or phenomenon as an object with a high entropy or a process that produces a high entropy.

In particular, it can be easily shown that Erik Verlinde's assumption that the entropy of two heavy neutron stars depends on their distance is simply wrong. As I explained in "Why gravity can't be entropic" and elsewhere, such an assumption would lead to two serious problems:
  • irreversibility
  • violation of the equivalence principle in observations of individual quantum particles
Both of these predictions of Erik Verlinde's picture can be quickly ruled out. Imagine two neutron stars whose size (and relative distance) is comparable to the black hole radius corresponding to their mass. They orbit one another.

Irreversibility of entropy changes

In Verlinde's picture, they are equipped with an entropy that depends on their distance, i.e. on the gravitational potential; it's the very point of Verlinde's proposition that the entropy depends on the separation and the gravitational potentials. Moreover, Verlinde explicitly argues that if the distance is changed by 20%, the entropy of the pair of neutron stars will change by a substantial portion of the black hole entropy corresponding to the same mass. That's how the counting goes.

This is, of course, totally unacceptable because the black hole entropy is just huge. The black hole entropy is the largest entropy that a localized or bound object of a given mass can have. The entropy of a black hole of a solar mass is close to 10^{77} (times Boltzmann's constant). If such a huge entropy is produced because of a modified distance between the Sun and another object, be sure that this increase of entropy can't be undone.

That would be too serious a violation of the second law of thermodynamics.
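For reference, the Bekenstein-Hawking number is easy to evaluate - a quick sketch of where the 10^{77} comes from:

```python
import math

# SI constants
G     = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
c     = 2.998e8       # speed of light, m/s
hbar  = 1.0546e-34    # reduced Planck constant, J*s
M_sun = 1.989e30      # solar mass, kg

# Bekenstein-Hawking entropy: S/k_B = A / (4 l_P^2), with the horizon area
# A = 4 pi r_s^2, r_s = 2 G M / c^2, and the Planck length l_P^2 = hbar G / c^3.
r_s = 2 * G * M_sun / c**2
area = 4 * math.pi * r_s**2
l_P_sq = hbar * G / c**3
S_over_k = area / (4 * l_P_sq)

print(f"Schwarzschild radius: {r_s / 1e3:.2f} km")   # ~2.95 km
print(f"S / k_B = {S_over_k:.2e}")                   # ~1e77
```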

Erik Verlinde and his fans produce a lot of fog about this simple point. They say that the orbiting of neutron stars along ellipses is isentropic, and therefore reversible, in Verlinde's setup. It can be easily seen from Verlinde's formulae that this is not the case. It's the very point of his picture that the entropy strongly depends on the gravitational potential energy but not on the kinetic energy of the stars. So if you convert potential energy into kinetic energy, the entropy increases and you won't be able to reverse the increase.

Moreover, much more generally, isentropic or strictly adiabatic processes simply don't exist in Nature. These two concepts are always just idealizations and they can only be approximately realized by infinitely slow phenomena in which all objects remain in perfect equilibrium - at a constant temperature across the whole physical system at all times. This requirement obviously can't be satisfied if you have complicated, fast stars and other objects orbiting each other.

Equivalently, if an entropic force is able to "push" objects around at finite speeds and with finite accelerations, the entropy differences between the initial and final states have to be finite as well, and a finite amount of entropy will be produced along the way, too. In Verlinde's picture, the produced entropy will actually be huge, comparable to the black hole entropy.

Verlinde has never really explained what he means by the temperature - he obviously has to "redefine" the concept of temperature if it is given by the gravitational acceleration between the Sun and the Earth in Planck units ;-). But whatever he does, it's clear that he can't guarantee that such a temperature will be constant across the space for all, even fast, gravitational processes. That would be an unacceptable violation of locality and other things.
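To get a sense of the scales involved, one can plug the Earth's surface gravity into the Unruh-like relation T = hbar a / (2 pi c k_B) that Verlinde's derivation invokes - the resulting "temperature" is absurdly tiny:

```python
import math

hbar = 1.0546e-34   # J*s
c    = 2.998e8      # m/s
k_B  = 1.3807e-23   # J/K

def unruh_temperature(a):
    """Unruh temperature T = hbar a / (2 pi c k_B) for a uniform acceleration a."""
    return hbar * a / (2 * math.pi * c * k_B)

print(f"T(a = 9.81 m/s^2) = {unruh_temperature(9.81):.2e} K")   # ~4e-20 K
```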

Moreover, it can be seen that this uniformity of temperature surely contradicts his own formulae. It's also true that one can't "redefine" temperature. Temperature is what can be measured by thermometers, and it can be shown that it is linked to the energy per degree of freedom, roughly speaking. You can't give it a totally different meaning just because you want your formulae to work. Unless you are a crackpot, don't mess with these basic rules of thermodynamics and statistical physics.

Equivalence principle and interference

Another established principle that Verlinde's - and similar - "emergent" pictures of gravity, based on the thermodynamic limit, contradict is the equivalence principle. In a freely falling elevator, you may perform any (sufficiently local, so that tidal forces can't be seen) experiment. And you won't be able to decide whether you're in a freely falling elevator or outside all gravitational fields.

This principle implies that all ordinary objects fall at the same acceleration in the Earth's gravitational field - or any other gravitational field, for that matter.

But the principle actually holds not only for ordinary objects but for many other things, including interference patterns. You may try to organize a double-slit experiment in a freely falling elevator. The result will be that the interference pattern will also fall with the same acceleration and you won't be able to distinguish physics in a gravitational field from physics outside the field - but simply accelerated in the right direction.

More than 35 years ago, these experiments were actually done in the context of neutron interferometry and the equivalence principle worked perfectly. The neutrons interfering with themselves still create an interference pattern but its location is modified in exactly the same way as if you assume that the whole wave functions are "freely falling" in the gravitational field as well. One can also explain this shift of the interference pattern by gravity's contribution to the phases of the wave functions.
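A rough numerical sketch of that phase (the Colella-Overhauser-Werner setup; the parameters below are illustrative numbers of the right order, not the exact experimental ones): the two arms of the interferometer enclose an area A in a vertical plane, and the gravitationally induced phase difference is delta_phi = m^2 g A lambda / (2 pi hbar^2) for neutrons of de Broglie wavelength lambda.

```python
import math

hbar = 1.0546e-34   # J*s
m_n  = 1.6749e-27   # neutron mass, kg
g    = 9.81         # m/s^2

def cow_phase(area, lam):
    """Gravitational phase shift in a COW-type neutron interferometer whose
    two paths enclose an area `area` in a vertical plane, for wavelength `lam`."""
    return m_n**2 * g * area * lam / (2 * math.pi * hbar**2)

area = 1.0e-3       # enclosed area ~ 10 cm^2 (illustrative)
lam  = 1.4e-10      # thermal-neutron wavelength ~ 1.4 angstroms (illustrative)
print(f"gravitational phase shift: {cow_phase(area, lam):.0f} rad")  # tens of radians
```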

In this way, the experimenters have actually been able to measure not only the Earth's gravitational field but even its gradient, the tidal forces! Interferometry is a damn accurate method.

It follows that even individual neutrons, with all of their quantum and probabilistic behavior, see exactly the right gravitational field that they should see. The interference pattern is influenced by gravity; but it is not destroyed by gravity, which is what would happen if you assumed that the neutron actually interacts with the many additional degrees of freedom that are responsible for the distance-dependent entropy. Those degrees of freedom would constantly "measure" the neutron and destroy the interference pattern.

Gravity simply can't be "entropic" in this sense. It can't depend on having a large number of degrees of freedom. It properly acts on individual particles as well.

Equivalently, you can say that the survival of the interference pattern requires the Hilbert space of the neutron to be a tensor product of the Hilbert space of wave functions on a space and a Hilbert space of the neutron's internal states. The latter may be trivial, e.g. one-dimensional, and only the former - the usual "waves in space" - is actually responsible for the interfering behavior. However, Erik Verlinde's picture assumes that the number of "internal states" of the neutron actually depends on its position (on its altitude), so the tensor-product structure of the Hilbert space is broken and the interference becomes impossible.

Summary

Erik Verlinde's model is just another crackpot model of gravity. It revives LeSage's gravity, a defunct 18th century model of gravity based on ultra-mundane corpuscles.

These particles were envisioned to fly everywhere in space but the massive objects would create shadows in the sea of these particles. The shadows would induce an excess of the corpuscles hitting you from the side where the number of massive gravitational sources is smaller. That would attract you in the direction of the massive sources. One can actually reconstruct the 1/r^2 formula for the attractive force.
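A quick check of that claim, assuming a fully absorbing sphere of radius R sitting in an isotropic corpuscle flux: the solid angle it shadows, seen from distance r, is Omega(r) = 2 pi (1 - sqrt(1 - R^2/r^2)), which approaches pi R^2 / r^2 at large r - so the net corpuscle push towards the other body follows the inverse-square law.

```python
import math

def shadow_solid_angle(r, R=1.0):
    """Solid angle shadowed by an absorbing sphere of radius R seen from
    distance r: Omega = 2 pi (1 - sqrt(1 - R^2/r^2))."""
    return 2 * math.pi * (1 - math.sqrt(1 - (R / r)**2))

# At large separations the shadow - and hence the net push - scales as
# 1/r^2: the product Omega * r^2 converges to pi R^2.
for r in [5, 10, 50, 100, 1000]:
    print(f"r = {r:5d}: Omega * r^2 = {shadow_solid_angle(r) * r**2:.4f}")
print(f"pi R^2 = {math.pi:.4f}")
```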

However, LeSage's theory can be shown to be wrong, e.g. because the corpuscles slow all objects down. They induce friction because motion in the gas of ultra-mundane corpuscles is very similar to swimming in water. There's no way to make the friction acceptably small.

The friction is nothing else than a source of irreversibility. Actually, LeSage's force is an entropic force because it tries to increase the entropy by reducing the non-uniformity of the density of the ultra-mundane corpuscles - by reducing the shadows in between massive objects.

That's actually a way to see that LeSage's theory is not just similar to Verlinde's gravity; it's a special example of Verlinde's gravity. Actually, it is a generalization of a special example because the entropy carried by the corpuscles was adjustable. By making the corpuscles' speed very high (you actually needed speeds that were vastly higher than the speed of light), you could hope to reduce the entropy change and reduce the "friction" (i.e. the problematic irreversibility).

In Verlinde's gravity, the total entropy linked to the gravitational potential is fixed - and comparable to the black-hole entropy (for two neutron stars with parameters comparable to those of black holes). It is so huge and fixed that it is impossible to pretend that the irreversibility could be made small.

Of course, many crackpot theories like that will probably be proposed, named, and hyped in the future because human stupidity is probably unlimited. But those who want to understand something important about the real world should notice that "emergence" - which is the only property of gravity that Verlinde has correctly identified (but he was very far from being the first one) - doesn't mean that there must be many distinct microstates that may produce new physics in a thermodynamic limit.

Actually, the only way in which simple enough objects in the context of gravity may produce entropy that has something to do with gravity is the production of event horizons. Black holes carry a huge entropy that is proportional to the area of their event horizons. But if there are no horizons, there is no entropy. In particular, two very cold neutron stars that orbit each other simply don't carry any macroscopic entropy whatsoever - surely not one that would be linked to their gravitational attraction.

If this observation leads you to conclude that gravity is a fundamental rather than emergent force, then indeed, gravity is a fundamental rather than emergent force. The actual gravity has many "emergent" features (for example, in perturbative string theory, gravity emerges from the exchange of a particular vibrating virtual closed string mode) but the force itself is certainly not emerging from high-entropy configurations.

It's time for sensible people to stop presenting narrow-mindedness and a complete lack of imagination and knowledge of Nature's clever tricks as "originality" or "creativity" or something hot that should be supported by society if not the scientific community itself. Verlinde's picture is not creative, imaginative, or original in any way. It's just stupid and everyone who joined it has joined it either because he loves to jump on bandwagons or because his IQ is just hopelessly low (or both).

The genuinely creative people who will contribute to our understanding of quantum gravity and the real world in the future will have to crack much more diverse and much newer mechanisms by which phenomena may emerge - and their explanations will have to agree with the known phenomena and Nature's known tricks and principles. Attempts to deny that Nature has certain properties, attempts to deny that there are other tricks She has to use, and attempts to populistically promote naive viewpoints that even average laymen could understand are not signs of creativity but proofs of its absence.

And that's the memo.

3 comments:

  1. Mr. Motl -
    there is an apparent typo in the paragraph beginning "Whenever scientists succeeded": you illustrate "The main difference between reductionism and the emergent philosophy" by never saying what emergentists say but stating what reductionists think twice. Please correct.

    -- Jiri

    ReplyDelete
  2. I am not a string theorist; but Einstein's general relativity shows gravity to be only an apparent force, just like a centrifugal force. Two questions: Do you postulate centrifugalons? How do photons interact with gravitons?

    ReplyDelete
  3. Dear Jiří,

    thanks for the typo you found. The second "reductionists" should simply have been "emergentists".

    Concerning your first question, general relativity is based on the equivalence principle - the equivalence of acceleration and gravity.

    This equivalence also means that "centrifugalons" are the same particles as gravitons. If you're observing the world from a rotating reference frame where apparent (centrifugal) forces exist, you may also describe physics as physics in a flat space modified by a condensate of "centrifugalons", i.e. gravitons.

    They just modify the flat spacetime geometry to the correct one that also produces the centrifugal force. Locally, these fields and particles are really indistinguishable from gravitons.

    Concerning your second question, photons interact with gravitons much like all other particles that carry energy interact with gravitons. In particular, photons' paths are affected by the gravitational field - that's why you can see an effect (bending of light) during solar eclipses.

    Photons also carry energy which is a source of the gravitational field. So they modify the gravitational field around them as well. It boils down to F_{alpha beta} F_{gamma delta} g^{alpha gamma} g^{beta delta}. That's the usual F_{mu nu} F^{mu nu} Lagrangian for the electromagnetic field. But I wrote it in such a way that the upper metric components remain explicit: the term is an interaction between the electromagnetic field (A_mu) and the gravitational field (the metric).

    At the quantum level, you may want to write "g_{mu nu}" as "eta_{mu nu} + h_{mu nu}", the flat space plus a perturbation. The h-field creates and annihilates the graviton. So the term generates a quartic interaction vertex between 2 photons and 2 gravitons. They surely do interact much like other pairs of particles. But there are also other vertices that you can find if you look at the Lagrangian carefully.
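    Schematically - a sketch with indices raised by the flat metric and gauge-fixing details suppressed - the expansion reads:

    $$\mathcal{L}_{\rm EM} = -\frac{1}{4}\sqrt{-g}\, g^{\alpha\gamma} g^{\beta\delta} F_{\alpha\beta} F_{\gamma\delta}, \qquad g_{\mu\nu} = \eta_{\mu\nu} + h_{\mu\nu},$$

    $$g^{\mu\nu} = \eta^{\mu\nu} - h^{\mu\nu} + h^{\mu}{}_{\lambda} h^{\lambda\nu} + \mathcal{O}(h^3),$$

    so besides the flat-space $-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}$ piece, the expansion contains terms with one power of h (a cubic graviton-photon-photon vertex) and terms with two powers of h (the quartic two-photon-two-graviton vertex mentioned above).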

    You must also realize that the gravitational field around the massless photons is kind of "more trivial" because in a very boosted reference frame, the energy of photons becomes nearly zero, so the field should vanish. A careful analysis of the mixture of the gravitational waves in the geometry around a photon is needed - the initial conditions etc. matter.

    Best wishes
    Lubos

    ReplyDelete