Sunday, February 07, 2010

Entropy, information, and mixed states

Off-topic: Internet Explorer users are urged to install this patch from Microsoft (click) immediately; it fixes a security issue recently discovered in Europe.
People around physics - and increasingly often, many physicists themselves - are getting confused about ever more elementary concepts in physics.

A few years ago, it became obvious that most people in the broader physics community are inherently incapable of understanding string theory and why it is inevitably the only mathematically possible unifying theory of gravity and quantum mechanics. Well, it's a difficult subject, and a PhD - not even a PhD in theoretical physics - can't guarantee that you will be able to master it.



String theorist Anne Hathaway

But things have been getting worse recently. Today, even fundamental pillars of quantum mechanics, relativity, and statistical physics are controversial once again. They're controversial because people - and often people with physics PhDs - just don't understand them. Or, to say the least, they don't understand them well. It has become very fashionable to do things that, just a few years ago, would have been inseparably connected with unquestionable crackpots.

A few years ago, everyone would have agreed they were silly and physically impossible - but it's obvious today that many people only agreed because everyone else did, not because they actually understood the issues.

History of entropy

One of these abused theoretical constructs is the concept of entropy. While this concept became popular and important in various disciplines at various later times, all of its universal logical and physical properties have been fully understood since the late 19th century, i.e. for more than 100 years.

The history of the notion may still be divided into a thermodynamic era and a statistical era.

The thermodynamic era, covering approximately the first half of the 19th century, only cared about the macroscopic, directly observable features of the physical phenomena that have something to do with entropy. The statistical era, covering approximately the second half of the 19th century, was concerned with the explanation of these facts using more elementary, microscopic building blocks. In this era, thermodynamics - a "principled" theory that was only constraining the macroscopic phenomena - was suddenly understood in terms of statistical physics - a "constructive" theory that creates a model of the world in terms of atoms (or other basic degrees of freedom).




This separation of physical theories into "principled" theories (based on general principles that should always apply) and "constructive" theories (building upon very specific elementary laws) is due to Einstein: he would include relativity among the "principled" theories.

In 1824, Sadi Carnot noticed that heat flows from warmer objects to colder ones and that there was a degree of irreversibility you couldn't get rid of. In 1850, Rudolf Clausius formulated the second law of thermodynamics - that the entropy can never decrease - essentially in its modern form.

You must realize that already in 1845, Joule had understood the equivalence of heat and energy. In those early industrial times, people would play with heat engines all the time and they probably understood them better than most of us do today.

However, in 1872, Ludwig Boltzmann explained all those insights in terms of statistics applied to atoms. He also derived the second law of thermodynamics from the constructive approach: his proven version of the second law is known as the H-theorem. Of course, other people were important in thermodynamics and statistical physics, too: Maxwell, Gibbs, and many others.

Macroscopic systems at nonzero temperature are inevitably described pretty well by classical physics - so thermodynamics is inevitably classical, in a sense. The microscopic explanations don't have to be classical, but Boltzmann's proof was, of course, only formulated in the framework of classical (non-quantum) physics. However, when quantum mechanics was born in the 1920s, all these old proofs and concepts were instantly promoted to the quantum language. There was nothing hard about it at all, and I will show why.

In fact, Ludwig Boltzmann was already "thinking" in the quantum fashion and he could predict some changes that the quantum revolution would later bring. For example, he kind of knew that there should be a natural "unit of phase space" (a power of Planck's constant). Those prescient expectations make Boltzmann one of the early forefathers of quantum mechanics.

Information: bits and e-bits

If you read random bits "0", "1" from a source, you are getting information. If both "0" and "1" occur with a probability of 50%, and they're not correlated with the previous digits, each binary digit carries one bit of information.

"N" bits are able to distinguish "2^N" equally probable possibilities. They carry the information "log_2(2^N) = N" bits. The logarithm's base is two. Of course, in physics and mathematics, it's way more natural to use natural logarithms whose base is "e=2.718...". While base-two logarithms are important for computers which are man-made, the natural exponentials and logarithms are those that appear in Nature. (Some "discrete physicists" are still unable to understand this simple point.) Physics wants to deal with Nature so we would say that "N" bits carry the information "ln(2^N) = N ln(2)" where "ln(2)=0.693" or so.

One digit that has "K" equally likely values (a base-K digit) carries the information of "ln(K)", and so on. It's also useful to learn Shannon's generalization of the information entropy.
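Just to make the unit conventions concrete, here is a minimal sketch in Python (the variable names are mine; nothing here goes beyond the arithmetic above) that evaluates the same quantities in bits and in natural units:

import math

# Information carried by N fair, independent bits, in two unit conventions.
N = 8
info_in_bits = math.log2(2**N)      # = N (base-two logarithm, "bits")
info_in_nats = math.log(2**N)       # = N * ln(2) ~ N * 0.693 (natural logarithm)

# One base-K digit with K equally likely values carries ln(K) of information.
K = 10
info_per_decimal_digit = math.log(K)   # ~ 2.303

print(info_in_bits, info_in_nats, info_per_decimal_digit)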

Imagine that you have a noisy picture with many (N) pixels, each being either 0 or 1; the pixels are uncorrelated with one another, but the probability of having "0" is "p" while the probability of "1" is "1-p". How much information is there in the picture? Can you substantially compress the computer file if "p" differs from 1/2? You bet.

I said that if you could distinguish "2^N" choices, the information carried by the sequence of bits - that actually tells you which choice you got - equals "ln(2^N)". You can explain it in another way, too: it's "ln(2^N)" because the probability of getting a particular sequence was equal to "1/2^N", and the information carried by the sequence was thus "-ln(1/2^N)".

In the very same way, if you have bits such that "0" appears with probability "p" while "1" appears with probability "1-p", you can say that if you get another "0", you got "-ln(p)" of information, while if you get another "1" digit, you received "-ln(1-p)" of information. Relatively to the previous paragraph, I just replaced "1/2^N" by "p" or "1-p" because that's the right map here.

For example, if the digits "1" are very unlikely, i.e. if "p" is very close to "1" while "1-p" is very close to zero, you get a lot of information (relatively) if you're lucky and you obtain one pixel equal to "1". On the other hand, another "0" doesn't tell you too much because you expected a "0" to be there anyway.

What is the average information carried by one bit or pixel if "0" occurs with probability "p" and "1" occurs with probability "1-p"? Well, that's easy. You must compute the expectation value. With probability "p", the information is "-ln(p)", because the digit turns out to be "0". And with probability "1-p", the information carried by the pixel is "-ln(1-p)" because the digit is "1". So the statistical average is
Information = - p ln(p) - (1-p) ln(1-p)
We only have two terms because we had two possibilities. It's obvious how to generalize it to the case of more options than two:
Information = - Σ_k p_k ln(p_k)
where the probabilities must sum to one, i.e. "Σ_k p_k = 1". This is the formula for the entropy, too.
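As a small sanity check of the formula, here is a sketch (the function name is my own) that evaluates the average information per biased bit and confirms that it is maximized at p = 1/2:

import math

def average_information(probs):
    # -sum_k p_k ln(p_k), in natural units; the probabilities must sum to one
    return -sum(p * math.log(p) for p in probs if p > 0.0)

# A biased bit: "0" with probability p, "1" with probability 1-p.
for p in (0.5, 0.9, 0.99):
    print(p, average_information([p, 1.0 - p]))
# p = 0.5 gives ln(2) ~ 0.693, the maximum; a strongly biased bit carries much
# less information per pixel on average, which is why such files compress well.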

If there are "continuously infinitely many" possibilities, you must replace the summation by an integral, and the probability "p_k" by the density "rho". In that case, "p_k" becomes a dimensionful "rho(k)", assuming that the options are distinguished by a dimensionful "k" (i.e. positions and momenta of particles), and in order for the logarithm's argument to be dimensionless, which it should be, you should first multiply "rho(k)" by a natural unit of the "phase space volume" (equivalently, divide it by a natural unit of density). This unit turns out to naturally be a power of Planck's constant "h" in quantum mechanics.

But if you used an incorrect value of "h", the resulting error would be just an additive shift of the information by "- integral rho ln(h)", which is equal to "-ln(h)" because of the normalization condition for "rho" (whose integral equals one). But note that "-ln(h)" is independent of the particular configuration "k". It's a universal additive shift.
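To see numerically that the choice of "h" only shifts the result by a constant, here is a rough sketch with a discretized Gaussian "rho" (the grid and the helper function are my own illustration, not anything from the text):

import numpy as np

# Discretize a normalized density rho(k) and compute -integral rho ln(rho*h) dk
# for two different choices of the phase-space unit h.
k = np.linspace(-10.0, 10.0, 20001)
dk = k[1] - k[0]
rho = np.exp(-k**2 / 2.0) / np.sqrt(2.0 * np.pi)   # integral of rho dk equals 1

def continuous_entropy(rho, h):
    return -np.sum(rho * np.log(rho * h)) * dk

h1, h2 = 1.0, 6.626e-34
print(continuous_entropy(rho, h1) - continuous_entropy(rho, h2))  # ~ ln(h2/h1)
print(np.log(h2 / h1))                                            # the same number, independent of rho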

In classical thermodynamics, it was never possible to determine the overall additive shift to the entropy - only the entropy differences were objectively physical. In quantum physics, things become kind of more unique and well-defined because the entropy itself, not just its changes, can be calculated, including the additive shift, because we know what the "right" (or at least "natural") value of "h" is that should be inserted to the logarithms: it's Planck's constant or its power.

An important consequence is that in classical physics, the third law of thermodynamics would only say that the entropy of a system was approaching a universal constant at "T=0". In quantum mechanics, we can say that the constant is "S=0". I will discuss quantum mechanics later.

Indistinguishable microstates

A formula for the information or entropy is nice and important but we mustn't get lost in the formulae. We must also know what they mean. In particular, we must know whether the information we're talking about is actually known to us or unknown to us.

The very motivation of using entropy in physics (or elsewhere) is that we effectively "don't distinguish" or "can't distinguish" or "don't want to distinguish" many configurations - or sequences of bits - from one another. They look "macroscopically identical" because we're not able to measure the detailed properties of all individual atoms. If that's so, the same formula I derived above,
Entropy = - Σ_k p_k ln(p_k)
informs us about the inherent amount of disorder that is hiding in the microscopic information - the degrees of freedom that are hard to measure macroscopically. The right hand side of the equation above is dimensionless. In order to compare it with the entropy in thermodynamics, which has units of heat per temperature, you should multiply the formula above by Boltzmann's constant "k":
S = Entropy · k, k = 1.38 x 10^{-23} J/K
I used two symbols, "Entropy" and "S", for the two versions of the same thing. They only differ by "k" - a constant often set to one by the adult physicists who conveniently use the adult units.

Such a proportionality constant had to occur because people had introduced two units, the joule and the kelvin, for energy and temperature, not realizing that these quantities are "secretly" the same thing: the temperature (times k) is essentially the average energy carried by one quadratic degree of freedom (more precisely, by two of them, because each quadratic degree of freedom carries E = kT/2). So it's natural to treat them as the same quantity, and Boltzmann's constant "k" is the conversion factor from kelvins to joules.
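A trivial numerical illustration of the conversion (nothing beyond the constants quoted above; the room-temperature value is my own example):

# Boltzmann's constant as a conversion factor between kelvins and joules.
k_B = 1.38e-23            # J/K, as quoted above
T_room = 300.0            # K
print(k_B * T_room)       # ~ 4.1e-21 J: the thermal energy scale kT
print(0.5 * k_B * T_room) # ~ 2.1e-21 J: average energy per quadratic degree of freedom, kT/2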

The second law

One law of physics has been associated with the entropy since the very beginning, and it was actually the primary reason why the physicists invented the concept of entropy in the first place: it never decreases, at least not by macroscopic amounts.

That's easy to see. The probabilistic distribution for various microstates corresponds to a "cloud" in the phase space - the points where the cloud is present have "rho" substantially greater than zero. If you evolve this "cloud" according to the equations of motion, it will spread, like all clouds. The resulting cloud will look chaotic and its snippets will cover most of the phase space - or its part that respects the conservation laws.

If you take the final cloud and create a "fuzzy" version of it, by adding all states that are macroscopically indistinguishable from the points included in the cloud (a kind of convolution with another, ball-shaped cloud - whose size and shape depend on what you mean by "indistinguishable", an issue that always depends on some conventions), you will inevitably obtain a cloud with a higher entropy. Why? Because the fuzzing dilutes the numbers "p" or "rho" so that they're more uniform than before, and the key formula for the "Entropy" simply increases if things get more uniform. For example, note that "-p ln(p) - (1-p) ln(1-p)" is maximized at "p=1/2" - the uniform distribution between "0" and "1" as the values of the bit.
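Here is a toy version of that argument (my own illustration, with an arbitrary 100-microstate "phase space"): smearing a sharply localized distribution with a ball-shaped kernel makes the probabilities more uniform and the entropy larger.

import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# A sharply localized "cloud": uniform over 5 of 100 microstates, S = ln(5).
p = np.zeros(100)
p[40:45] = 0.2

# The "fuzzy" version: convolve with a normalized, ball-shaped kernel.
kernel = np.ones(11) / 11.0
p_fuzzy = np.convolve(p, kernel, mode="same")
p_fuzzy /= p_fuzzy.sum()

print(entropy(p), entropy(p_fuzzy))   # the fuzzy cloud always has the higher entropy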

Boltzmann's proof of the H-theorem is just a mathematical version of the simple argument above. It is de facto completely trivial and it holds completely universally - regardless of the "interpretation" or "visualization" of the particular configurations (or sequences of bits).

The upgrade to quantum mechanics

We said that the formula for the entropy could have been rewritten from a sum to the integral if the degrees of freedom were continuous. But it actually turns out that quantum mechanics - because it's quantum - allows us to return to the original, "simpler" formula involving a sum rather than an integral. We have
Entropy = - Σ_k p_k ln(p_k)
In this case, "p_k" is the probability of a particular microstate - an element of an orthonormal basis of the Hilbert space. Because such sums can be rewritten as traces, we also have
Entropy = - Tr ρ ln ρ
where "ρ" (rho) is the density matrix. In an appropriate orthonormal basis, the Hermitian matrix ρ can always be diagonalized. And the eigenvalues are nothing else than the probabilities "p_k" of the individual microstates (basis vectors) that we were discussing previously.
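In code, the quantum formula is just a matter of diagonalizing ρ and reusing the classical expression on its eigenvalues. A minimal sketch (the two hard-coded 2×2 density matrices are my own examples):

import numpy as np

def von_neumann_entropy(rho):
    # -Tr(rho ln rho): diagonalize the Hermitian matrix and sum -p_k ln(p_k)
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-15]              # the convention 0 ln 0 = 0
    return -np.sum(p * np.log(p))

pure = np.array([[1.0, 0.0], [0.0, 0.0]])   # a pure state: entropy 0
mixed = np.eye(2) / 2.0                     # maximally mixed qubit: entropy ln(2)

print(von_neumann_entropy(pure))    # 0.0
print(von_neumann_entropy(mixed))   # ~ 0.693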

Note that the trace can also be rewritten as an integral if you insert the completeness relation for a continuous basis. However, in quantum mechanics, you could integrate over "x" only or over "p" only, but not both (because of the uncertainty principle). Alternatively, you may "cover" the phase space by another set of microstates that are quasi-localized, and you know that there will be 1 microstate per volume "h = 2.pi.hbar" of the phase space.

The proof that the entropy doesn't decrease - more precisely, that the probability that it decreases by "Δ Entropy" is as small as "exp(-Δ Entropy)", which is de facto zero for macroscopic changes - is pretty much identical in quantum mechanics to what it was in classical physics. In fact, it's simpler because we have a basis-independent formula involving the trace and because the probability densities appearing in the logarithms are nicely and automatically made dimensionless.
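To get a feeling for how strong the suppression "exp(-Δ Entropy)" is, note that for any macroscopic entropy change (measured in units of k) the probability is not just small but underflows to an exact machine zero - a quick sketch with a few arbitrary values of ΔS:

import math

# Probability of an entropy decrease by Delta S (in units of k).
for delta_S in (1.0, 50.0, 1e3, 1e20):
    print(delta_S, math.exp(-delta_S))
# Already at Delta S ~ 10^3 the result is below the smallest representable double,
# i.e. "de facto zero" for anything macroscopic.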

Also, the specific properties of quantum mechanics - such as interference and the ability to violate Bell's inequalities via entanglement - become irrelevant in statistical physics because the very goal of statistical physics is to consider the effect of having very many states that differ from each other in details and that decohere: whenever you have such large ensembles of states, the classical reasoning is becoming a good approximation. Decoherence guarantees that interference disappears. Consequently, the overall logic of quantum statistical physics is the same as it was in classical statistical physics.

For example, if the density matrix is mixed (i.e. more than one of its eigenvalues is nonzero), you shouldn't imagine that the precise form of the density matrix is an "objective state of the world". Instead, you may always imagine that the density matrix just encodes your ignorance about the actual state but the actual state may always be imagined to be a pure state. The same thing was actually true in classical physics: the probabilistic distributions on the phase spaces were just a technical tool to deal with our ignorance while the underlying physics was thought to have a well-defined pure state (a point in the phase space). But if you don't know what the state is, you're forced to calculate with the probabilities, anyway.

(The only new feature of quantum mechanics in this respect is that the pure states can be complex linear combinations of other pure states, which is also why even pure states lead to probabilistic predictions in QM, as they have to be rewritten as combinations of other pure states, namely eigenstates of the measured observables. But this fact, linked to the quantum superpositions, is getting less important if you talk about whole ensembles of macroscopically indistinguishable states because the density matrix is de facto proportional to the identity matrix acting on these subspaces of the Hilbert space, so the ability of quantum mechanics to produce vectors in "interesting new directions" of the Hilbert space can't be exploited here.)

I want to stress that all these things have been understood for 80 years - and most of them (those unrelated to quantum physics) for more than 130 years. The arguments above, or their obvious refinements or specializations that use the notation appropriate for a particular dynamical system (such as quantum field theory), tell us everything that is universally true about the concept of entropy and its behavior. In particular situations, we may calculate how much the entropy is changing and what it means - but these things depend on the "Hamiltonian" or "dynamics", of course.

The only insights that don't depend on them are the general consequences of statistics, logic, and properties of the logarithm - and most of them have actually been explained in the text above.

Irreversibility

The notion of entropy is critically linked to irreversibility, as we have mentioned, and irreversibility means that the physical phenomena don't look identical if you revert them in time - if you run the "movie" backwards. The most omnipresent fact that can't be changed is that the entropy increases from the past to the future. It never decreases. This is true even if the underlying laws are time-reversal symmetric - and the "practically relevant" ones are approximately (in fact almost exactly) time-reversal symmetric, while all of the known ones must actually be exactly CPT-symmetric, which is morally similar to T-symmetry.

In the previous sentences about the asymmetry, "the past" and "the future" are defined according to the logical arrow of time which must always exist in any argument that studies incomplete knowledge. "The past" is what we can remember or what we can know, "the future" is what we can't know, what we can attempt to predict, change, expect, hope for, or be afraid of. For Nature Herself, "the future" is what she must evolve given the existing data from "the past".

All these things make the future and the past asymmetric. There's no way to define logic in such a way that the future and the past would play symmetrical roles. The only thing you could do would be to "rename" the future and the past, i.e. to pretend that the past is the future and vice versa. But that's just a vacuous linguistic exercise: the actual future and the actual past differ by striking logical properties, and it makes no sense to obscure which is which.

Gaps in people's knowledge

I claim that all the stuff above is de facto rudimentary undergraduate physics material. And I claim that every single person who has recently claimed that "entropy" is deeply mysterious, or that it can be used in ways that are "completely new", or that the second law has become a "hot" question, has misunderstood some basic segments of the knowledge above, which is too bad.

The whole notion of entropy was designed, and is still critically useful, for understanding the irreversibility of the world, because the increasing character of the entropy is its basic property. Erik Verlinde doesn't seem to be getting this point: when a theory talks about changes of entropy, it is inevitably a theory of irreversible phenomena. These phenomena are qualitatively different from the fundamental T-symmetric (and thus reversible) or at least CPT-symmetric laws of mechanics (including gravity) or field theory (including general relativity).

Other people such as Sean Carroll don't understand that the thermodynamic arrow of time is fully explained by the statistical considerations above and by the existence of a logical arrow of time. The latter is inherently T-asymmetric and it always has to exist to allow us any kind of reasoning of the type "A probably implies B" as long as "A" and "B" are propositions about events that occur in time or spacetime. One can't look for any "additional" explanation of the asymmetry because it would clearly be either equivalent to the logic or incompatible with it. In particular, thermodynamics in a lab has nothing whatsoever to do with cosmology.

Media-driven deterioration of physics

After string theory, it is now also quantum mechanics, relativity, and entropy that have become very controversial because of the fog produced by tons of incompetent science journalists - a fog that is increasingly expanding into "professional science", too. Junk and manifestly wrong papers and crackpots are being increasingly highlighted while quality physicists are losing influence. This is just a sociological process, but the existence of meaningful research does depend on the existence of a broader society that makes it possible.

I wonder what will be the next pillar of physics that will come under attack? Will apples start to fall to the skies because the new "revolutionary" physics dudes will be eager to compete with Isaac Newton?

16 comments:

  1. We show in

    http://www.nada.kth.se/~cgjoh/ambsthermo.pdf

    (with a summary in chapters 1-2) that the concept of entropy is unphysical, and understood by nobody.

    We show an alternative form of the 2nd law without reference to entropy, using only the physical concepts
    of kinetic energy, heat energy, work and turbulent dissipation. Hope you will take a look.

    ReplyDelete
  2. I think one of the reasons you are seeing others question old "facts" is because of the brick wall that string theory has hit. If the theory isn't moving forward then something must be wrong. People are trying to find out what's wrong.

    My own personal opinion is that physics has long ignored part of QT that has been around for a long time. It is called the "observer". Even if you don't want to give thought any physical meaning, you may want to consider that another energy form exists.

    My own prediction is physics will continue to spin until a new force is assumed to exist and utilized in string theory to collapse dimensions and/or limit solutions. I think any physicist that demonstrates a new force that resolves the problems with string theory could get it moving ahead once again.

    Whether that force is related to intelligence (thoughtons?) does not necessarily need to be addressed.

    Just thinking out of the box ...

    ReplyDelete
  3. I am not a physicist. The 2nd law has always seemed to me to be tautologically trivial. Clearly I myself have a very low entropy - You will respond - you are not a closed system. OK. Is the earth a closed system -- No. A galaxy? No.
    The Universe? I assume you will say yes - but that is only by definition. QED

    ReplyDelete
  4. Nice clear article (I wish I could be as clear).

    I am not sure I agree with the conclusion because I feel things have to be a little deeper than the current paradigm of irreversibility - in some sense the entropy as time or any other disordering.

    But I certainly understand your frustrations with those who with some position do not really understand the significance of the principles they are expert in using. Perhaps the decoherence on some deep level of consciousness does border on incoherence in some sort of flux.

    I think the next thing to fall in your sense and maybe for the better is the idea of space as a form of dimensional measure. Of course it is still much a question of our unifying what is finite or not in both directions of groups and their reducibility. In short physics, including string theory, has not caught up to the underlying mathematics.

    ReplyDelete
  5. Dear Richardo,

    your comments might explain some people's reasoning - but this reasoning that you summarized as the "brick wall" theory is completely irrational - or dishonest. It has nothing to do with science.

    Even if the progress in string theory - or any cutting-edge research at any moment of the history of science (and we may talk about any discipline of science) - were slow (which I surely don't think is the case of string theory, but let's just assume it is, for the sake of the argument), this fact would simply not allow a scientist deserving the name to say manifestly incorrect statements.

    In most cases, the progress is slow because pretty much everything that could be uncovered has been uncovered. This is the case of virtually all theories that reside in the pillars of the current science - and the civilization. Relativity, quantum mechanics of atoms, finite groups, choose any example you want. Some of them are "completely" finished, others are not - because the open questions are slightly related to "uncertain" tails of the older, almost closed, topics.

    Just the desire of someone to change things, or to continuously revise things, simply cannot be used to justify the replacement of correct insights by incorrect insights. Science is not about the search for hypotheses that are new. Science is the search for hypotheses that are true, which is something completely different. If something is true and non(something) is false, one must get ready to live with something, and without non(something), forever. Science doesn't allow any affirmative action for false statements. On the contrary, science is a systematic activity meant to exterminate ("falsify") false statements. Whoever disagrees with this basic principle is simply not doing science; he is not approaching the world honestly.

    And finally: Sorry, I don't understand anything from your observer comments. It sounds like some meaningless verbal ejaculate, sorry.

    Best wishes
    Lubos

    ReplyDelete
  6. Dear Norpag,

    apologies but I don't understand what you're saying, and everything I seem to understand is wrong, except for your not being a physicist.

    The second law is true but one needs a proof to see why it's true (e.g. a proof of the H-theorem, in one framework or another). It can be derived, so it's true, but the proof involves more than simple logical operations, so it's not "tautologically true". A tautology is a true statement that is true because of the properties of AND, OR, NON etc., regardless of the interpretation of the sub-statements in the proposition. That's not the case of the 2nd law which does care about the behavior of real variables.

    Second, you say that "your entropy is low". Well, unless you're dead or frozen, which doesn't seem to be the case given your comment, your entropy is gigantic, at least equal to 10^{26}, which is 100,000,000,000,000,000,000,000,000 (times Boltzmann's constant, but it's actually high even in SI units, at least thousands of J/K), in case you don't know scientific notation.

    Third, the second law is not about your entropy being low or high. It is about the change of the entropy in time. So whether someone's entropy now is called "high" or "low" - without talking about other things - has nothing to do with the validity of the second law.

    Fourth, the total entropy of closed systems increases - yes, one uses the adjective "closed" in the sentence. But closed systems are not defined according to their desire to obey the second law of thermodynamics. Closed systems are defined independently of entropy - as systems or subsystems that don't exchange any energy/heat with the rest of the Universe. And there are damn good approximations of closed systems in reality.

    Even if there were none, it doesn't matter because in reality, the second law holds even for the real systems which are not closed, as long as one accounts for the outflow of the entropy. This outflow is typically smaller than the entropy production inside, anyway. One surely doesn't have to go to a galaxy or the Universe.

    Your issue is simply a non-issue. To make the statement of the second law simple, we usually talk about closed systems. But that's no real constraint, and the second law is extremely important despite its containing the innocent adjective.

    Best wishes
    Lumo

    ReplyDelete
  7. Luboš,

    If I'm understanding you correctly, you're saying gravity can't be derived from entropy because gravity is time reversible and entropy is not.

    On the face of it, that sounds like a very sane argument. But surely those who are playing around with the idea of deriving gravity from entropy must be aware of this? I'd be interested in hearing their response.

    On a different but related topic, I'd love to hear your opinion of the relationship between information theory and physics, in broad brush strokes. For example, do you feel one is a subset of the other? Or that there is some fundamental link that connects the two? Or are they two completely disjoint fields of study?

    ReplyDelete
  8. Dear Dr. Motl,

    I have been thinking about the unwanted certainty, constant particle count, and asymmetry of the graph theory models.

    While tinkering with the Schwarzschild black hole entropy (e.g., S = 4 pi E^2 / E_p^2), I found that a characteristic energy level can be associated with radial distance if the Planck energy value is considered to be replaceable

    S = pi r^2 / l_p^2 = 4 pi E^2 / E(r)^2,
    E(r) = E 2 l_p / r.

    Where r > R_s, the g_{tt} component of the Schwarzschild metric is equivalent to

    g_{tt} = -(1 - E(r) / E_p).

    For instance, the Earth's mean radius is r ~ 6371 km, and its mass is M ~ 5.97 * 10^(24) kg. The resulting characteristic energy level at the surface of this idealized Earth is E(r) ~ 2.7 Joules (e.g., 1.7 * 10^(19) eV).

    Given this specific magnitude for E(r) at the surface of the Earth, it seems worthwhile to consider whether or not the observed abrupt falloff in the cosmic ray energy spectrum at around 10^(19) eV is mostly dependent on the values of r and M at the site of observation (HiRes, Pierre Auger).

    If this were indeed the case, then it would seem plausible that all massive bodies are electromagnetically non-interacting where E_k^2 / (E_0 + E_k) ≥ E(r). Perhaps such behaviour would also be related to the missing mass problem of galactic dynamics? The phrase "a bit too convenient to be true" comes to mind.

    I know that the GZK cutoff mechanism is generally used to explain the cosmic ray energy spectrum, but I thought that this characteristic energy level coincidence was too neat to completely ignore (e.g., Nature's playing a trick on someone here!!).

    ReplyDelete
  9. Dear magicjava, you asked the same question in the fast comments - that's where my response is to be found, too.

    ReplyDelete
  10. Dear Shawn,

    thanks: it must surely be interesting but I don't understand it. What is the "characteristic energy level at the surface of Earth" and why should it be related to the GZK cutoff? Do I understand well that it's just some numerology?

    Thanks, LM

    ReplyDelete
  11. Lubos,

    Let me say from the outset that I don't consider Verlinde's approach to be on the right track. However, I believe that your interpretation of time homogeneity in GR is only partially valid. This is what standard textbooks on GR have to say about energy conservation:

    1) In general, the covariant energy-momentum continuity of matter (div T_uv = 0) cannot be integrated to give a globally conserved energy of matter (E_m).

    2) Energy conservation does not hold in general curved space-times. The strong equivalence principle leads to a local conservation of E_m.

    3) One can construct gravitational energy-momentum pseudo-tensors whose time-time (t_00) component describes gravitational energy density at some level.

    4) Globally conserved total energy E_tot = E_m + E_g can be found only in asymptotically flat space-times (= isolated systems).

    Regards,

    Ervin Goldfain

    ReplyDelete
  12. Dear Ervin, completely agreed. I hope you don't think I disagree. I was talking about the total energy and similar things because they are legitimate in the Newtonian limit, and Erik Verlinde talked about it in the first place. Cheers, LM

    ReplyDelete
  13. Dear Dr. Motl,

    Yes indeed, it's most definitely a numerology. It is meant to replace the GZK cutoff mechanism.

    I just thought that somehow the event horizon could be arbitrarily associated with the Planck energy, and then I went hog wild from there. :)

    I think that the basic result of this inquiry is: If one were to move far away from Earth to a place where E(r) is much less than 10^19 eV (say, 10^10 eV), then one would observe that the cosmic ray suppression (e.g., the "knee of the curve" in GZK specialist parlance) happens at around 10^10 eV, not 10^19 eV. The suppression effect would rely on the gravitational potential at the site of measurement.

    That said, I don't really see how it's even feasible to test such a thing, which puts it in the same category as those ideas that speculate that the centre of the galaxy does not contain a black hole, but simply a giant herd of space kittens -- we just need to get there to see it for ourselves, honest, swear to God, I promise! ;)

    ReplyDelete
  14. P.S. Perhaps it's not apparent to the readers, but your post asked: "I wonder what will be the next pillar of physics that will come under attack"?

    My answer was "the GZK cutoff". I thought it was kind of funny because it even threw entropy into the mix. I was saving this "gem" for quite some time, and thought it was an appropriate time to spam the world with it.

    Sorry, my humour is kind of warped. :)

    ReplyDelete
  15. Confusion about statistical physics may be associated with shortcomings in the manner in which universities present the statistical ideas. When properly presented, these ideas solve the problem which philosophers call the "problem of induction," after "induction," the process by which one generalizes from observational data to a scientific model (aka theory).

    A model is a procedure for making inferences. In the construction of a model, the builder repeatedly faces the question of which inference is correct in a set {a, b...} of candidates. The model builder must answer this question each time it is posed. Logic is the science of the principles by which this inference is identified. These principles are called the "principles of reasoning." The problem of induction is to discover the principles of reasoning.

    The cardinal principle of logic is called the law of non-contradiction. The greatest of barriers to solving the problem of induction proved to be satisfaction of the law of non-contradiction. Many approaches were tried but all failed with the result that the problem of induction was solved only recently.

    It can be shown that an inference has a unique measure in the probabilistic logic. This measure is called the "entropy" or "conditional entropy" of the inference. The probabilistic logic is obtained in a generalization from the deductive logic in which the rule that every proposition has a truth-value is replaced by the rule that every proposition has a probability.

    The existence and uniqueness of the measure of an inference suggests that the problem of induction might be solved by an optimization in which that inference is judged correct whose measure, the entropy or conditional entropy, is minimized or maximized. In the period 1963-1975, the theoretical physicist Ronald Christensen showed that this approach did, in fact, solve the problem of induction. Few university professors are aware of this advance with the result that logic, epistemology, statistics, and related topics are presented to students of physics and other disciplines in an unnecessarily murky manner. A consequence from the murky presentation is that today it is quite likely that even after an extended stay in a university, a student will graduate having never been exposed to the principles of reasoning. Universities routinely hand out "doctor of philosophy" degrees to people who don't know how to reason!

    ReplyDelete
  16. Hi Lubos,

    I just learned about this thread, it seems to be dated so I posted a reply to your Amazon review.

    Joe

    ReplyDelete