Thursday, July 11, 2013

The "Past Hypothesis" nonsense is alive and kicking

I can't believe how dense certain people are. After all these years during which Sean Carroll could have localized the elementary, childish mistakes in his reasoning, he is still walking around and talking about the "arrow of time mystery" and the "Past Hypothesis":

Cosmology and the Past Hypothesis
His talk in Santa Cruz, California about this non-problem took no less than 3.5 hours. Imagine actual people sitting and listening to this utter junk for such a long time. I can't even imagine that. He has even coined some new phrases that describe the very same misconception of his.
If there is one central idea, it’s the concept of a “cosmological realization measure” for statistical mechanics. Ordinarily, when we have some statistical system, we know some macroscopic facts about it but only have a probability distribution over the microscopic details. If our goal is to predict the future, it suffices to choose a distribution that is uniform in the Liouville measure given to us by classical mechanics (or its quantum analogue). If we want to reconstruct the past, in contrast, we need to conditionalize over trajectories that also started in a low-entropy past state — that’s the “Past Hypothesis” that is required to get stat mech off the ground in a world governed by time-symmetric fundamental laws.
As talented enough students learn when they are college juniors or earlier, statistical mechanics explains thermodynamic phenomena by statistically analyzing large collections of atoms (or other large collections of degrees of freedom) and doesn't depend on cosmology (or anything that is larger than the matter whose behavior we want to understand) in any way whatsoever. It's the atoms, short-distance physics, that determines the behavior of larger objects (including the whole Universe), not the other way around!

Reverse Times Square 2013. In the real world, objects moving forward and backward in time can't co-exist; a well-defined logical arrow of time, indicating what evolves from what, has to exist everywhere. That's also why none of the clowns above managed to unbreak an egg. Exercise for you: Can the video above be authentic or was it inevitably computer-reversed afterwards? You should be able to decide by looking at details of the video. For the advanced viewers: Which parts were edited?

Moreover, since its very birth, statistical physics was a set of methods to deal with the laws of physics that were time-reversal- (or CPT-) symmetric and it always "got off the ground" beautifully. There is absolutely nothing new about these matters in 2013. In fact, nothing really qualitative has changed about statistical physics for a century or so.

At least, you see some progress in Carroll's talk about these matters because in the quote above, he seems to admit that to reconstruct the past, you need to use somewhat different methods than to predict the future. For years, we would hear him saying the – completely wrong – statement that retrodictions follow the very same procedure as predictions.

However, his ideas about how retrodictions work are still completely wrong.

He thinks that to retrodict the past, "we need to conditionalize over trajectories that also started in a low-entropy past state". But that's not true. In fact, the correct method to retrodict the past – one based on the Bayesian inference – automatically implies that the entropy was lower in the past than it is now.

I have already explained this thing about 20 times on this blog but the reason why I keep on repeating the simple argument is that I want to be absolutely certain that I am accurate when I say that every physicist who still fails to understand these issues is either a fraudster or a complete moron avoiding fundamental knowledge whatever the cost.

To predict the future, you take the initial state – either a point in the classical phase space or the pure state vector in quantum mechanics; or (if you deal with incomplete knowledge) the probability distribution on the phase space or the density matrix in quantum mechanics – and evolve them in the future direction. In general, this tells you the probabilities of different final states whose interpretation is straightforward (frequentism is good enough).

However, to retrodict the past, you need to follow a reverse procedure. The possible initial states are different hypotheses \(H_i\) as understood by Bayesian inference. These hypotheses – initial states – predict (i.e. imply) different observations (observed in the present). The actual observations are evidence that may be used to adjust the probabilities of different hypotheses (initial states).

It's important to realize that just like we can't predict the exact state in the future – only probabilities may be reconstructed – we can't retrodict the exact microstate in the past. We may only retrodict probabilities of different states but we have to do it right. When we do it right, we automatically see that the entropy of the state in the past had to be lower, just like the second law of thermodynamics always said.
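To make the mechanics of this procedure concrete, here is a minimal sketch (entirely my own illustration, with made-up numbers): the hypotheses \(H_i\) are candidate initial states, the likelihoods \(P(E|H_i)\) are the forward predictions, and the observed present state is the evidence \(E\) used to update the probabilities of the initial states.

```python
import numpy as np

# Bayesian retrodiction mechanics: hypotheses H_i = candidate initial states,
# evidence E = the observed present state.  The priors and likelihoods below
# are made-up numbers, only meant to show the update itself.
priors      = np.array([0.5, 0.3, 0.2])   # P(H_i) for three initial states
likelihoods = np.array([0.9, 0.1, 0.4])   # P(E | H_i): forward predictions

posterior = priors * likelihoods          # Bayes' theorem, up to normalization
posterior /= posterior.sum()              # P(H_i | E)
print(posterior.round(3))                 # the best forward-predictor wins
```

The point is only the direction of the inference: the initial states are retrodicted by comparing their forward predictions against the present evidence, not by running the evolution backwards with a uniform measure.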

I borrow the proof that the entropy of the initial state was lower from the December 2012 article, Information and Heat. Let's consider the evolution from the ensemble \(A\) to the ensemble \(B\). By ensembles, I mean some states roughly given by their macroscopic properties which may be represented by one of many "practically indistinguishable" microstates. Only when we consider statistically large collections of atoms or other degrees of freedom may we derive some non-probabilistic statements such as "one entropy is higher than the other".

The probability of the evolution from \(A\) to \(B\) is\[

P(A\to B) = \sum_{a\in A}\sum_{b\in B} P(a)\cdot P(a\to b)

\] where \(a,b\) go over microstates included in the ensembles \(A,B\), respectively. This notation is valid both for classical physics and quantum mechanics (where the microstates refer to basis vectors in an orthonormal basis of the ensemble subspace); in quantum mechanics, we could write the expression in a more basis-independent way using traces, too. If we compute the probabilities for ensembles, we must sum the probabilities over all possible final microstates \(b\) because we're calculating the probability of "one possible \(b\)" or "another possible \(b\)" and the word "OR" corresponds to the addition of probabilities.

On the other hand, we have included \(P(a)\), which denotes the probability that the initial microstate is a particular \(a\), because the probability of evolution \(P(a\to b)\) is only relevant when the initial state actually is the particular \(a\). Now, we will simplify our life and assume that all microstates \(a\) are equally likely so \(P(a)=1/N(A)\), the inverse number of microstates in the ensemble \(A\). This "egalitarian" assumption could be modified and different microstates in the ensemble could be given different prior probabilities but nothing would qualitatively change.

It's also possible to calculate the probability of the time-reversed process \(B^*\to A^*\). The stars mean that the signs of the velocities get inverted as well; in quantum field theory, the star would represent the complete CPT conjugate of the original ensemble because CPT is the only reliably universal symmetry involving the time reversal (but the star may also mean T whenever it is a symmetry). Clearly, the probability is\[

P(B^*\to A^*) = \sum_{a^*\in A^*} \sum_{b^*\in B^*} P(b^*)\cdot P(a\to b)

\] where I used that \(P(a\to b)=P(b^*\to a^*)\) from the time-reversal symmetry (or, more safely, CPT symmetry) of the evolution followed by the microstates. Note that the two probabilities are almost given by the same formula but in the latter one, we see the factor \(P(b^*)=1/N(B^*)=1/N(B)\) in our simplified "equal odds" setup. At any rate, the ratio is\[

\frac{P(A\to B)}{P(B^*\to A^*)} = \frac{P(a)}{P(b^*)} = \frac{1/N(A)}{1/N(B)} = \frac{N(B)}{N(A)} = \exp\left(\frac{S_B-S_A}{k}\right)

\] where I used \(N(A)=\exp(S_A/k)\) – the inverted formula from Boltzmann's tomb – and similarly for \(B\). But now realize that \(k\) is extremely tiny in the macroscopic SI-like units while \(S_B-S_A\), the entropy change, is finite. So the ratio \((S_B-S_A)/k\) is, for all practical purposes, either plus infinity or minus infinity. It follows that one of the probabilities, \(P(A\to B)\) or \(P(B^*\to A^*)\), is greater (i.e. more likely) than the other one by a factor that is effectively infinite, which means that only one of the processes may occur with a nonzero probability. (Just like probabilities in general, both are less than or equal to one, which means that the larger one equals at most one and the smaller one is infinitely smaller than that, i.e. zero.)
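The derivation above is easy to check numerically on a toy system. The sketch below (my own illustration, not anything from the talk) builds a symmetric transition matrix – the symmetry \(T_{ab}=T_{ba}\) standing in for \(P(a\to b)=P(b^*\to a^*)\) – and verifies that the ratio of the two ensemble probabilities equals \(N(B)/N(A)\) exactly:

```python
import numpy as np

# Toy version of the calculation: 60 microstates, a symmetric transition
# matrix T (symmetry plays the role of time-reversal/CPT symmetry of the
# microscopic evolution), a small ensemble A and a large ensemble B, with
# uniform priors 1/N(A) and 1/N(B) over their microstates.
rng = np.random.default_rng(0)
N = 60
M = rng.random((N, N))
for _ in range(500):                   # balance the rows while keeping symmetry
    M /= M.sum(axis=1, keepdims=True)
    M = (M + M.T) / 2
T = M                                  # symmetric, (approximately) stochastic

A = np.arange(0, 5)                    # N(A) = 5 microstates
B = np.arange(5, 55)                   # N(B) = 50 microstates

P_AB = T[np.ix_(A, B)].sum() / len(A)  # P(A->B) = sum_a sum_b P(a) P(a->b)
P_BA = T[np.ix_(B, A)].sum() / len(B)  # P(B*->A*), using the symmetry of T

print(P_AB / P_BA)                     # equals N(B)/N(A) = 10
```

Note that the ratio \(N(B)/N(A)\) follows from the symmetry of \(T\) alone; for \(10^{26}\) atoms the same ratio becomes the \(\exp(\Delta S/k)\) factor of the text.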

If you look at the sign, the only possible (nonzero probability) transition between ensembles is one for which the final entropy is greater than the initial one. If \(S_B-S_A\gt 0\), the numerator \(P(A\to B)\) is infinitely greater than the denominator \(P(B^*\to A^*)\) which means that only the numerator is allowed to be nonzero. In other words, \(A\to B\) may occur while \(B^*\to A^*\) is prohibited.

Again, I have proven that the evolution in which the entropy increases is hugely more likely than the reverse process – the ratio of their probabilities is \(\exp(\Delta S/k)\) where \(\Delta S\) is the entropy difference between the initial and final state. To prove such a thing, I didn't have to assume anything about the Earth, the Solar System, the Universe, or cosmology because these – totally universal – calculations in statistical physics and their conclusions (that were already known to thermodynamics) have nothing to do with these large objects. If we were talking about the entropy change in a closed lab, we would only discuss the degrees of freedom (atoms...) inside the lab. Whatever the rest of the Universe was doing would have no impact whatsoever on the calculation (and the resulting odds).

The calculation above is not hard but you may say that this particular general derivation above – presented in a way optimized by your humble correspondent, to say the least – is a bit unfamiliar to students who don't go through clear enough courses. Perhaps it is a bit ingenious. It may be OK if people happen to be ignorant about it.

However, what I find amazing is that people like Carroll are clearly ignorant about certain facts indicating that they must completely misunderstand the scientific method. Why? Sean Carroll is clearly convinced that the laws of physics as he knows them imply that the entropy of the young Universe had to be higher than the current entropy. So this is a prediction of those laws of physics and those methods to calculate that he is proposing. Because he also agrees that the entropy of the young Universe was actually lower, it must mean that his laws of physics or methods or both have been falsified. They're dead because they have made a wrong prediction (in this case, infinitely many exponentially bad predictions about pretty much everything!). Once they're dead, they can't be made "undead" again. This is a trivial observation that everyone who uses the scientific method in any context simply has to realize. Because Sean Carroll doesn't realize that, he can't possibly understand what the scientific method is.

If there is a contradiction between the empirical data – low entropy of the Universe in the past – and your theory, your theory has to be abandoned. You have to throw away at least pieces of it and, perhaps, replace them with other pieces. Just adding "extra stuff" to your theory can't possibly help. If the theory was able to produce a wrong prediction (at least with a high enough confidence, like 99.9999%), then it has been falsified and making the theory more extensive (by the supposedly helpful additions such as "the cosmological realization measure" or the "Past Hypothesis") just can't solve the problem.

I am also amazed by the brutal distortion of the history below – Sean Carroll must know very well that it is a distortion:
That would stand in marked contrast to the straightforward Boltzmannian expectation that any particular low-entropy state is both preceded by and followed by higher-entropy configurations.
Ludwig Boltzmann never believed that the entropy was higher in the past than it is today. In fact, he was the most important man in history to demonstrate just the opposite – that the increase of the entropy is a universal property of any system with many degrees of freedom. He proved this second law of thermodynamics using the tools of statistical physics – where the second law is represented by the H-theorem, a more waterproof and transparent demonstration of why the second law has to be right.

It's important to realize that this second law, or the H-theorem, applies to any moment of the history and any state. It doesn't matter whether the year "now" is 2013 or less or more. The proof of the H-theorem doesn't depend on "now" or the current year at all. It always implies that the entropy increases. It will increase between 2013 and 2014 and it was increasing between 2012 and 2013, too. If \(t_1\lt t_2\), then \(S(t_1)\lt S(t_2)\). One may have different emotional attachments to these two periods of time (perhaps we were not yet born at time \(t_1\) while we fell in love at \(t_2\)) but physics – and the proof of the H-theorem – just isn't about emotions. It is about facts and the laws of physics and about mathematically valid propositions and implications – and those are (perhaps except for the facts that may be interpreted as time-dependent things) independent of the time, independent of the current year.
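An H-theorem-flavored toy check of this time-translation invariance (my own example, with \(k\) set to 1): under a doubly stochastic evolution map, the Gibbs-Shannon entropy \(S = -\sum_i p_i \ln p_i\) never decreases, and nothing in the argument refers to which step "now" is.

```python
import numpy as np

# Doubly stochastic evolution can only raise the Gibbs-Shannon entropy;
# the mixing matrix below is a made-up example (it mixes each distribution
# slightly toward the uniform one).
N = 8
T = 0.8 * np.eye(N) + 0.2 * np.ones((N, N)) / N   # symmetric, doubly stochastic

def entropy(p):
    q = p[p > 0]
    return -np.sum(q * np.log(q))                 # S = -sum p ln p, with k = 1

p = np.zeros(N)
p[0] = 1.0                                        # low-entropy initial state
entropies = [entropy(p)]
for _ in range(20):
    p = T @ p                                     # one step of the evolution
    entropies.append(entropy(p))

# entropy is non-decreasing at every step, whatever label the step carries
print(all(b >= a - 1e-12 for a, b in zip(entropies, entropies[1:])))
```

The same inequality holds between steps 3 and 4 as between steps 17 and 18; relabeling the steps by calendar years changes nothing.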

I am also stunned by the fact that among the hundreds of readers who were probably going through the blog entry at Sean Carroll's blog, there was not a single person who would understand these basic issues about statistical physics and who would kindly explain to Sean Carroll how utterly stupid he was being and how superfluous all his concepts and proposed hypotheses are.

Incidentally, some news related to time: Strontium lattice clocks may improve the accuracy of the state-of-the-art atomic clocks by a stunning factor of 100! See BBC, Nature.


snail feedback (35):

reader Dilaton said...

Duh, I never heard about this "Path hypothesis" before, but I like and enjoy the way you clearly dismantle it because of the wrong prediction of higher entropy states in the past :-)

reader Luboš Motl said...

Dear Dilaton, there was a typo in the title, as Bill Zajc pointed out. It is called Past Hypothesis, as written later. It's the hypothesis that there existed the past, roughly speaking, a claim that some people find extremely bold. ;-)

reader strictly speaking... said...

One thing that can be interesting to keep in mind is that the laws of physics ban some transitions from happening instantly.

Our sun won't burn out in a matter of minutes because it is bottlenecked by weak processes. These are unlikely, which in a sense constrains the time derivative of its entropy. Slow decay processes delay the inevitable. ; )

Summing up some basics:
1: Entropy is a (mostly) continuous function of time with a constrained time derivative depending on the system.
2: From observation, our past universe had a low entropy.
3: When propagating to any new state in any direction from a known state, the entropy is overwhelmingly likely to be as high as it is allowed to be.

You can analyze this roughly like you would have analyzed a functional equation at the IMO. You are given one value and are asked to find the other ones.

Since our initial value is low, and the entropy should be as high as allowed at any possible point, the solution with, so to speak, the highest density of solutions is the one that allows it to grow as fast as allowed in the future direction.

In other words, the direction in which the entropy increases is based on boundary conditions. The solutions to the basic laws don't need to have the same symmetries as the laws themselves.

reader Luboš Motl said...

Dear strictly speaking,

3) is right or wrong depending on how "propagating" is interpreted. If "propagating" is meant so that all objects always propagate from the past to the future and we never apply the claim to the opposite propagation, it's true. If "propagation" includes both evolution and its inverse, some reconstruction, then it's clearly false.

reader Shannon said...

What cosmological proofs would Sean Carroll need to back up his claim about the past high entropy ?

reader JP said...

If the number of microstates changes, so that N(A) != N(B), i.e. the phase space volume occupied by the system changes, does it really hold that P(a->b) = P(b*->a*)?

reader Luboš Motl said...

Of course it does. P(a to b) is the probability for microstates. It's equal to P(b* to a*) by the CPT theorem. These probabilities don't depend on the other states in A or other states in B at all - the clumping into ensembles has absolutely no impact on the calculation with the microstates.

In quantum mechanics, P(a to b) would be simply calculated as

\[ P(a\to b) = \left| \langle b | U | a \rangle \right|^2 \]

where U is the evolution operator. This is equal to

\[ \left| \langle a^* | U | b^* \rangle \right|^2 \]

where * is the CPT conjugation.
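The relation can be sanity-checked numerically. In the sketch below (my own, not from the comment), a random real symmetric Hamiltonian stands in for a T-symmetric system, so the conjugation * reduces to complex conjugation and the standard basis vectors satisfy \(|a^*\rangle = |a\rangle\):

```python
import numpy as np

# For a real symmetric Hamiltonian H (a toy stand-in for time-reversal
# symmetry), U = exp(-iHt) is a symmetric unitary matrix, so the microstate
# probabilities obey P(a->b) = |<b|U|a>|^2 = |<a*|U|b*>|^2 = P(b*->a*).
rng = np.random.default_rng(1)
n = 6
H = rng.random((n, n))
H = (H + H.T) / 2                                   # real symmetric Hamiltonian
w, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * w * 0.7)) @ V.T        # U = exp(-iHt) with t = 0.7

a, b = 2, 5                                         # basis microstates |a>, |b>
P_ab   = abs(U[b, a]) ** 2                          # |<b|U|a>|^2
P_brev = abs(U[a, b]) ** 2                          # |<a*|U|b*>|^2 (real basis)
print(np.isclose(P_ab, P_brev))                     # True
```

For a generic complex Hamiltonian the equality would require the full (anti)unitary conjugation rather than mere index swapping; the real symmetric case is just the simplest instance.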

reader Luboš Motl said...

Dilaton, the KITP Santa Barbara supersymmetry conference is now all about Dilaton! ;-) See the detailed tweets from Matt Buckley:

reader Arun said...

In your notation is it true that for any given a, p(a -> b) summed up over all b is equal to unity? i.e., each a must end up in some b.

reader Luboš Motl said...

Yup. In quantum mechanics, the b-summation is over an arbitrary orthonormal basis.

reader Arun said...

The problem in the argument is that in Hamiltonian evolution, the phase space volume is conserved. So strictly, the vast majority of b's cannot be reached from a's, unless there is some implicit time-irreversible process smearing out the b's accessible from a's into the b's inaccessible from a's.

reader Luboš Motl said...

Dear Arun, this comment of yours is completely nonsensical. The probabilities P(a to b) are nonzero whenever the exact symmetries or conservation laws allow the transition to occur. You are apparently imagining that all the probabilities P(a to b) are either 0 or 1 but that's not how it works.

reader Ignacio Mosqueira said...

Entropy is a clear enough concept when you have enough degrees of freedom for statistics. And it clearly doesn't depend on the global evolution of the universe so there is no reason to expect that time will reverse no matter what the choices for say dark energy or dark matter. That's pretty obvious even though fairly bright scientists have had that debate in the past. Some have argued (I believe Hawking might be one but this is just hearsay) that the arrow of time has to do with initial conditions of the universe. Apparently Feynman walked out of one such debate in protest.

What is not so clear to me is exactly the manner in which entropy arises as the number of degrees of freedom increases, i.e., how many atoms one needs to have a clearly defined arrow of time.

reader Dilaton said...

Those rascals :-D, how dare they be talking about me behind my back ... ?!

And I have absolutely nothing to do with the breaking of Weyl invariance that was not me, ok ? And I quite care about the mass of the higgs, mind you ... :-P :-D :-) ... !!!

reader Gene Day said...

Statistical mechanics/physics really amounts to nothing more than careful counting. If you believe in arithmetic you cannot deny that entropy always increases in macroscopic systems. Looking for cosmological “proof” of past high entropy is a 100% crackpot enterprise.

reader bbzippo said...

Luboš, thanks for taking the time to explain this yet another time. I even made a funny video some time ago, inspired by your explanations:

reader Arun said...

Let me try again. With a classical Hamiltonian the set of states A can evolve only to a subset of states B that occupies the same phase space volume. Call these states {arrived from A}. The whole set B occupies a (much) larger phase space volume. It is legitimate to confuse the set of states {arrived from A} with the whole of B only if some smearing process has taken place, even though {arrived from A} may be dense in B. This smearing process, however you do it, is time-irreversible.

Conservation of phase space volume essentially means that N(A) states in A evolve to N(A) states in B; only after the smearing process do we get N(B) states.

reader Luboš Motl said...

Dear Arun, the repetition of an untruth doesn't turn it into the truth.

In a striking contrast to your proclamations, the smearing process is exactly what introduces the time irreversibility into every and any analysis of this sort. I have already explained it about 50 times.

When you smear the region of the phase space describing the allowed final microstates, i.e. if you expand the region and replace it by a larger and smoother one, you must keep the transition probability the same because the probability of B1 or B2 is P(B1)+P(B2) and P(B2) simply contributes zero if one can prove that it can't result from the initial state and it's there just to smear B1 into a smooth region.

However, if you smear the region associated with the initial state, i.e. replace the region A1 by a smoother region A that is K times larger, then the transition probability must be divided by K because the prior probability that A will actually sit in the right A1 subset is K times lower and the prior probability must therefore be reduced appropriately.

For a system of 10^{26} atoms, the required factor is exp(-S/k) = exp(-10^{26}). This googolplex-like factor is exactly the mistake that dense people like you are making all the time but exp(10^{26}) is probably a negligible number for you to pay any attention to it.
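The factor-of-K claim is easy to verify in a toy model (my own illustration): smear the initial region A1 into a region A that is K = 3 times larger, where the extra microstates cannot actually evolve into B, and the transition probability drops by exactly K.

```python
import numpy as np

# Smearing the INITIAL region divides the transition probability by K,
# because the prior weight of each "right" microstate drops from 1/N(A1)
# to 1/N(A).  Zeroing some matrix entries below is a toy device to make
# the added microstates unable to reach B.
rng = np.random.default_rng(2)
N = 40
T = rng.random((N, N))
T /= T.sum(axis=1, keepdims=True)        # row-stochastic transition matrix

A1 = np.arange(0, 4)                     # the "right" initial subset, N(A1) = 4
A  = np.arange(0, 12)                    # smeared region, K = 3 times larger
B  = np.arange(20, 30)
T[np.ix_(np.arange(4, 12), B)] = 0.0     # the extra 8 states never reach B

def P(initial, final):
    # uniform prior 1/N(initial) over initial microstates, summed over final ones
    return T[np.ix_(initial, final)].sum() / len(initial)

print(np.isclose(P(A, B), P(A1, B) / 3))   # True: divided by exactly K = 3
```

Smearing the final region with such unreachable states, in contrast, adds only zero terms to the sum over final microstates and leaves the probability intact.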

reader Eugene S said...

Except for this macroscopic system ... o.k. o.k. the entropy increase does not in fact reverse, what happens is due to laminar flow, low Reynolds number... but still, a cool demonstration :)

reader Shannon said...

Wow, awesome demo! It's like unwinding and then winding up a reel of thread.

reader bronstein said...

Feynman, in his book Lectures on Gravitation, says that the fact that the entropy of the universe is increasing should indicate that the entropy of the early universe was low. Actually, he says that the entropy should already be at its maximum, so there must be some physical principle or condition that makes the early universe have low entropy. Or I remember it wrong.

reader anony said...

As I read some of the essays from alleged physics professionals and students on FQXi, I am forced to conclude that most of the physics community is bankrupt and completely oblivious to the fact that their time has passed. The overstated confusion over largely settled questions and the mathematically ungrounded philosophical gibberish serve to obscure simple relationships that are largely tested. The physicists of the 60s and 70s have pushed our knowledge to the greatest heights and it will take time for the engineering to catch up. Eventually the basic math that has been validated for most practical applications will be fully coded into the machines of the future and only the most specialized of technicians will be able to stay in tune with how those devices are constructed. Most of the rest of the world will either not care or not be willing to cope with the level of training that will be needed.

reader simpleton said...

Dear Lubos,

my comment is a follow-on to the comment of lucretius.
What you have shown is that given a system in the macrostate A with entropy S(A), the entropy S(B) of the state B to which the system evolves must be larger than S(A). The laws of physics are time-reversal invariant so the evolution can go from t to t+dt and to t-dt. In any case, if our boundary condition is the state A at t, we predict that the past and future were both states with higher entropy. If, on the other hand, we assume the existence of an arrow of time (given A at t, we are allowed only to compute states at t+dt; in order to arrive at A at t we should choose some state at t-dt and go forward in time), then we of course arrive at correct predictions (low entropy in the past); or if we assume that the entropy was lower in the past, then we arrive at the arrow of time. See also page 47 of Landau and Lifshitz on statistical physics (page number from the Russian original, or see the quote from the English translation in the post by lucretius). Landau wrote that the arrow of time could probably be derived from the non-unitary measurement process of QM. You surely know all these arguments so I am not quite sure why you are arguing with Sean Carroll...

reader Luboš Motl said...

I am pretty sure you misremember it. Statistical mechanics is discussed especially in Lecture 2 where he explicitly says that the different states surely aren't equally probable - so this contradicts your statement that the entropy should already be maximum - and otherwise, he says sensible things about stat. mech. everywhere.

That's also the case of the Messenger Lecture dedicated to the arrow of time...

reader Luboš Motl said...

Dear Lucretius, thanks for your long remarks but I find your approach irrational. You either understand the calculation clarifying where the asymmetry actually comes from or you don't.

Of course you find lots of nonsense written in the literature – like in Penrose's popular books in particular; Penrose has no clue what the second law actually means – but otherwise what I say is elementary textbook undergraduate stuff.

I assure you that every reputable physicist agrees with me. Among TRF readers, I may explicitly assure you that Bill Zajc, the boss of the physics department at Columbia, does. I know that my ex-adviser does. I don't believe that anyone in the list such as {Witten, Maldacena, Seiberg, Arkani-Hamed, Randall etc. etc.} has any genuine disagreement.

Some of the material you quoted was really attempting to be poetic, discuss borderline science-fiction-related things, but I know very well that Feynman understood statistical mechanics flawlessly, i.e. just like me, and this is true for Landau and others, too.

reader Luboš Motl said...

Right – well, I agree with some of the statements. The heights reached by science are sort of impressive and they go into some indisputable details etc.

The idea that in this state, some elementary questions about the existence of heat – 19th century physics – remain misunderstood is preposterous in comparison with the reality.

reader bronstein said...

Sorry, my fault. He says something similar in Feynman Lectures on Physics chapter 46 section 5 "Order and Entropy".

reader Luboš Motl said...

In any case if our boundary condition is the state A at t we predict that the past and future were both states with higher entropy.

Sorry, again, this is just a nonsensical statement, even at the level of pure language. You can't "predict" the past – it's the point that all these blog entries are supposed to clarify – and even if you used a (generalized) word "predict" for predictions as well as retrodictions, it's nonsensical to argue that the states at two different moments are both predicted to maximize something. These two states are related by evolution equations so if one of them is assumed to be something, all the properties of the other are derived from the evolution equations, not from some vague philosophical arguments. The two states aren't independent of one another.

I don't think that there is any point in your discussion about "assuming the arrow of time". In science, you have to "assume" the arrow of time because it's obviously there experimentally; it's also theoretically inevitable for any system of propositions about a system that evolves from its past states.

If you think it's OK to spend 1/2 of a discussion by assuming something else than the obviously right assumptions, why don't you also harass geologists and force them to spend 1/2 of their papers by thinking about the assumption that the Earth is carried by a sequence of turtles?

reader lucretius said...

Thanks for the reply. I argue that in my situation my approach is the most rational one. As I wrote, I was convinced by your argument but I do not trust my judgement in matters concerning physics. I always find a certain basic vagueness in almost all arguments in physics and it is only because of that that one top class physicist can write of another one that he “has no clue what the second law actually means”. I guess it is the “actually” that is the point. I assure you that this kind of thing is impossible in my area.

Also, my post was long only because of the long quote from Landau and Lifshitz and a shorter one from Feynman. Feynman clearly was being “poetic” and he really never mentions any “paradox” or “contradiction”. But Landau and Lifshitz do, and in a very respectable non-popular text. So how would you expect me to put the judgement of a mere mathematician above theirs? Appeal to authority is not in general a good approach in professional work and even worse in any creative work, but we can’t be experts in everything and there is no surer way to make a fool of oneself than to believe oneself to be an expert in something one is not. In such cases it’s not a bad idea to find out what the “recognised authorities” say before making one’s own definite judgement. Moreover, when you see the words “crackpot” and “nonsense” used so freely, it makes you wonder even more. So to be reassured I would like to see some explanation why Landau and Lifshitz, of whom at least the first one was not a “crackpot”, included this passage that speaks of a “contradiction” and gives an argument that is identical to the one that Penrose gives, except for their suggestion that the solution to the apparent contradiction lies in as yet undiscovered quantum mechanics. I am sure you have their book so you can check it for yourself.

reader simpleton said...

Dear Lubos, thank you for your reply.

Indeed, I have used the word "predict" in a generalized sense. What I meant was: take state A at t, take QM (unitary evolution) and evolve A to A' at t+dt and to A'' at t-dt; both states will have larger entropy than A. I agree with all of your statements in the case where the arrow of time is given. (Note that it is not my belief that we have to explain it, I just wanted to clarify the situation.) But then your statement is equivalent to the statement of Carroll about the boundary condition of low entropy. The difference is that he wants to explain the arrow of time. Best wishes...

reader Luboš Motl said...

Dear Lucretius, I believe that understanding why the time-reversal asymmetry *actually* appears in any logical reasoning about phenomena in a spacetime is something that any student can achieve. Maybe she has to focus her attention but she can get it and sharply separate it from the wrong and confusing comments about the same issue. But she can do it.

It seems that most people just don't want to understand. Spreading the confusion by purely ad hominem references to other "authorities" who are equally confused or misguided must be more attractive for most people than the desire to understand what actually goes on.

reader RAF III said...

It is due to a psychological disorder that I first identified nearly 10 years ago - RAF IIIs' Syndrome: the addiction to the feeling experienced in the belief that one has confronted or understood something Profound.
Basically, such people are junkies.
Having been made aware of this you will now see it everywhere.
I feel that I have done my part in identifying this affliction and that it is now up to the medical professionals to develop an effective and early treatment.

reader Luboš Motl said...

Apologies but the only part of your comment that makes sense is your nickname.

reader lucretius said...

But isn’t this a 100% ad hominem reply (with hints of quite unprovoked hostility) when my question was motivated only by wanting to find out why top physicists can disagree on what seem to be such basic matters? As I wrote, I found your argument mathematically without any fault, but the big issue is how it applies to physical reality - and that I am not sure about. What I wanted to know is why some other famous physicists seem to think that there should be some explicit asymmetry in the laws of physics itself rather than in the way probabilistic arguments are applied to physical reality (which is how I have understood the difference between the two sides). I thought you would say something like, that used to be the standard way to think about these matters before something or other, or that they are influenced by some philosophy that you consider mistaken, or whatever. But instead all I hear from you is that it should be obvious to a student, that it was not understood by Landau or Penrose (and not even by Feynman, otherwise he would have explained it in this way in his book meant for undergraduates), and that everybody who even asks you a question about this matter is an idiot. Also, note that I have actually never read any of your papers (except for some fragment of lecture notes from Harvard about entanglement - which only contained what I had already known), have not heard about your work and came to your site because I have practically identical views on politics, AGW and have a general sympathy for anything Czech. But, except for the stuff that is written about you on Wikipedia (and you know how reliable that is), I have no reason to trust your views unquestioningly. The fact that they are convincing to me is not any reason - I am 100% sure that I would find Lee Smolin equally convincing - it is not at all hard to be convinced by “experts” in an area that you know nothing about. So I was actually looking for some more reasons to be convinced.
The idea that everyone else is plain stupid and does not want to understand is not one of them.

reader Luboš Motl said...

I am closing this thread. Idiots have won. Your numbers are impressive. Having wasted a significant amount of time with idiots here, I surrender.