Monday, March 03, 2014

Misconceptions that Lenny Susskind, Scott Aaronson may share

Most importantly, computer science cannot be fundamental in physics

Two weeks ago, I discussed a recent breakthrough in proving the Erdös discrepancy conjecture for \(C=2\). The proof is computer-assisted and not really human-checkable. It is long but doable even though you might a priori think that the problem is hopelessly difficult. Consequently, it confirmed my 2013 thesis that short questions sometimes require long answers and proofs.
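For readers who haven't seen the problem stated concretely, here is a minimal Python sketch of what the "discrepancy" of a \(\pm 1\) sequence means (my own toy illustration, not code from the Konev–Lisitsa paper): the conjecture says that for every infinite \(\pm 1\) sequence and every bound \(C\), some partial sum along multiples of a fixed step \(d\) eventually exceeds \(C\) in absolute value.

```python
# Toy illustration (not from the Konev-Lisitsa paper): the discrepancy of a
# finite +-1 sequence x[1..n] is the largest |x[d] + x[2d] + ... + x[kd]|
# over all step sizes d and all cutoffs k.

def discrepancy(x):
    """x is a list of +1/-1 values; x[0] is an unused placeholder."""
    n = len(x) - 1
    worst = 0
    for d in range(1, n + 1):
        s = 0
        for k in range(1, n // d + 1):
            s += x[k * d]
            worst = max(worst, abs(s))
    return worst

# The alternating sequence +1, -1, +1, -1, ... already fails to keep the
# discrepancy small: the entries at multiples of 4 are all -1, so their
# partial sums drift away.
x = [0] + [(-1) ** (i + 1) for i in range(1, 13)]
print(discrepancy(x))  # prints 3 for this length-12 example
```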

The solvability of the problem in "realistic" time is a reason to think that \(P=NP\) could hold, too. A "polynomially fast solving" algorithm could be constructed for every "polynomially checkable" algorithm although the former could be much longer and more time-consuming than the latter. But both of them could still require "polynomial time".
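To make the "checking vs. solving" distinction concrete, here is a minimal Python sketch (my own illustration; the particular formula is made up): verifying a proposed satisfying assignment of a CNF formula takes time linear in the size of the formula, while the naive solver below tries all \(2^n\) assignments. The \(P=NP\) question is whether the exponential search can always be replaced by a polynomial-time one.

```python
from itertools import product

# A CNF formula as a list of clauses; +i means variable i, -i means its negation
# (a DIMACS-like convention). This particular formula is just a made-up example.
formula = [[1, -2], [2, 3], [-1, -3]]   # (x1 or not x2) and (x2 or x3) and (not x1 or not x3)

def check(formula, assignment):
    """Polynomial-time verification: is every clause satisfied by the assignment?"""
    return all(any((lit > 0) == assignment[abs(lit)] for lit in clause)
               for clause in formula)

def brute_force_solve(formula, n_vars):
    """Exponential-time search: try all 2**n_vars assignments."""
    for bits in product([False, True], repeat=n_vars):
        assignment = {i + 1: bits[i] for i in range(n_vars)}
        if check(formula, assignment):
            return assignment
    return None

print(brute_force_solve(formula, 3))   # e.g. {1: False, 2: False, 3: True}
```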




In the comment section under my Erdös article, Scott Aaronson would say:

This result on the Erdös discrepancy problem, while wonderful, has no particular bearing on \(P\) vs. \(NP\). Your attempt to relate the two is merely amusing – like the efforts of Fox News to turn every story, no matter how irrelevant to politics, into yet another dig against Obama. [...]
The utterly stupid comment received something like 20 positive votes and almost no negative ones. This is the type of survey that scares me and that I was routinely exposed to as a right-wing junior professor at Harvard. Someone wants to weaken some argument or result I present, so they immediately say something about politics – e.g. point out that I consider Fox News more reasonable than Barack Obama – and suggest that "it must be the same thing". And the mindless left-wing mob votes this totally demagogic proclamation up.




Well, it turned out (thanks to Doug for the link) that Georgia Tech's Richard J. Lipton, the world's leading computer scientist, just wrote an article where he argues that the \(P\neq NP\) and related beliefs common among the complexity theorists could be just plain wrong and the Erdös result is a possible indication that it's so:
Practically \(P=NP\)?
The comments about the relationship to \(P=NP\) are so cute that I will copy-and-paste the whole relevant section:
\(P=NP\)?

Their paper can also be read at another level, below the surface, that reflects on complexity theory. Their paper shows that there are SAT solvers capable of solving hard natural problems in reasonable time bounds. What does this say about the strong belief of most that not only is \(P\neq NP\) but that the lower bounds are exponential?

I wonder sometimes if we in complexity theory are just plain wrong. Often SAT solver success is pushed off as uninteresting because the examples are “random,” or very sparse, or very dense, or special in some way. The key here, the key to my excitement, is that the SAT problem for the Erdős problem is not a carefully selected one. It arises from a longstanding open problem, a problem that many have tried to dent. This seems to me to make the success by Konev and Lisitsa special.

John Lubbock once said:
What we see depends mainly on what we look for.
Do we in complexity theory suffer from this? As they say across the Pond, this is bang-on. According to our friends at Wikipedia, Lubbock was:
The Right Honourable John Lubbock, 1st Baron Avebury MP FRS DCL LLD, known as Sir John Lubbock, 4th Baronet from 1865 until 1900, [a] banker, Liberal politician, philanthropist, scientist and polymath.
I know he lived many years ago, but note he was a polymath: hmmm… Meanwhile I remember when another Liverpool product came across the Pond 50 years ago, which some say “gave birth to a new world.”
Haven't you heard something like that before? Yes, it's exactly the same link between \(P=NP\) and the Erdös result that I mentioned two weeks ago; and it's exactly the same claim – that the fanatical \(P\neq NP\) believers are spinning and cherry-picking the evidence to confirm their pre-decided conclusions; they are rationalizing their beliefs much like many religious people do. The main problem with their attitude is that the opposite assumption, in this case \(P=NP\), may also be defended by "stories" that make it consistent with the related but not quite equivalent facts.

Scott's blog post

But the new blog post by Scott Aaronson I want to discuss is
Recent papers by Susskind and Tao illustrate the long reach of computation
He has talked to Lenny Susskind at Stanford for a few hours and promotes his new paper much like the new paper by Terry Tao. Both papers feed Aaronson's obsession with producing hype that presents computer science as fundamental (and more important than it actually is) in all other branches of human thought. This self-serving bias of Aaronson's is strong, self-evident, and incomprehensible to realists like your humble correspondent who always try to assign the right importance to things and not to fool themselves.

Tao argues that the essence of the difficulty in the $1 million Navier-Stokes Clay Institute problem is equivalent to the question whether errors appearing inside computers constructed from the Navier-Stokes water may be brought under control. It's sort of interesting, technical, I bet that Tao knows what he is doing, and I also think that this is mostly a "new, equivalent way to think about the problem", not necessarily a collection of ready-to-use tools that would be useful for those who prefer to use the older terminology. I am not too sure here.

But the bulk of Scott's blog post is dedicated to Susskind's paper. Lenny combines lots of things, especially those from the "quantum algorithm" research of the black hole quantum physics, and it's interesting and partly reminiscent of the truth. Some of the things are also conceptually flawed. Lenny is a top physicist – a more careful class of people than the class of arrogant blogging complexity theorists – so he rarely says something that is clearly unjustified by the evidence. But I think that, in between the lines, Lenny sometimes assumes similar wrong things to those that Scott states directly.

One question studied by the paper is whether the CFT dual of an Einstein-Rosen bridge (non-traversable black hole) in the bulk becomes "more unnatural/complex" if the two bridged black holes are moved away from one another. The answer is clearly Yes. When two black holes in a pair are created nearby one another, it's a beginning of a process, so the CFT state is a result of a "relatively small number of operations". However, if you want to separate the throats, you have to wait, and the evolution in the CFT time makes the character of the entanglement between various degrees of freedom very convoluted.

So far so good. But the quantum circuit complexity – essentially the total size of a quantum algorithm needed to calculate the amplitudes of the wave function that happen to be equal to the given ones – is proposed to be a "clock".

Is that OK? In principle, you could imagine a map \(\ket{\psi}\mapsto t\) that assigns some time to a state vector according to "how complicated the state vector is". Scott, and maybe Lenny, is thinking in this way. However, this is not a legitimate observable. All observables in quantum mechanics are given by (Hermitian) linear operators on the Hilbert space. The proposed "complexity" map just isn't a linear operator; the map is explicitly non-linear. To mention another argument, you wouldn't know what the eigenvalues and eigenvectors are and why they should be orthogonal to each other. So there can't be any gadget that actually measures such "time". This "complexity time" isn't an observable for the same reason why \(\ket{\psi}\) itself isn't an observable – and why it isn't observable (without the word "an"). You can't measure the individual complex amplitudes in \(\ket{\psi}\) because it is not a classical field. It is a complexified probability (amplitude) wave, stupid.
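A concrete way to see the obstruction: the expectation value of any genuine observable \(A\) is the quadratic form \(\langle\psi|A|\psi\rangle\), which obeys the identity \(f(\psi+\phi)+f(\psi-\phi)=2f(\psi)+2f(\phi)\), while a "complexity of the state" assignment does not. The toy numpy check below is my own illustration – the crude stand-in for "complexity" (the number of sizable amplitudes in a fixed basis) is not the real circuit complexity, but it shares the fatal feature of being a nonlinear functional of the state.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8

def random_state(dim):
    v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return v / np.linalg.norm(v)

# A genuine observable: a random Hermitian matrix.
M = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
A = (M + M.conj().T) / 2

def expectation(psi):
    return np.real(psi.conj() @ A @ psi)

def toy_complexity(psi, eps=0.1):
    # Crude stand-in for "state complexity": number of sizable amplitudes in a
    # fixed basis. NOT the real circuit complexity, just nonlinear like it.
    return np.count_nonzero(np.abs(psi) > eps)

psi, phi = random_state(dim), random_state(dim)

for f, name in [(expectation, "Hermitian expectation"), (toy_complexity, "toy complexity")]:
    lhs = f(psi + phi) + f(psi - phi)
    rhs = 2 * f(psi) + 2 * f(phi)
    print(f"{name}: identity violated by {abs(lhs - rhs):.3f}")

# The Hermitian expectation satisfies the identity to machine precision; the
# nonlinear "complexity" functional generically does not, so it cannot be the
# expectation value of any linear operator.
```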

Therefore, it is very important from a physics viewpoint not to treat such "functions" as true observables. It is important to realize that they actually cannot be measured by any gadgets in the real world. But even if they could be measured, it would be important to realize that there can also be other clocks (in fact, there can only be other clocks because these "complexity clocks" don't exist). There are other related problems with Lenny's statements that could be blamed on his partial rape of the basic postulates of quantum mechanics.

But I want to focus on a more general topic, the problems with all similar papers claiming that complexity theory is important for the understanding of black hole physics etc. Conveniently enough, Scott articulately summarizes some of these assumptions in bold face fonts (although he incorrectly presents them as conclusions of a proof). One of them says:
There’s no way to sugarcoat this: computational complexity limitations seem to be the only thing protecting the geometry of spacetime from nefarious experimenters.
He said it very clearly but I think that some or many authors trying to bring the complexity theory into black hole quantum physics would subscribe to this thesis. But I am convinced that such a fundamental role for complexity theory in black hole dynamics is physically impossible. The reason? Well, in plain English, the most comprehensible reason is that physics doesn't determine moral values, as I said, so it can't label some experimenters as "nefarious". Whether some experiment is "nefarious" is just our human (and subjective) interpretation. The laws of physics (probabilistically) predict the outcome of all experiments, both "nefarious" and "virtuous" ones.

In other words, the important point is that
[A] computation is just another physical process.
It is a physical process that was probably "intelligently designed" by someone who had some "purpose" and who found the computation procedure "useful" for something. But it is a physical process and physics offers no opinion about who is "intelligent" or "designed" or "useful" or having a "purpose". If a computation were able to unmask a paradox (contradictory predictions for the same experiment), many other processes would be enough to see the contradiction, too.

Equally importantly,
[A] computation is a process composed of many other steps that are more physically fundamental than the sequence as a whole.
This assertion is really reductionism applied to computation.

Fundamental physics studies the individual steps that the computation may be composed of; the wisdom behind the computation is a matter of "applied science" or "engineering". One may determine the outcome of a measurement after a process that includes some "computation" by combining or composing the results predicted for the individual steps. A contradiction, if real, would have to hide in some steps.

Many steps may be different and indeed, it's the point of Papadodimas and Raju – and their followers. Locality starts to break down if we make many measurements. But the quantity that measures whether assumptions will break down is not our algorithmic cleverness. It is the number of operators inserted somewhere – a quantity that is much more impersonal, much less linked to people's creativity and intelligence etc.

One common theme that seems to attract some researchers to the dead end is their underestimation of irreversibility. Most states of a system with many degrees of freedom look "thermal". Some special states don't. So for special initial states, we may get special results. For example, we may create a "firewall" on the surface of a black hole if we arrange the initial state of the collapsing star so that some astronauts try to keep themselves above the horizon for a long time and to kill everyone who tries to enter the black hole interior. It's surely possible to have "special microstates" of a black hole that have this property – that exhibit a firewall.

But this is really not contradicting any claim about the non-existence of firewalls. When we say that firewalls don't exist, we mean that they don't exist on the surface of a black hole, and by a black hole, we either mean a generic enough pure microstate resembling the black hole, or some mixed ensemble (i.e. averaging over all, i.e. mostly generic, microstates). The initial state producing the firewall (with the astronauts) isn't really a "black hole". It must be special.

Also, I feel that people are imagining "special states" like the liquid with mixed colors in this video:



Reversible laminar flow.

The nearly homogeneous liquid in this video is obtained by mixing red, blue, and green droplets of some ink. But by turning the crank backwards, we may actually return to the nice three colorful droplets, more or less. So the mixing is reversible!

It is surely a cute experiment but this property never holds at the level of the elementary degrees of freedom, because of the (strengthened) second law of thermodynamics. Unless the system is in true equilibrium, the entropy goes strictly up. Because the reversal would need the entropy to go backwards, i.e. down, the reversal is impossible for generic systems.

This point is actually nearly equivalent to the point I was making when I criticized the "complexity clocks" above. When the state vector of a physical system becomes nearly thermal, the deviation from the "exact thermal" states is actually becoming physically non-existent. There's just no operation that "complex (or CPT) conjugates the wave function" (i.e. that reverses all the signs of velocities etc.) so that its evolution would continue backwards. Machines that complex conjugate the wave function cannot exist because all gadgets manipulate the wave function as some kind of evolution operators (obtained from Hamiltonians that include some couplings to components of a computer or apparatus) which are always unitary but the complex (or CPT) conjugation is an antiunitary operator. So they're qualitatively different. At most, you may emulate an antiunitary operator's action on some "real subspace of the Hilbert space" by the action of a unitary operator (i.e. by something that admits a gadget); on that subspace, the complex conjugation does nothing. But there is nothing natural about a "real subspace" of a Hilbert space. All nontrivial quantum systems instantly evolve any "real state vector" into generic complex ones.
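A tiny numpy sketch (again my own illustration) of the key algebraic point: any physically realizable evolution \(U\) is linear, so \(U(i\ket{\psi})=iU\ket{\psi}\), while complex conjugation \(K\) obeys \(K(i\ket{\psi})=-iK\ket{\psi}\) and preserves inner products only up to complex conjugation – which is exactly what "antiunitary" means and why no gadget's unitary evolution can emulate it on the full complex Hilbert space.

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 4

psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
phi = rng.normal(size=dim) + 1j * rng.normal(size=dim)

# A generic unitary (from the QR decomposition of a random complex matrix).
Q, _ = np.linalg.qr(rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim)))

K = np.conj   # complex conjugation in the chosen basis (antiunitary, not a matrix)

print(np.allclose(Q @ (1j * psi), 1j * (Q @ psi)))   # True: unitaries are linear
print(np.allclose(K(1j * psi), 1j * K(psi)))         # False: K(i psi) = -i K(psi)
print(np.allclose(np.vdot(K(psi), K(phi)), np.conj(np.vdot(psi, phi))))  # True: antiunitary
```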

If you want to reverse the evolution of a physical system to a very complicated state, you could also try to record the evolution and use some computation to figure out what to do with the final state so that it returns back. But such an addition of the "tape recorder" would change the experiment including the evolution of the original system. So none of these tricks can really make a non-existent paradox appear or make a real paradox disappear. All the miraculous abilities that some people are trying to associate with the "computation" are really missing the key point – namely that the computation, if embedded in the real world, is just a sequence of mundane, boring physical steps.

There's one more general reason why some people tend to think that it's important for a physicist to know whether some fast enough (quantum) algorithm exists: they think that they know what the result of an experiment involving a computation procedure should look like. But that's only because they are prejudiced about the result. In real physics, the result of an experiment involving computation – a long sequence of many boring, elementary physical steps – is always a heavily derived, composite, non-fundamental result. So if complexity theory says that something may be done, it may be done. If it is impossible, it is impossible – regardless of your expectations. But any physical theory that is able to make unique predictions for the results of a succession of elementary physical steps will be able to make unique predictions for a process involving a computation as well – simply because the computation is nothing else than another collection of mundane physical operations.

I think that the attempted principles that "something cannot be computed in time" are being proposed as a hypothetical counterpart of the Heisenberg uncertainty principle. But the uncertainty principle doesn't work like that. It doesn't just say that someone will prevent us from constructing a gadget that measures \(x,p\) at the same time. The Heisenberg uncertainty principle literally says that well-defined values prescribed to \(x,p\) simultaneously are logically impossible. If \(x,p\) had particular \(c\)-number values, it couldn't be true that \(xp-px=i\hbar\) because \(c\)-numbers just can't do such tricks. I don't need to consider any "space of possible apparatuses" to derive this conclusion. The Heisenberg uncertainty principle isn't just a limitation of some engineering – our ability to build apparatuses. The principle constrains the truth itself.
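For concreteness, here is a small numpy check (my illustration, with \(\hbar=1\)) of the algebraic fact the argument relies on: the standard ladder-operator construction of \(x,p\) reproduces \(xp-px=i\hbar\) up to the unavoidable truncation artifact in the last entry, while any pair of \(c\)-numbers trivially gives \(xp-px=0\).

```python
import numpy as np

N = 8                                  # truncated harmonic-oscillator basis, hbar = m = omega = 1
n = np.arange(1, N)
a = np.diag(np.sqrt(n), k=1)           # annihilation operator
ad = a.conj().T                        # creation operator

x = (a + ad) / np.sqrt(2)
p = 1j * (ad - a) / np.sqrt(2)

comm = x @ p - p @ x
print(np.round((comm / 1j).real, 3))
# Prints (approximately) the identity matrix, except the last diagonal entry,
# which is an artifact of cutting the infinite-dimensional space off at N levels.
# Two c-numbers x, p would give x*p - p*x = 0, never i*hbar, which is the sense
# in which simultaneously well-defined values of x and p are logically impossible.
```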

That's why important physical principles cannot depend on the constraints limiting the speed of algorithms. The uncertainty-like principles restrict the truths about the observables themselves, not just some gadgets. A gadget is just something that tells us a value of an observable but it makes sense to talk about a value of an observable regardless of the detailed form of the measurement apparatus.

Let me just repeat that Scott mentions a conclusion about the length of the wormholes:
[T]he quantum state getting more and more complicated in the CFT description—i.e., its quantum circuit complexity going up and up—directly corresponds to the wormhole getting longer and longer in the AdS description.
This fact isn't hard to see – the separation of the throats directly corresponds to some evolution of the black holes with inertia, and the added evolution complicates things. But it's important to know that unless we change the experiment by putting sensors everywhere (and be sure that in quantum mechanics, lots of extra sensors always make a difference in principle), the complexity of the final pure state effectively becomes "maximal" after some point.

One of the main claims of Susskind's paper seems to be that the evolution of the "complexity time" should be taken literally and that this time eventually stops, according to the definition (which isn't given by any linear operator). And Susskind thinks that this is when the black hole starts to behave as a firewall. I don't think that this claim has any logic. Such a state is even more generic than the state after a shorter time, and the more generic a state we consider, the less likely we are to see any special patterns such as firewalls!

Aaronson also asks (in the bold face):
My question is: could you ever learn the answer to an otherwise-intractable computational problem by jumping into a black hole?
No. ;-) This question is a wonderful example of his perverse, upside-down reasoning that tries to pretend that complexity theory may be fundamental in physics. How does one end up with similar ludicrous comments? How could one of the most stupid suicidal acts – the jump into a black hole – replace one of the brightest (and perhaps impossible) deeds, i.e. the programming of a very fast algorithm for a very difficult problem? Well, he links the answer to some fundamental question in physics – like whether locality holds accurately enough somewhere inside the black hole – with a question in computer science that is utterly non-fundamental in physics, typically with the existence or non-existence of some fast enough algorithm.

If such connections existed, it would be possible to deduce that e.g. a jump into the black hole interior is equivalent to the solution of an otherwise intractable algorithmic problem. However, such connections cannot exist exactly because computer science just isn't fundamental in any physical sense. So if we want to ask whether something may be computed quickly enough in physics, we must always assume that the computation is composed of many steps that are given by mundane and transparent operations.

When we do things right, it becomes totally clear that black holes are not helpful in speeding up algorithms. They may change the parametric values of time but they do so in the opposite way. If you send a computer somewhere right above the event horizon, 1 nanosecond of one Intel operation is measured in the local time but the observer at infinity may view it as a week, due to the small value of \(g_{00}\). So the calculation (performed by a NASA-Intel joint venture in which NASA shoots Intel processors near a black hole, hoping that it allows the calculation to be sped up) actually becomes much slower. The sign is no coincidence. You would need negative-mass sources of gravity for the red shift to become the blue shift but negative-mass objects would mean an instability of the spacetime – and the instability involves infinitely many degenerate "zero-energy" states (with many particle pairs whose energy cancels) which contradicts the holographic entropy bounds. Relativity bans tachyons and therefore instabilities and the holographic principle is moving us further in the same direction (as far as the information density and the speed of calculation goes) – it says that even the stable low-energy effective field theory overstates the total allowed number of degrees of freedom and/or the number of operations that your computers may do.
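A back-of-the-envelope Python estimate (my own numbers, only meant to show the size and the sign of the effect): for one nanosecond of the processor's proper time to look like a week to the distant observer, the chip would have to hover at \(1-r_s/r\sim 10^{-30}\) – and even then the distant observer sees the computation slowed down, never sped up.

```python
import math

tau = 1e-9                  # proper time of one processor operation: 1 nanosecond
t_infinity = 7 * 24 * 3600  # the same tick as seen by the distant observer: one week

# Schwarzschild time dilation: d(tau)/dt = sqrt(1 - r_s/r) = sqrt(g_00)
dilation = tau / t_infinity
one_minus_rs_over_r = dilation ** 2

print(f"sqrt(g_00) needed: {dilation:.3e}")
print(f"1 - r_s/r needed:  {one_minus_rs_over_r:.3e}")
# The chip must hover fantastically close to the horizon (r exceeds r_s by a
# fractional amount of order 1e-30), and the sign of the effect is that the
# distant observer sees the computation run slower, never faster.
```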

You may also jump into the black hole interior but then you face additional problems: you only have a finite period of time for the calculation before your computer is destroyed (and, by the way, before you are killed, too). This limited time (and space) inside the black hole also prevents you from measuring frequencies and wavelengths too accurately, and so on. So computers using the extra effects of black holes are worse than computers outside black holes. The black hole physics really tends to slow down the computation relative to the expectations of non-gravitational physics, not to speed it up. The first example of that is the holographic entropy itself: too many RAM chips in too small a region of space inevitably collapse into a black hole. You can't have more than \(S=A/4G\) bits (well, more precisely, nats) of information in a region bounded by the surface area \(A\) although this limitation wouldn't exist for \(G=0\) i.e. in non-gravitational physics.
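To put a number on the bound \(S=A/4G\), here is a short Python evaluation (my arithmetic with the standard constants): one quarter of the area in Planck units, expressed in nats and bits, for a sphere of radius one meter.

```python
import math

hbar = 1.054571817e-34   # J s
G = 6.67430e-11          # m^3 kg^-1 s^-2
c = 2.99792458e8         # m/s

l_planck = math.sqrt(hbar * G / c**3)      # ~1.6e-35 m
R = 1.0                                    # radius of the bounding sphere in meters
A = 4 * math.pi * R**2

S_nats = A / (4 * l_planck**2)             # S = A/4G in hbar = c = k_B = 1 units
S_bits = S_nats / math.log(2)

print(f"Planck length: {l_planck:.3e} m")
print(f"Holographic bound for R = 1 m: {S_nats:.3e} nats = {S_bits:.3e} bits")
# Roughly 1e70 bits -- a huge number, but finite, and there would be no bound
# at all in non-gravitational physics, i.e. in the G -> 0 limit.
```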

So all this thinking of theirs about the hypothetical "speedup" allowed by black holes is upside-down. Black holes sometimes slow the algorithms down, tell you that you can't make too many operations before you're killed, you can't have too much memory in a small volume, and so on. So black holes make physics more immune against paradoxes that could be caused by clever algorithms. But this immunity is there in a consistent physical theory from the beginning, anyway.

A feature of black holes that may attract these researchers is the "fast thermalization" – the way the unitary evolution operator quickly "complicates" the initial state vector. While black holes are fast in this respect, this high speed is really useless. It doesn't allow us to solve any useful problems we would a priori want to be solved. The thermalization is something that erases useful information. The evolution of a black hole in an otherwise empty space really corresponds to the matter's being sucked by the black hole singularity and the black hole interior's becoming empty rather quickly. So all useful degrees of freedom are gone. The useless, thermalized degrees of freedom are still there but the exact microstate is extremely sensitive to the initial state so that only probabilistic predictions are meaningful after some finite time, anyway (simply because you just can't prepare the initial state with the required exponential accuracy).

Whenever you want to do computation, you actually want to keep many degrees of freedom predictable and non-thermalized; that's also why we want to cool quantum computers (and even our laptop's Intel chips) down. The idea that our computational abilities improve when temperature goes up and thermalization is important is upside down, too. If I had to explain this simple point (computers need cooling) to a global warming bigot, I would probably tell him: Global warming reduces the quality of computation.

If you agree that quantum computers need a low enough temperature, (small enough) black holes make computation inevitably harder for another trivial reason: they carry nonzero (Hawking) temperature. As long as it is a (nearly empty) black hole, it just emits the radiation, so from the external observer's viewpoint, it is a source of thermal noise and decoherence.
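For scale, a short Python computation of the standard Hawking temperature \(T=\hbar c^3/(8\pi G M k_B)\) (my arithmetic): astrophysical black holes are extremely cold, but the temperature grows as the hole shrinks, so exactly the "small enough" black holes one could imagine wiring into a computer are the hot, noisy ones.

```python
import math

hbar = 1.054571817e-34   # J s
G = 6.67430e-11          # m^3 kg^-1 s^-2
c = 2.99792458e8         # m/s
k_B = 1.380649e-23       # J/K
M_sun = 1.98892e30       # kg

def hawking_temperature(M):
    """Hawking temperature of a Schwarzschild black hole of mass M (kg), in kelvins."""
    return hbar * c**3 / (8 * math.pi * G * M * k_B)

for mass, label in [(M_sun, "solar-mass"), (1e12, "asteroid-mass (1e12 kg)")]:
    print(f"{label} black hole: T ~ {hawking_temperature(mass):.3e} K")
# The solar-mass hole sits at ~6e-8 K (colder than the CMB), but the smaller
# the hole, the hotter it gets -- the opposite of what a quantum computer wants.
```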

The frantic attempts to pump complexity theory into quantum black hole physics are fashionable because they look cool and interdisciplinary but at the end, I am convinced that most of such proposed ideas are in conflict with the basic principles of reductionism – with the fact that computation in the real world is (and must be) just another physical process that is composed of more fundamental operations – and most of the intuition they have about the mutual influences between black holes and computation speed etc. are upside down. I feel that these sign errors are rather critical but many people don't seem to care – this is a hint of this interdisciplinary field's becoming a form of the New Age religion rather than good science.


snail feedback (18) :


reader Gene Day said...

I agree completely, Lubos. In fact, it seems obvious to me that computation is just another physical process rather than anything fundamental. These people are just fooling themselves but we all do that.


I’m sure you understand the red-green-blue mixing demonstration but many of your readers may not. If you were to view the demo from above (or below) you would see that the colors are never mixed together.


reader John Archer said...

Who do you think will play the Scott Aaronson character when Hollywood makes the film? Jeff Goldblum?

His background as a bollocks-talking 'chaos theorist' in Jurassic Park should make him ideal for the role. :)


reader Luboš Motl said...

Exactly, Gene.


Yes, the colors simply cannot be mixed at the atomic scale because this would be inevitably irreversible.


reader Uncle Al said...

Nice video - try reversing a ten-stage static mixer. "Mixing" is stretch and fold (taffy puller, samurai steel), not stirring. Remember that when mixing quick-set epoxy. Glass plate, metal spatula. Stir it up, scrape it flat, scrape it back. Quickly repeat a few times.

"New Age religion rather than good science." Quantum gravitation and SUSY are mathematically flawless but empirically sterile. Do a geometric Eötvös experiment. The worst it can do is succeed, explaining everything including "dark matter" and matter versus antimatter abundance. Cartography failing to map an undistorted flat Earth cannot be explained within Euclid.


reader Plato Hagel said...

Instituting an experimental argument is necessary, when it comes to symmetry in the relation of viscosity and entanglement? Light in FTL is medium dependent?

This sets up an analogue example of the question of firewalls, as to imply black holes and information?

Layman wondering.


reader Gene Day said...

Yep, you can’t forget about entropy even though many people would like to.
Your black hole analog is fascinating.
Thanks.


reader Mark Luhman said...

Lubos, I liken it this way: communism was on the edge, most people did not know it, Reagan reconsidered it and gave it a kick. Would it have fallen into the abyss on its own? Probably, but I think Reagan, Thatcher and John Paul helped push the rock over the edge.


reader Mark Luhman said...

recognized not reconsidered


reader John Archer said...

"I can't believe this guy is still the President."

I was hoping someone would shoot Bliar too. We had to put up with that utter c##t for a full 10 years. And he's still breathing.

Some things are just hard to fathom.


reader Luboš Motl said...

I would personally subscribe to that appraisal.


reader David Miller said...

Lubos,

Cohen was on again this afternoon (Monday) on CNN on Wolf Blitzer's "Situation Room," on a panel with Newt Gingrich and CNN correspondent Christiane Amanpour. After Cohen spoke briefly, Blitzer turned to Gingrich, who said that Cohen had some good points, although Gingrich went on to discuss the need to restrain Putin in further actions.

This partial agreement between Cohen and Gingrich really set Amanpour off, who turned on both men. However, the real fireworks came when Blitzer reported on what Russian UN ambassador Vitaly Churkin had claimed at the UN: Amanpour went ballistic, interrupting and yelling at Blitzer and claiming that Blitzer was reporting Churkin's claims as "facts," although Blitzer was clearly not doing so.

Cohen smiled and suggested that there seemed to be a civil war within CNN.

It was all very revealing: A savvy politician, Gingrich, was willing to take Cohen's expertise seriously. A senior anchor, Blitzer, let everyone have his or her fair say. And, Ms. Amanpour, in her eagerness to demonize Putin (and, derivatively, Cohen, Gingrich, and her own colleague, Blitzer) displayed herself as an out-of-control and very rude hysteric.

I think they are over-reaching. Sensible people are starting to see that the demonization of Russia and Putin is being motivated by something other than an attachment to reality.



Yes, it is reminiscent of the global warming fraud, but I think they have over-reached there, too. It's a bit of a rough winter for the eastern US, and the global-warming dogmatists are starting to look pretty silly.


Dave


reader Luboš Motl said...

Thanks for this interesting report, Dave. I didn't know the name Amanpour although via Google images, I see that I clearly know her face.


It would be interesting to try to quantify how big a fraction of the policies etc. is actually being constructed by media people like her.


reader NikFromNYC said...

“But the uncertainty principle doesn't work like that. It doesn't just say that someone will prevent us from constructing a gadget that measures x,p at the same time. The Heisenberg uncertainty principle literally says that well-defined values prescribed to x,p simultaneously are logically impossible. / The Heisenberg uncertainty principle isn't just a limitation of some engineering – our ability to build apparatuses. The principle constrains the truth itself.”

But explain to a physics layman where a principle "says" something like this. How is such a strong statement supported by a few equations on a page seemingly about the limits of measurement, which this layman can rationalize as being due to small probes perturbing small systems? How does logic dictate what sounds like a mere philosophical interpretation? Do the numbers anywhere change under this versus a pure measurement-limit interpretation, and if not, then what is the actual linguistic nature of such a strange-sounding claim about underlying reality?


reader Luboš Motl said...

Well, the verbal interpretations of the uncertainty principle *should* be essentially non-mathematical formulations of the fact that


x p - p x = i hbar.


Or, more weakly, the claim that x,p don't commute with each other. From this fact, it directly follows that x,p can't be "equal" to two ordinary well-defined numbers.


The principle including the equation for the commutator above is backed by the evidence. The commutator may literally be measured, with some thinking about what it means. And it is completely independent of any particular apparatuses.


So one may always say a weaker statement and call it the uncertainty principle but the scientific evidence actually does agree with the principle in the stronger form. This claim is not about philosophies or linguistic preferences. It's about the truth that may be expressed mathematically. The mathematical incorporation of the principle does work independently of any particular apparatuses.


The mathematical formulation of any principle is needed to make it well-defined, and when it's done, one may see that x,p literally can't be well-defined numbers at the same moment. Whether it "sounds strange" according to someone's expectations based on language or classical physics experience is completely irrelevant from a scientific viewpoint. It's his psychological problem only.


reader kashyap vasavada said...

Very interesting debate, Lubos. So if I understand correctly, some of the explanations of the BH information paradox end up violating the 2nd law somewhere if you consider all the degrees of freedom.


reader Luboš Motl said...

Well, I wouldn't go quite that far. The essence of the "black hole information" problem is probably independent of issues of the 2nd law. But people tend to overlook the 2nd law when they talk about white holes. Lenny seems to overlook it when he effectively says that he creates a special state just by waiting. People generally overlook the growth of entropy when they think that they may measure too detailed information inside a thermalized, stable black hole. But most of the information is irreversibly lost and one needs an exponentially huge gadget - which must therefore be outside the black hole - to detect the exponentially tiny correlations etc. after the black hole grows old.


reader NikFromNYC said...

You just gave me about a year of serious spare time homework, with a clear direction to it now.


reader Peter F. said...

You, Lumo, are the most firmly philosophical and fierce fighter against faulty thinking and falsehoods that I have come across. :-)
On the other hand, it also feels good for me to recognize that not even you can be consistently mistake-free! :D
Not that I was in any specific way reminded of this "other hand" by this article (other than by the trivial trip-up that caused the word latter to be spelt letter.)