Saturday, May 04, 2013

Aaronson's anthropic dilemmas

This text has been expanded and now covers the rest of the book. Originally posted on May 1st.

If you read my previous observations on Scott Aaronson's book, including all the comments, you will see my remarks on all the chapters up to Chapter 15, the one about the quantum computation skeptics – where I agree with almost everything Aaronson writes, although he seems to focus on the dumb criticisms and writes too little about the more intelligent ones (and, e.g., about the error-correcting codes).

Chapter 16 is about learning; it contains perhaps too much formalism when compared with the relatively modest implications for our understanding of the process of learning.

Chapter 17 is the most hardcore "computational complexity" part of the book and hopefully the last one that focuses intensely on the complexity classes. It's about interactive proof systems. Aaronson often wants to present all of computer science as a "fundamental scientific discipline", so he tries to apply these superlatives to the aforementioned "interactive issues", too.

I have a lot of trouble getting excited about these problems.

In an interactive proof system, two beings – a verifier and a prover – exchange messages whose goal is to ascertain whether a given string belongs to a language or not. The prover cannot be trusted while the verifier only has finite resources. It looks like an immensely contrived game – one from game theory – to me. Detailed questions about such a game seem about as non-fundamental to me as the question whether chess is a draw.

The only true reason why I would want to prove \(P=NP\) or its negation (or even the numerous less important results of this sort) would be to get a million dollars.

Needless to say, I think that Scott Aaronson is one of the world's top professionals in computational complexity theory – and I think that the quantum aspect is an optional cherry on the pie for him, an extra X-factor that he adopted to feel rather special among the computational complexity theorists themselves.

But for me, this is a portion of mathematics that is completely disconnected from the fundamental problems of natural sciences. I like to think about important scientific problems. But the complexity papers aren't really about the beef, about particular problems. They are thinking about thinking about problems – and they don't really care what the "ultimate" problems are and whether they're true (e.g. in Nature). In this sense, suggesting that this is a fundamental layer of knowledge about the world or existence is as silly as the proclamations of anthropologists who study dances of wild tribes in the Pacific but who also try to study the interactions among scientists. These anthropologists are trying to put themselves "above" the physicists, for example, even though in reality, they are inferior stupid animals in comparison – people who completely miss the beef of physics and who may only focus on the irrelevant, superficial, sociological makeup on the surface. In some sense, Scott as a computational complexity theorist is doing the same thing as the anthropologists but with more mathematical rigor. ;-)

Moreover, computational complexity theory seems to be all about a particular "practical" quantity I don't care about much – namely computational complexity itself. I am probably too simple a guy but I primarily care about the truth, especially the truth about essential things, and I don't really care how hard it is to find or establish that truth. So the whole categorization of problems into polynomially or otherwise easy ones – and Aaronson defines dozens of complexity classes and discusses their relationships – is just orthogonal to the things I find most important.

But let me stop with these negative-sounding remarks about the discipline. Computer science is surely a legitimate portion of maths and Aaronson talks about it nicely.

Chapter 18 is about "fun with the anthropic principle".

This part of the book doesn't require any physics background – because this principle, as used by some physicists, isn't about any scientific results, either. It's about their emotional prejudices and unsubstantiated beliefs in proportionality laws between probabilities and souls (which boil down to the fanatical egalitarianism of many of these folks).

The chapter is at least as wittily written as the rest of the book. The end of the chapter talks too much about complexity again but let's focus on the defining dilemmas in the early parts of the chapter. After a sensible introduction to Bayes' formula and its simple proof, Aaronson talks about some characteristic problems in which people's attitudes to the anthropic reasoning dramatically differ.
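
For the record – this is the completely standard formula, reproduced here only to fix the notation used below – Bayes' theorem for a hypothesis \(H\) and observed evidence \(E\) reads
\[
P(H|E) = \frac{P(E|H)\,P(H)}{\sum_{H'} P(E|H')\,P(H')},
\]
where the sum runs over the mutually exclusive hypotheses \(H'\) (here just heads and tails). All the controversies below are about what to substitute for \(E\) and \(P(E|H)\), not about the formula itself.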

Hair colors in the Universe

At the beginning, God flips a fair coin. If the coin lands heads, He creates two rooms – one with a red-haired person and one with a green-haired person. If it lands tails, He creates just one room with a red-haired person.

You find yourself in a room with mirrors and your task is to find the probability that the coin landed heads. Well, you look into the mirror that's a part of each such room. If you see you are green-haired, the probability is 100% that the coin landed heads because the other result is incompatible with the existence of a green-haired person.

What about if you see you are a redhead?

A natural (and right!) solution, one mentioned at the beginning, is that the probability is 50% that the coin landed heads. The existence of a redhead is compatible with both theories (heads/tails) so you are learning nothing if you see a redhead in the mirror. You should therefore return to the prior probabilities and both theories, heads and tails, have 50% odds by assumption.

In my opinion (LM), this is really the most correct calculation and justification one may get. I tried to "improve" Aaronson's justification a bit.

Now, one may also (incorrectly!) argue that the probability of heads is just 1/3 instead of 1/2 if we see a redhead. In this argument, the tails hypothesis is twice as likely as the heads hypothesis, 2/3, because – and again, this is an explanation using my language – it makes a more nontrivial, yet correct, prediction of the observed hair color. The heads hypothesis allows both colors so the probability that "you" will be the person with the red hair color is just 1/2.
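
Spelled out via Bayes' formula, this disputed calculation – the one criticized below – substitutes \(P({\rm redhead}|{\rm heads})=1/2\) and \(P({\rm redhead}|{\rm tails})=1\) together with the 1/2 priors, obtaining
\[
P({\rm heads}|{\rm redhead}) = \frac{\frac{1}{2}\cdot\frac{1}{2}}{\frac{1}{2}\cdot\frac{1}{2}+1\cdot\frac{1}{2}} = \frac{1}{3}.
\]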

But I believe this argument is just wrong. It doesn't matter how predictive the hypotheses are! By assumption, the prior probabilities of heads and tails were 50% vs 50%. The tails hypothesis is more predictive because it allows you to unambiguously predict your hair color – it has to be red because you're the only human in that Universe. But we know that this doesn't increase the probability of tails above 50%.

For that reason, we also don't need an additional "adjustment" of the argument – an adjustment that is wrong by itself as well – that returns the value 1/3 back to 1/2. We may return from 1/3 to 1/2 if we give the Universes with larger numbers of people – in this case, the heads Universe – a higher "weight". But there is no reason to adjust these weights. The point is that the prior probabilities of heads and tails are completely determined here by an assumption, so any inequivalent "calculation" of these prior probabilities based on the number of people in the Universe is wrong. We just know it to be wrong. We're told it is wrong!

Aaronson "calculates" the value 1/3 of the probability by Bayes' formula. But the calculation is just conceptually wrong because the prior probabilities of heads/tails are given as 1/2 vs 1/2 at the very beginning and the observation of a redhead provides us with no new data and no room to update the probabilities of hypotheses. The observation of a greenhead does represent new data. The arguably invalid update in the case of the observation of a redhead plays one role: to counteract the update from the green observation so that the probability of heads weighted-averaged over the people in the Universe will remain equal to the probability of tails. But it's not the redhead's "duty" to balance things in this way. By seeing his red color, he just learns much less information about the Universe than the greenhead (namely nothing) so he has no reasons to update.

Using slightly different words, I may point to a very specific error in the Bayesian calculation leading to the result 1/3, too. Aaronson says that the probability \(P({\rm redhead}|{\rm heads})\) is equal to 1/2 – probably because in the two-colored heads Universe, there are two folks and they have "the same probability". But that's a completely wrong interpretation of the quantity that should enter this place of Bayes' formula. The factor \(P(E|H)\) that appears in the formula should represent the probability with which the hypothesis \(H\) predicts some property of the Universe we have actually observed, \(E\), i.e. the evidence. And what we have observed isn't that a random person in the Universe is a redhead. Instead, we have observed that our Universe contains at least one redhead; in particular, the predicted probabilities \(P({\rm redhead}|{\rm heads})+P({\rm greenhead}|{\rm heads})\) don't have to add up to one because both "redhead" and "greenhead" refer to the observation of at least one human of the given hair color, so these two colorful observations are not mutually exclusive. (You had better avoid propositions with the word "I" because this word is clearly ill-defined across the Universes; there's no accurate "you" or "I" in a completely different Universe than ours because the identification of the right Universe around you is a part of the precise specification of what "I" or "you" means; you should treat yourself as just another object in the Universe that may be observed, otherwise you may be driven into spiritually motivated logical traps.) The probability of this actual observation – evidence – is predicted by the heads hypothesis to be 1, not 1/2. With the correct value 1, we get the correct final value 1/2 for the probability that the heads scenario is right!
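
The difference between the two readings of the evidence can be made completely mechanical. Here is a minimal Monte Carlo sketch (my own illustration, not anything from the book; the function and variable names are made up) that estimates the probability of heads under both conditionings, on "the Universe contains at least one redhead" and on "a uniformly sampled inhabitant turns out to be a redhead":

```python
import random

def hair_color_puzzle(trials=1_000_000):
    """Estimate P(heads) under the two competing readings of the evidence."""
    exists_heads = exists_total = 0    # evidence: at least one redhead exists
    sampled_heads = sampled_total = 0  # evidence: a random inhabitant is red
    for _ in range(trials):
        heads = random.random() < 0.5
        people = ["red", "green"] if heads else ["red"]
        # Reading 1: condition on "the Universe contains at least one redhead"
        # (true in both Universes, so nothing is learned).
        if "red" in people:
            exists_total += 1
            exists_heads += heads
        # Reading 2 (self-sampling): pick a uniformly random inhabitant and
        # condition on that inhabitant being a redhead.
        if random.choice(people) == "red":
            sampled_total += 1
            sampled_heads += heads
    print("P(heads | a redhead exists)      ~", exists_heads / exists_total)   # ~ 1/2
    print("P(heads | sampled person is red) ~", sampled_heads / sampled_total) # ~ 1/3

hair_color_puzzle()
```

The first conditioning reproduces the 1/2 defended above; the second reproduces the 1/3, which rests precisely on the self-sampling step ("I am a random inhabitant") that the text rejects.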

I must mention the joke about the engineer, physicist, and mathematician who see a brown cow from the train. The first two guys say sloppy things – "cows are brown here" (engineer); "at least one cow is brown here" (physicist) – but the mathematician says that there's at least one cow in Switzerland that's brown at least on one side. This is the correct interpretation of the evidence! The situation in the previous paragraph is completely analogous. (There's a difference: people are less afraid to say unjustifiable and/or wrong propositions that are probabilistic in character, e.g. "I am generic", than Yes/No statements about facts that are "sharply wrong" if they're wrong. But probabilistic arguments and conclusions are often wrong, too!) I am surprised that even Scott Aaronson either fails to distinguish the different statements or deliberately picks one of those that actually don't follow from the observations. This is the kind of elementary schoolkid's mathematical sloppiness that powers most of the anthropic reasoning.

Finally, the error of the Bayesian calculation may also be rephrased as its acausality. It effectively assumes that the probabilities of different initial states are completely adjustable by some backward-in-time notion of randomness even though they may be determined by the laws of physics – and by the very formulation of this problem, they are indeed determined by the laws of physics in this scenario!


The madman's dice

A madman kidnaps 10 people, puts them in a room, throws 2 dice, and if he gets 1-1 (snake-eyes), he kills everyone. If he gets anything else, he releases everyone, kidnaps 100 other people, confines them, and throws again. Again, 1-1 means death for everyone; another result means that the 100 people are released and 1,000 new people are kidnapped. And so on, and so on.

You know the rough situation and you know that you're kidnapped and confined in the potentially lethal room (but you don't know whether some people have already been released). What's the probability that you will die now?

Obviously, you know the whole mechanism of what will happen. He will throw the dice. The probability of getting 1-1 is obviously 1/36. That's the chance that you will die.

Aaronson presents a different, "anthropic" calculation telling you that the chances of dying are vastly higher, essentially 9/10. Why? Well, the madman almost certainly releases the first 10 people and then probably the 100 people as well etc., but at some moment, he sees snake-eyes so that, for example, he kills 100,000 people after having released the previous 10,000+1,000+100+10 = 11,110 people. Among the folks who have ever been confined to the scary room, about 100,000/111,110 ≈ 9/10 of them die. So this could be your chance of dying; the ratio doesn't seriously depend on the number of people who die as long as it is high enough.

Which result is correct? Aaronson remains ambiguous, with some mild support for 9/10. I think that the only acceptable answer is 1/36. The argument behind 9/10 is completely flawed. It effectively assumes that you're a "generic" person among all those who are ever kidnapped – that there's a uniform distribution over those people. But that's not only wrong; it's mathematically impossible.

The average number of people who will die is
\[
\sum_{n=1}^\infty 10^n \left(\frac{35}{36}\right)^{n-1} \frac{1}{36},
\]
where the \(n\)-th term is the size of the \(n\)-th group, \(10^n\), times the probability \((35/36)^{n-1}\cdot (1/36)\) that the \(n\)-th throw is the first snake-eyes. But this sum is divergent because the ratio of consecutive terms is \(q=350/36\geq 1\). Chances are nonzero that the madman will run out of people on Earth and won't be able to follow the recipe. At any rate, the reasoning behind \(p=9/10\) strongly assumes that the geometric character of the sequence remains undisturbed even when the number of hostages is arbitrarily large. It effectively forces us to deal with an infinite average number of people and there's no uniform measure on infinite sets because there exists no \(P\) such that \(\infty\times P = 1\).

I think that this is not just some aesthetic counter-argument. It's an indisputable flaw in the calculation behind \(p=9/10\) and the latter result must simply be abandoned. In this case, we know very well that it's wrong. If the madman causally sticks to his recipe for as long as it's possible, the probability for each kidnapped person to die is manifestly \(p=1/36\).
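
The two numbers answer two different questions, which a minimal simulation sketch can make explicit (my own illustration; the `max_rounds` cap is an artificial cutoff that I must impose precisely because the uncapped process has the divergent expectation computed above):

```python
import random

def snake_eyes():
    """One throw of two dice; True iff both show 1."""
    return random.randint(1, 6) == 1 and random.randint(1, 6) == 1

def madman(max_rounds=8, trials=200_000):
    # Causal question: you are confined right now; you die iff the very next
    # throw is snake-eyes, no matter how many groups came before you.
    p_die = sum(snake_eyes() for _ in range(trials)) / trials
    # Anthropic bookkeeping: fraction of everyone ever confined who ends up
    # dead, among the runs where the game terminates within max_rounds.
    fractions = []
    for _ in range(trials):
        total_confined = 0
        for n in range(1, max_rounds + 1):
            batch = 10 ** n            # n-th group: 10, 100, 1000, ...
            total_confined += batch
            if snake_eyes():           # everyone currently in the room dies
                fractions.append(batch / total_confined)
                break
    print("P(you die | you are confined now)     ~", p_die)  # ~ 1/36 ~ 0.028
    print("mean fraction of ever-confined killed ~",
          sum(fractions) / len(fractions))                   # ~ 0.9

madman()
```

The simulation doesn't adjudicate the dispute by itself; it only shows that 1/36 answers "will I die, given that I am confined right now?", while the number near 9/10 answers a different, retrospective question about the fraction of all ever-confined people, conditioned on the game having terminated.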

The wrong, anthropic results often rest on unjustified calculations based on the "genericity" of the people – assumptions that some probability measures are uniform even though there is absolutely no basis for such an assumption; and in our scenario, this uniformity assumption explicitly contradicts some assumptions that were actually given to us! The anthropic arguments also tend to make acausal considerations.

Doom Soon and Doom Late

This is also the case of the "doomsday is probably coming" argument. Imagine that there are two possible worlds. In one of them, the doom arrives when the human population is just somewhat higher than 7 billion (Doom Soon). In the other one (Doom Late), the population reaches many quintillions (billions of times larger than the current population).

Again, just like in the hair color case, if we have reasons to expect that the prior probabilities of both worlds are equally or comparably large, then we have no justification to "correct" or "update" these probabilities. The existence of 7 billion people is compatible both with Doom Soon and with Doom Late. So both possible scenarios remain equally or comparably likely!

The totally irrational anthropic argument says that Doom Soon is 1 billion times more likely because it would be very unlikely for us to be among the first 7 billion – one billionth of the overall human population throughout history. This totally wrong argument says that we're observing something that is unlikely according to the Doom Late scenario – only 1/1,000,000,000 of the overall history's people have lived so far – and that our belief that we live in the Doom Late world must be reduced by a factor of one billion, too.

That's wrong and based on all the mistakes we have mentioned above and more. The main mistake is the acausality of this would-be argument. The argument says that we are "observing" quintillions of people. But we are not observing quintillions of people. We are observing just 7 billion people. If the Doom Late hypothesis is true, one may derive that mankind will grow by another factor of one billion. But if we can derive it, then it's not unlikely at all that the current population is just 1/1,000,000,000 of the overall history's population. Instead, it is inevitable: \(p=1\). So the suppression by the factor of 1 billion is completely irrational, wrong, idiotic, and stupid.
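
To make the disputed bookkeeping explicit (my reconstruction, using the round numbers of the text): the anthropic argument takes the evidence to be "my birth rank falls within the first 7 billion" and substitutes \(P(E|{\rm Soon})\approx 1\) and \(P(E|{\rm Late})\approx 10^{-9}\), so that
\[
\frac{P({\rm Soon}|E)}{P({\rm Late}|E)} = \frac{P(E|{\rm Soon})}{P(E|{\rm Late})}\cdot\frac{P({\rm Soon})}{P({\rm Late})} \approx 10^{9}\cdot 1.
\]
The causal reading defended here takes the evidence to be "about 7 billion people have lived so far", which both hypotheses predict with probability \(\approx 1\) at this point of their histories, so the likelihood ratio is \(\approx 1\) and the priors are left untouched.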

The only theory in which it makes sense to talk about quintillions of people – the Doom Late theory – makes it inevitable that the people aren't distributed uniformly over time. Instead, they live on an exponentially growing tree. So there's manifestly no "intertemporal democracy" among them that could imply that we're equally likely to be one of the early humans or one of the later ones. We're clearly not. It is inevitable that at most moments of such a Universe's history, the number of people who have already lived is a tiny fraction of the cumulative number of people in the history (including the future).
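
To quantify this with a toy formula (my addition, under the idealized assumption of purely exponential growth): if the cumulative number of people who have ever lived by time \(t\) grows as \(C(t)=C_0\,e^{t/\tau}\) and history runs until a final time \(T\), then the fraction of all history's people who have already lived by time \(t\) is
\[
\frac{C(t)}{C(T)} = e^{-(T-t)/\tau},
\]
which is exponentially tiny at all times except the last few multiples of \(\tau\) before the end. Finding oneself in the "early" tail is therefore the typical situation at almost every moment of a Doom Late history, not an unlikely one.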

Aaronson offers another idiotic argument that may sometimes be heard. A valid objection to the Doom Soon conclusion is that the same argument could have been made by people in a world where the population was just 1 million or another small number – e.g. by the ancient Greek philosophers. And they would have been wrong: the doom wasn't imminent. Aaronson says that it doesn't matter because "most" of the people who ever make the argument are right.

But again, this is completely irrelevant. Whether most people say something is an entirely different question from whether it's right. And indeed, in this particular case, we may show that the probability is very high that the "majority" that uses the anthropic arguments is wrong! What's important is that the methodology or logic leading to the "doomsday is coming" conclusion is invalid as a matter of principle. It doesn't matter how many people use it! One can't or shouldn't invent excuses for why these arguments are flawed by saying that some quintillions of completely different (and much less historically important, per capita) people at a different time would reach a valid conclusion. I don't care. I want to reach a correct conclusion myself and I don't give a damn whether some totally different people are right. Of course they're mostly wrong.

Anthropic principle and a loss of predictivity: what is the real problem?

At the end of the chapter, it's mentioned that the anthropic principle is often criticized for its inability to predict things. It's indeed unfortunate if a theory makes no predictions. But this is not a valid logical argument against a theory. The correct theory may make far fewer or far less accurate or unambiguous predictions than some people might hope!

The actual problem – one that may be used as an argument against the anthropic principle – is sort of the opposite one. A valid argument is that the alternative explanations that are more accurate, tangible, and predictive have not been excluded. There may still be an old-fashioned calculation of the value of the cosmological constant, \(\Lambda\sim 10^{-123}\) in Planck units. And science proceeds by the falsification of wrong theories, not by "proofs" of correct theories.

We know that the anthropic explanation would have been wrong as an explanation of features of Nature that are by now understood "materialistically", simply because we have better explanations that we know to be much more likely to be true than the anthropic one. And the same thing may happen – and, I think, is likely to happen – in the future, too. If you can't really show that this expectation is wrong, you shouldn't pretend that you have proved it!

Perhaps science will be forced to switch to anthropic arguments because beyond a certain point, there just won't be any old-fashioned explanations. Maybe quintillions of people will live in that future world and the claim that the "open problems are explained anthropically" will therefore be true for a "majority" of the mankind that will have lived throughout history. But that won't change the much more important fact that the anthropic principle will have been wrong throughout the whole previous history of physics.

Aaronson is clearly close to all the anthropic misconceptions discussed above – which may be correlated with herd instincts, mass hysterias, "consensus science", and other pathologies. This is also manifest in his demeaning comments about the role of Adam and Eve. Well, I don't want to discuss the literal interpretation of the Bible, which I don't believe, of course. But he wants to suggest that there is some uniform measure that makes it less likely to "feel that I am an early human".

But this is just totally wrong. There is absolutely no justification for such a uniform measure, and because the population was growing pretty much exponentially (demonstrably so), this fact pretty much allows us to prove that each early human was exponentially more important than the current ones – and that we're more important than the future ones.

In recent years, I got sort of interested in history, e.g. local history, and I studied the villages etc. that existed on the territory of Pilsen and in its vicinity. There were just hundreds of people and a few lousy houses and the folks had almost nothing, but they were clearly very important because the hundreds of thousands of people who live here today have arisen from that small number of ancestors. So each of the ancestors is just much more important than an average contemporary human in the overall historical scheme of things. "Adam and Eve" were clearly even more important, if I may express it in this way.

If we divide some consciousness or soul or something based on spiritual importance, it's totally plausible to say that Adam and Eve (plus Jesus and His close relatives, or whoever counts) hold 50% of it and the rest is divided among the later humans, if you allow me to express the point more concretely than what is really possible. The argument "I can't be special or early because it is unlikely due to some uniform measure on the history's humans" is completely wrong. It is acausal, it uses mathematically non-existent measures, and it uses uniform measures that have no justification and that sometimes contradict legitimately calculable measures.

So I agree with Scott Aaronson that anthropic reasoning may be defined as a part of probability theory that is more about feelings and opinions than about solid results. Well, most of the people – including Aaronson himself – clearly end up with completely wrong arguments and results, which is just another way to disprove the anthropic principle. ;-) I can't be generic because almost all people seem to be morons. In fact, even these would-be generic people may use the same argument because almost all of these stupid folks are still much smarter than the generic insects and bacteria that are far more numerous. The whole idea of "considering oneself generic in a set" is just a way to contaminate a correct or rational argument or result with an incorrect or irrational one that is believed by the inferior life forms.

Chapter 19 is about free will.

I found it amusing and agreed with what he had to say. He starts by pointing out some errors of both the free-will supporters and the foes – "the absence of free will means that all criminals have to be liberated" (silly: we sometimes punish toxic machines even though they don't have free will!) and "undetermined implies random" (not the same thing).

Then he discusses childish examples with a Predictor who knows how you will act (something that shouldn't be possible if free will really exists) – Robert Nozick has played with such things. And finally, he gets to the Conway-Kochen free-will theorem, which I was waiting for. There's an elementary, useful explanation of what it says and how it guarantees quantum-certified random numbers.

Chapter 20 is on time travel.

Time machines are primarily wanted because they could speed up computation, something that isn't too important for your humble correspondent, especially because time machines and (macroscopic) closed time-like curves are impossible. The complexity chatter is legitimate maths but it doesn't turn me on. I still see that discipline as a conglomerate of many largely disconnected results with connections that are at most ad hoc. It must be easy to get lost in that jungle.

I must say: it's crazy that people like Peter Shor listen to these talks about how you compute something quickly by constructing a time machine and allowing your grandfather to have sex with your grandmother. Then he leaves the MIT seminar room and criticizes string theorists for not being sufficiently down-to-Earth and connected with the observations. Holy cow! Who is disconnected here?

The closed time-like curves are impossible for various deep reasons and one could discuss it. But Scott Aaronson chooses a different attitude, one of a spoiled frat who screams "I want them I want them I want them because I want fast computers!" The links to physics – which are an important motivating theme of the book – seem mostly bogus to me because he's ready to ignore the physics insights whenever it's appropriate to study abstract problems about "computation in the worlds with totally different laws of physics than ours".

Chapter 21: cosmology

This chapter shows that Aaronson has a pretty good background in cosmology. In most cases, it's pretty manifest that he got this knowledge from conversations with physicists and cosmologists (in many cases, folks I know rather well), but that doesn't change my feeling that his presentation of the energy density in the universe, the expansion, the entropy of the Universe, and the holographic principle, including things like Bousso's light sheets, is more accurate, complete, and meaningful than what most actual experts would be able to write down.

In this chapter I noticed, perhaps even more strongly than in the previous chapters, that the switching between these physics topics and the omnipresent topic of complexity classes is somewhat unnatural – Aaronson must also realize that even this chapter is built out of two largely disconnected topics. Physics and cosmology impose certain laws and limitations on all the objects inside – humans, computers, cucumbers, and everything else. Everyone must respect them and someone who proposes new computers or cucumbers had better learn about the laws. But one can't learn a sufficient amount about the laws just by discussing what kind of a computer we would like. The limitations imposed by the mathematical insights summarized in computer science belong among those limitations but they don't exhaust the full list because there's also physics, which imposes constraints on what the mathematicians and computer scientists label as indisputable (and pretty much arbitrary) axioms.

As long as we talk about computers in the real world, the laws of physics/Nature are always primary and fundamental. Aaronson seems to implicitly ignore this fact at many places.

Chapter 22: answering all students' questions

The last lecture in 2006 – when he began to write the book – had the same format as the last lectures in Feynman's courses: the instructor could be asked any question by the students and was turned into an oracle. There are fun questions on the list. In some of them, Aaronson just reiterates his opinions about unsettled conjectures on the complexity classes (will they be proved or disproved in the future?). But there are also speculations on laws transcending quantum computers and their limits, and so on.

In his answer to the last question, Aaronson suggests that computer scientists could be working in physics departments. It's just a historical accident, we hear, that they're not there. Well, I don't think so. It's applied maths. They're not really learning mechanics, field theory, and so on, because it's not needed. And they don't really care much whether the axioms they build upon may be realized in the Universe around us. So it's not physics. QCD or string theory are very far from mechanics but the people doing them still start by learning mechanics and then build on foundations that do include the characteristic subdisciplines of physics. They have to, because the subdisciplines of physics are tightly connected into a compact whole. Computer scientists are doing something else – not trying to find out the ultimate underlying laws ("axioms" in the language of mathematicians) but choosing arbitrary axioms, regardless of their agreement with the empirical data, and deriving interesting results and conclusions from them. So it's maths, not physics.

Of course, with some very inclusive definition, all quantitative thinkers or all scholars or perhaps all employed people are doing some "generalized physics". But I don't think it's right to promote this inflation and degradation of the word "physics".

My (undergraduate) Alma Mater, the Faculty of Mathematics and Physics of Charles University in Prague, is loosely divided into the sections Physics – Mathematics – Computer Science – Teaching of M/Ph/CS. So computer science is de jure put on par with both mathematics and physics as an independent branch. Still, it shares its floors and buildings with certain kinds of mathematicians (and especially philosophers of mathematics and set theorists), not with physicists, and for good reasons.


snail feedback (49) :

reader Scott Aaronson said...

I loved the following counterargument to anthropic reasoning, which ought to be enshrined as the Fundamental Principle of the Lubosian Worldview: "I can't be generic because almost all people seem to be morons."

reader George Christodoulides said...

Scott, although there are things that Lubos says that I do not agree with, it would be nice if people from the sciences told their real opinion about others and about how scientists are more intelligent, the way Lubos does.

This is one of the reasons why we let idiots give us orders and why scientists and science do not have more power. It is also one of the reasons why physics education in my country has only gone backwards over the past 10 years and one of the reasons why the country went bankrupt.

I am not saying that I agree with Lubos, but we need more people in science like him.

Despite the negative comments, I am sure he advertised your book a lot!

Although he is not that good at choosing what he advertises – if you have seen the shitty Oxford string theory website, he still has its link on the blog.

reader Luboš Motl said...

Hi Scott, I understand why it sounds funny and that was partially the goal of the formulation, too. But the main message was completely serious.

One may always construct "classes of objects in the real world" such that "I" am one of the elements of the class but the class is numerically overwhelmed by elements that are much less intelligent, much less rational, much less true, much less conscious, much less important for the evolution of the Universe, much less anything, or - if none of the things before works - at least much more rare, unusual, and original. It is absolutely unjustifiable to assume that "I" am generic in any of these sets. It's totally important for the existence of the world that things aren't uniform: there are rare things, special periods, special places, special mechanisms, special everything.

As I said, people don't feel too guilty when they say any of these bogus and manifestly wrong "genericity" propositions - or any proposition of a probabilistic character, for that matter - because they still leave a nonzero (albeit small) probability for the right answer. However, the way they distort the probability distribution, e.g. towards the uniform one, is qualitatively as wrong and untrue as 2+2=5. Using the indefensible "genericity" claims, someone may "prove" that all numbers between 1 and 1 billion are equally likely results of 2+2, which makes the particular result 4 excluded at more than 5 sigma. But this argument is totally wrong because it's simply not true that all numbers are equally likely results of 2+2 and claims to the contrary inevitably involve cherry-picked evidence that deliberately distorts the truth in some direction. Numbers, people, or anything else are not created equal and everyone who believes otherwise is a generic moron.

reader Paul Parnell said...

I for one welcome our new Lubos overlord.

reader Scott Aaronson said...

OK, some more "serious" responses:

Regarding the anthropic principle: I'll simply note once again that, as far as I can see, all the "trivial points I fail to understand" enumerated in this post are points made in the chapter itself! What Lubos doesn't like is simply that, BECAUSE (as Lubos and I both agree) anthropic reasoning is a "part of probability theory that is more about feelings and opinions than about solid results," I see my role as a teacher as just to present the best arguments for each side, rather than declaring in Lubosian fashion that one side is composed of imbeciles.

Regarding the anthropologists who study scientists, I think what pisses off some scientists (including me) about them is that many subscribe to a radical ideology according to which you're never allowed to say: "I believe the reason these scientists claimed X is true is that they figured out that X is true." Rather, some ulterior motive must always be sought (bonus points if the motive is "patriarchal" or "hegemonic"). If anthropologists could get past that and similar problems, then I really wouldn't care too much if they wanted to thump their chests and claim that anthropology is the "most fundamental" human inquiry. From their perspective it would be, even though it wouldn't be from mine.

Regarding complexity theory: obviously, if you're not excited by it, then you're not excited. I can't argue against an emotional reaction---any more than you could argue against someone who, while agreeing with everything you said about the content of modern physics, regarded it all as "boring low-level encoding details," and claimed that everything truly interesting happens at higher levels of organization. I guess I should just be grateful that not everyone feels as you do. :-)

However, when you say that questions about the power of interactive proofs seem "as non-fundamental as the question of whether chess is a draw," there's an extremely simple counterargument: results like the IP=PSPACE theorem told us something new, not merely about chess, but also about Go, Hex, and every other possible such game that could ever be invented. That, in a nutshell, is the difference between "recreational math" and the kind of math you get paid for. :-)

Regarding quantum error-correction: I think I'll indeed add some more material about it if there's a second edition of QCSD. That's probably your first "constructive" suggestion in this entire series of posts, so thank you! :-) But there's a reason why I downplayed it in my discussion of QC skepticism. Here's how I put the point in my recent talk at Microsoft Research, "So You Think Quantum Computing Is Bunk?":

Many discussions of the feasibility of QC focus entirely on the Fault-Tolerance Theorem and its assumptions. My focus is different! For I take it as obvious that, if QC is impossible, there must exist a deeper explanation than "such-and-such error-correction schemes might not work against every conceivable kind of noise."

If one particular error-correction scheme doesn't work, one can just try another! And if a skeptic believes no possible error-correction scheme can work, then from my standpoint the burden is on the skeptic to explain what it is about the laws of physics that underlies that impossibility. In particular: is there then a polynomial-time classical simulation for all "realistic" quantum systems? If so, then how does that simulation work?

reader Gene Day said...

The "brown cow” example can be taken a step further. It is only true that there is at least one cow in Switzerland that has been brown on at least one side "-on at least one occasion-".

reader anony said...

Not to challenge your hypothesis, but I would bet against the notion that many of those in power are idiots, aside from some well-publicized instances of poor judgement. The counter-question is: if scientists are so smart, why are they not in power? Clearly the answer is that people in power are not generally considered scientists, since science is a discipline just like politics or any other professional undertaking. I think a quick survey of people in positions of power would show they generally have fairly competitive resumes. In the grand scheme of things, science is a means to an end, and must compete for resources just like all other human undertakings. By its nature it will never command the majority of human effort, because the return on investment usually extends well past most planning horizons. As such, if one is interested in power, science is not usually the best career choice.

reader anony said...

Having not read the book, it isn't apparent whether Benford's law has been properly applied to many of the cases discussed by Lubos. A quick review of some of the explanations for why the law holds in many instances reveals that exponentially growing sets of numbers spanning several orders of magnitude quite naturally produce non-uniform distributions of expected values.

It is somewhat interesting that fraud detection is aided because most naive culprits want to uniformly distribute data.

That aside, there is a computable entropy associated with the unknown values of prior variables. Each prior event has some set of possible outcomes, meaning each event serves as a variable in some chain of events. Although it might not be possible to determine the set of specific values leading to some current state, there is a natural measure of the entropy associated with one's current state. This "information" entropy simply accumulates as one pushes further into the past. Any attempt to stop this entropy, or analogously any error-correction scheme, must therefore effectively stop "time" in some sense (and an exact determination of prior values must reverse time in some sense).

It isn't hard to see the similarities to thermodynamic cycles and the laws of thermodynamics. This is well known. Following this logic, it is at least qualitatively plausible that quantum error correction is likely, if not certain, to work, given a sufficient reserve of resources.

The only scheme I can conceive of whereby one could find a polynomial-time simulation of quantum systems is one where the system resources required to explore independent paths of the TSP shrink as one explores paths. One would have to show that a clever system could reduce the resources needed to check a path at least as fast as the growth in the number of paths to be explored. This is a logical statement, but it is only potentially achievable under the assumption that there is a process of learning about the paths so that they can be checked quickly. In other words, as each path is checked, knowledge is gained about other paths and whether they can be excluded quickly. This implies that there is mutual information shared by the paths.

Hopefully that generates some thoughts.

reader Luboš Motl said...

LOL, this extra improvement is actually sometimes important in the cosmological considerations, too.

reader Eugene S said...

First of all, may I commend Prof. Aaronson for his continued presence on The Reference Frame! The guest blog did not have an auspicious start, with its perfunctory obeisance to the supposed overwhelming consensus on impending catastrophic man-made climate change, but went uphill from there during the discussion. And I like the spirited defense in the comments below Dr. Motl's multi-installment book review. However, I am missing a detailed response to the critique of the probability arguments (redhead/greenhead, doomsday, madman). While not possessing any expertise myself, I have found probability a fascinating topic ever since reading John Allen Paulos' book Innumeracy when it first came out.

reader Peter F. said...

Thanks, anony, for the paragraph:
"It is somewhat interesting that fraud detection is aided because most naive culprits want to uniformly distribute data."

I thoroughly enjoyed being reminded of this Schadenfreude-forecasting&promising&ensuring insight! ;-))

reader T H Ray said...

" Numbers, people, or anything else are not created equal and everyone who believes otherwise is a generic moron."
Really? Zero does not equal zero?

reader Luboš Motl said...

Zero and zero are not numberS; zero and zero is one number pronounced twice.

I was talking about numberS, note the plural.

reader T H Ray said...

Zero, in fact, is a number, according to the Peano-Dedekind axioms. There wouldn't be numberS unless it were.

reader Scott Aaronson said...

Lubos, I'm deeply flattered that you considered my presentation of cosmology and holography to be "more accurate, complete, and meaningful than what most actual experts would be able to write down."

There's an amusing irony here: the stereotypical book reviewer says, "many parts of this book might be fine, but the parts I'm the most expert in are riddled with errors." You, on the other hand, seem to have loved the parts of QCSD you're the most expert in, and to have reserved your scorn for other parts! I guess I should be grateful for that. :-D

Regarding the time travel chapter, though, you seem to have misunderstood my perspective completely. You write, "Scott Aaronson chooses a different attitude, one of a spoiled frat [sic] who screams 'I want them I want them I want them because I want fast computers!'" If anything, I'd say my attitude is the exact opposite: it's "I don't want them I don't want them I don't want them because I don't think the laws of physics ought to allow such extravagant computational power!"

reader Luboš Motl said...

Dear Scott, I think yours is a good book even in aspects I don't know well. There are lots of things I disagree with, and things in between - open-minded discussions about things I wouldn't be that open-minded about, and so on.

Concerning your "exactly oppositely spoiled frat", it may sound opposite but if you understood what I *actually* complain about, you would understand that the two screams of the spoiled frats are effectively equivalent.

The point is that the laws of physics in the Universe around us allow the set of computers that they allow and they forbid those that they forbid. It's up to the laws of physics to decide what is allowed, possible, or likely, not up to human prejudices about which computational powers are extravagant!

Whether a computer doing something may be constructed in the real Universe is a complex question whose answer depends on many floors built upon each other and, ultimately, on the foundations given by the laws of physics. If a human approaching this structure wants to learn the answer, he has to enter the building through the front door on the ground floor, walk around and respect the foundations, then climb the staircase and/or take the elevator, take into account all the floors in between, and only then do whatever he does on the 23rd floor.

Your approach seems to be the opposite one. You want to start with a floor slightly beneath the 110th floor, decide that something shouldn't be there because of your beliefs and prejudices, and then continue down to derive what it implies for the lower floors and foundations of the buildings. Sorry, this is not the attitude of an impartial, rational worker or researcher: it's how the 9/11 terrorists deal with the Twin Towers, and it doesn't matter a single bit whether the words you want to write on the 110th floor with an airplane are YES or NO!

reader Eugene S said...

Psst, it's "spoiled brat" not "frat". You may have been thinking about "fraternities" and about Animal House, but I doubt that Scott was a "frat brother" in college except possibly in Phi Beta Kappa... and now please delete this insignificant, nitpicking comment :)

reader Shannon said...

"I don't want them I don't them I don't want them because I don't think the laws of physics ought to allow such extravagant computational power!". I don't get this weird sentence. Isn't it like saying "I don't want to win the lottery because I don't think I have enough chance", what does that mean ?

reader Shannon said...

..or "prat" ? ;-)

reader Luboš Motl said...

LOL, thanks, and exactly. I wanted to sound global, so instead of "brat" (which is Russian for a brother, and the Czech word is "bratr"), I used the more Western or Latin "frat", which I knew was the root of "fraternities". At any rate, I used this word for the first time in my life, and incorrectly. ;-)

reader T H Ray said...

Zero is a number in the Peano-Dedekind axioms. Were it not, there would be no numberS.

reader Eugene S said...

The second edition of the book should also add this endorsement to the dust jacket: "This book doesn't totally suck. --L. Motl"

Higher praise you cannot get.

reader Scott Aaronson said...

I meant something that strikes me as so ordinary, at least for a scientist, that I'm always surprised when people consider it weird or controversial.

Suppose someone showed that X, if true, would lead to a perpetual-motion machine, or faster-than-light communication. What would you conclude from that about the likelihood of X itself? You might say: "wow, then X would be awesome, if true! And I can't prove it's mathematically impossible. However, X would violate what I take to be such basic principles about how the universe is put together, that I'd be willing to bet an extremely large amount of money that X is false." And that's exactly what I would say, if someone showed that X would let you solve NP-complete or PSPACE-complete (or worse yet, uncomputable) problems with only polynomial resources. An efficient solution to NP-complete problems would be MORE shocking in many ways than a perpetual-motion machine; go read my book if you want to understand why. :-)

So for me, the basic principle here is almost unobjectionable. The more interesting question is just how far it gets you in understanding the world. I'd argue that it does get you a nontrivial distance, certainly enough to be worth studying if you want to understand the world -- which is probably the claim of mine to which Lubos objects the most strongly.

reader Scott Aaronson said...

LOL!! "If you try to use higher-level principles to deduce anything about the character of lower-level laws, then you're no better than the 9/11 hijackers."

Lubos, it's not that I don't understand what you're trying to tell me -- it's that I do understand and I think you're flat-out wrong. To give one famous example: in the 19th century, biologists and geologists deduced that whatever the physical laws governing the Sun are, they must be such as to have allowed it to shine for billions of years. Physicists of the time, like Lord Kelvin, ridiculed their presumptuousness. But we all know what happened next: the physicists on the first floor were wrong, while the biologists and geologists up on the 23rd floor were right. And there are thousands of more ordinary examples: for example, whatever are the laws of QCD, they must be such as to reproduce the observed mass of the proton, which is a much more complicated, "higher-level" fact about Nature.

In my case, I'm not even using computation to make any particularly radical claims about physics! (Here I'm not counting the "discrete vs. continuous" issue, which is probably more a matter of perspective than of the nature of the fundamental laws themselves.) Most of the facts that I think one can use computational considerations to help explain or justify (e.g., why QM is exactly linear, why there are no closed timelike curves, why the holographic principle holds...) are things that Lubos already agrees with anyway, and that I agree can also be justified on other grounds (but unlike Lubos, I do take computation to be one very basic aspect of reality).

reader Luke Lea said...

"his humiliating comments about the role of Adam and Eve. Well, I don't want to discuss the literal interpretation of the Bible which I don't believe, of course . . ."

Depends on what your definition of "literal" is. E.g., what is the "literal" meaning of an allegory in the original sense of the word?

From Greek allos meaning "other" and agora meaning gathering place (especially the marketplace). In times past, it was common to do one's chatting at the marketplace. Some of the topics discussed were clandestine in nature and when people spoke about them, for fear of being punished, they would speak indirectly. That is to say, they would speak about one thing in such a way as to intimate the actual information to the listener. Thus, the persons discussing clandestine matters were said to be speaking of "other things" in the marketplace. Eventually the words joined and became associated with the act of speaking about one thing while meaning another.

reader Gordon Wilson said...

Well, Scott, Lubos' principle must have at least 5 sigma statistical support.

BTW, FWIW, I also "think yours is a good book, even aspects that I do not know well."
Plus, it is even something that technical books often are not: fun.

reader Luboš Motl said...

Dear Scott, a good example but I would say that the geologists who knew that the Earth was more than hundreds of millions of years old were simply being better physicists than Lord Kelvin!

What I mean is that they were using valid physics arguments; Lord Kelvin was using an invalid model of the Sun. It was a very clever model but it was also wrong.

You're trying to do something completely different from what the 19th-century geologists - who were approximately right about the age of the Earth - did. You're not using empirical evidence at all.

reader Luboš Motl said...

No, your explanation of why we know that perpetual motion machines are impossible is completely wrong.

The laws of thermodynamics - the non-existence of perpetual motion machines of the two kinds - started as an empirical observation summarizing a collection of diverse failed attempts to construct such helpful devices.

At some moment, the belief in the non-existence of such things was justified by nothing other than a collection of failures, i.e. by a naive Bayesian prediction.

In thermodynamics, this non-existence got formalized into the two laws of thermodynamics which are *axioms* of the discipline. However, the axioms can't be proved from "nothing" (no axioms can). So if someone said that there are new physical effects that invalidate a law of thermodynamics, thermodynamics could only say that it contradicts the claim. But it wouldn't be clear who is right!

One would actually have to compare the evidence supporting the claims about the new effect with the evidence backing the laws of thermodynamics! Of course, the evidence in favor of thermodynamics and its general validity would probably win, but not by a great margin.

The actual, stronger reason why we believe the laws of thermodynamics is that we have a much more rigid framework to de facto prove them today. The first law of thermodynamics is the conservation of energy that is known to follow from the time-translational nature of the laws of physics via Noether's theorems. The second law of thermodynamics is explained in terms of an increasing entropy which may be proved by Boltzmann's H-theorem and its variations. The assumptions of these theorems seem to be obeyed by absolutely everything we know and lots of generalizations that don't even operate in the real world; so the conclusions follow very generally and are very convincing.

These are the actual reasons we believe that a new claim - especially a mundane enough claim - about a new perpetual-motion machine is wrong.

You don't have the counterpart of Noether's theorem, you don't have the counterpart of the H-theorem, you don't have anything. Instead, you have just some emotional feeling or prejudice that some problems in a class BPQBWPBPBSPACE^BQQPPEPQP shouldn't be solvable faster than in some time f(N). This is pure prejudice, an unproven belief, a religion, so if you use it to "deduce" something about the laws of physics, you are a religious bigot, and if you compare yourself with Boltzmann and Noether, you are a crank.

reader Scott Aaronson said...

Lubos, you seem to be at risk of committing the no-true-Scotsman fallacy: so it turns out those 19th-century geologists can get redefined as "physicists," and indeed better physicists than Lord Kelvin, simply because they got the right answer! Yet the fact remains that they were using the observed features of the upper floors of the building to deduce something about the likely character of the lower floors. What they were doing is logically unobjectionable -- if A implies B, then you can indeed learn something about A by checking whether B holds -- and is no different in principle from what I'm talking about.

Because of the phenomenon of computational universality, if you changed the computational properties of the lower floors, we'd expect that change to percolate upward and strongly affect what we see on the upper floors. Or maybe not -- but I'd say the very possibility means that we need a two-way dialogue between upper and lower floors.

Yes, what the quantum computing skeptics say is similar in form to what I say! They say: all our experience with computation -- not merely with digital computers, but with analog computers and everything else that anyone has ever tried to build -- leads us to believe that the class of efficiently-solvable problems in this universe should be BPP. Therefore, if a new theory comes along and says the class is BQP, then something is probably wrong with the theory.

Now, I don't object to the logical form of their argument. I simply say that, in this particular battle, I strongly believe the lower floors are going to win!! I think the QC skeptics severely underestimate the massive amount of evidence for QM, and I also think they overestimate the amount of evidence for the Extended (i.e., polynomial-time) Church-Turing Thesis.

But when it comes to the original, computability Church-Turing Thesis, or (say) the hypothesis that NP-complete problems are intractable, I'd be willing to stick my neck out further, and put these things head-to-head against claims about physics (by modern-day Lord Kelvins?) that might initially appear to contradict them. (I explain the point in more detail, and give several examples of its application, in this video.) In other words, I think we need to do something Lubos will intensely dislike: namely, distinguish between different computational assertions, some stronger and some weaker, rather than just considering them all an indistinguishable alphabet soup.

In summary, while Lubos will intensely resist the comparison, I'd say that in his own terminology, theoretical computer scientists like me are trying to be "physicists" no less than the geologists who went up against Lord Kelvin in the 19th century. We're simply trying to use some observed features of the universe to reason about other features of the universe, with particular interest in the computational features. And it's the very nature of the "logical elevator" that it can go in either direction, from lower floors to upper or from upper floors to lower.

reader Luboš Motl said...

Sorry, Scott, but it isn't a fallacy. It is the right way to classify people! Your approach is a linguistically justified ad hominem fallacy.

Whether someone is actually a credible physicist and is making the right arguments doesn't depend on his being called a physicist by the laymen, or on wearing a T-shirt saying "I am a physicist". It is given by the quality of the arguments and his or her methodology. The geologists clearly had valid physical arguments to constrain the age of the Earth, whether you like it or not. Whether someone called them "physicists" is completely irrelevant!

At any rate, when it comes to the essential thing - the quality of the arguments and their being backed by the empirical facts - there is no analogy between you and those who did determine the age of the Earth from the geological data!

reader Scott Aaronson said...

Hahaha, let me see if I've got this all straight! Obviously, the only people authorized to discuss fundamental aspects of the real world are "physicists" -- anyone else who tries to do so is an imbecile or even an intellectual terrorist, trying in vain to emulate his superiors. On the other hand, even if you call yourself a "geologist," and everyone else on earth calls you a "geologist," you can still become a "physicist" by definition, IF your studies of geology lead you to make a novel, verified prediction about the nature of physical reality. After all, who's a "physicist" (and therefore, who's authorized to discuss nature) can't be determined by consensus, but only by actual accomplishments.

Here, though, we confront an awkward fact: Lubos himself hasn't made any novel, verified prediction about the nature of physical reality! In that particular respect, he's about as far removed from those 19th-century geologists as I am! So then what other criterion can we find, to explain why Lubos is qualified to discuss Nature while I'm not qualified?

Well, Lubos obviously knows certain subjects, like QFT and string theory, vastly better than I do. So maybe that's the difference, and all I need to do to join the club is to learn those subjects? (Obviously no one's suggesting that Lubos needs to learn what the quantum information scientists know, as well as they know it, to join the club -- that would be silly!)

OK, but here's the problem: in Lubos's generous estimation, I already managed to explain holography and cosmology better than most experts in those areas. However, because I'm not a physicist, we can deduce that I must not have "really" understood those subjects; instead I must have simply parroted phrases that I picked up from others. So even if I did learn QFT or string theory, why wouldn't the same objection apply in those cases?

If so, then no matter how much physics I someday learn -- and no matter how many refereed papers I publish in physics journals, how many invited talks I give at physics conferences, etc. etc. -- even if, for the last 7 years, I've done far more of those things than Lubos has (!) -- still I'll never qualify as a physicist, unless perhaps I make a spectacularly-confirmed empirical prediction, of the sort that Lubos has also never made.

And by itself, I'm fine with that. After all, I've always called myself a theoretical computer scientist, and will only ever call myself that. What's unfortunate is simply that, unless Lubos someday decides that I'm "really" a physicist (despite my continuing to call myself a theoretical computer scientist), everything I say will continue to be irrelevant to the real world by definition.

I think I get it.

reader Scott Aaronson said...

You write: "The second law of thermodynamics is explained in terms of an increasing
entropy which may be proved by Boltzmann's H-theorem and its variations ... [but] you don't have the counterpart of the H-theorem, you don't have anything"

Lubos, I've seen you repeat this claim about the H-theorem in forum after forum. For the benefit of your readers, it might be worth pointing out that you seem to be ALONE among physicists in believing that the Second Law can be derived from Boltzmann's H-theorem, without having to postulate that the universe had an extremely low-entropy initial state. (I'm thinking for example of the comments on your scathing review of Sean Carroll's book, where Joe Polchinski and numerous others tried to explain why you were wrong about the H-theorem, and you responded by simply repeating your original claim even more vitriolically.)

Now, why does this matter? Because if you're wrong about this (as just about every other physicist thinks you are), then the Second Law is "just" an observed regularity of Nature that seemed so pervasive and fundamental that it eventually got codified as a "law." So then we're forced to the conclusion that trying to codify observed regularities of Nature is great when physicists do it, but weird and arbitrary when computer scientists do it -- even if the latter more modestly refer to "theses" rather than "laws"!
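For readers who want to see the object being fought over: in its standard classical form (a textbook statement, reproduced here only for orientation), Boltzmann's H-theorem concerns the functional

\[ H(t) = \int f(\vec{v},t)\,\ln f(\vec{v},t)\,d^3v \]

of the one-particle velocity distribution \(f\), and asserts that under the Boltzmann equation with the molecular-chaos assumption (Stosszahlansatz),

\[ \frac{dH}{dt} \le 0, \]

so that the entropy \(S = -k_B H\) (up to additive constants) never decreases. The dispute in this thread is precisely about whether the molecular-chaos assumption by itself supplies the arrow of time, or whether a separate low-entropy initial condition must be postulated.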

reader Luboš Motl said...

Sorry, Scott, but you are living in some 1984-style regime where you get your permissions from a Big Brother and the truth is defined by stamps by this Big Brother, too.

Holy cow, I am not saying you are not "authorized" to write things. I am just saying that your methodology to determine whether there may be closed time-like curves - or to settle any physical question of interest - is just *wrong*.

I don't care about your fucking authorizations that you may (or, less likely, may not) obtain from your fellow leftist authorizers, OK?

reader Eugene S said...

Intermission, gentlemen, in this boxing match! This gives us time to admire the "numbers girl" strutting across the ring on high heels -- woo hoo, it's Adriana Lima! In a string bikini! Does that mean she is secretly rooting for...? And while I have your attention, gentlemen, may I ask you a question that came to mind the other day as I was reading a 2006 exchange between mathematician Gregory Chaitin and computer scientist Cristian Calude. (They are grappling with some of the same questions discussed in Aaronson's book.) An excerpt:

We seem to have concluded that mathematics depends on physics, haven't we? But mathematics is the main tool to understand physics. Don't we have some kind of circularity?

Yeah, that sounds very bad! But if math is actually, as Imre Lakatos termed it, quasi-empirical, then that's exactly what you'd expect. And as you know Cris, for years I've been arguing that information-theoretic incompleteness results inevitably push us in the direction of a quasi-empirical view of math, one in which math and physics are different, but maybe not as different as most people think.

This view of mathematics as dependent on physics, together with an anti-Platonist understanding of math as a quasi-empirical science -- is that something that our esteemed host and his esteemed guest can both subscribe to?

reader Luboš Motl said...

LOL. Just to be sure, Eugene, I don't believe that mathematics requires physics. On the contrary, physics builds on mathematics.

However, the existence of closed time-like curves in the Universe is physics, not mathematics, so it can't be established just by doing mathematics such as computer science without physics. Closed time-like curves do depend on physics.

reader Dilaton said...

Huh, what is Bousso's light sheet in cosmology?
I've never heard about that before ...

reader Dilaton said...

I always thought frat is something like "frecher Fratz" (a rascal) in German ... :-D

reader Smoking Frog said...

One could also say "shat," which some people, however imbecilic, use as the past tense of "shit." They might not even be so imbecilic, since Spanish commonly uses past participles as nouns. :-)

reader RAF III said...

Scott, I can assure you that Lubos is not alone among physicists in his understanding of the 2nd law. In the review you linked he was of necessity brief but to the point. He has been far more expansive on this subject in many posts on this blog. Polchinski's 'explanation' had nothing to do with Carroll's book or Lubos's review. The other responses simply gave references to other versions (various texts, including Wikipedia) of the same misconceptions Lubos was criticizing.

In any case, what does it matter whether he is alone or part of a multitude? Is Michael Duff correct about whether or not the speed of light varies over time, or are the many who disagree with him correct? This is such an elementary question of physics that it is quite extraordinary that there could even be a disagreement. How will you decide?

The economists who saw the sub-prime crisis coming were few and far between. Who did you believe? And why?

If this were, for example, an urgent medical matter where time was of the essence, I could understand the necessity of relying on trust. But in this case why not evaluate the arguments? Why do you feel compelled to take a position on the basis of a head count?

reader Luboš Motl said...

Hi Dilaton, it's an attempted generalization of the entropy bounds from static to general time-dependent situations.

Bekenstein's bound tells you that for a static configuration with objects surrounded by a surface of area A, the entropy in the region is at most A/4G. Bousso allows you to take a general time-dependent situation and calculate how much entropy is crossing a null hypersurface, and it's again bounded by A/4G, where A is now the maximal area of a cross-section of that null hypersurface, assuming the hypersurface isn't growing in a particular sense.
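In formulas (a standard statement of the two bounds, spelled out here for orientation, in units with \(\hbar = c = k_B = 1\)): the static, Bekenstein-type bound reads

\[ S \le \frac{A}{4G} \]

for the matter entropy \(S\) inside a region bounded by a surface of area \(A\), while Bousso's covariant entropy bound says that for any two-surface \(B\) of area \(A(B)\), the entropy \(S[L]\) crossing a light sheet \(L\) of \(B\) -- a null hypersurface generated by light rays leaving \(B\) orthogonally with non-positive expansion, which is the precise meaning of "isn't growing" above -- obeys

\[ S[L] \le \frac{A(B)}{4G}. \]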

reader Marcel van Velzen said...

Hello Lubos,

There we go again: first quantum mechanics and now statistical mechanics. Everything you say about both is simply correct. The Boltzmann H-theorem is derived beautifully in the first volume of "The Quantum Theory of Fields" by Weinberg, without assuming time reversal invariance. And all the properties in thermodynamics can by now be derived from statistical physics, but it has to be done in the correct way and in the correct logical and pedagogical order, where you stick to statistical physics as the explanation of thermodynamics. This is surprisingly well done in the book "Statistical Physics" by Alonso and Finn, which is no longer for sale (I think) and is one of the few books I have in a Dutch translation.

I also don't understand why it is so surprising that the universe started in a "low entropy" state. Yes, it is surprising that the universe started at all, but given the fact that it started, it is not surprising that it started with an entropy that was lower than the entropy the universe has now. That is not extra "interesting" information. Is that indeed what you mean to tell us?

reader Carl Lumma said...

Boltzmann's H-theorem is a version of Landauer's principle. Both are ways of stating the 2nd law, which may be derived from the 0th, 1st, and 3rd laws:

0th law: There exists a metric called "energy".

1st law: Energy is conserved.

3rd law: Energy is equivalent to "information".

2nd law: Information is conserved. Reversible computers are called "closed systems". Irreversible computers dissipate information.

reader R. P. F. said...

In phase space, the volume occupied by states with maximum entropy is much greater than the volume occupied by states of low entropy; therefore it is nontrivial to explain why the universe started in a low-entropy state.

reader Luboš Motl said...

Hi, RPF died in 1988. It's therefore nontrivial to explain how RPF could be posting on TRF.

In general situations, the probability of states (in the phase space or Hilbert space) is not proportional to a volume in the phase space - they have almost nothing to do with each other and there is no reason why they should - so it's completely wrong to suggest that there is something confusing about their not being proportional to each other.

The only situation in which the probability is proportional to the volume in the phase space and/or the number of microstates is when we consider the probability of states in equilibrium. But equilibrium may only be achieved in the "final state", in the future, and only after a period of time that is (much) longer than the thermalization time. It is completely wrong to apply the proportionality in any other situation.
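In symbols (standard statistical mechanics, spelled out for orientation): the equal-a-priori-probability rule

\[ p_i = \frac{1}{\Omega}, \]

assigning the same probability to each of the \(\Omega\) accessible microstates (equivalently, a density uniform on the relevant region of the phase space), defines the equilibrium microcanonical ensemble only. A generic non-equilibrium state is described by some other density \(\rho(q,p;t)\), evolving by the Liouville equation, that has no reason to be uniform -- which is why counting phase-space volumes tells us nothing about the probability of the Universe's initial state.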

reader Jose Ignacio said...

Let S be the statement of the "Hair colors in the Universe" problem. H (T) = the tossed coin lands heads (tails), R (G) = in the mirror we see a red (green)-haired person.

P(H | S) = P(T | S) = 1/2

P(R | HS) = P(G | HS) = 1/2

P(R | TS) = 1, P(G | TS) = 0

P(R | S) = P(RH | S) + P(RT | S)
= P(R | HS) P(H | S) + P(R | TS) P(T | S)
= 1/2 * 1/2 + 1 * 1/2 = 3/4

and finally

P(H | RS) = P(R | HS) P(H | S) / P(R | S)
= (1/2 * 1/2) / (3/4) = 1/3

Scott Aaronson's calculation is right.

reader Luboš Motl said...

Dear Jose, this is not "Scott's calculation". It's one of the possible calculations he proposes and not the first one. The first one that he describes is the calculation I consider correct.

And I have already written - although you ignored it - what the mistake in this calculation is. The mistake is that you think that the evidence is that "we" see a red-haired or green-haired person. "We" is ill-defined if we talk about different scenarios in which everyone can be "we".

The actually observed evidence (after looking in the mirror) is that we see at least one green-haired or at least one red-haired person in the Universe. So the right values are

P(R | HS) = P(G | HS) = 1

and not 1/2. The rest of the calculation is therefore fixed as follows:

P(R | TS) = 1, P(G | TS) = 0
P(R | S) = P(RH | S) + P(RT | S)
= P(R | HS) P(H | S) + P(R | TS) P(T | S)
= 1 * 1/2 + 1 * 1/2 = 1

and finally

P(H | RS) = P(R | HS) P(H | S) / P(R | S)
= (1 * 1/2) / 1 = 1/2
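To make the two competing readings explicit, here is a minimal Python sketch -- an illustration only, with the scenario weights simply restating the setup of the puzzle as described in this thread -- that computes both posteriors side by side:

from fractions import Fraction

half = Fraction(1, 2)  # the coin is taken to be fair, as in the setup

# Heads: one red-haired and one green-haired person exist.
# Tails: a single red-haired person exists.

# Reading 1 (Jose's calculation above): the evidence is "the person I
# happen to be sees red hair", with "I" uniform over the people present.
p_red_h = Fraction(1, 2)  # P(R|HS): one of the two people is red-haired
p_red_t = Fraction(1, 1)  # P(R|TS): the only person is red-haired
p_red = p_red_h * half + p_red_t * half
print("Reading 1: P(H|RS) =", p_red_h * half / p_red)  # prints 1/3

# Reading 2 (the correction above): the evidence is only "at least one
# red-haired person exists", which is certain in both branches.
q_red_h = Fraction(1, 1)  # P(R|HS)
q_red_t = Fraction(1, 1)  # P(R|TS)
q_red = q_red_h * half + q_red_t * half
print("Reading 2: P(H|RS) =", q_red_h * half / q_red)  # prints 1/2

Both Bayes computations are mechanically valid; the entire disagreement is over which conditional probabilities P(R | HS) correctly encode the observation made in the mirror.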

reader Joe Shipman said...

Mathematical truth does not require physics, but (some) mathematical knowledge requires physics. Epistemologically, not all of math is prior to physics. At the most fundamental level, some math is simply a kind of reorganized logic; a possible boundary for this (though one can argue for other boundaries) is the theorems of Peano Arithmetic or "Finite Mathematics". For some of these theorems, physics can assist us in learning their truth with high confidence. There are also statements of arithmetic which are not theorems of the Peano axioms, for which physics can also assist us in gaining confidence about their truth. The same goes for more complex and abstract math, but eventually a new epistemological boundary arises and we reach a realm of mathematical statements about which physics can't help us either (for example, statements about the arithmetic of very large infinite cardinals). We can't attain knowledge of those statements in this life (except by acts of faith or divine revelation).

reader elo said...

I thought the situation was just stipulated to be: there's God plus either one or two people? Hair comes in two shades; each person gets one. Just wondering, but does he say the coin is fair in the setup? I'm not even clear whether we're supposed to be predicting hair colors, or coin flips, or both sometimes?