Friday, April 08, 2016

How Tyson and Randall simulated Zack and Sheldon

Update: A two-hour video of the debate is available. Lisa first speaks at 12:00 and then at 30:00 and has to clean up a lot of the mess from the previous discussion, indeed. See also IBTimes. Hat tip: Willie Soon
In the past, I have written several blog posts arguing that we don't live in a simulation. The reasons are numerous. But let me return to the topic because it was the subject of the 2016 Asimov Memorial Debate at the AMNH in New York.

This scene from The Big Bang Theory has become my favorite portrait of the scientific illiteracy of some laymen. I've embedded this scene several times in recent weeks, using Zack as a metaphor for what the people who say "the LIGO discovery isn't exciting" etc. look like to me.

But at the beginning, there is another conversation. Zack asks how the boys can know that they won't blow it up. The laser? No, he means the Moon. All the boys are stunned and retreat into a kind of diplomatic silence. Except Sheldon, of course, who is the boy noticing that the emperor has no clothes. Sheldon tells Leonard: "You see, this is the man for Penny." ;-)

Because the laymen's stupidity is a holy cow that society has been trained to worship, people immediately tell Sheldon to be "nice". And Leonard says that Zack's was a "great question". But Sheldon doesn't give up the truth, and neither would I. "No, it is not a great question. How can someone possibly think that we're going to blow up the Moon? That is a great question!" Exactly.

Neil deGrasse Tyson starred as Zack and Lisa Randall was the Sheldon in the debate.

As Clara Moskowitz summarized in Scientific American,
Are We Living in a Computer Simulation?
the public intellectuals had very different opinions about the question whether we live in a simulation. But that was the result of the fortunate fact that a real practicing no-nonsense physicist, namely Lisa Randall, was invited as a participant. Most similar debates about fringe topics in science only invite people who pay lip service to these "pet topics" of the would-be pro-science laymen who actually love all kinds of New Age superstitions much more than they love science.

To simulate Zack's worries about the fate of the Moon, moderator Neil deGrasse Tyson started with the assertion that he's worried that we're living in the Matrix and that he estimates the probability that we are to be 50%. Holy cow. Are you going to blow up the Moon?

Sylvester James Gates suggested it could be true because the mathematics of some error-correcting codes may be seen in supergravity. This sort of reasoning seems to depend on a high dosage of drugs. Is the innocent and omnipresent fact that Nature recycles mathematical patterns enough to conclude that there is a higher intelligence that has "created us"? I think that these two claims have logically nothing whatever to do with one another, so Gates' reasoning is utterly irrational. Max Tegmark said some of his usual vacuous things about the mathematical Universe. Gates said:
This brought me to the stark realization that I could no longer say people like Max are crazy.
Well, I still think that the claim "Tegmark is not a nutcase" is an extraordinary claim that requires extraordinary evidence.

But it was Lisa Randall who brought something that I consider the genuine scientific attitude to these matters:
And the statistical argument that most minds in the future will turn out to be artificial rather than biological is also not a given, said Lisa Randall, a theoretical physicist at Harvard University. “It’s just not based on well-defined probabilities. The argument says you’d have lots of things that want to simulate us. I actually have a problem with that. We mostly are interested in ourselves. I don’t know why this higher species would want to simulate us.” Randall admitted she did not quite understand why other scientists were even entertaining the notion that the universe is a simulation. “I actually am very interested in why so many people think it’s an interesting question.” She rated the chances that this idea turns out to be true “effectively zero.”
The other folks said that it was a great question whether we live in a simulation. Lisa went full Sheldon and asked how some people could possibly be so stupid as to even seriously entertain the theory that we live in a simulation. That is a great question.

As she sketched, the "we are a simulation" movement (and others) loves to make many assumptions that are extremely far from being established facts – and many of them are rather likely or very likely to be wrong.

You may see that Lisa has challenged the assumptions as well as the methods of Bostrom's 2003 argument that we are in a simulation (which I have known since my work as a translator of Brian Greene's third major popular book). Bostrom said that as technology explodes, computers become increasingly numerous. At some moment, there are many simulations of you, and because you are assumed to be equally likely to be a particular biological copy of your brain+body or a simulated electronic copy of your brain+body, you are much more likely to be a simulated version on someone's hard drive because those numerically beat the biological ones in the whole spacetime.
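To make the structure of that counting argument explicit, here is a minimal sketch in Python; the copy counts are invented placeholders, and the "egalitarian" division in the last step is precisely the assumption that the rest of this post disputes:

```python
# A sketch of the Bostrom-style counting argument (numbers are made up).
# It assumes every copy of a given brain, biological or simulated, is an
# equally likely "location" for you across the whole spacetime.

n_biological = 1          # the one biological copy alive today
n_simulated = 10**12      # hypothetical future simulations of the same brain

# "Self-locating" probability under the naive egalitarian assumption:
p_simulated = n_simulated / (n_biological + n_simulated)
print(f"P(you are simulated) = {p_simulated}")   # almost exactly 1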

Some people are satisfied with that. Tyson is satisfied because he's a stupid man, but even folks like Brian Greene have indicated that they're satisfied with a similar argument. But the argument is completely wrong because of at least two classes of problems (one would be enough to ruin it):
  1. wrong assumptions about the evolution of technology and the way the users will use it
  2. conceptually flawed methods of applying the probability calculus
Lisa noted both, although she didn't go into details.

Concerning the first class of problems, we don't really know whether in the year 2800 there will be zillions of simulations of Leo Vuyk running on lots of computers in the Solar System. ;-) With all my respect to this randomly chosen TRF commenter, I find it unlikely. Why would the future users try to simulate the biological Leo Vuyk? They won't know much about him, but even if they do know something, won't they have more interesting things to do with their computers? I agree with Lisa that the answer is "probably yes". So the one arguably biological Leo Vuyk whom we know will still dominate the ensemble of entities whose brains basically feel like Leo Vuyk's brain.

But this potentially incorrect prediction about the future is just one aspect of the "wrongness" of the arguments about the simulations. There are deeper, more fundamental problems with this way of thinking. The whole way of thinking seems to be wrong. Lisa starts by saying something I've written many times in the past – especially in various discussions about the anthropic principle – namely that there is no statistically valid argument that could allow you to conclude that the "more numerous" types of your brain (let's assume it's the simulations) will "win", i.e. become more likely than the less numerous (biological) ones.

Aside from some acausality (what may happen in the future is supposed to influence what we already are now – such acausality is a huge can of worms by itself), this reasoning requires some kind of "egalitarianism". Each copy of your brain – biological or simulated – in the whole spacetime gets an equally large "piece of the pie". But that's simply not how the probability calculus works. Years ago, John Oliver talked to Walter Wagner, an LHC alarmist, and Wagner said that the probability that the LHC would destroy the world was 50% because there were two possible answers, Yes and No, so they share the 100% pie in this way. Oliver responded that he wasn't sure this was how the probability calculus worked.

But the people who argue that we would be "very likely to be simulated" – assuming that the simulations numerically exceed the biological copies – are using the probability calculus in exactly the same way as Walter Wagner. They (and various defenders of some "very strong forms" of the anthropic principle) are doing something that is exactly as idiotic as Walter Wagner's treatment of the LHC probabilities. If we can divide the possible outcomes into several "boxes" that may be imagined to be "equally good and analogous", even though they are inequivalent, it just doesn't mean that their probabilities are the same.

Actual numerical values of probabilities may be predicted by the laws of physics (either by statistical physics or by quantum mechanics, from the squared probability amplitudes etc.), but the idea that "a simulated copy is as likely as a biological copy" clearly doesn't follow from any such argument. Alternatively, there exist "ergodic" arguments in statistical physics showing that if a physical system evolves for a long enough time, it thermalizes and all microstates (points in the phase space, or basis vectors in the quantum Hilbert space with the same values of conserved quantities) become basically equally likely after some time.

But the "probabilistic equality" between a biological body and a simulated body can't be derived from any thermal equilibrium of this sort, either. There is clearly no equilibrium between the biological copy or copies of Leo Vuyk in 2016 and the simulated copies in the year 2800. There can't be an equilibrium because these two objects don't even exist at the same moment. They can't evolve into each other but this back-and-forth evolution is necessary for the mechanism that makes the probabilities equal.

Finally, the probabilities that make sense in this kind of rational reasoning are Bayesian probabilities. They start with prior probabilities and the values are updated according to Bayes' formula whenever new evidence arrives. It's desirable to allow all possibilities a priori. All qualitatively different hypotheses must be given nonzero – and perhaps "comparable" – prior probabilities to make sure that none of them is eliminated from the start. As the evidence piles up, some hypotheses become more likely.

This Bayesian inference is used to determine the "laws governing a physical system or Nature" or "the initial conditions" (or anything in the past, in the sense of retrodictions). Bayesian inference always produces probabilities that depend on the prior ones and those are unavoidably subjective to some extent (but the more evidence we collect and evaluate, the less important the prior probabilities become). This is very different from the quantum mechanical probabilities predicted for the future measurements – their values are (for a given initial state and a well-defined question about the future measurement) totally objective and calculable.
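As a concrete illustration of the updating rule just described, here is a minimal sketch of Bayes' formula for two competing hypotheses; the likelihood values in the example are invented placeholders:

```python
def bayes_update(prior_a, likelihood_a, likelihood_b):
    """Posterior probability of hypothesis A after one piece of evidence.

    prior_a      -- prior probability of A (hypothesis B gets 1 - prior_a)
    likelihood_a -- P(evidence | A)
    likelihood_b -- P(evidence | B)
    """
    prior_b = 1.0 - prior_a
    numerator = prior_a * likelihood_a
    return numerator / (numerator + prior_b * likelihood_b)

# Start from comparable priors and update on one observation that
# hypothesis A predicts with probability 0.9 and B with probability 0.1.
print(bayes_update(prior_a=0.5, likelihood_a=0.9, likelihood_b=0.1))   # 0.9
```

The more evidence you feed in, the less the final number depends on the 50:50 starting point – which is the sense in which the subjective priors become unimportant.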

Clearly, the argument that "a biological Vuyk is as likely as a particular simulated one" requires some kind of Bayesian inference because we want to compare probabilities of "locating ourselves" among options that don't even co-exist at one moment. So the dependence on the priors is unavoidable. Because there is no equivalence, no symmetry, and no back-and-forth evolution between the two Vuyks, there can't be any "solid" way of saying that the two possible answers are exactly equally likely.

Still, there may be some "sensible enough" priors. But the "sensible enough priors" must always give a chance to every qualitatively different theory. We can't assign prior probabilities proportional to the "main divine entity's power" because God is omnipotent, so He would get a 100% prior probability and all the Godless alternatives would be eliminated. In the same way, you surely can't start with a prior probability for the initial state of the Universe that gives all microstates the same odds. If you did so, you would eliminate the low-entropy initial state a priori, and this is simply a totally unscientific, prejudiced way to start approaching scientific questions.

(That's what would-be scientists such as Sean Carroll love to do. But in science, you simply can't eliminate qualitatively different hypotheses a priori in this way. You have no evidence that the early Universe didn't have a low entropy, so every "discourse" that makes you think that this option may be eliminated – even though it's obviously true – is simply fallacious.)

So the right scientific way isn't to count the bodies – one biological Vuyk, 1 trillion simulations of Vuyk, and so on – and assign them the same probabilities. This prescription could be inconsistent for many other reasons, anyway. The number of simulations of Vuyk in the whole spacetime could very well be infinite, and \(1/\infty\) isn't a well-defined number that may produce a normalizable distribution etc.

The right scientific way is to admit that "we are biological" and "we are simulated" are the two really competing, qualitatively different paradigms. So both of them may get comparable priors, let's say 50% and 50% (independently of the numbers of copies of biological and simulated Vuyks), and we must collect evidence for both to refine our estimate of the probability of each.

What happens when we do so? Well, we immediately see that the "simulated Vuyk" hypothesis is basically infinitely disfavored. The conventional scientific "biological Vuyk" hypothesis implies that the laws of physics that we (or Vuyk) will observe at different places and moments will be continuous and pretty much universal, and they won't be plagued by too many exceptions, doubled "deja vu" cats we know from The Matrix, and tons of other defects (or "spectacular signatures", if you want to use the language of someone obsessed with the idea of proving that he is in the Matrix).

On the other hand, the hypothesis that "we are in a simulation" predicts that we should almost certainly see all the discrete effects, all the errors with doubled "deja vu" cats, spatial and temporal dependence of the laws of physics, exceptions, observable discreteness, and so on. None of these predictions is confirmed by experiment, so the "we are in a simulation" hypothesis is scientifically refuted. Period. You don't need to spend years thinking about it or to organize magnificent debates about this "deep" question at the American Museum of Natural History.

The "two big hypotheses", the natural biological one and the simulated one, simply give differing predictions about what we should observe, and it's clear that the natural biological hypothesis vastly more accurately agrees with our observations, while the simulated hypothesis fails, or needs to be fine-tuned in a huge way, so the simulated hypothesis is heavily disfavored. It's exactly the same scientific reasoning we always apply to choose the preferred and disfavored hypothetical explanations of things!

How is it possible that some people are so stupid that they can't figure out these basic things? Well, every "systemic" failure of people to think scientifically always boils down to some "infinitely strong" prejudices, some beliefs that they will keep regardless of any amount of evidence that contradicts them. The idea that the "simulated Universe" remains a viable option is just another example of this infinite dogmatism.

The true defenders of flawed ideas such as "we are a simulation" exhibit an infinite amount of this dogmatism. Softcore semi-defenders of these flawed ideas only display a finite amount of this bias and dishonesty. Let's look at how the SUGRA physicist Sylvester James Gates and David Chalmers, an NYU philosopher, imagine the evaluation of the evidence relevant for the simulation hypothesis to proceed:
That evidence might come, for example, in the form of an unusual distribution of energies among the cosmic rays hitting Earth that suggests spacetime is not continuous, but made of discrete points. “That’s the kind of evidence that would convince me as a physicist,” Gates said. Yet proving the opposite—that the universe is real—might be harder. “You’re not going to get proof that we’re not in a simulation, because any evidence that we get could be simulated,” Chalmers said.
That's very interesting. The two hypotheses predict different things and the evidence could in principle favor either one. But for some reason, these men believe that it's "harder" to falsify the simulation hypothesis and strengthen the natural biological one than to do the opposite.

How do they justify the strange assertion that it's "harder"? It's clearly all about their extremely strong bias. They say that they could get a proof if a discreteness of the spacetime were observed. But centuries of observations that remarkably and nontrivially agree with theories built on top of a continuous spacetime (plus numerous very particular experiments that have refuted the predictions of discrete spacetime theories) don't seem to count as evidence for them at all!

Now, yes, you could naively argue that the discrete spacetime effects may be "small" – much like quantum mechanical effects are small in most macroscopic situations due to the smallness of \(\hbar\). However, when you actually calculate how large some of these signs of a discrete spacetime should be, you will realize that they can't really be small. In a discrete spacetime, the frequency-dependence of the speed of light and millions of other things are predicted to be substantial – in some units, "of order one" – and the observations clearly show that such large Lorentz-violating or similar discreteness-indicating terms simply do not exist. Even light and gravitational waves from sources (dramatic events such as black hole mergers) that are billions of light years away arrive during the same second. That's not what any particular natural discrete-spacetime theory predicts.

In hypothetical theories with a discrete spacetime and/or Lorentz violation, some dimensionless coefficients have been measured to be smaller than \(10^{-10}\) or even \(10^{-30}\). The idea that they're nonzero – and they have to be nonzero in discrete theories – requires one to believe that there is an additional fine-tuning in Nature that dramatically reduces the value of a coefficient that should naturally be of order one. The probability that a number somewhere between \(0\) and \(1\) – uniformly or quasi-uniformly distributed – has a value as small as \(10^{-30}\) is comparable to \(10^{-30}\) itself. That is the reason why we heavily disfavor the "uglier" theories that simply postulate new, otherwise unnecessary effects and immediately demand these effects to be suppressed by tiny coefficients in order to avoid contradictions with the experiment. In some cases, Occam's razor works well and there exists a logically valid quantitative explanation why the "ugly" theories with unnecessary structures are (vastly) less likely to be true.
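Spelled out as a formula, with the dimensionless coefficient \(c\) assumed (for the sake of the estimate) to be uniformly distributed on the unit interval:
\[ P\left(|c|\leq 10^{-30}\right)=\int_0^{10^{-30}}dc=10^{-30}, \]
which is the quantitative sense in which such a theory is penalized for its fine-tuning.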

My main point is that if you honestly look at predictions of a discrete spacetime, you will see that such a discrete hypothesis predicts many effects that should "almost certainly" be strong enough to have been observed, but they haven't been observed yet. So that's why sensible people have abandoned such theories as serious contenders. If Jim Gates is willing to claim that the probability of a "discretized spacetime" remains comparable to 50% or something like that, it simply means that he is totally overlooking a certain important kind of evidence that is relevant for the question.

But the NYU philosopher Chalmers said something tougher – something that pretty much implies that he is willing to ignore all scientific evidence, both the currently existing scientific evidence and any conceivable scientific evidence that could arrive in the future. Why do I say so? Because:
“You’re not going to get proof that we’re not in a simulation, because any evidence that we get could be simulated,” Chalmers said.
Wow. So even though the "simulation hypothesis" almost certainly predicts many effects that are observed not to exist, it doesn't matter because the simulator could deliberately fool us and fabricate the evidence so that the world looks different from what it actually is.

Great. At some academic level, you may consider it a "possibility". But can a scientist seriously pay attention to such "possibilities"? Let me give you another example of an argument whose logical structure is absolutely isomorphic to Chalmers' reasoning:
You're not ever going to get proof that the animal species weren't created by God in seven days because any fossil or any other evidence in favor of competing and heretical (e.g. Darwin's) theories could have been planted by God in order to mislead us.
Great. You can say that. But if you're thinking in this way, you are clearly a "maximal" bigot who simply denies the validity of any evidence we have in science – and, in fact, any evidence that we could ever conceivably have.

Let us use the term \(H_G\) – G stands for God – for the hypothesis that basically implies that you shouldn't ever pay attention to any kind of evidence because it could have been planted by God or Ms Simulator to mislead you. What's the probability \(P(H_G)\)?

You may argue that it's a qualitatively different explanation of the world, so you assign it the prior probability \(P(H_G)=1/2\). Great. How does it evolve when you collect evidence? The essence of \(H_G\) is that you ignore all the evidence because it could have been planted by God or Ms Simulator. So all the evidence will be "compatible" with \(H_G\) and you will have \(P(H_G)\geq 1/2\) forever. \(H_G\) is a classic example of an unfalsifiable theory.
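A minimal sketch of this "ratchet", with a made-up stream of likelihoods for the ordinary, evidence-based hypothesis:

```python
# H_G "explains" every possible observation (likelihood 1 by construction),
# so Bayes' formula can never push its probability below the prior.
p_hg = 0.5
for likelihood_science in [1.0, 0.7, 0.3, 0.9, 0.05]:   # arbitrary made-up values
    p_science = 1.0 - p_hg
    p_hg = (p_hg * 1.0) / (p_hg * 1.0 + p_science * likelihood_science)
    assert p_hg >= 0.5    # the unfalsifiable hypothesis never loses ground

print(p_hg)   # stays at or above 0.5: no conceivable evidence ever disfavors H_G
```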

However, if you're at least slightly open-minded, you will also have the competing hypothesis that "we're not living in a conspiratorial simulation". And all the actual scientific progress will clearly be affecting the "details" of this "big hypothesis". It's the only one where the scientific progress will take place. The probability that evidence-based science is correct, namely \(1-P(H_G)\), will simply be a coefficient by which your time and effort dedicated to evidence-based science gets reduced.

We know that \(H_G\) can't lead to any insights because whatever we ever observe or learn may always be an illusion caused by someone who manipulates us. So for a scientist who demands at least some progress, at least after hundreds of years, that's reason enough not to pay attention to the possibility \(H_G\) at all. It's simply not scientifically interesting; it's not really scientific.

Let me qualify this conclusion with two possible loopholes.

One of them is that you may assume \(P(H_G)=0.999\) but you realize that this possibility is "fully mastered" and there's nothing more to be found about \(H_G\). So you focus on "non-\(H_G\)" and assign probabilities to its "sub-versions". They will clearly be the same as in normal science, just divided by \(1,000\).
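In formulas, for any ordinary scientific sub-hypothesis \(S_i\):
\[ P(S_i) = P(\neg H_G)\,P(S_i\mid \neg H_G) = 10^{-3}\,P(S_i\mid \neg H_G), \]
so the relative comparison between the sub-hypotheses – which is what actual research cares about – is untouched by the overall factor.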

The other loophole is that a believer in \(H_G\) could be interested in the details of this theory – namely in the question of how exactly God or Ms Simulator wants to mislead us and what these divine entities want us to think. With this attitude, even the believer in \(H_G\) could build all of science just like the scientists who say that the "simulation hypothesis" is claptrap. The only difference would be that this believer in \(H_G\) would present all his results not as "how the world works at the fundamental level" but as "what God or Ms Simulator exactly wants us to think". But the content would be exactly the same! We usually don't expect the believers in \(H_G\) to approach their "pet hypothesis" in this way – simply because the very reason why \(H_G\) is being hyped is to allow some people to dismiss (and stop) all the "detailed" research and evidence. They just want to claim that a one-sentence unfalsifiable religious slogan is more important than all the "details" that the scientists are actually discovering.

Proper science simply has to pay attention to the evidence. We observe lots of things and we may construct new experiments to observe things we couldn't observe in the past. With all of that, we may still ask the question "which of the observations are really important" to shape our beliefs about the important questions in science, to determine the direction in which science evolves.

It's really a matter of talent and the "art of science" to be able to find the right phenomena or patterns or relationships that turn out to be "important" and that can teach us something really deep. And even when the dust settles, people have (somewhat?) different opinions about the importance of different results and different evidence. But in the end, science should be able to say something about all observations of the world we can make or have made. Pretty much everything we observe may be calculated to have a reasonably high (nonzero) probability by the laws of Nature; while everything that we don't observe, although we may imagine that we could, has some explanation of why we don't observe it – a calculation that shows that \(P=0\) or \(P\ll 1\).

The hypothesis \(H_G\) is going in the opposite direction. It doesn't want a scientific picture that will be increasingly capable of making us understand an ever greater percentage of the observations and the patterns in them. \(H_G\) wants you to decouple from the world of evidence, to pay less and less attention to the observations, to increase your belief that all that matters is that everything is a giant conspiracy, that everything you observe is just a proof that it's a conspiracy, and that you shouldn't care whether the details go one way or another. Because of their openly declared tendency to overlook evidence, it's also fair to say that if the "simulation hypothesis" believers ever referred to any evidence, it would be heavily cherry-picked evidence. This scientifically dishonest methodology is totally analogous to what most other religious cults are doing.

Sorry but this is simply not a scientific approach. The actual scientific approach allows you to discuss bold hypotheses including the hypothesis that we live in a simulation. But it also gives you valid tools that almost instantly reduce the probability of this "simulation hypothesis" to "effectively zero", using Randall's words. It's too bad if a scientist doesn't know how to derive this correct answer to this fundamental question about the Universe. If someone can't derive such things but plays with some details about some terms in SUGRA or whatever, you could very well call such a person a Fachidiot, an overspecialized person who can't really do even basic things outside his narrow field.

Because I have mentioned creationism, let me formulate another, slightly different but perhaps related, fundamental defect of the whole "simulation hypothesis" thinking (the argument below is still about the proximity of the "simulation hypothesis" to creationism and religions). This thinking really contradicts the whole paradigm that leads us to believe that Darwin's theory (and similar explanations) represents scientific progress. Why?

Darwin's theory is an explanation for the seemingly incredible complexity and diversity of the plant and animal species that we observe. How could these things have arisen in a chaotic world of elementary particles where everything seems to evolve into disorder? Well, they gradually arose from simpler forms through evolution, reproduction involving mutations, and natural selection. These are "emergent concepts" that, when properly applied, really allow you to reduce the observed complexity of life forms to the basic laws of physics plus billions of years of evolution.

The "simulation hypothesis" is going in the opposite direction. It "explains us" as a consequence of more complicated life forms, a higher species. They have to be created first, and then they could run their simulations of Leo Vuyk. It's clearly a similar attitude as religions – one needs to start with God and then, it's trivial to produce the stinky humans.

Why do scientifically inclined people normally say that this creationist paradigm is less satisfactory than Darwin's evolution? Aside from other reasons, it is because it doesn't really explain things such as the complexity of human bodies. It assumes that it's straightforward to get them – and even much more perfect, smarter, and omniscient entities. That's true of Christian or Islamic creationism. But it's true of the "simulation hypothesis", too.

The "simulation hypothesis" just postulates that it's straightforward and trivial to produce trillions of huge quantum computers and run Leo Vyuk simulations on them. But the fact is that it's simply not straightforward and trivial to produce objects that are "much more advanced than us". In any internally consistent probabilistic framework, it's unavoidably "extremely unlikely" that a very intelligent thing is born right away. And that's why it's always much more likely that the complexity of the "dominant life forms" is basically an increasing function of time. We want to deal with as great products etc. as we can get which is why we're not obsessed with restoration of products that existed 800 years ago. The progress has an arrow.

So the "simulation hypothesis" is in conflict with the arrow of progress. And a necessary condition for their thinking that it's "trivial" to start with the assumption of some "really maximally advanced life forms" is the proponents' logically flawed version of the probability calculus. I've mentioned that this flawed parody of the probabilistic calculus suffers from acausality problems; and it imposes logically indefensible and therefore almost always incorrect "equalities" between options. But there are other bugs of this flawed calculus. The probabilities just never add up to one. They are just obsessed with "many things that will exist in the future" and assume that "probabilities are proportional to the number of things" that they end up thinking that the sum of probabilities of all options may be inflated above one, to arbitrary huge values. Sorry but it never can be.

When you ask what an initial bowl of soup with amino acids (and perhaps some bacteria) etc. evolves into after 1 million years, you may get various final states and, with some care, their probabilities add up to 100%. It's clear that some final states similar to the initial one will have some probability. But you will also have "slightly more evolved" species of the bacteria etc. The "simulation hypothesis", on the other hand, wants to count probabilities "from the whole spacetime" in some way, which is inconsistent.

In this way, they may arrive at statements such as "trillions of Vuyk simulations on advanced quantum computers are likely", but there exist no logically consistent axioms that would govern such probabilities. The total volume of the spacetime is or may be infinite. So it's clear that a probability measure with a density bounded from below will have an integral that diverges instead of equaling one. Moreover, even if such a "far future containing" probability measure existed, it would be dominated by the "infinitely complex" things in some asymptotic future, and when you admit it, you can see that the whole logic is acausal, and arbitrarily brutally so.
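In formulas: if the density of the would-be measure is bounded from below by some \(\rho_{\min}>0\), then over a spacetime region whose four-volume \(V\) grows without bound,
\[ \int \rho\,dV \;\geq\; \rho_{\min} V \;\to\; \infty, \]
so no overall normalization can make the total probability equal to one.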

Sorry, but that's not the kind of probability calculus that may be used by a scientist who isn't high. A sane scientist knows that it's always hard and unlikely to produce much more complex life forms (or technological products), so an explanation must be given first. And as long as it can avoid contradictions with the empirical data, a shorter chain of explanations of a given species (one which avoids "super advanced if not divine intermediate steps") is almost always much more likely than a longer one.

The natural history without "Gods and Simulators" is shorter and simpler, and that's really just another reason why it ends up being vastly more likely and preferred as an explanation of a particular species than a longer history with "unnecessarily advanced intermediate steps". This is intuitively due to Occam's razor, but how do we establish such claims by a detailed, logically robust argument? Well, if there is any history with "very advanced" links, e.g. Simulators with trillions of quantum computers, those "advanced intermediate stages" are able to produce many things, including many that are easily distinguishable from the "old life forms". For this reason, the probability that a simulation shows something entirely different from a copy of a natural biological body is basically 100% (the simulation hypothesis basically predicts with certainty that the objects in the simulation will be able to see that they're not biological), and because we observe something that is also compatible with a natural, properly evolved, biological body, this reduces the probability that the right explanation is a "simulation" to effectively zero.

All the people who hide or deny this conclusion are basically refusing to acknowledge that in science, it matters whether the observations look more like the predictions of one hypothesis or another. They overlook all the relevant evidence.

By the way, I've linked to this 4-minute video above. Two years ago, Richard Dawkins and Brian Greene discussed the simulated Universe during the World Science Festival. Greene mentioned Bostrom's argument. Dawkins sensibly said, at the end, something I mentioned above, too: that the pimpled kid who runs the simulation in his garage is unlikely to be disciplined enough to keep all the simulations compatible with all the natural and continuous laws of physics. That's one of the things – but not the only one – that makes the "simulation hypothesis" unlikely.

I can't resist mentioning another thing. In the discussion, Greene acknowledged that this "simulation hypothesis" he was sort of positive about was mostly isomorphic to religion and that Bostrom's argument was a great argument for God. Maybe Brian didn't notice this self-evident fact himself because he suggests that he only learned it from readers of his book (maybe religious readers). Well, I think you should have realized the similarity yourself, Brian, because it's just so obvious.

But the strangeness continued. Brian said that the picture with a "pimpled futuristic kid in a garage" who runs the simulation makes the whole Creation story less mysterious than a picture with an old white God on a cloud who was supposed to perform a similar task, so that's probably why it's more likely, too. Now, I can't believe it. Are you serious, Brian? Whether God has pimples or a garage, whether He writes the date as 3000 AD, and whether He has some ancestors can't possibly seriously influence the probability that we assign to the God hypothesis. Do the pimples matter? The Bible has never claimed that God had no pimples anyway, has it? Or does it matter that we call the "age of the garage owner" the future? Genesis has never claimed that God had no garage. Or that the pimpled kid had parents etc. The Holy Scripture has never claimed anything about the absence of God's ancestry.

Obviously, the very word "future" was used by Brian in a logically inconsistent way. If we were running on a simulation in someone's garage, the life of this guy in the garage would be our present, not the future! What Brian would have said if he were just a little bit more careful is that the life of the pimpled kid who "runs us" may resemble the lives of people whom we could reasonably imagine to exist in the future of our simulation. But he wouldn't be the same entity.

But as I have argued, the very fact that you imagine "something isomorphic to our predictable future" to be a necessary prerequisite that had to exist "in the past" to create us makes the hypothesis extremely unlikely because the whole particular evolution from now to the future (which replaces all of us by the hi-tech pimpled kid) is incorporated as an extra assumption. In some sense, such a hypothesis is as unlikely as a history with a time machine that was constructed because the inventor donated the know-how to himself during a visit from the future. If that's the case, then the time machine had to arise as a whole, so its evolution hasn't really been naturally explained by steps, and that's what makes the hypothesis as unlikely as the sudden appearance of an advanced life form on the newborn Earth.

The degree of logical inconsistency in this thinking seems huge to me. Is it possible that folks like Brian don't see any of the problems of their reasoning?
