Monday, March 27, 2017

Pizza and simulations vs renormalization

Physicist Moshe Rozali has challenged Aaronson's fantasies about the simulation of the Universe. Let me begin with his more comprehensible, traditionalist complaints, to make sure that the number of readers of this blog post decreases monotonically with time:
Incidentally, my main problem with the simulation story is not (only) that it is intellectually lazy or that it is masquerading as some deep foundational issue. As far as metaphysical speculation goes it is remarkably unromantic, I mean, your best attempt as a creation myth involves someone sitting in front of a computer running code? What else do those omnipotent gods do, eat pizza? Do their taxes?
Right. The "universe as a computer simulation" should be viewed as a competitor of Genesis and in this competition struggle, the "simulation" loses to Genesis because it's a superficial kitschy fad, an uninspiring work of socialist realism.



Genesis according to Scott Aaronson. I don't want to revolt against our overlords but the sticky fingers just suck, Ms Simulator. Incidentally, the pizza is a computer case. Click the picture to see a video by Aaronson's twin brother who explains all the details.




Aaronson responded as follows:
You should at least credit it with being a creation myth for our century. Nowadays, it’s hard to be so impressed with stories about gods battling each other with axes or bequeathing humans the gift of fire: why don’t they just use nuclear weapons, and hand out Bic lighters?
You can see a difference in their tastes. Moshe Rozali is a male feminist – beware male feminists – but he still has some respect for traditions and some immunity to the cheesiest fads of the day. After all, the Bible has been around for over 2,000 years and there's no good reason to think that "the universe as a simulation" will come close. On the other hand, Aaronson enthusiastically embraces the P.R. of the day. The Creator should be one of us, a community organizer with hands dirty from pizza and a nose stinking of the cigarettes he lights with Bic lighters, someone who babbles about nuclear weapons even though he has never held an ordinary axe in his hand.

Sorry, but I don't need all the axes in novels, theater plays, and movies to be replaced with nukes, and I think that people such as Aaronson, who simply have to replace all the old tools with fashionable or contemporary ones, have extremely bad taste.




Moshe's real opinion is somewhat ambiguous but, reading between the lines, I think that Moshe agrees that this new idea of "what heroes and gods should look like now" is rather disappointing. Moshe wrote:
Oh, I could imagine many powers I’d want to bestow on my creator (or vice versa), but imagining your deity as someone no better than yourself, with no special powers or insight, does seem like a good creation myth for this century.
And the picture of God as the "average bloke" will become even more typical of the 22nd century if mankind keeps evolving towards idiocracy, which is what it currently seems to be doing.

OK, those were the less technical comments. The rest is – like the first comment by Moshe – about renormalization and related issues.

You know, Moshe basically says that the computer scientists and video game players who say "it's straightforward to simulate the Universe" start with the naive expectation
that the observables you calculate have a finite continuum limit, so at every value of the cutoff you approximate them to a finite precision.
In other words, just like one can shoot a scene with a camera of a certain resolution, these naive people imagine that physics in spacetime may be obtained simply by discretizing the spacetime on a lattice with spacing \(a\) and taking \(a\to 0\). All deviations from the "perfectly smooth world" go to zero in the limit \(a\to 0\), they think.

Well, that's not how modern physics works. The "quantities computed in the discretized approximation" and the "idealized finite quantities in the smooth real world" actually differ by terms that go to infinity as \(a\to 0\). These unwelcome "infinities" have to be subtracted in the definition of the theory. Moreover, at the very end, we may only look at the observables (operators) for which some continuum limit exists at all – and it won't exist for all of them.

So when you simulate the world using some very small lattice spacing \(a\), most of the quantities in your computer program will be divergent, dominated by terms such as the inverse powers \(k / a^m\) for some positive exponent \(m\), where the coefficient \(k\) has pretty much nothing to do with the interesting dynamical observables that describe the "world as we normally understand it". All these leading terms have to be subtracted in some way. If you're lucky, it can be done, and the much smaller leftover pieces will correspond to the energy density of the electromagnetic field or any other quantity you want to talk about.
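To make the bookkeeping concrete, here is a minimal Python sketch of that subtraction. The function `lattice_value` and its coefficients are invented for illustration – they don't come from any real simulation – but they display the generic structure \(k_2/a^2 + k_1/a + F + O(a)\): the divergent pieces dwarf the finite physical constant \(F\), which only emerges once they are subtracted.

```python
# A toy "lattice observable" that diverges as the spacing a -> 0.
# The coefficients k2, k1 and the finite part F are made up for illustration.
def lattice_value(a, k2=7.0, k1=-3.0, F=0.25):
    return k2 / a**2 + k1 / a + F + 0.1 * a   # 0.1*a mimics an O(a) lattice artifact

for a in [0.1, 0.01, 0.001]:
    raw = lattice_value(a)
    subtracted = raw - 7.0 / a**2 + 3.0 / a   # remove the known divergent terms
    print(f"a = {a:6.3f}   raw = {raw:14.3f}   after subtraction = {subtracted:.6f}")

# The raw number explodes like 1/a^2 while the subtracted one tends to F = 0.25.
```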

And the final outcome "it can be done after lots of work" is the lucky case, and it is not guaranteed. There are rather deep problems with the discretization of certain aspects of physics. For many of them, physicists remain uncertain "whether it may be done at all", even if you are willing to do an arbitrarily huge amount of work. The classic problem of this kind is chiral fermions on a lattice. I think that if you organized a poll among lattice gauge theory experts, you would get rather split answers to the question whether "general theories with chiral fermions may be computed completely accurately and universally by lattice methods" at all.

All known elementary fermions – leptons and quarks – are chiral, i.e. left-right asymmetric. The part of the field that evolves like a left-handed screw behaves differently from the right-handed part: they have different electroweak interactions. It's difficult to get this feature from a lattice because a lattice – e.g. a cubic lattice – is clearly left-right symmetric. In the end, the very basic fact that the laws of physics are not left-right symmetric – which has been known for more than half a century – is morally incompatible with the very idea of a discretization or a lattice. The observed violation of the CP symmetry makes things even worse or harder for the lattice.
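The standard manifestation of this trouble is fermion doubling (the Nielsen–Ninomiya business): the naive lattice derivative replaces the momentum \(k\) by \(\sin(ka)/a\), which vanishes not only at \(k=0\) but also at the edge of the Brillouin zone, \(k=\pi/a\), so every would-be chiral fermion comes with an unwanted partner of the opposite chirality. A minimal numerical sketch in Python:

```python
import numpy as np

a = 1.0                                  # lattice spacing (units where a = 1)
k = np.linspace(0.0, np.pi / a, 2001)    # momenta in half of the Brillouin zone

# Naive lattice dispersion of a massless fermion: E(k) = |sin(k a)| / a.
E = np.abs(np.sin(k * a)) / a

# Zeros of E(k) are massless modes.  Besides the wanted one at k = 0,
# a second ("doubler") zero sits at the zone boundary k = pi/a.
zeros = k[E < 1e-3]
print("massless modes near k =", zeros[0], "and k =", zeros[-1])
# Near k = 0 the dispersion is E ~ k; near k = pi/a it is E ~ (pi/a - k) again,
# describing the doubler of opposite chirality.
```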

Even if you succeed in emulating chiral fermions on a lattice, you face additional problems such as gauge anomalies. In theoretical physicists' jargon, anomalies are quantum effects that violate classical symmetries – including gauge symmetries – that should naively hold. But the switch to the quantum theory makes it hard to obey all the symmetries at the same time, and when you add a generic collection of chiral fermions, quantum mechanics strictly implies that the symmetries just can't all be preserved in the quantum theory. The explanation of all these things in terms of the discretized, lattice formalism is very hard.
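To appreciate how delicately the particle content is constrained already in the continuum, here is a quick Python check – using the standard hypercharge assignments of one Standard Model generation – that the \([U(1)_Y]^3\) and the mixed gravitational–\(U(1)_Y\) anomalies cancel exactly. Any would-be lattice simulation has to reproduce this fragile cancellation.

```python
from fractions import Fraction as F

# One Standard Model generation written as left-handed Weyl fermions.
# Each entry: (hypercharge Y, multiplicity = colors x weak-isospin components).
# Right-handed fields enter as left-handed conjugates, i.e. with Y -> -Y.
generation = [
    (F(1, 6),  6),   # quark doublet Q_L: 3 colors x 2 components
    (F(-2, 3), 3),   # conjugate of u_R: 3 colors
    (F(1, 3),  3),   # conjugate of d_R: 3 colors
    (F(-1, 2), 2),   # lepton doublet L_L: 2 components
    (F(1, 1),  1),   # conjugate of e_R
]

cubic = sum(n * Y**3 for Y, n in generation)   # [U(1)_Y]^3 anomaly coefficient
mixed = sum(n * Y    for Y, n in generation)   # gravity-U(1)_Y anomaly coefficient
print(cubic, mixed)                            # both are exactly 0
```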

Let me mention the Casimir effect. Conductive parallel plates at distance \(A\) are predicted to attract with the force per unit area\[

\frac{F_c}{\rm Area} = -\frac{d}{dA}\,\frac{\langle E \rangle}{\rm Area} = -\frac{\hbar c\, \pi^2}{240\, A^4}

\] This force is calculated as a derivative of the energy \(E\) of the quantum fluctuations of the electromagnetic field. The simplest similar example appears in string theory, where the string carries a zero-point energy proportional to\[

1+2+3+4+5+\dots = -\frac{1}{12}.

\] Uneducated people often love to say that it's nonsense and they don't have to pay attention to string theory because a famous crackpot in their city told them so. Well, these ideas don't depend on string theory in any way. You may talk about the well-known 3+1-dimensional world and the Casimir force between parallel plates, which has been experimentally verified. The theoretically calculated energy \(E\) in the formula above ends up being proportional to the sum\[

1^3+2^3+3^3+4^3+5^3+\dots = \zeta(-3)= +\frac{1}{120}.

\] You can see that it's totally analogous to the sum of positive integers except that we get the sum of cubes of positive integers instead (you get them from summing over the momenta \(\vec k\), i.e. the Fourier modes of the electromagnetic field between the plates) – the third power appears because we have three spatial dimensions, and it's no coincidence. Well, in this case the sum is equal to a positive number, but again a finite one and not an integer: \(+1/120\).
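As a brief aside, it's easy to attach a number to the Casimir formula above. A short Python evaluation for a plate separation of one micron (the separation is chosen purely for illustration):

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant in J*s
c    = 2.99792458e8      # speed of light in m/s
A    = 1.0e-6            # plate separation in meters (illustrative value)

pressure = math.pi**2 * hbar * c / (240 * A**4)
print(f"Casimir pressure at a 1 micron separation: {pressure:.2e} Pa")  # about 1.3e-3 Pa
```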

Now, just to be sure, some physicists would agree with me that it's morally right to write that the naively divergent sum of the cubes is equal to \(+1/120\). Others would say that the equation is just heuristic, isn't literally true, and they would offer fixes. But what are the fixes? These fixes would include various additions and complications, and all of those – with the exception of the finite term \(+1/120\) – would exactly cancel at the end whenever you calculated a physically meaningful quantity.

There are many ways to calculate the "regulated" sum of the third powers of the positive integers. They are analogous to the ways to calculate the sum of integers. The cancellations work in various ways and nothing ultimately depends on the way you choose. So the finite residual term \(+1/120\) is the "only thing" that these discretizations and other "rigorous justifications" have in common. For this reason, it makes sense to say that \(+1/120\) is the only physical part of the sum and everything else is an unphysical artifact.
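One concrete way to see this scheme-independence in Python: regulate the sum of cubes with a damping factor \(f(n\epsilon)\), subtract the single divergent piece – whose coefficient is \(\int_0^\infty x^3 f(x)\,dx\), i.e. \(6/\epsilon^4\) for \(f(x)=e^{-x}\) and \(\tfrac12/\epsilon^4\) for \(f(x)=e^{-x^2}\) – and watch both leftovers converge to the same \(\zeta(-3)=+1/120\) that the analytic continuation of the zeta function (here taken from mpmath, my choice of tool) gives directly.

```python
import math
from mpmath import zeta   # analytic continuation of the Riemann zeta function

print(float(zeta(-1)), float(zeta(-3)))    # -0.0833... = -1/12,  0.00833... = +1/120

def finite_part(regulator, divergence, eps, n_max=100000):
    """Sum n^3 * regulator(n*eps) and subtract the known divergent piece."""
    total = math.fsum(n**3 * regulator(n * eps) for n in range(1, n_max))
    return total - divergence(eps)

for eps in [0.1, 0.03, 0.01]:
    exp_reg   = finite_part(lambda x: math.exp(-x),     lambda e: 6.0 / e**4, eps)
    gauss_reg = finite_part(lambda x: math.exp(-x * x), lambda e: 0.5 / e**4, eps)
    print(f"eps = {eps}:  exponential regulator -> {exp_reg:.6f},"
          f"  Gaussian regulator -> {gauss_reg:.6f}")

# The two regulators produce different divergent pieces, but once those are
# subtracted, both leftovers approach the same zeta(-3) = +1/120 = 0.008333...
```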

But in a computer simulation that tries to discretize physics, these unphysical artifacts completely dominate. Most of your RAM would contain "almost infinite", unphysical numbers of this form. Let us look at yet another sufficiently elementary example: the energy density of the electromagnetic field in our Universe – which we try to simulate.

In a 2012 blog post about Feynman's path-integral explanation of the uncertainty principle, I derived that a generic trajectory contributing to the path integral for non-relativistic particles has velocities of order\[

\Delta v \sim \frac{\sqrt{\hbar}}{\sqrt{\Delta t \cdot m}}

where \(\Delta t\) is the minimum time in our "discretization of time", \(m\) is the particle mass, and \(\hbar\) is the reduced Planck constant. You may see that in the continuum limit \(\Delta t\to 0\), the velocity of the particle is infinite at each point. Almost all trajectories – according to Feynman's path-integral measure – are non-differentiable almost everywhere. And this fact (perhaps an "ugly fact" according to some people's arbitrary aesthetic judgement) is absolutely essential for the path integral not to contradict the Heisenberg uncertainty principle, the defining principle of all of quantum mechanics.
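A quick Monte Carlo sanity check of this scaling in Python. For a free non-relativistic particle, the path-integral measure makes the position increments over a time step \(\Delta t\) Gaussian with variance \(\hbar\,\Delta t/m\); that is the only assumption coded below.

```python
import numpy as np

hbar, m = 1.0, 1.0                 # natural units
rng = np.random.default_rng(0)

for dt in [1e-2, 1e-4, 1e-6]:
    # Typical path increments of a free particle are Gaussian with variance hbar*dt/m,
    # so the "velocity" dx/dt blows up as dt -> 0.
    dx = rng.normal(scale=np.sqrt(hbar * dt / m), size=100_000)
    typical_v = np.mean(np.abs(dx)) / dt
    print(f"dt = {dt:.0e}   typical |dx/dt| ~ {typical_v:10.1f}"
          f"   vs sqrt(hbar/(m*dt)) = {np.sqrt(hbar / (m * dt)):10.1f}")

# Up to an O(1) factor, the typical velocity grows like 1/sqrt(dt):
# generic paths contributing to the path integral are nowhere differentiable.
```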

The same argument may be repeated in \(D\)-dimensional spacetimes, and the corresponding "velocities" of the bosonic quantum fields, such as the electric and magnetic vectors \(\vec E\) and \(\vec B\), will scale like\[

|\vec E| \sim \frac{1}{(\sqrt{\Delta t})^D}

It's no coincidence that the power of \(\sqrt{\Delta t}\) is the same one that you obtain from dimensional analysis assuming canonically normalized kinetic terms in the action. Just to be sure, the world around us has\[

D=4

\] large spacetime dimensions, so \[

|\vec E| \sim \frac{1}{(\Delta t)^2}

\] and the magnetic vector \(|\vec B|\) scales in the same way. What happens if you substitute it into the energy density of the electromagnetic field?\[

\rho = \frac{ |\vec E|^2 + |\vec B|^2 }{2}

\] You will obviously get\[

\rho\sim \frac{1}{(\Delta t)^4}

\] The energy density of the electromagnetic field diverges and scales in this way. Imagine that you have a computer program that discretizes reality in a similar way and you want to know the energy density of the radio waves coming from a nearby antenna or something like that. You would think that the answer is proportional to the density of the electromagnetic energy, except that if you substitute the actual typical histories – or, equivalently, the operators for the electric and magnetic vectors – you will get a leading term that scales like this and diverges for \(\Delta t \to 0\).
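You can watch this divergence appear on an actual lattice in Python: sum the zero-point energies \(\tfrac12\hbar\omega_{\vec k}\) of a free massless field over the modes of a periodic \(N^3\) lattice with spacing \(a\) and divide by the volume. Halving the spacing multiplies the vacuum energy density by roughly \(2^4=16\). (The nearest-neighbor lattice dispersion used below is a standard but illustrative choice.)

```python
import numpy as np

def zero_point_density(a, N=16, hbar=1.0, c=1.0):
    """Vacuum energy density of a free massless field on an N^3 periodic lattice."""
    n = np.fft.fftfreq(N) * N                    # integer mode numbers of the lattice
    nx, ny, nz = np.meshgrid(n, n, n, indexing="ij")
    # Nearest-neighbor lattice dispersion: omega = (2c/a) sqrt(sum_i sin^2(pi n_i / N))
    omega = (2 * c / a) * np.sqrt(np.sin(np.pi * nx / N) ** 2
                                  + np.sin(np.pi * ny / N) ** 2
                                  + np.sin(np.pi * nz / N) ** 2)
    E0 = 0.5 * hbar * np.sum(omega)              # sum of the zero-point energies
    return E0 / (N * a) ** 3                     # divide by the lattice volume

for a in [1.0, 0.5, 0.25]:
    print(f"a = {a:5.2f}   vacuum energy density = {zero_point_density(a):12.3f}")

# Each halving of the lattice spacing multiplies the density by 2^4 = 16:
# the density diverges like 1/a^4, exactly the scaling discussed above.
```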

In this case, it doesn't mean that the finite physical result cannot be obtained from a lattice calculation. It may be obtained. But you need to know what you're actually calculating. You need to know that your computer simulation is basically "overwhelmed by infinities" at every point but there is a clever "pattern in the infinities" or a clever way to subtract various infinities in such a way that the leftover resembles the "reality as we conventionally imagine it".

In the case of the energy density, the divergent piece is nothing other than the contribution of the harmonic oscillators' zero-point energies \(E_0=\hbar\omega_{\vec k}/2\) in momentum space, attributed to each point of position space (or each lattice site). It can be subtracted. It's more natural to consider supersymmetric theories where bosonic fields and their superpartners, fermionic fields, produce exactly cancelling contributions to the zero-point energies. Supersymmetry is pretty and at least reduces the dominance of the unphysical infinities – but that's also why supersymmetry itself is at least "hard" on the lattice, too. The opposite attitudes of physics and of computer simulations towards supersymmetry are just one major example of the fact that physical and computer-science principles seem to be in strong tension with each other, to say the least.
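A toy illustration of that cancellation – just the textbook counting coded for concreteness: pair every bosonic mode of frequency \(\omega\) (zero-point energy \(+\tfrac12\hbar\omega\)) with a fermionic partner of the same frequency (zero-point energy \(-\tfrac12\hbar\omega\)), and the huge, cutoff-dependent vacuum energy cancels mode by mode.

```python
import numpy as np

rng = np.random.default_rng(1)
omegas = rng.uniform(0.1, 10.0, size=100_000)   # stand-in mode frequencies (hbar = 1)

E_bosons   = np.sum(+0.5 * omegas)   # bosonic zero-point energies: large and cutoff-dependent
E_fermions = np.sum(-0.5 * omegas)   # superpartner fermionic modes enter with a minus sign

print(E_bosons, E_bosons + E_fermions)   # the second number is exactly 0.0
```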

Perhaps you could compare the generic situation in the discretization or simulation of the physical world to a film that is completely dominated by excessive brightness or by some very strong noise, but that still allows you to subtract the brightness or noise in a clever enough way to see the ordinary movie hiding "somewhere" inside the seemingly unusable footage. Yes, all these things – which Moshe calls post-processing – can be done, but the user of the discretization or simulation must know what he should do and why. You may say that the user is nothing other than an observer in the quantum mechanical sense, and observers have some cool talent for picking the physically relevant observables that have successfully jettisoned the unphysical divergent pieces.

The addition of all the divergent artifacts of the lattice isn't "physically natural" in any way – and the methodology to deal with them is in no way unique. There are infinitely many ways to regularize a quantum field theory – and the diversity gets even wider and more technical because of the plethora of "renormalization schemes" you may choose from – and we're never doing these things for the sake of the simulation itself. We're doing these calculations because of the result that all the simulations, discretizations, or renormalization schemes have in common.

In other words, all the specific additions of a particular discretization or simulation must be understood as garbage that we're not interested in and want to throw away. It's just totally wrong to assume that these artifacts of the regularization are "fundamental" in any sense.

I want to end with a reaction to the last paragraph of Moshe's first comment about the renormalization issues in a discretization:
So my point in all that is highlight that what you mean by simulation is different from just discretizing your model and taking the results as approximations to the true physical quantities. It is only this narrow definition of “simulation” which I think is incompatible with known low energy properties of the world. The full process, including post-processing, does give you finite approximation to physical results.
I agree with the statements as Moshe wrote them but I disagree about the relevance of the last one. To be more specific, I agree with him that the "simulation without post-processing" cannot work at all and that the "simulation with post-processing" can be done (assuming that the chiral fermions, anomalies, and other technicalities won't stop you). But I disagree with the implicit suggestion that because the "simulation with post-processing" is possible, the hypothesis that our Universe is such a simulation is viable.



Well, one of the conventional ways to argue while saving time is to embed one of my favorite Feynman videos. Here, in the video about flying saucers, Feynman rightly reminded us that the purpose of science isn't to declare things possible or impossible all the time. Instead, science says that some things are more likely and other things are less likely. That's how the scientific approach operates.

The "simulation with all the post-processing" that Moshe basically claimed to be doable is indeed "possible". But what's more important is that as a physical theory, it remains extremely unlikely. The reason and the logic are absolutely analogous to the case of flying saucers that Feynman discusses in this video. Can you prove that it is impossible that there are flying saucers? Can you prove that it is impossible that we're living in a simulation?

No, I can't prove it, but it's just very unlikely. (I would mock the intonation of the arrogant laymen – e.g. Aaronson – in the same way Feynman did.) In the case of the simulation, it's unlikely because if someone writes a computer game (or shoots a movie), it's very likely that he won't deal with all the renormalization issues correctly, i.e. he won't preserve the agreement with effective quantum field theory. After all, can you show me at least one Hollywood film director – or even one programmer of first-person shooter games – who can calculate quantum field theory in at least two renormalization schemes?

And the Hollywood folks' mastery of renormalization techniques in quantum field theory is getting worse, not better, so the technological improvements of computers don't help at all. For this reason, it's much more likely that if someone wrote a simulated world, it would only follow a caricature of the laws of Nature – much like Hollywood disaster movies only respect caricatures of the physical laws – and if that were so, we would be able to notice these violations of physics.

We haven't seen any, which makes it extraordinarily likely, almost certain, that our world is natural and not a simulation written by anybody who even remotely resembles the currently active programmers or filmmakers. Period.
