**...up to all orders in the perturbative expansion...**

The first hep-th paper today, *On the Singularity Structure of Maximally Supersymmetric Scattering Amplitudes*, was written by Nima Arkani-Hamed, Jacob L. Bourjaily, Freddy Cachazo, and Jaroslav Trnka. It exploits the vast knowledge of the \(\NNN=4\) gauge theory amplitudes that the authors accumulated while they were successfully uncovering new methods to calculate them – via recursive relations, twistors, Grassmannians, amplituhedra, and similar transcendent animals.

In this paper, they claim that one-loop, two-loop, and multi-loop amplitudes have a rather special form: all power-law singularities at infinity cancel out.

They observe that the singularities want to be of the "dlog" type, i.e. the integrands resemble\[

{\rm d}\log \alpha_1 \wedge {\rm d}\log \alpha_2 \wedge {\rm d}\log \alpha_3 \wedge \dots=\\

\frac{{\rm d}\alpha_1}{\alpha_1}\wedge \frac{{\rm d}\alpha_2}{\alpha_2}\wedge \frac{{\rm d}\alpha_3}{\alpha_3}\wedge \dots

\] where \(\alpha_i\) are (not completely arbitrary) ratios of polynomials of the external momenta.
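As a minimal sanity check of the identity above – purely an illustration with sympy, not anything from the paper – one can verify that \({\rm d}\log\alpha\) has the coefficient \(1/\alpha\), i.e. only a simple pole at \(\alpha=0\), which is why wedge products of such forms carry only logarithmic singularities:

```python
import sympy as sp

alpha = sp.Symbol("alpha", positive=True)

# The coefficient of d(alpha) in d(log alpha) is 1/alpha: a simple pole.
# Wedge products of such one-forms therefore produce only logarithmic
# singularities, never higher-order (power-law) poles.
coeff = sp.diff(sp.log(alpha), alpha)
print(coeff)  # 1/alpha
```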

The evidence supporting the absence of the singularities to all orders is circumstantial but the claim is likely to be right. What is much harder for anyone who isn't as experienced as the authors is to determine whether this important but seemingly technical result is really hard or not. Can't it be proven by a simple rigorous proof? Isn't it easily seen to be equivalent to some other property of the theory or a combination of two or several properties?

Note that the poles may be *a priori* expected at higher loops. This boils down to the fact that the function\[

\log (X+g\cdot f(X))=\dots

\] may be expanded around \(g=0\). The leading term is obvious while the next one arises as a derivative, because of the usual rules of the Taylor expansion:\[

\dots = \log(X) + g\frac{f(X)}{X} +\dots

\] In principle, the higher derivatives can produce higher powers, i.e. higher-order singularities. If those terms cancel, it may be related to ultraviolet finiteness (or really fancy properties involving transcendentality), but it may also be linked to various more general non-renormalization theorems. In the toy example above, \(f(X)=0\) means that the argument of the logarithm wasn't shifted as you increased \(g\).
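The toy expansion above can be checked symbolically. In this sketch (again just an illustration), the symbol `F` stands for the value \(f(X)\), which doesn't depend on \(g\); the second-order term exhibits the higher power of \(X\) in the denominator that is the potential source of higher-order singularities:

```python
import sympy as sp

X, g, F = sp.symbols("X g F", positive=True)

# Expand log(X + g*F) around g = 0; F plays the role of f(X).
expansion = sp.expand(sp.series(sp.log(X + g * F), g, 0, 3).removeO())

# Leading term log(X), then g*F/X, then -g**2*F**2/(2*X**2):
# each order brings a higher power of X in the denominator.
print(expansion)
```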

The number of people in the world who can calculate the \(\NNN=4\) amplitudes using the extremely modern methods is comparable to a dozen or at most two dozen. Maybe their special expertise is needed to prove the conjecture in the paper, or perhaps even to sensibly estimate the degree of its importance. But maybe this special expertise isn't needed – and dozens if not hundreds of physicists in the world have a chance to clarify whether the hypothesis is right and, if it is, where it comes from.

This "uncertainty about the depth" seems present in this whole "industry of amplitudes". Science in principle wants to find and apply rules to predict, explain, and calculate things, and if they involve some difficult, unintuitive mathematics, that's how things are. (People who demand scientific theories to be "psychologically pleasing" for them are just wrong.) Whether the usual Feynman rules to calculate the amplitudes are intuitive is already a matter of taste. But if one studies a whole alternative framework for the calculation, it's much harder to become sure whether the alternative framework is really "vital" for anything or whether it is a curiosity of a sort – like learning Chinese. I assume that Chinese pushes you to express things differently; some things are easier, some things are harder. But in the end, it's pretty clear that the Chinese may perform the same "broader or general tasks" that we can, and vice versa. Isn't the same statement true about the unusual new quasi-Feynman rules to calculate the amplitudes?

Some results about the (nearly) maximally helicity violating (MHV) amplitudes are clarified by the new twistor-plus-recursive approaches – so they are giving us some "new human understanding". However, is that true about all the ramifications of the twistor minirevolution (or "uprising", as e.g. David Gross would say)? Shouldn't e.g. the amplituhedron paradigm produce straightforward proofs of similar claims about singularities? Maybe it does, doesn't it? How sure can one be that he isn't getting lost in an ocean of non-essential technicalities?

## snail feedback (29):

Not so long ago I studied Mandarin with Chinese teachers one-on-one via Skype twice a week and enjoyed it very much, but what I crucially failed (or could not manage) to do was to immerse myself in the language between lessons.

Hence, I wonder if my underestimation of the difficulty of acquiring Mandarin can be used as a weak or muddy metaphor for what almost every mathematical physicist is now up against;

IOW: Are Nima, his co-authors, and a handful of others either 1. an endangered species of scientists, or

2. will they speciate away from 'the rest' of this already highly specialized subspecies (of the mostly silly homo sapiens sapiens sort of simians) – including the very best of "the rest" (where I place you, Lubos)?

Am slightly worried that 1. is more likely than 2.

Just a typo: in the first equation it should be da_i / a_i = d log a_i, I think.

Right, Peter, there are surely lots of unexpected difficulties one collides with when he learns Chinese – or anything else that is complex enough. But I still think that there exists, or should exist, at least some vague way to distinguish things that are deeply difficult from things that are just technically difficult.

Endangered species, indeed. Nima is surely doing well and perhaps even his collaborators do, too. I am worried that there is a clear trend in the society that it doesn't really want to "grow" and "see" similar people in the future.

Thanks, fixed. I have a TeX macro "ddfrac" that types the "d" automatically in the numerator and the denominator, to write a derivative, and failed to realize that this wasn't what I wanted! ;-)

What would it mean from a physics point of view, if the hypothesis is right ...?

Dear Dilaton, it was exactly my point that it's pretty hard to explain in "words" what the consequence of the finding is if it is true.

There are some singularities in the functions representing the cross sections or amplitudes, they have various types, and each singularity means that something special is going on. Only the "most moderate" type of a singularity appears at each order.

My inability to say it in any other words that would avoid this mathematical terminology may mean that the result is just a relative technicality, or it may mean that I (or even they) don't fully understand it.

Exciting paper, Lubos. I'm happy to have it brought to my attention on a day I check the TRF feed before the arxiv feeds, and I'm very glad not to be the only one excited about this.

As far as I know NAH et al would claim that the amplituhedron provides a convincing geometrical account of what happens – probably not using the word proof – for the planar theory, but it would not apply to the non-planar theory investigated here.

The amplituhedron seems like a phenomenal insight, and I have high hopes that NAH will also be able to figure out what the analogous story is for the non-planar theory, hopefully allowing for some better understanding of all the "miracles" lying around.

I'm personally still investing a lot of energy in learning standard QFT, in particular taking the edX class on effective field theory going on right now...

Agreed, Cliff!

The amplituhedron looks like a big enough beast to squeeze into one's head and I think that non-authors of these papers need lots of time to learn it. But maybe one should just trust that what's been said to work indeed works, and go beyond it. It's plausible that the amplituhedron may be generalized naturally to get all the non-planar, and perhaps non-perturbative, contributions, and that this means something clear.

Many of the previous pictures seemed unready to deal with the non-planar part. But there still seem to be exceptional properties of the N=4 theory over there.

I'm glad I'm not the only one who didn't see anything essentially new in Zurek's article. Already in equation 1 he splits his world into a system S and a measurement apparatus. Then the system doesn't change after measurement and the apparatus does. Even though he calls his measurement apparatus 'quantum', to me this looks very similar to the Copenhagen interpretation, where you need a classical measurement to collapse the wave function. If he claims to be building a fully quantum theory, with no need for classical observers, why does he need this split between the system and apparatus, where only the apparatus is changing?

That touches something that I'm interested in. Is there something conceptually important for strings in all of this? Forgive me, for I haven't yet read the paper, but Nima seems to be more interested in extending the QFT point of view without touching stringy stuff. That's why I was confused a bit the first time he said that this was the result of "going beyond dualities" (obviously he talked about dual conformal symmetry, but from his advertisements one would expect going beyond AdS/CFT). He mentions supergravity in the abstract but I suspect this is about supergravity as "squared" SYM?

My message could seem a bit aggressive towards Nima. While in reality I'm very excited about his work and eager to see whether it could give us something bigger.

Totally agreed. It's really exactly the same split that the Copenhagen folks would talk about. Whether the new (Zurek's) things said about the split are "more correct" is partially up to one's taste. The probabilities describing properties of the Copenhagen apparatus were also meant to behave as the "probabilities in a classical theory". Whether the fathers of QM would ever realize that this "classicality" is equivalent to the "diagonal form of the density matrix" in the right basis – and they arguably knew the right basis in every situation – is an open historical question for me.
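The equivalence of "classicality" and the diagonal form of the density matrix can be illustrated with a toy numpy sketch (entirely illustrative: the exponential damping and the timescale `t_dec` are assumptions of the example, not part of any particular model). Dephasing in the right (pointer) basis suppresses the off-diagonal coherences while leaving the diagonal entries – the classical probabilities – untouched:

```python
import numpy as np

# A toy qubit density matrix: the pure superposition |+><+|.
rho = 0.5 * np.array([[1.0, 1.0],
                      [1.0, 1.0]])

# Dephasing in the pointer basis multiplies the off-diagonal elements
# by exp(-t/t_dec); t_dec is an illustrative decoherence timescale.
def dephase(rho, t, t_dec=1.0):
    damp = np.exp(-t / t_dec)
    out = rho.copy()
    out[0, 1] *= damp
    out[1, 0] *= damp
    return out

rho_late = dephase(rho, t=20.0)
# Diagonal entries (classical probabilities 0.5, 0.5) are unchanged;
# the coherences are exponentially close to, but never exactly, zero.
print(np.round(rho_late, 6))
```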

Dear OON, quantized GR is inconsistent at loop level and supergravity, even the maximally supersymmetric one, is inconsistent non-perturbatively without the right, stringy/M completion.

Nima himself recognizes that those things are trying to build a "third pillar" of the AdS/CFT/his-stuff duality/triality, and it's just the on-shell part.

The stringy insights or machineries are virtually unused here.

But my point is really that this whole machinery of producing the amplitudes may be just "another vacuum of string theory" or some of its straightforward generalization or cousin.

Among the papers that are out there, you may find many by Freddy Cachazo (who is on today's paper as well) that get very far in making this interpretation explicit. See e.g.

http://arxiv.org/abs/1206.6511

http://arxiv.org/abs/1207.0741

Thank you for your answer, I'll try to read the references you gave. What I was thinking of was connecting their picture with some stringy objects that AdS/CFT gives for the strong coupling. I mean something in the spirit of e.g. this paper of Maldacena and Berkovits,

http://arxiv.org/abs/0807.3196v1

Good points. Maldacena+Berkovits are more general in the sense that they talk about the whole off-shell N=4 gauge theory, and the symmetries of the scattering (on-shell) amplitudes are just a special case or a "reduced" consequence.

On the other hand, the business of Nima et al. is intrinsically on-shell. If this is an unavoidable part of the framework, then they really don't study any theory of gravity e.g. in the new paper because gravity only emerges from the full *off-shell* gauge theory. On-shell reduction "kills" one more dimension while holography has to use all the dimensions you have plus add one. So it's not surprising that all the "quantum-gravity(in-the-spacetime)-related" insights of string theory seem unused in this gauge scattering amplitude business.

OT but have you seen Edward Frenkel's new popular science book, Love & Math, and if so, what do you think of it? I thought it was great. It gave me -- me, a humanities major! -- a first inkling of what gauge theories, sheaves, special unitary groups, dualities, and the Langlands program are all about. Also, Frenkel's personal story of being discriminated against as a Jew in the old Soviet Union, and finding a way around it, is a fun one.

But the result is not surprising. I mean the theory is believed to be finite at all orders anyway; or am I missing something?

It's not that off topic; The connection of the (Geometric) Langlands duality to Physics is based on the S-duality of N=4 MSYM :-)

"The shortest path between two truths in the real domain passes through the complex domain," Jacques Hadamard.

Don't worry. They just get born. Of course, the next collider, that's a different matter.

Very clear and concise explanation of decoherence. I wish I had taken my QM course from you, but then again, you had not been born quite yet and were just an idea in the Mind of Hilbert Space...

Hmm, after reading your excellent didactic article, I was served ads promoting "Quantum Jumping" -- more Yogic twaddle :) It does seem that declaring something to be a new idea often just requires inventing some jazzy new words for the same thing -- preferably something that sounds deep and includes "information" and "quantum". Yes, I do wish that people would stop referring to QM as "weird" because it makes it seem mystical and supernatural and brings out the fruit bats and scam artists.

Thanks a lot, Gordon.

Nice and clear explanation of decoherence.

I know you have robust views on the 'naturalness' of QM, and describe anyone who thinks QM is a bit strange as some kind of uneducated moron (or words to that effect), but I will have to put myself unashamedly in this category because I do think there are 'problems' with QM.

It seems to me that the fundamental issue boils down to how we get irreversibility from a theory in which everything evolves unitarily. We can apply the same kind of fudge as in statistical mechanics to give us the 'arrow of time' - which is essentially the decoherence approach, but there are still (for me anyway) unanswered questions. When does a measurement actually occur? How much time has to elapse - of course strictly speaking the off-diagonal elements never decay precisely to zero. How big does our measuring system have to be?

QM is logically consistent, provided we just accept 'measurement', but decoherence can never be more than a FAPP 'resolution'. If everything in the universe is described quantum mechanically then measurements (in the sense of the projection postulate) never happen!

I don't see how interpreting a quantum state as being related to subjective knowledge helps much here either - we have a mathematical entity that is purportedly giving us 'what can be subjectively known' that obeys a time evolution equation that is only a Poisson bracket/commutator away from classical mechanics. Personally I find that a bit weird for something that is so subjective. On the other hand thinking of the state vector as somehow being 'real' carries a whole host of difficulties that are even worse.

Perhaps at some point you could explain (probably once again) for all of us morons why we would expect 'states of knowledge' to interfere and why our 'state of knowledge' lives in a complex Hilbert space, and why things which act upon our 'state of knowledge' do so in a linear fashion. I don't have any natural explanation that works for me.

Hi Simon, it is not true that "everything" evolves unitarily in QM. The wave functions - or the operators in the Heisenberg picture - evolve by unitary transformations - or the conjugation by a unitary operator in the Heisenberg picture.

But the wave function and the operators aren't "objects in the real world". They are mathematical objects that are interpreted as knowing something about properties of objects.

If I comment on the Schrodinger picture, the wave function is a complexified form to calculate probabilities and probability distributions. The predicted probability tells us that an observable (property) is uncertain at time "T minus epsilon", but it becomes certain at "T plus epsilon", and what it becomes is given randomly according to the predicted probabilities or probability distributions.

Note that this detailed description of what the probability means is actually completely time-reversal-asymmetric. While the wave function (and similarly the phase space probability distribution in classical statistical physics) undergoes unitary evolution (or the Liouville equation evolution, which is also "time-reversal-symmetric" in a way), the *interpretation* of the wave function or the probabilities is completely time-reversal-asymmetric.

Probability determines the result, observed at a later time, of something that was uncertain at an earlier time. Probability never determines the known value, at an earlier time, of an observable that becomes uncertain at a later time!

There is absolutely nothing strange or problematic about this time-reversal-violating meaning of the wave function or probabilities or probability distributions. Instead, it is absolutely essential for *any* science or rational thought, for the way how Nature operates, and your suggestions that there is some problem about it are totally and absolutely irrational.


Yes, but haven't you assumed that a measurement occurs here? Still doesn't really tell me much about what a measurement is. I agree with what you say about the asymmetry - but that asymmetry only arises upon this thing we call 'measurement'.

If no 'measurement' happens then we can predict or retrodict with equal success - we can start with |psi(0)> and predict |psi(t)> or we can go the other way. The asymmetry here applies to statements about probabilities - so we have to say "IF I measured such and such an observable at time T then these would be the probabilities of obtaining the eigenvalues". Of course once a measurement is performed and we know our result then we'd have to use the relevant eigenstate to describe things mathematically (or if we remained ignorant of the result we'd have to use the mixed state description).

As far as I know the 'full' description of QM involves the evolution of state vectors (or observables if you want to work in the Heisenberg picture) - yes, of course we can write down an expression for the evolution of probabilities, but that's not the whole picture in QM - everything interesting is happening at the level of the evolution of the amplitudes.


:-)

quite probably - but I still don't find the fact that we have to describe everything in terms of vectors in a complex Hilbert space with associated linear operators to be immediately intuitively obvious!

Dear Simon, if you check my comment, you will see that I haven't used the word "me*surement" once. I haven't used the word because I didn't need it for anything. It is only used by confused people like you to make themselves even more confused. For example, your latest comment contains 6 copies of the word.

But yes, you may use the M-word for the event before which the observable was uncertain and after which it is certain. Have I assumed that there is such a moment? Yes, and I had to, because I was asked a question about the situation around that moment. So I had to assume that the moment exists.

What's your problem? If I am asked a question about Barack Obama, I must also assume that Barack Obama exists.

To say that "me*surements exist" in this sense is an absolute triviality. It just means that one is often uncertain about a property of the external world while he learns what the property is right afterwards. It's great. One may sell it as a mysterious thing. But it's surely essential that this "phenomenon" of learning something about the real world exists, otherwise it wouldn't be possible to talk about anything in the world, right?

:-)

Indeed you did not explicitly say 'measurement' - but it was implicit. At one level I agree with you - after all, it's kind of 'obvious' when a measurement is performed - the measuring device goes 'ping'. The problem is really how that can be made consistent with a mathematical description in which the fundamental entities (the states) evolve unitarily. If our system is described by QM and the measuring device is described by QM, then they interact - where does the irreversibility come from? How big does our measuring device have to be to ensure that we get something approximating a 'measurement' as described by the axioms of QM?

At some point this M thing happens - the machine goes ping - it certainly doesn't happen in the maths but as an extra thing we tack on - one of the axioms of QM. I find it a bit weird that we need 'classical' physics to formulate QM (we need to assume a 'classical' measurement) - because whatever this M thing is it can't be described within a unitary formalism without doing some kind of coarse-graining approximation a la decoherence.

I would like to say that the M thing happens whenever some information about what we've measured exists in a classical form - as a classical bit requiring energy to erase - but that's a bit vague and woolly too.

But as you said before - all of this is mostly window-dressing - it's pretty clear that "shut up and calculate" is really what's important. Just assume the axioms and get on with it.

I wrote a new blog post answering all these elementary questions.
