Saturday, August 01, 2009

Why perturbation theory remains paramount

Breaking news: Jim Parsons wins the Television Critics Association's 2009 award in the category of individual achievement in comedy. Congratulations to my imitator!
What is the method

Perturbation theory is an approximate method that describes a physical system in terms of a similar, simple enough, solvable, and often non-interacting physical system, "H_0", and a small perturbation of this system, "lambda V".

If you assume the letters to represent the Hamiltonians, or the formulae for the total energy (and evolution) of the systems, the total Hamiltonian is
H = H_0 + lambda V.
Here, the perturbation, "V", is normalized in such a way that it is comparable to "H_0". On the other hand, "lambda" must be smaller than one. Ideally, it should be much smaller than one.

While "H_0" may be solved exactly, the full "H_0 + V" cannot. But there exist sophisticated mathematical techniques that allow us to quantitatively answer any question about "H" by calculating appropriate Taylor expansion in "lambda". We are continuously moving from "lambda=0", which corresponds to "H_0", to the actual value of "lambda", which corresponds to "H".

Historical successes

Centuries ago, such techniques were used e.g. for calculations of the Earth's orbit under the influence of other planets such as Jupiter. The influence of the Sun had been understood since the triumphant achievements of Kepler and Newton: our star directs the planets to move along exact ellipses. The attractive force from the other planets may be added as a perturbation. The first nontrivial term in the perturbative expansion is usually everything you need.

Not only the influence of Jupiter but even the effects of Einstein's general relativity could have been added as perturbations to the simple Kepler system that was solved by Newton using his (or their) newly developed differential calculus. Einstein was able to predict the precession of Mercury's perihelion. That's quite a general lesson: the predictions of a new, more accurate theory for a familiar situation can often be expressed as the old, approximate theory with some new perturbations being added.

Perturbation theory became even more important with the boom of quantum mechanics. It has been used to calculate the spectra, cross sections, and many other things. Quantum field theory has really escalated this process because perturbation theory became the main technical procedure, the key "routine", behind most of the calculations. Feynman diagrams became the ultimate symbol - and the main actual tool - of perturbation theory in quantum field theory.

The perturbative method plays a similar role in string theory, too. For quite some time, it was actually the only method to calculate things in string theory (via the stringy Feynman diagrams, i.e. the Riemann surfaces representing the joining and splitting worldsheets). I will postpone the discussion of the last 15 years - which were dominated by "nonperturbative research". But before we get there, we must understand the limitations of this method.

Why perturbative expansions are not the whole story

Many functions, including "sin(x)", "exp(x)", "sqrt(1+x^2)", and many others can be written in terms of power series expansions, i.e. Taylor series. These expansions converge to the desired result. You might think that this is a general fact: the correct answer may always be obtained arbitrarily accurately if you sum a sufficient number of terms in the Taylor series, at least if you're within the convergence radius.
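As a quick illustration of the convergence radius, here is a small sketch with "sqrt(1+x^2)" as the guinea pig; its complex singularities at "x = ±i" put the radius at 1.

```python
# Partial sums of the Taylor series of sqrt(1+x^2) at x=0. They converge
# for |x| < 1 and blow up for |x| > 1: the radius of convergence is 1,
# set by the complex singularities at x = +i and x = -i.
from math import sqrt

def partial_sum(x, N):
    # sqrt(1+u) = sum_k C(1/2, k) u^k with u = x^2; the generalized binomial
    # coefficient C(1/2, k) is built up by its product recurrence.
    total, coeff = 0.0, 1.0
    for k in range(N):
        total += coeff * x**(2 * k)
        coeff *= (0.5 - k) / (k + 1)
    return total

for x in (0.5, 2.0):
    print(x, sqrt(1 + x**2), [partial_sum(x, N) for N in (5, 10, 20)])
# x=0.5: sums approach 1.1180...; x=2.0: sums run away from 2.2360...
```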

However, this proposition is untrue. Equivalently, it is strictly speaking true but the convergence radius is typically zero in quantum field theory and string theory. If you want some terminology - which is really nothing more than terminology - such divergent Taylor expansions are called asymptotic series. Why do they fail to converge to the right result?

First, consider e.g. the function "exp(-1/x^2)". Clearly, it is very small if "x" is small because "x^2" is small and "-1/x^2" is large and negative, producing a tiny exponential. We define the function to equal zero at "x=0", in order for it to be continuous. However, the function is clearly nonzero for all other values of "x".

Now, let's try to calculate the Taylor expansion. The function itself vanishes at "x=0". What about its derivatives? Well, if you think for a while, every derivative keeps the exponential factor while adding various ratios of polynomials in front of it. But near "x=0", the exponential goes to zero faster than any power of "x" can go to infinity, so it wins in the product: the "Nth" derivative vanishes at "x=0" for every "N"!

In other words, the Taylor expansion of the function is "0+0x+0x^2+...". It is zero even though the exact function is clearly not.
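You can verify this symbolically; a short sketch using sympy:

```python
# Every Taylor coefficient of exp(-1/x^2) at x=0 vanishes, even though the
# function itself is nonzero everywhere else.
import sympy as sp

x = sp.symbols('x')
f = sp.exp(-1 / x**2)

for N in range(6):
    # Each derivative is (rational function of x) * exp(-1/x^2); the
    # exponential wins as x -> 0, so every limit below comes out zero.
    print(N, sp.limit(sp.diff(f, x, N), x, 0))
# output: 0, 0, 0, 0, 0, 0 -- the Taylor series is 0 + 0x + 0x^2 + ...
```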

Divergences in QFT and ST

The second point we mentioned is that the convergence radius is typically zero. Why? Imagine that you have a quantum field theory, such as the "lambda phi^4" theory with a scalar quartic self-interaction in "d=4", and you compute the Feynman diagrams at "L" loops.

One can see that the number of diagrams with "L" loops will be close to "L!" times subleading factors. So the coefficient of "lambda^L" in the Taylor series will be dominated by the factor of "L!" which diverges faster (for very large "L") than any "lambda^L" can suppress it. Recall that by the Stirling formula, "L!" is close to "sqrt(2 pi L) (L/e)^L", and the factor "L^L" is what makes it larger than ordinary powers or exponentials.

(The number of diagrams goes like "L!" in any dimension. There are also integrals over the momentum space inside the L-loop amplitudes. Their behavior and/or renormalizability depends on "d" but in this discussion, we are interested in the "combinatorial factors", purely numerical coefficients that don't depend on "d" and that are actually more important for the "extremely large L" behavior.)

However, you may consider a small value of "lambda" and try to achieve the maximum accuracy in your computation of the amplitudes: sum all the initial terms of the Taylor series, up to the minimum one (where they start to blow up again, because of the growing "L!"). It turns out that the minimum term is roughly the "Lth" one where "L=1/lambda", and this term is comparable to "exp(-C/lambda)" where "C" is a constant. You don't know exactly whether you should still add this smallest term (and the next one) or not, so "exp(-C/lambda)" is the minimum error you will have.
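Here is a schematic numerical version of the argument; the toy series pretends that the "L"-loop coefficient is exactly "L!" (real theories add subleading factors) and sets "C=1".

```python
# Toy asymptotic series: term_L = L! * lambda^L. The terms shrink until
# L ~ 1/lambda and then blow up; the smallest term sets the minimum error.
from math import factorial, exp

lam = 0.05
terms = [factorial(L) * lam**L for L in range(60)]

L_min = min(range(60), key=lambda L: terms[L])
print("smallest term at L =", L_min, "; compare 1/lambda =", 1 / lam)
print("size of smallest term:", terms[L_min])   # ~ 2e-8
print("exp(-1/lambda)       :", exp(-1 / lam))  # ~ 2e-9: equal up to a power-law prefactor
```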

Note that "exp(-C/lambda)" is very small. Asymptotically for small "lambda", it is smaller than "lambda^L" for any fixed exponent "L". And this estimate, "exp(-C/lambda)", is also the magnitude of the largest nonperturbative contributions to the same quantities. In the case of the scalar quartic self-interaction, nonperturbative physics is actually ill-defined, as we will explain.

However, the previous paragraphs may be applied to pretty much every coupling constant in quantum field theory and string theory. You must be careful what "lambda" means because in many cases, it means the squared coupling constant. But once you do the map properly, you get the right estimates for the high-loop coefficient as well as the nonperturbative terms.

The correct "lambda" should be compared to the following functions of coupling constants in important theories:
lambda_{quartic scalar} ≈ y^2_{Yukawa} ≈ e^2_{QED}
≈ g^2_{Yang-Mills} ≈ g_{closed string} ≈ g^2_{open string}
This uniform treatment of scalar couplings, Yukawa interactions, gauge theories including QED, open strings, and closed strings is just about a style of presentation (due to your humble correspondent) that unifies your reasoning about all the cases.

You can see that in gauge theories, the leading nonperturbative corrections are predicted to be close to "exp(-C/g^2)", matching the effect of solitons (such as magnetic monopoles) and instantons (in some sense, loops with monopoles). In string theory, the winners are close to "exp(-C/g_{closed})" which corresponds to D-branes and D-instantons, as expected from the "(2L)!" behavior of the L-closed-loop diagrams.

This function of "g" looks greater than the result in Yang-Mills theory - and people often say that the stringy nonperturbative effects are "larger" than they are in field theory - but if you realize that "g_{closed} = g_{open}^2", it will have the same form as a function of "g_{open}" as the Yang-Mills result has in terms of "g". With the right parameterizations and analogies, the behavior is universal in field theory and string theory. In the same way, the "(2L)!" behavior for the L-loop closed string amplitudes looks "more divergent" than "L!" in field theory. But you shouldn't forget that "L" closed-string loops are equivalent to "L' = 2L" open-string loops, so the behavior is fully analogous if you want to see the analogy.

(A mystery seems to arise for heterotic strings, but let's not discuss it here.)

You should understand that the divergent character of the perturbative expansion doesn't imply that the full exact function doesn't exist. It does exist and it is finite but it just can't be quite accurately expressed as a Taylor series. Still, the Taylor series becomes extremely accurate for weak coupling. You can check all these statements by taking specific examples.

Perturbative success of QED

Quantum Electrodynamics (QED) was the first successful quantum field theory, developed since the late 1920s, and it remains the most accurately tested theory in the history of science. If you use the natural (quantum relativistic "c=hbar=1") units, the interaction is determined by the fine-structure constant, "alpha=1/137.036..." which plays the role of "lambda". That's a pretty small number that makes the perturbative expansions very potent.

The single most accurately verified prediction is that of the magnetic moment of the electron. The coefficient "g/2" was measured by Gabrielse et al. in 2006 to be
g/2 = 1.001 159 652 180 85 (76).
The error is just 0.76 parts per trillion (I mean the American trillion which is modestly called "bilion" in languages like Czech). Moreover, the result may also be obtained theoretically and they match, within the error margin. In fact, the accuracy is on the verge of discovering some discrepancies that should be there because of the new physics near a TeV.

The theoretical calculation leads to a Taylor series in "alpha" which looks like "1 + alpha/(2 pi) + harder terms": the "alpha/(2 pi)" term has been known for more than 50 years. All these things are calculated perturbatively while the nonperturbative terms are completely neglected: that can't spoil the huge accuracy. Why? Because "exp(-1/alpha)" is "exp(-137)" which is close to "10^{-60}", so even once you realize that it shouldn't have been "1/alpha" but "C/alpha", you're still safe.
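A few-line check of these magnitudes, using the measured value quoted above and the classic "alpha/(2 pi)" leading term (and taking "C=1" in the nonperturbative estimate):

```python
# The leading term of the QED series for g/2 versus the measurement, plus
# the size of the neglected nonperturbative effects.
from math import pi, exp

alpha = 1 / 137.035999
g_half_measured = 1.00115965218085        # Gabrielse et al. 2006

g_half_leading = 1 + alpha / (2 * pi)
print(g_half_leading)                     # 1.0011614... already very close
print(g_half_leading - g_half_measured)   # ~ 1.8e-6, fixed by higher loops
print(exp(-1 / alpha))                    # ~ 3e-60: the nonperturbative noise floor
```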

Clearly, nonperturbative effects become important if the coupling is larger. And if it is very large, they qualitatively dominate the physical phenomena. Perturbation theory loses steam once the interactions become of order one. For example, QCD perturbation theory is the right tool for high-energy QCD effects while you need different methods to study long-distance physics such as the interactions of protons and neutrons in nuclei (AdS/CFT is among the helpful tools here).

Mysterious strong coupling limits

For a weak coupling, the nonperturbative effects may be nonzero but they're so tiny that you don't expect them to change the qualitative features of the physics (and not even the measurable quantitative ones). But as the coupling is getting large, you might expect very mysterious things to happen.

In the past, people knew nothing about the nonperturbative or strong-coupling behavior of many field theories and vacua of string theory. And if humans don't know something, this something is full of sea monsters, dragons, extraterrestrial aliens, and nearly infinite waterfalls plunging onto a huge turtle.

However, when Christopher Columbus sailed to His India (now known as America), he found something completely different and more familiar.

Also in physics, the research has actually shown that if a theory is well-defined in the first place, its strong coupling behavior simply can't be completely new, unusual, or mysterious. The strongly coupled regime may be far from your starting point but it is just another place that must obey the laws of Nature (and logic). The physical phenomena over "there" are as sensible as those you could expect "here". And the strongly coupled limits of many theories are actually equivalent to some other (or the same) well-known weakly coupled theories.

S-duality and the Atlantic Ocean

Let me return to my metaphor from the era of Christopher Columbus. The oceans that were far from Europe could have been thought to be full of mysterious objects. But Columbus' discovery has reduced the mystery of the Atlantic Ocean. Europe and Africa are on one side while America is on the other side. One can get there from the continents which reduces the living space for mysterious monsters.

Don't get me wrong: there's still a lot of water in the ocean and it's hard to swim to the middle of it. But because it's known to be surrounded by known continents, it can't be too mysterious.

The situation is similar in quantum field theories and in the moduli spaces of string theory. For a very large value of "g", many of them were found to behave exactly as another (or the same) theory whose coupling constant is chosen to be "1/g" which is small: a weakly coupled theory. I don't want to go into details here. But it turned out that the "g=infinity" limit must be thought of as being analogous to the "g=0" limit. And this analogy is often an exact equivalence.

Whenever the "S-dual" description is found - another theory that is indistinguishable at "1/g" from the original theory with the coupling "g" - the true mystery is expelled to the middle, near the "genuinely strongly coupled" points such as "g=1". But the "g=1" region can't be that different from "g=0" or "g=infinity" because it's pretty close to both, if you use the right metric: you can get there by your "ships", namely the perturbative expansions that start at "g=0" or "g=infinity" (the continents).

Perturbative expansions lead to qualitatively correct physics

What's important is that the "vicinity of Europe" (the small "g" region) can be understood in terms of objects that are known in Europe, i.e. that you can identify at "g=0", and the European laws (how these objects interact near "g=0"). The same comment applies to America and "g=infinity".

There are no "huge" regions of the Atlantic Ocean that would be completely inaccessible both from Europe or Africa and America. So even if you don't know the precise results, including the nonperturbative corrections, you're pretty much guaranteed that you qualitatively know the physics. And in many cases, you can even add some "transperturbative", i.e. well-defined nonperturbative terms that give you more accurate results than the perturbative expansion (which remains the first part of the full result).

Joe Polchinski's D-brane calculus extends the perturbative expansions of string theory in the same way as ordinal numbers generalize the concept of integers (the ordinal numbers may be identified with the powers of "g", and "g^omega" is a symbol for the D-brane factor "exp(-C/g)": my analogy/map). D-branes are surely heavy, non-perturbative objects and D-instantons are tiny effects at weak coupling. On the other hand, all their properties are determined by perturbative calculations involving the good old perturbative objects, the strings (in this case, open strings with new boundary conditions).

Other nonperturbative methods

Besides S-dual descriptions, there are many other methods that have allowed us to study nonperturbative physics - including lattices and RG flows in the space of theories. It is fair to say that all these checks have confirmed the general statements above, especially the proposition that the qualitative conclusions of the perturbation theory should be trusted if the coupling constant is weak.

Since the mid 1990s, the duality revolution in field theory and especially string theory has expanded our knowledge of nonperturbative physics considerably. Most of the conceptual stringy papers since 1995 have been concerned with nonperturbative phenomena, in one way or another.

But one thing surely hasn't happened: the perturbative approximations to many questions have not been proved wrong. In fact, we have seen quite the opposite outcome.

Nonperturbative physics is just a refinement of the perturbative method and the main lesson that the nonperturbative insights have given us is that physical phenomena that admit familiar perturbative expansions are also relevant for the extreme regions that were thought to be full of dragons and sea monsters. In typical theories, America (the "g=infinity" region) has become another continent analogous to Europe. In its vicinity, one can do the usual things that you can do near the continents, e.g. to swim (and calculate scattering amplitudes as Taylor series).

Landau poles

But not all theories can be extrapolated to define meaningful and exact full theories that also cover the strong coupling. The progress in the Renormalization Group has clarified most of these questions.

Nonrenormalizable field theories - such as Fermi's four-fermion interactions, quantized pure Einstein's general relativity, or essentially any interacting relativistic field theory of a familiar type above "d=4" - admit infinitely many possible higher-derivative terms. Any choice of their coefficients is a priori as good as any other. There's no general way to pick the right values, so the theories always contain infinitely many unknown continuous parameters. All of them matter but you can't measure infinitely many numbers. The theories remain permanently unpredictive concerning any detailed physical questions about the regimes where the coupling is strong.

Renormalizable theories are different: they can actually be extrapolated up to the extremely high energy scales where they're still consistent. The requirement of consistency - and finiteness of the coupling constants - at these very high energy scales determines all the unknown parameters, up to a finite number of exceptions. Once you measure a couple of masses and couplings, you can predict everything, at least up to small errors of order "(ExperimentEnergy / HighCutoffEnergy)^k" with some positive exponents "k".

String theory, which can never have any short-distance divergences, has very different mechanisms that do a similar job - and that actually determine all the non-dynamical continuous couplings that would look adjustable in the field-theoretical approximation.

But let's return to quantum field theory. There are renormalizable and non-renormalizable theories. Theories with dimensionless constants - including Yang-Mills theory and the scalar quartic self-interaction in "d=4" - are either marginally renormalizable or marginally nonrenormalizable. What does it mean?

Because the coupling constant is dimensionless, you wouldn't know whether the parameter behaves as a positive power of mass (renormalizable) or a negative power of mass (nonrenormalizable). It seems to be "somewhere in between".

However, a more detailed treatment shows us that the coupling constant is usually not quite dimensionless. Quantum effects (starting with one-loop diagrams) modify the dimension: the exact dimension slightly depends on the coupling constant which is the same thing as saying that the coupling constant depends on the scale, something that wouldn't happen for exactly dimensionless constants. We say that the coupling constant "runs". It is no longer "somewhere in between": it falls on one side or another (the exactly conformal theories such as the N=4 supersymmetric gauge theory are the rare exceptions).

With this knowledge, you can actually decide which way it goes. If the "dimensionless coupling" increases with the energy of the scattered particles, the theory is nonrenormalizable. Even if it only increases logarithmically, the theory will be marginally nonrenormalizable and qualitatively analogous to the nonrenormalizable field theories (such as most of those in higher-dimensional spacetimes).

So it's very important to find out whether an interaction gets stronger or weaker at higher energies. It turns out that Yang-Mills (non-Abelian gauge) theories are the only ones whose dimensionless coupling constant gets weaker at higher energies (if the number of charged matter species is low enough), an insight that earned Gross, Wilczek, and Politzer a well-deserved Nobel prize. All other types of classically dimensionless interactions grow stronger at higher energies.
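A sketch of how the two cases run at one loop, using the standard leading-order solutions; the initial values and the fixed "n_f = 6" are simplifying assumptions:

```python
# One-loop running: the QED coupling grows with energy while the QCD coupling
# shrinks (asymptotic freedom). Energies in GeV; inputs are round numbers.
from math import pi, log

def alpha_qed(mu, alpha0=1/137.036, mu0=0.000511):
    # one-loop QED with a single Dirac fermion: 1/alpha decreases with ln(mu)
    return alpha0 / (1 - (2 * alpha0 / (3 * pi)) * log(mu / mu0))

def alpha_qcd(mu, alpha0=0.118, mu0=91.19, nf=6):
    # one-loop QCD: b0 = 11 - 2*nf/3 > 0, so 1/alpha_s increases with ln(mu)
    b0 = 11 - 2 * nf / 3
    return alpha0 / (1 + alpha0 * (b0 / (2 * pi)) * log(mu / mu0))

for mu in (91.19, 1000.0, 1e16):
    print(mu, alpha_qed(mu), alpha_qcd(mu))
# alpha_qed creeps upward; alpha_qcd falls toward zero at high energies.
```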

The latter include QED and the scalar quartic self-interaction. It means that while these theories look completely consistent and renormalizable in the perturbative optics, up to L-loop diagrams with arbitrarily high "L", their exact nonperturbative behavior is analogous to the nonrenormalizable theories, e.g. those in higher dimensions. At the energy scale where the dimensionless coupling gets comparable to "1" or bigger, the theory starts to break down in the same way as the nonrenormalizable theories. However, in the "d=4" marginally nonrenormalizable case, all the dangerous unknown terms (infinitely many of them) are nonperturbative.

That's the reason why you never want the interactions in quantum field theories to grow strong at high energy scales. Non-Abelian gauge theories are the only major class of theories that may be considered acceptable at high energies in "d=4": they're typically asymptotically free, meaning that the coupling goes to zero, or (rarely) conformal which means that the coupling can converge to a fixed nonzero constant.

Theories with any other interactions - QED couplings, Higgs self-interaction etc. - have to break down at a sufficiently high energy scale (and be superseded by some very nice gauge theories or string theory). What happens above this "cutoff" is not determined by the original theory itself, and a better theory - its "UV completion" - has to be found if you want to answer such questions.

For the quartic scalar self-interaction, the coupling may be defined as the probability that two quanta (particles) of energy "E" interact. It can't exceed 100%, if you use this pedagogical definition, so "lambda" is bounded from above if the theory is consistent. Perturbation theory implies that the coupling runs stronger as you increase the energy, and this conclusion has to be right if "lambda" is low enough: the coupling can't return to small values and suddenly start running weaker, because we know that whenever "lambda" is small, it runs stronger. ;-)

The only way this interacting theory could remain consistent above the Landau pole, the point where you expect the coupling constant to blow up, would be for "lambda" to converge to a finite constant (because of some nonperturbative stabilizing effects). There would have to be a new scale-invariant theory, a new fixed point. In this case, it almost certainly doesn't exist, but even if it did, you couldn't quite see that this fixed point is the "same theory" as the one you started with. You would need some new information: you would need to look at a more general space of theories and the RG flows between them. You couldn't have defined the fixed-point theory by a straightforward procedure rooted in the original classical Lagrangian. In particular, you couldn't have put the original theory on a lattice.
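To see how far away the pole sits for a weak coupling, here is a sketch using the standard one-loop beta function of a single real scalar, "beta(lambda) = 3 lambda^2/(16 pi^2)" (the coefficient depends on conventions; the point is the exponentially distant scale):

```python
# One-loop running of the quartic coupling and the location of its Landau pole.
from math import pi, exp, log

lam0, mu0 = 0.1, 1.0                   # toy initial coupling at a reference scale
k = 3 / (16 * pi**2)                   # one-loop coefficient (convention-dependent)

def lam(mu):
    # solution of d(lambda)/d(ln mu) = k * lambda^2
    return lam0 / (1 - lam0 * k * log(mu / mu0))

mu_landau = mu0 * exp(1 / (lam0 * k))
print(mu_landau)                       # ~ 10^228 * mu0: absurdly far for small lam0
print(lam(0.999 * mu_landau))          # the coupling has already exploded nearby
```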

Anti-perturbative religion

Everyone who either knows the history of physics or who has actually done some at least slightly important work in theoretical physics (or both) must know how important the perturbative methods actually have been and still are. They have been important in the development of all key theories in physics as well as in their verification.

I have already mentioned that Einstein had to write the planetary orbits in GR as the Newtonian result plus small perturbations, to verify his new, "nonperturbative" theory. But he was far from being the last man who had to use a similar trick. If you (or at least DVV and I) discover the first nonperturbative description of interacting strings (matrix string theory), the appearance of the free spectrum and the leading interactions known from perturbation theory are the first things you naturally have to check.

For many phenomenological questions whose answers are important, the perturbation theory is the final word. And even if we're trying to find more general, nonperturbative results and principles, perturbation theory is quite universally the main tool that tells us whether we are on the right track.

Without this method, physics would become unscientific guesswork in the darkness. The perturbative methods are important for simple reasons.

First, because virtually all the progress in physics is gradual and all temporary insights (both about observed phenomena as well as their theoretical generalizations) are approximate; most of the refinements are guaranteed to look like perturbations of the quasi-established theories in the regimes where they have been established as useful approximate descriptions. And physics has developed sophisticated tools that confirm that - and explain why (and when) - the qualitative conclusions of perturbation theory are reliable in the appropriate regions.

Second, a majority of complex enough physical systems can't be solved or understood exactly, so complicated or approximate techniques are necessary, but the network of the solvable or understandable systems inside the space of theories and ideas is dense enough to be relevant in virtually any situation, so the complicated and approximate techniques are pretty much sufficient, too.

So you can be absolutely certain that all the people who are vitriolic about perturbation theory per se - the people who suggest that it should be removed from the main toolkit of theoretical physics, who think that it should never be trusted, or who even claim that the truth must universally contradict the conclusions of perturbation theory - are simply deluded amateurs, detached from actual research, regardless of how many other deluded crackpots are praising them.

Yes, I was led to write this essay as a reaction to an irrational anti-perturbative hysteria by one of them and his name was surprisingly neither Lee Smolin nor Peter Woit.


snail feedback (13) :


reader binkley said...

Love posts like these!

Blogging question -- why did you choose not to use HTML entities? I.e., &Lambda; (Λ) or &lambda; (λ)


reader Lumo said...

Thanks, Binkley! If I had decided to write the Greek letter, I would write λ directly from the character table. View the source to see that there shouldn't be any HTML sequences - directly one character from the UTF-8 Unicode set used by these blogs now. ;-)

I avoided Greek letters because

1) some people wouldn't know how to read them and it would be distracting if they can't get through the text ;-)

2) I am still afraid of character set inconsistencies and breaking HTML, having been educated by two decades of such kind of hassle (those problems with the diverse character sets including Czech diacritical signs were just horrible before they got fixed!)

3) it's still faster to write "lambda" than any of the replacements that create the Greek letter.


reader Brian G Valentine said...

Very informative summary, Lubos.

In atomic physics we have, of course, the atomic spectrum of helium given by the (quantitatively exact) spectrum of hydrogen by perturbation expansion to the energies up to the Pfund series to a part in a thousand, and accounting for the finite mass of the nucleus, to a part in a million.

Care must be taken to distinguish an asymptotic from a power series expansion. The latter always has a radius of convergence, the former has none defined.

But an asymptotic expansion is always more useful to the approximation of functions - as Hamilton is quoted to have said, "the series diverges - now we can do something with it!"

Your essay gives a useful introduction to renormalizations and their (Co) semi-groups.


reader Anonymous said...
This comment has been removed by a blog administrator.

reader Anonymous said...
This comment has been removed by a blog administrator.

reader Pbef said...

I agree with the accolades. To me this is by far the best blog of a deep, fun, interesting and broad-ranging (whilst physics focused) kind. Be sure it is a clear reflection of the blogger behind it!
Baeh-aeh-aeh-aeh!


reader Lumo said...

Dear Brian, that's very right! Atomic physics and chemistry shows very powerful examples of perturbation theory.

For simple functions, the power expansion and asymptotic series are the same thing. It's just that the functions are not as simple in QFT (and ST).

Dear Mike and Pbef, thanks for your kind words. Mike, I am not trying to maximize the RSS subscriptions. In fact, I prefer readers of the HTML form - and people who don't behave as sheep - or at least people who have at least something else in their minds than baeh aeh. ;-)


reader Brian G Valentine said...

Ah, careful.

An asymptotic expansion of a function of a single variable x, depending on a parameter ε (0 < ε << 1), is written

f(x, ε) ~ Σ_i ν_i(ε) f_i(x)    (1)

where ν_i(ε) is an asymptotic sequence in ε (which may be a power series in ε, but all it means is that ν_{i+1}(ε)/ν_i(ε) → 0 as ε → 0)

Terms in the series in (1) get increasingly “small” in a sense, but the point is that the series in (1) doesn’t need to converge to the function f(x, ε) it is supposed to represent (or converge to anything for that matter).

Many solutions to (differential, integral, functional, ...) equations are of course written as asymptotic expansions, but they aren’t “solutions” in the ordinary sense because, these series defining them aren’t convergent to anything. But on the other hand they are frequently easy to work with and represent SOMETHING when a person knows what they are doing with them.

As in the case of nonperturbative descriptions of interacting strings, ONLY perturbations of general solutions to the (contracted) Riemann-Christoffel tensor representing the relativistic curvature of 3+1 dimensional spacetime are known.

That is to say, the 2-body problem is unsolved in general relativity (as is a complete solution of the 3-body problem in Newtonian mechanics).


reader Lumo said...

I completely agree with you, Brian! ;-) Do you think I don't?

Well, except that I don't know without thinking or searching for papers whether the particular expansion for the 2-body problem in GR converges or not. I honestly don't know.

The first trans-Newtonian correction is the only one I have ever seen calculated (and reproduced).


reader Brian G Valentine said...

Yes, I do think so.

I just thought of something this morning, somebody else must have thought of it too, maybe you would know.

The mass gap of the Yang-Mills equations is still "unsolved" (as a complete description of compact simple groups representing Yang-Mills in R^4 with a positive mass gap) - but is there a meaningful (generalization of an) adiabatic invariant for this (if this question makes sense to you)?

That won't get you a million dollar prize for the mass gap because it isn't a complete characterization of the mass gap, but still useful if it is something that requires only lower order, and not higher order, approximations.


reader Brian G Valentine said...

In a certain sense, the question "does the series to the GR equations of gravitational interaction of 2-bodies converge" is entirely equivalent to the question, "are gravitational waves necessary for gravity in GR theory"

So bye bye for this day and THANK YOU for your efforts to keep the free world FREE

[Your President in the Czech republic knows what "freedom" actually is. The US president has no clue, none]


reader Lumo said...

That's a very interesting equivalence, Brian, if true! Could you please tell me more about its proof?

I always imagine the gravitational waves as the linearized waves, which is surely extremely close to reality for the real waves that fly around us. Aren't they protected i.e. exact in some sense?

Because one can obtain different energy waves by boosts, they seem to be described by no parameter at all. Is the solution really different?

Well, what I really imagine is the monochromatic wave with infinitely many peaks etc. Clearly, a finite wave packet will have to see the nonlinearities. But how do you see that the full solution can't be written as a convergent Taylor series?


reader Brian G Valentine said...

It's a decomposition of the series into convergent and divergent parts, and looking at the Fourier spectrum of the divergent part.

email me to go into that if you want, not of general interest here