## Thursday, September 05, 2013 ... /////

### A universal derivation of Bekenstein-Hawking entropy from topology change, ER-EPR

I have been intrigued by topology change in quantum gravity, especially its Euclidean version, for 15 years or so. Since the beginning, I liked a sketch of a derivation (that I invented) of the Bekenstein-Hawking entropy of a black hole that was based on a wormhole connecting two tips of the Euclidean black hole in the $rt_E$ plane.

Ignore the wormhole-related captions.

Before the ER-EPR correspondence, I would interpret the two planes on the picture above (lower and upper) as the spacetimes in the ket vector and the bra vector, respectively. This need to double and complex-conjugate the whole spacetime made the details of the argument confusing because the thermal calculation (which is inevitably connected with the cigar-like Euclidean black hole pictures) involves a trace over ket vectors (or over bra vectors, but not both).

Fortunately, one may now present the whole argument without any bra vectors. Thanks to Maldacena and Susskind, the doubling of the spacetime (note that there is the upper and the lower plane on the picture above) may be interpreted as the presence of two distinct spacetimes – or two faraway regions of the same spacetime; it won't really make a difference. With this reinterpretation of the pictures, I am more satisfied with the argument.

Try to calculate a thermal correlation function in a spacetime (or a pair of spacetimes if you really view the two planes as disconnected) at temperature $1/\beta$, which will be chosen to agree with the black hole temperatures below. The operators in the correlation functions don't matter; assume that they are low-energy operators far away from all the celestial bodies we will consider.

We want to know how much the states with two black holes at places $A,B$ (in arbitrary microstates) contribute to the correlator; and how much the states with two neutron stars at the same places $A,B$ contribute. The ratio of the two contributions should be $\exp(S_A+S_B)$ where the terms in the exponent are the black hole entropies, up to some subleading corrections (all the neutron stars' entropies will be negligible). Just to be sure, the contribution from the two black holes should be exponentially larger. I will take the two celestial objects to be macroscopically the same so the ratio should be $\exp(2S)$ where $S=S_A=S_B$.

To confirm the Bekenstein-Hawking formula means to prove that the contribution from the two black holes is $\exp(2A/4G) = \exp(A/2G)$ times greater than the contribution from the two neutron stars.
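As a sanity check on the orders of magnitude involved (my own illustrative numbers, not part of the argument itself), one may compare the Bekenstein-Hawking entropy of a solar-mass black hole with a crude "one $k_B$ per baryon" estimate of a neutron star's entropy; the constants and the baryon-counting estimate are assumptions of this sketch:

```python
import math

# approximate SI constants; illustrative precision only
G = 6.674e-11      # m^3 kg^-1 s^-2
hbar = 1.055e-34   # J s
c = 2.998e8        # m / s
M_sun = 1.989e30   # kg

# Bekenstein-Hawking entropy S = A/(4G) in Planck units, i.e.
# S/k_B = 4*pi*G*M^2/(hbar*c) for a Schwarzschild black hole
S_bh = 4 * math.pi * G * M_sun**2 / (hbar * c)

# crude neutron-star estimate: O(1) k_B per baryon, roughly 1e57 baryons
S_ns = 1e57

print(f"S_bh ~ {S_bh:.2e} k_B, S_ns ~ {S_ns:.0e} k_B")
```

The black-hole entropy comes out near $10^{77}\,k_B$, so in the ratio $\exp(2S_{\rm BH})/\exp(2S_{\rm NS})$ the stellar entropy is indeed utterly negligible, as the text claims.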

By the neutron stars (a nickname chosen for the sake of simplicity), I really mean a celestial body that is on the verge of collapsing into a black hole. I want $g_{00}$ to be very small right above the surface of this body. Because $|g_{00}|$ wants to be even smaller in the stellar interior – which makes it impossible for $|g_{00}|$ to remain near zero just above the surface of an ordinary star – I really need to consider a hollow star – a shell that is protected against the collapse by some skeleton or light gas inside or whatever. I hope that these awkward technicalities don't really matter and can be replaced by a less problematic treatment. Maybe it's enough to compare the two-black-hole contribution with the contribution having no objects at those places at all.

For the sake of clarity, let's assume that the black hole radii are equal to a few miles (a solar-mass black hole). The thermal correlators may be calculated from the path integrals $$\langle \cdots \rangle = \int {\mathcal D}\,{\rm fields}(x,y,z,t_E)\,\exp(-S_E)\, (\cdots )$$ over the Euclidean geometries with Euclidean field configurations in a spacetime whose Euclidean time coordinate $t_E$ has the periodicity $\beta$.
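To make the numbers concrete – this is my own back-of-the-envelope check assuming the standard Schwarzschild formulas, not something needed by the argument – the radius and the inverse temperature $\beta$ of a solar-mass black hole come out as follows:

```python
import math

# approximate SI constants; illustrative precision only
G = 6.674e-11      # m^3 kg^-1 s^-2
hbar = 1.055e-34   # J s
c = 2.998e8        # m / s
k_B = 1.381e-23    # J / K
M = 1.989e30       # kg, one solar mass

# Schwarzschild radius: about 3 km, i.e. a diameter of "a few miles"
r_s = 2 * G * M / c**2

# Hawking temperature T = hbar*c^3/(8*pi*G*M*k_B); beta = 1/T is what
# fixes the periodicity of the Euclidean time circle in the cigar geometry
T_H = hbar * c**3 / (8 * math.pi * G * M * k_B)
beta = 1 / T_H

print(f"r_s = {r_s:.0f} m, T_H = {T_H:.2e} K")
```

The tiny Hawking temperature (tens of nanokelvins) means the Euclidean time circle at infinity is astronomically long compared to the millimeter scale chosen later.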

Now, we don't want to study the detailed microscopic physics of the neutron stars. Their entropy (and any non-black-hole celestial object's entropy) is negligible in comparison with the black hole entropy. We don't even want to specify what exact short-distance degrees of freedom are responsible for the black hole entropy. Indeed, the goal is to derive the Bekenstein-Hawking formula "universally", for every quantum theory that resembles quantized general relativity in a limit.

But yes, in this geometrized picture of the degrees of freedom, all the entropy is carried by some degrees of freedom – field modes and their generalizations – that may be attached to the stretched horizon, a Planckian vicinity of the region that will host a throat in a minute.

To neglect the short-distance physics, why don't we integrate out all the field modes with wavelengths shorter than 1 millimeter (to be specific again)? When you do so, the two-neutron-star contribution looks like two disconnected pieces, essentially two planes (the upper and lower plane) not connected by the throat shown on the picture at the top. Even if there was some entanglement between the stars, it was way too weak to produce the smooth throat. Instead, the thin tunnels disappeared as we integrated the high-energy degrees of freedom out. The stellar interior isn't clearly shown on the picture – the picture only shows the stellar exterior – but it's somewhere and the Ricci scalar $R$ is essentially zero everywhere. Again, maybe I should replace the neutron stars by empty regions of space throughout this argument; I wanted the two compared situations (with and without black holes) to be as similar as possible, however, so that the difference may be blamed on the throat, as we will see momentarily.

Maldacena and Susskind taught us that the Hilbert space of 2 similar black holes – essentially $\mathcal{H}_{2BH}=\mathcal{H}_{1BH}\otimes \mathcal{H}_{1BH}$ – is isomorphic to (really the same as) the Hilbert space of an Einstein-Rosen bridge geometry that connects them. Despite the apparently different topologies of the two descriptions, they're the same Hilbert spaces. The bridge-based description is better for highly entangled states in the Hilbert space; the description using 2 isolated black holes is better for the nearly unentangled states of the two black holes. (Note that "highly entangled states" and "almost unentangled states" don't form linear spaces because the properties "entangled" and "unentangled" aren't closed under addition.) The two-black-hole states that strongly entangle the two black holes look like smooth bridges; however, there are highly excited, unsmooth bridges that must describe all the other two-black-hole microstates as well.
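The parenthetical note – that entangled and unentangled states don't form linear subspaces – is easy to verify explicitly for two qubits; the snippet below is a toy illustration of mine, with the Schmidt rank serving as the entanglement diagnostic:

```python
import numpy as np

def schmidt_rank(psi, tol=1e-12):
    """Schmidt rank of a two-qubit state |psi> (length-4 vector):
    the rank of the 2x2 matrix obtained by splitting the two indices.
    Rank 1 means a product (unentangled) state; rank 2 means entangled."""
    s = np.linalg.svd(psi.reshape(2, 2), compute_uv=False)
    return int(np.sum(s > tol))

ket00 = np.array([1, 0, 0, 0], dtype=float)   # product state |0>|0>
ket11 = np.array([0, 0, 0, 1], dtype=float)   # product state |1>|1>

assert schmidt_rank(ket00) == 1               # unentangled
assert schmidt_rank(ket11) == 1               # unentangled
bell = (ket00 + ket11) / np.sqrt(2)           # superposition of the two
assert schmidt_rank(bell) == 2                # maximally entangled
```

Two product states summed give a maximally entangled Bell state, so "unentangled" is indeed not closed under addition.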

In general, the two planes – see the picture at the top – are connected by "some" throat. When you integrate out all the field modes with wavelengths shorter than one millimeter, you also do it for the gravitational modes, so the geometry can't be too thin or curved. In effect, the gradual integrating out thickens the throat in the black-hole case while it cuts the throat(s) in the stellar case. When you're finished, the throat itself is about one millimeter thick. It was an arbitrarily chosen distance scale that is much longer than the Planck scale but much shorter than the black hole radius.

Looking at the two-neutron-star and two-black-hole Euclidean geometries, they look very similar. The only difference is the throat near the event horizon (or near the would-be horizon in the case of the stars). In that region, the $(d-2)$-dimensional area of the angular variables is constant, $A$, which simply enters as an overall factor in the difference of the actions, and the major components of the curvature tensor are nonzero only in the two-black-hole case, namely the Riemann component $R_{rtrt}$ and its three copies dictated by the Riemann tensor's symmetries ($t$ really denotes $t_E$ as an index).

(The throat in the black-hole case isn't Ricci-flat; the nonzero Ricci tensor must be blamed on the high-energy matter that resides in the stretched horizon(s).)

So the two contributions to the path integral – from the two neutron stars; and from the two black holes – only differ by the extra "wormhole" in the two-black-hole case. This wormhole is a "handle" of a Riemann surface and the exponent of the Euclidean path integral is more negative in the black-hole case (I hope) relative to the neutron-star case by the factor $$\exp[-(S_E^{\rm BH}-S_E^{\rm neut})] = \exp\left(-\frac{A\int d^2 x\,\sqrt{|g|}\,R_{(2)}}{16\pi G}\right)=\dots$$ over the handle (wormhole). But the two-dimensional integral – the Einstein-Hilbert action above – is proportional to the Euler characteristic $$\chi = \frac{1}{4\pi}\int d^2 x\,\sqrt{|g|}\,R_{(2)}.$$ Note that a sphere of radius $a$ has $R_{(2)}=2/a^2$ and $\chi=2$. Each added handle (which has a negative curvature $R_{(2)}$ on average) reduces the Euler characteristic by two and (therefore) the integral of $\sqrt{|g|}R_{(2)}$ by $8\pi$. When you substitute this $8\pi$ decrease above, it becomes an increase of the exponent due to the extra minus sign in the exponent, and you will see that the two-black-hole contribution is greater by the factor of $$\exp\left( \frac{A\cdot 8\pi}{16\pi G} \right) = \exp\left( \frac{A}{2G} \right),$$ exactly as expected from the Bekenstein-Hawking entropy of two black holes. This multiplicative increase implies that there are $\exp(A/4G)$ black hole microstates per black hole whose precise identity doesn't significantly affect the correlator we agreed to compute. So if we trace over them (and we do so in a thermal calculation), they just influence the result by a simple multiplicative factor (the number of these microstates).
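The purely topological bookkeeping in the paragraph above – $\chi = 2$ for the sphere, a drop of $8\pi$ in $\int\sqrt{|g|}\,R_{(2)}$ per handle, and the resulting factor $\exp(A/2G)$ – can be checked symbolically. This is only an arithmetic verification of the stated formulas, taking the document's sign conventions at face value (the author himself flags the sign question below):

```python
import sympy as sp

a, A, G = sp.symbols('a A G', positive=True)

# Euler characteristic of a round 2-sphere of radius a:
# chi = (1/(4*pi)) * integral of sqrt(|g|) R_(2), with R_(2) = 2/a^2
# and total area 4*pi*a^2
R2 = 2 / a**2
area = 4 * sp.pi * a**2
chi = sp.simplify(R2 * area / (4 * sp.pi))
assert chi == 2

# each handle lowers chi by 2, hence lowers the integral of
# sqrt(|g|) R_(2) by 8*pi (since the integral equals 4*pi*chi)
delta_integral = -8 * sp.pi

# change of the exponent as written in the text: the handle multiplies
# the path integral by exp(-A*delta_integral/(16*pi*G)) = exp(A/(2G))
delta_S = A * delta_integral / (16 * sp.pi * G)
factor = sp.exp(-delta_S)
assert sp.simplify(factor - sp.exp(A / (2 * G))) == 0
```

So each handle contributes exactly the factor $\exp(A/2G)$ quoted in the text, i.e. $\exp(A/4G)$ microstates per black hole.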

You may have some doubts about the sign of the Euclidean Einstein-Hilbert action used above. I have some doubts as well. I can enumerate about 6 things one must be careful about that may lead you to a wrong sign but I am not sure whether I am not missing some other sign flips. The probability that I keep on committing a sign error here is too close to 50 percent at the end ;-) which is why I must add that a more careful scrutiny is needed.

This argument may arguably be generalized to derive Wald's entropy formula for a more general action including higher-derivative terms. In these cases, one still has $R_{rtrt}=2\pi \delta^{(2)}(r,t_E)$ per black hole located at the horizon and if we treat this modification of the Riemann tensor perturbatively, the change of the gravitational action produces Wald's entropy formula instead of the Bekenstein-Hawking formula above.

Incidentally, I think that quite generally, the black hole entropy must also be interpretable as the total order/volume of an approximate symmetry group of a given spacetime because a black hole may be interpreted as a codimension-2 "cosmic string" in the Euclidean spacetime (which is analogous to 7-branes in F-theory and requires us to study the monodromies). But the question why this gives the right results in weakly coupled string theory (where you have a $U(1)$ for each free field-theory mode produced by the string theory); in pure $AdS_3$ with the monster symmetry group; and in BTZ-black-hole-based $AdS/CFT$ calculations will be reserved for other blog entries, much like the connections of the ideas above with the representation of microstates as Mathur's fuzzballs.

String/M-theory gave us numerous pictures of the microscopic structure of the black holes. These pictures usually make it hard to see the locality in the bulk (and even to see into the black hole interior) and difficult to assign the degrees of freedom to locations in the bulk. While unitarity etc. is manifest in these string/M-theoretical pictures, various geometric properties are less clear. Realizations such as the text above are meant to clarify all the remaining secrets of the black holes that are "universal" and independent of the microscopic description of the black holes.

#### snail feedback (35) :

Is there some sort of surgery theory behind this "different topologies" analysis? Just wondering if a deformation and a cut would "prove" this duality in a better way?

OK, just to remember everything right, I opened the wiki page about surgery theory and looked at it. If a theory is indeed described on a boundary, then that boundary can be either the boundary of one product space or the boundary of another product space between a disc and a sphere. In this sense one has two equivalent theories described on two spaces with different but controlled homology and homotopy groups. To me, ER-EPR looked from the very beginning like an application of this idea and I suspect it can be applied further to other things.

The ER-EPR equivalence started off explaining well what seemed to be a non-paradox. As your use of it here shows, it gives much more. It lets one begin to "visualize" how nature may use spacetime to connect what must be connected and separate what must be separated.
Looking forward to more.

OK, I would probably disagree with your history completely (ER-EPR arose from lots of research on entanglement in QG, like Ryu-Takayanagi and van Raamsdonk, who were actively at work years before the AMPS non-paradox), but thanks.

Dear Lubos,
this ER-EPR correspondence still confuses me a lot.
Is the ER-EPR correspondence a mathematical analogy or something which I should take seriously physically? If the latter is the case, would it be a rare case or a common one?

It is a completely serious, physical, and universal identity.

I will try to give an explanation that may contain lots of errors but this is how one learns :D
As discussed before, quantum mechanics is defined such that its answers, given in the form of probabilities, encode the "global" structure (the topology) of the problem. In the case of quantum gravity this becomes somewhat non-trivial. The topology of the problem may become dynamical and may have interesting effects. If the metric becomes "dynamical", large changes of it may in the end affect the topology. One has to understand what the effects are when these changes are considered, and in order to define a correct quantum theory one cannot avoid considering them. As far as I understand the problem now, one mechanism that can give a "class" of changes would be given by surgery theory. This essentially allows one to "classify" spaces of dimension greater than or equal to 5 (not sure, I have to check – or to accept being violently corrected by some experts here)... anyhow, the higher the dimension the better. ER-EPR makes the connection between a probabilistic answer (EPR) and a topological effect (ER) by (apparently) applying some sort of surgery and identifying theories on 2 spaces, one connected and the other one not simply connected. Now, this is my incomplete understanding of the situation and I expect someone to start criticizing me at any time... :D

or I am just considered stupid and vigorously ignored... :D

Right. But I should take it as a purely mathematical duality, correct? To ask it in a naive way: what is the probability that the black hole in the center of our galaxy has a wormhole that leads to another black hole, maybe sitting in the center of another galaxy?

Einstein-Podolsky-Rosen vs Einstein-Rosen. It is amazing what a genius Einstein was. He was the single best physicist of the 20th century. A towering giant who single-handedly developed STR and GTR and made fundamental contributions to quantum mechanics as well. His paper on EPR is the most cited paper in the whole of physics.

I do not understand the mathematical details of this ER-EPR connection but the idea seems a little wild to me. Entanglement is a very special quantum state and for it to develop, particles need some common history (like the splitting of a photon on a BBO crystal). What kind of common history could there be for two black holes?

In the Maldacena paper they argue that you cannot send signals FTL through entanglement or through an ER bridge and that therefore ER=EPR. Such arguments are rather silly, to be honest. It is like arguing that an apple is spherical and an orange is spherical and therefore apple=orange. But the Maldacena paper is nicely written; even I can follow half of the content :-)

although I agree that those papers are not necessarily mathematically sound they are inspiring... at least for me... :-)

actually, they are also mathematically sound... what am I saying?!

I don't have the time to try to understand this in detail (unfortunately I have to mark some silly examination papers) but it seems to me you are over-estimating what surgery can do.

Firstly, it does not work on general "spaces" but only on manifolds (of various kinds - smooth, PL, topological) and secondly, it certainly can't classify them. It is used to try to answer questions like: given a manifold M and a normal map f from M to a Poincare complex X (where normal refers to a certain technical condition), can one convert f to a homotopy equivalence? It turns out that you can do it partially (below the so-called middle dimension) but then you encounter complicated obstructions. The great success of these techniques was the proof by Smale of the generalized Poincare conjecture in dimensions >=5, but this certainly does not mean you can classify smooth manifolds, even up to homotopy equivalence...

Just checked if I put the "quotations" on "classify" ... yes, I did! :D It is true that they do not work on general spaces but wasn't it a topological manifold we were interested in in this case anyway? I am not 100% sure but I suspect some physical restrictions won't really allow us to work on something like the Sierpinski space... we do need to have a metric etc. so there should be a metric space and a manifold, probably Riemannian...

I don't understand why you keep on insisting on the "purely mathematical". ER=EPR is the identity between two concepts in the actual real world of physics, so it is "purely physical".

Every entangled pair of degrees of freedom in the real world may be interpreted as a non-traversable wormhole (although the wormhole is very "thin" or "quantum" or "Planckian" if one only thinks about a few bits); and every wormhole may be viewed as a correlated, entangled excitation of degrees of freedom that existed in the two regions of space.

Assuming that you measure the exact microstate of our galactic black hole whose entropy is S (in k=1 units), the probability that another galactic black hole is found in the very same microstate (in a basis that includes the right one) is clearly exp(-S) = 1/N where N is the number of microstates – exponentially small. So it's not the same, it's not connected by a tunnel unless it was prepared by an intelligent agent - and this agent would have needed much bigger gadgets to achieve these things than the black holes themselves.

But this sort of misses the point. Even if the entanglement is imperfect, one may describe the pair of black holes as a "perturbation" of the Einstein-Rosen bridge. The more entangled the two black holes are, the more useful the description using the bridge becomes.

But the description using bridges is applicable even for completely unentangled black holes. You just integrate over all possible twists inside the ER bridge and what you're left with is a tensor product of two basis vectors.

As I have emphasized a few times, it's really the point of ER=EPR that you can't "objectively say" what the topology of the spacetime is. The topology of the spacetime isn't an observable - it isn't a linear operator. If it is "unentangled, bridge-free" for some microstate of the 2 black holes, it doesn't mean that it's "unentangled, bridge-free" for a linear superposition. Indeed, a linear superposition of unentangled states is usually an entangled state.

Thank you, Lubos. Got it.

I meant it in the sense of its being brought to the attention and comprehension of those of us who benefit from such help in grasping connections that were likely already obvious to some long ago.

There is nevertheless a small issue with which I think lucretius here will agree: many of the observations made by theoretical physicists (and I have to include ER=EPR) are not perfectly mathematically proved. They look plausible when thinking about them from a physicist's point of view but do not have a precise mathematical proof. ER=EPR looks a lot like some application of surgery to me and lucretius underlines nicely that this is not a "miracle solution", so in applications we have to consider when it can be applied and when not, and for this it is wise to ask for the help of mathematicians. :) I also saw lots of would-be applications of AdS/CFT duality in regions where it cannot possibly work, and the approximations made were utterly un-physical and un-mathematical...

Dear Andrei, right, I think that the maths behind ER-EPR is not explicit at this point - ER-EPR is a physics paradigm at the level of 't Hooft-Susskind's holographic principle of the early 1990s.

Mathematically explicit and convincing examples should arrive in the future - they would make it really analogous to AdS/CFT. So in this sense, ER-EPR still needs its true Maldacena. ;-)

But I do think that ER-EPR is kind of valid everywhere, even more generally than the holographic principle.

I agree that most papers by theoretical physicists which I have had a look at (which is not many) lack rigour - but what is worse, they often do not make it clear if they have only omitted the mathematical details or they have not worked out the details or they don't even know how this should be done. Of course there are exceptions, most notably Witten, who almost always makes it clear what is proved and what is conjectural (but then he is used to addressing mathematicians). Papers in general relativity that I have looked at also seem to be usually clear on this point although they usually do not give enough details for my needs. An example of this is the fascinating paper by Friedman, Schleich and Witt on "Topological censorship". The mathematics seems quite rigorous but the proofs are still too sketchy for me.

It's easy for classical GR papers to be more rigorous if they use something that could be classified as undergraduate if not high-school maths.

I agree, but there is a striking problem with people who tried to apply AdS/CFT to problems where it couldn't possibly give accurate results, and it obviously did not do so (as proved by experiments)... after that, some other people started saying that AdS/CFT is just wrong in general... If the need appears I can search for some examples...

"Not sufficiently rigorous" means that they use physical terms for which no mathematical definition is given. Of course it is quite likely that such definitions are given elsewhere and, not being a physicist, I have not come across them, while the authors assume that they are so well known they do not need to bother. As for using "high school maths", I assume you must be joking.

Such issues as discussed here are why good collaborations between theoretical physicists and mathematicians are needed to produce cool rigorous new insights ;-)

As for example described in "The Shape of Inner Space" this often works quite well :-)

http://www.amazon.com/Shape-Inner-Space-Universes-Dimensions/dp/0465028373

So concerning the ER-EPR correspondence, maybe it is now the turn of the mathematicians to do their part and make it rigorous ...?

Cheers

I agree with you, Lucretius. Mathematicians are much more rigorous and clear than physicists who are mostly sloppy. Mathematicians clearly distinguish between a theorem (a rigorously proven fact) and a conjecture (speculation).

A solid paper should distinguish between these two and should point to all relevant evidence, i.e. quote other papers where the necessary building blocks are rigorously proven. It is even a very different experience to study a subject from a physics or mathematics textbook.
http://abstrusegoose.com/128
http://abstrusegoose.com/129

Dear Lucretius, this comment of yours only reinforced my pessimism that by "rigorous", you really mean "dumbed down". Why should definitions of all physical concepts be given in a physics research paper? You're surely joking, right?

When people write such papers, and I used to do that as well, they are addressing the product to other experts who have spent 10-20 years learning and thinking about the meaning of all the concepts. You surely don't want to compress such things into each paper, do you?

Sometimes a concept is ill-understood even by the authors. Well, sometimes even a paper whose all concepts are well-understood is wrong. It makes no sense to "perfectly" eliminate the former problem if you can't totally eliminate the latter.

By the high school maths, I meant that virtually all classical GR papers only require some engineering-style applications of a body of knowledge that doesn't go any "deeper" beyond the conceptual knowledge I had - and e.g. most string theorists arguably had - as high school kids.

I wouldn't be able to reproduce the proof of the Gromov-Lawson surgery theorem and I am just barely aware of what it says. But what it says - and arguably the proof as well - can be understood by a reader who knows manifolds and formulae for the Riemann/Ricci curvature tensors and scalars which is the case for many high school kids who end up in science. This situation is very different from some advanced, abstract-quantum-field-theory and string-theory-dependent, research papers where one needs some graduate-level background to even understand the claims and steps in the proofs.

Mephisto, this is a very stupid comment of yours.

Since the clear divorce of mathematics from physics etc., mathematics was *defined* by insisting on this more or less unlimited rigor while physics mostly denies it. That doesn't mean that the mathematics' approach is right. The price one pays is that mathematicians often get to much lower depths of mathematics than e.g. theoretical physicists.

But more importantly, natural science just can't operate like mathematics. It is not mathematics, stupid. If something is quite rigorous, it can't be argued to directly or reliably apply to the real world and vice versa. This is an obvious fact - and a quote from Einstein, by the way.

By suggesting that natural science should and could work like maths and sharply separate all things to theorems and conjectures, you are only proving that you don't have the slightest clue what natural science is. In Natural science, one always has to work with things that are pretty much reliably (but never 100.00000000000%) known, things we think are very likely, likely, might be true, and so on.

As one gets to the cutting edge, it is inevitable that the number of assumptions whose probability is just "moderate", like 90% or even 50%, inevitably gets higher. If everything were 100% certain, it wouldn't be a cutting edge. It would be established science.

The difference between maths and science is that in maths, one establishes claims "abruptly", by the moment of finding a rigorous proof. But there are no rigorous proofs in natural sciences and the evidence for or against almost all important propositions is usually accumulating gradually.

Just try to read my comment twice and appreciate how elementary things you seem to misunderstand about science - yet you have the chutzpah to market this rudimentary misunderstanding of yours as some deep insight that allows you to attack physics. Your attack on physics - and natural science in general - is absolutely indefensible and idiotic.

I see your point and agree. I guess some dialectical compromise between the two approaches is the best option. There is an essay about the subject

http://arxiv.org/pdf/math/9307227v1.pdf

Lubos, you get so upset when you think (in this case completely mistakenly) that you have detected a criticism of something that you are emotionally attached to, that you forget your own expressed opinions and attack people who are far from disagreeing with them.

You have yourself argued in this blog against using strict mathematical rigour in physics and in favour of replacing rigorous proof by mutually reinforcing verifications (I can’t remember the exact phrase) and I agree with all that. One (but not the only) reason why I agree is because as a mathematician I look at physics as a source of inspiration for mathematics - if physicists decided to prove everything with strict mathematical rigour their own progress would be greatly slowed down and they would start directly competing with us, which would not be so good for either. Actually, I remember when Witten was awarded his Fields medal in Kyoto in 1990 - I was living in Japan and came just for this and to talk to my former supervisor. There was a lot of opposition to this award because quite many thought that “a Fields medal should not be given to someone who has never proved anything” and it would not have happened had it not been for Atiyah’s great influence and effort to make sure that it did (I heard that from him personally…). Obviously the standards of what constitutes “rigour” are different and should remain so (although by no means all mathematicians are “rigorous”; Mandelbrot was one of the least so).

In what I wrote that seems to have annoyed you, I was only speaking from my own point of view: it would have been easier for me to read physics papers if all the physics was eliminated from them, so that, for example, instead of entangled observers falling into black holes there were only mathematical objects and definitions. In no way was it a criticism of physics or physicists. I am not claiming that making things easier for me or other pure mathematicians would make better physics; in fact I am sure it wouldn’t. Still, there are some physics articles, like the one on “topological censorship”, that I can almost completely understand (except for the reason for some of the conditions) without learning any physics and, it seems to me, they are most often found in general relativity.

Dear Lucretius, first, I do count Witten primarily as a "mathematician from a cultural viewpoint". He's made many contributions. It's also true that the bulk of the important ones were elaborations on ideas by others that were around for months and sometimes for decades.

Looking at physics as inspiration for maths is good but you shouldn't forget that it's your viewpoint and you shouldn't forget to credit physics for that because giving inspiration to mathematicians isn't physics' primary goal.

It doesn't sound nice when you respect physics for something other than its primary goal - for its being a reservoir from which mathematicians may apparently freely steal whatever they like - and then you don't even appreciate the intellectual environment that made it possible for the reservoir to emerge. It's a little bit like a communist ideologue complaining that the newest Western information technologies haven't been improved sufficiently for his attacks on the inhumanities and inefficiencies of capitalism to be really efficient. ;-)

Please, have mercy on mathematicians... they can give some "sudden" insights that appear in a completely discontinuous manner (historically speaking) but have a major impact on lots of things... see Grothendieck or Atiyah! I am pretty amazed by this kind of "geniuses"... I am not that sure that progress in physics must always be of class $C^{\infty}$... sometimes jumps may occur too and then we get closer to math!

I am not sure if you mean me personally or mathematicians in general. Certainly the most influential mathematicians in areas bordering on physics greatly respect physicists - certainly my former supervisor does and his former supervisor (Michael Atiyah) takes every opportunity to praise physicists. There is and there has always been some jealousy and competition between the two subjects - it is very clear at my university, where the relations are rather poor (part of the reason is that mathematics has combined with computer science, making themselves much more successful in getting students and grants).

I also think that physicists (or at least one former physicist) have "paid back" for the Fields medal given to Witten by awarding the "fundamental physics prize" to Kontsevich.

As for me: I obviously greatly admire both physicists and physics and since my childhood I used the former as a means to combat antisemitism and the latter as evidence that mathematics is a useful subject.

Dear Lucretius, I didn't mean you personally in any statement.

Obviously, I am attached to both disciplines as well. That's why I can't remain silent when someone suggests that one of them is evil because of something that is pretty much its defining feature.