Wednesday, August 28, 2013

Light dark matter in NMSSM and non-diagonalization of BH evolution matrices

I want to mention two new papers today.

First, Jonathan Kozaczuk and Stefano Profumo of Santa Cruz discuss the possibility of embedding the very light, sub-\(10\GeV\) dark matter particle (indicated by some of the direct search experiments) into the Next-to-Minimal Supersymmetric Standard Model (NMSSM: it's the MSSM in which the Higgs bilinear coefficient \(\mu\) is promoted to a chiral superfield \(S\), a setup which is, according to many criteria and physicists, more natural than the MSSM itself):

Light NMSSM Neutralino Dark Matter in the Wake of CDMS II and a \(126\GeV\) Higgs
They find that there are regions in the parameter space of the NMSSM that are able to produce this very light higgsino-singlino-mixed LSP dark matter candidate with a huge, spin-dependent cross section coupling it to nucleons. The Higgs mass may be achieved sort of naturally, other "negative" constraints may also be satisfied, and the scenario produces some automatic predictions, e.g. a large invisible branching fraction of the Higgs decays.
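Schematically (my shorthand, not a formula copied from the paper), the promotion of \(\mu\) mentioned above is the replacement
\[ W_{\rm MSSM} \supset \mu\, H_u\cdot H_d \;\;\longrightarrow\;\; W_{\rm NMSSM} \supset \lambda\, S\, H_u\cdot H_d + \frac{\kappa}{3}\, S^3, \qquad \mu_{\rm eff} = \lambda\,\langle S\rangle, \]
so the \(\mu\)-term is generated once the singlet gets a vev and the extra singlino is the ingredient that may mix into the light LSP.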




Yesterday, I watched some black-hole-information talks in Santa Barbara, especially the whole talk by Suvrat Raju about his paper with Papadodimas. The talk was rather impressive: Suvrat (my former TA, I am tempted to proudly say, although his brilliance has nothing to do with your humble correspondent) was responding to a stream of questions and addressed pretty much all the criticisms.




There is a new related paper by Steve Hsu today in which he reiterates and perhaps updates his February 2013 paper,
Factorization of unitarity and black hole firewalls (arXiv)

Promotion of the paper on Steve's blog
I've left the following comment on Steve's blog:
Nice paper, I completely agree with you: all the potential final microstates are being mixed in the most general way, so it's certainly incorrect to assume that the evaporating black hole Hilbert space may be rewritten as a direct sum of many "superselection sectors" that evolve unitarily and independently so that the evolution matrix would block-diagonalize in that splitting. This is one of the wrong assumptions constantly made by AMPS and followers - one that effectively amounts to believing that the geometry is purely classical and the gross features of a system like a BH are perfectly predictable, which they're not. Just to be sure, the evolution matrix may not only be block-diagonalized but fully diagonalized; its eigenstates, however, are not states with a simple classical representation, e.g. a sharp center-of-mass location, they're not eigenstates of some natural local operators, etc. They have no reason to be.

Also, just to be sure, you're not the first one who pointed out pretty much the same thing but it's nice that you cite Nomura, Varela, Weinberg, and friends for whom this was a key point to make. In fact, I think that the main "controversial" point of the Raju-Papadodimas paper (at least according to the structure of questions during Raju's Monday talk in Santa Barbara) - that the definition of the interior BH field operators in terms of the CFT operators is microstate-dependent - is pretty much equivalent to this claim of yours or of Nomura's gang that the superselection sectors aren't separated (the evolution matrix isn't block-diagonalized in those subsets). The relevant subset of the Hilbert space to which one may "plausibly" evolve by a simple action of a few operators etc. depends on the ket vector we start with, and the spacetime has no good reason to remain a good "description of the background" if one deviates too much (by too many creation operators etc.), because of the back-reaction.

So in a "neighborhood" (in the sense of measuring the number of simple actions of natural operators) of a microstate, the local operators are sort of well-defined, but they become increasingly inadequate for more general, "faraway" microstates. This Papadodimas-Raju statement implicitly says that the definition of the local operators must gradually change as you change the microstates but there are no sharp borders between the neighborhoods, so no block-diagonal decomposition is possible. Instead, it's essential for the unitarity that all the microstates from the superselection sectors may be transformed to each other. The exponentially small matrix entries (including those between what AMPS and others would consider different superselection sectors) can't be neglected because they're essential even for the difference between pure density matrices and the maximally mixed one, see Papadodimas-Raju or
Hawking radiation: pure and mixed states are microns away
The microstate-dependence or the non-decoupling into the classical superselection sectors seems like a totally obvious point when looked at from a proper direction: it just means that the "Hilbert space of plausible pure states" and their organization that one needs to consider is allowed to depend on the rough or gross or "classical" evolution of the system: the Hilbert space of finely grained microstates is "fibered" over the space of coarse degrees of freedom and the character of the fiber may change. This is obvious - that's why we consider things like "the mass \(M\) black hole Hilbert space" at all (the spaces for different \(M\) are different; after all, they have different dimensions, even though both belong to a grander space of string theory) and why we can separate these states from others in the Hilbert space.

But the point is that the organization of the subspaces of the Hilbert space by local operators in the black hole interior depends not just on the presence of the black hole and its mass but on "most" of its microstate details if a black hole is present. To me, this sounds sort of inevitable because one needs a "more than infinite" Schwarzschild time to penetrate inside the event horizon, which means that in a certain Schwarzschild-time-slicing-based basis, a huge amount of scrambling that can mix really everything that waits to be mixed is performed on the local degrees of freedom right when an observer is crossing the horizon. There's absolutely no reason why the evolution operator should be block-diagonalized in the "superselection sectors" that look like classical patches. Superpositions of all of them may occur and therefore will occur.

I think that many of you are saying the same thing - or at least a big portion of what I consider the right answer to most of the questions here - but you don't fully appreciate that you are using different words for the same insights.
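To illustrate the point about the tiny matrix entries with a toy numerical sketch (mine, not from any of the papers; the dimension N below plays the role of \(e^S\)): the density matrix of a random pure state differs from the maximally mixed one only by entries of order \(1/N\), yet the purities of the two are as different as they can be.

import numpy as np

N = 1024                                    # toy stand-in for the exp(S) black hole microstates
rng = np.random.default_rng(0)

# a random pure state: complex Gaussian components, normalized
psi = rng.normal(size=N) + 1j * rng.normal(size=N)
psi /= np.linalg.norm(psi)

rho_pure = np.outer(psi, psi.conj())        # pure-state density matrix
rho_mixed = np.eye(N) / N                   # maximally mixed density matrix

print(np.abs(rho_pure - rho_mixed).max())   # entries differ only at the ~1/N level
print(np.trace(rho_pure @ rho_pure).real)   # purity Tr(rho^2) = 1 for the pure state
print(np.trace(rho_mixed @ rho_mixed).real) # purity 1/N for the maximally mixed one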
Incidentally, Scott Aaronson, who attended the KITP Santa Barbara fuzz-or-fire workshop, posted a blog entry with some links and quotes about this topic.


snail feedback (22) :


reader anna v said...

This puzzles me: "this very light higgsino-singlino-mixed LSP dark matter candidate with a huge, spin-dependent cross section coupling it to the nucleons."
OK, let's accept that a 10 GeV LSP would not be seen at LEP because it could not be created in any numbers, though how it could have a huge spin-dependent cross section to the nucleons and not appear at LEP mystifies me. Still, it should have appeared in the axion and monopole searches, imo.


reader Luboš Motl said...

Sorry, Anna, I am a bit confused by your confusion, although the reason is a bit vague and spread all over your comment, much like the comment itself.

Do you realize that LEP wasn't colliding nucleons but electrons and positrons, which is why the large coupling of the new particle to nucleons didn't necessarily make it more visible to LEP?


reader anna v said...

Well, I put LEP first because the experiments were much more accurate than the hadronic experiments. L3 actually did see antiproton-proton pairs http://www.sciencedirect.com/science/article/pii/S0370269303009286 . I guess the rest of us did not find it an interesting channel at the time. I was extrapolating from the large scattering cross section to a reasonable production cross section in association with baryons.


Anyway, I looked at the paper and they do discuss the LEP limits extensively, so that should be OK. If I were still active, I would be tempted to go back to the LEP data and check those limits once more.


reader and said...

I like the comments you make on the page given as a link here about analytic continuation... I am pretty sure by now that this concept can be extended in a far more fascinating way ... hope I'll get a PRL out of that ;) Until then: silenzio stampa


reader and said...

I do have a small question: is it OK to compute probabilities with "mixed" states and the density matrix formalism when one allows for fluctuations of the space-time geometry? Let me explain: mixed states and the density matrix correspond to the fact that the statistical ensemble contains nonequivalent states. I perfectly agree with assigning "classical" probabilities to quantum states in an ensemble when the geometry is not quantized, but will this be allowed after quantization? Is the density matrix proved to be OK when the geometry is quantized? I think this comment is somehow similar to the idea of non-factorizability in some sense but I have to think a bit more...


reader Luboš Motl said...

Hi, I don't follow why you think that density matrices should be forbidden in quantum gravity. Density matrices are a totally universal part of the basic tools of quantum mechanics that work in all quantum mechanical theories - they describe the ignorance about some part of the information that doesn't translate to a knowledge of relative phases, i.e. a perfect knowledge of another observable non-commuting with the original one. Whether the matrix elements of the density matrix or pure states have the interpretation of properties of the spacetime geometry or the length of extraterrestrials' ears is just a technicality that doesn't influence anything about the formalism of QM.


I think that your suggestion that mixed states should be forbidden in QG may be viewed as an extreme version of the factorization fallacy - you really want to treat the states classically, according to how they look in the classical approximation, and forbid the normal things that must be allowed for any states in any quantum mechanical theory. Sorry, it's completely wrong. If QG ever modifies something about the "foundations of QM", it does so in the opposite direction than you suggest - it allows one to consider things that seemed impossible. It surely doesn't ban basic procedures and mathematical expressions we knew from other QM theories.
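To make the "ignorance about relative phases" point concrete, here is a one-qubit toy sketch (just my illustration, nothing specific to quantum gravity): averaging the projector on a pure state over an unknown relative phase yields precisely a mixed density matrix.

import numpy as np

# |psi(phi)> = (|0> + e^{i*phi} |1>)/sqrt(2); average the projector over an unknown phase phi
phis = np.linspace(0, 2 * np.pi, 1000, endpoint=False)
rho = np.zeros((2, 2), dtype=complex)
for phi in phis:
    psi = np.array([1.0, np.exp(1j * phi)]) / np.sqrt(2)
    rho += np.outer(psi, psi.conj()) / len(phis)

print(np.round(rho, 3))               # -> diag(0.5, 0.5), the maximally mixed qubit
print(np.trace(rho @ rho).real)       # purity 0.5 < 1, i.e. a genuinely mixed state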


reader and said...

No, I did not mean that mixed states are completely forbidden. I am just saying there might be a more fundamental restriction of our knowledge when dealing with QG that may not be correctly taken into account by density matrices in this way... I was talking about a generalization or extension of them, by no means about "forbidding" them... ;)


reader Luboš Motl said...

That sounds creative, intriguing, but also vague and Orwellian. ;-) What do you mean by restrictions on our knowledge?


Of course, in practice only some density matrices - some states of our knowledge - may be achieved by realistic protocols, and which protocols are realistic depends on the context. But the math framework of QM still says that those things are allowed in general, and determining that some matrices are "unrealistic" or "impractical" is always just an approximate consequence of an evaluation of all possible tools that one could use to study the situation, right?


reader and said...

oh, come on! I am not here to "promote" anything... I have some curiosities and I am thinking around. Some things may be wrong... at this moment I am sure this idea is incomplete but not necessarily wrong. Actually, I just started writing in a very drafty way about this idea on a random piece of paper... Let's see what this amounts to :) Don't forget I appreciate your critique, as harsh as it may be, 'cause I'm kinda interested in "truth"... let's see what I can bring up tomorrow... :)
Say, I don't have a structured idea but what about the holographic principle? ;)


reader Luboš Motl said...

Dear and, sorry, I don't see any "incomplete idea of a generalization of a density matrix". This idea isn't incomplete in my eyes, it is non-existent. It is just a downright contradiction to some properties of a mathematical structure that can be pretty much rigorously proven. "A generalization of density matrices" is just a vacuous combination of words, like "planet Mercury with ketchup on steroids", or anything else of the sort. It doesn't mean anything.


You may add the words "generalization of" in front of any concept in science but that doesn't mean that you have an idea. A 1-line program in BASIC is enough to add the words "generalization of" in front of anything.


10 INPUT A$
20 PRINT "generalization of " + A$


I haven't written a BASIC program for decades so let's hope it's right. ;-)


To talk about an idea even though you apparently - or clearly - have no idea is just a distortion of reality. It seems totally crazy to me to compare the situation of this non-existent generalization to the holographic principle which means a particular thing - carefully studied in 20,000 expert physics papers. Your generalization of density matrices is just a sequence of several words, a piece of complete bullshit. I can't understand why you don't believe it.


reader and said...

Well, give me some time and arguments (maybe) so that I can understand it... as I said, nothing has existed for longer than half an hour... still trying to understand... :))


reader and said...

question: in simple quantum mechanics states are defined as linear combinations of basis sets with different weights. When obtaining statistically relevant data we need to consider squared amplitudes so that the result has the meaning of probability and we get the usual interference pattern. In a mixed state there will be several non-equivalent states represented in an ensemble where the probabilities are calculated following the standard rules, without interference. Here we do not have "noncommuting operators" and we calculate the probabilities "classically" (I use the term loosely). We have, say, two nonequivalent types of subsystems. Nevertheless, we cannot separate (factorize) the system unless we know for sure that the subsystems are not entangled. In the case where there is no entanglement the full system is described by a pure state and the subsystems are also described by matrices corresponding to pure states. Entanglement obviously changes this and then the partial trace of the full density matrix will describe a mixed state of the considered subsystem. There is no obvious reason to change this. Nevertheless, the question arises: what is accessible to measurement? Does the density matrix formalism rely on a fundamental lack of knowledge about the system? If yes, what is its origin? One can obviously say that different techniques of measurement will give different restrictions, as you said, but if something like quantization of the geometry generates a new and fundamental restriction on what we can know, wouldn't it be nice to have it encoded in a direct way? Maybe the density matrix does that implicitly but is there a risk that people overlooked this mechanism? Yeah, I know, still no practical idea... I am still at the level of plausibility...
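For concreteness, the partial-trace statement in the comment above can be checked with a minimal two-qubit sketch (the helper partial_trace_B and the toy states are merely an illustration, not anyone's formalism):

import numpy as np

def partial_trace_B(rho_AB, dA, dB):
    # trace out subsystem B from a density matrix on A (x) B
    return np.trace(rho_AB.reshape(dA, dB, dA, dB), axis1=1, axis2=3)

# product state |0>_A (x) |+>_B: no entanglement, the reduced state of A stays pure
plus = np.array([1.0, 1.0]) / np.sqrt(2)
psi_prod = np.kron(np.array([1.0, 0.0]), plus)
rho_A = partial_trace_B(np.outer(psi_prod, psi_prod.conj()), 2, 2)
print(np.trace(rho_A @ rho_A).real)   # 1.0 -> pure reduced state

# Bell state (|00> + |11>)/sqrt(2): maximal entanglement, the reduced state is maximally mixed
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho_A = partial_trace_B(np.outer(bell, bell.conj()), 2, 2)
print(np.trace(rho_A @ rho_A).real)   # 0.5 -> maximally mixed qubit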


reader Luboš Motl said...

Apologies, And, I won't have time to teach you quantum mechanics. You're clearly confused about things from the first 5 classes of an undergraduate QM course.


Operators such as X and P in QM are always non-commuting. This has nothing whatsoever to do with using pure states or density matrices. X, P, and any other operators may act on pure states - ket vectors - much like they can act on the density matrix (multiply it from the left, multiply it from the right, or enter a commutator with it), which is formally just another operator (matrix), just not one interpreted as an observable. But XP*rho - PX*rho is still equal to i*hbar*rho, and so on. Operators don't commute. This is a universal property of the operators of observables and they're the primary constructs. Using pure states or density matrices has nothing to do with the properties of operators.


reader and said...

right, thank you for teaching me, despite your aversion to doing so ;) I was not confused about the fact that p and x, or for that matter any incompatible observables, do not commute; this is a bit of extrapolation on your part, but it's ok... what I wanted to say (but probably missed it) was that our lack of knowledge does not come from the normal quantum mechanical "sources"... now, to make it clearer: in my opinion, which is of course relative, ignorance that comes from something like the non-commuting of operators is fundamental whereas "more classical" ignorance is less fundamental. (I really don't want to be picky and say that operators of compatible observables do commute ;) ) I agree that density matrices do unify quantum and classical "ignorance". The question is: is there some sort of "more fundamental ignorance" coming from the quantization of geometry? That's it! I think now I have it clear but I'm sure you'll correct me if not ;)


reader and said...

To avoid another possible misunderstanding in what I said, which I realized only now: I also agree that taking the partial trace is the quantum-mechanically right operation to do. This is not my question! My question is: is there some new and fundamental restriction on our possible knowledge that was possibly overlooked when dealing with the quantization of geometry? (honestly, I am happy I do have to think quite carefully about the words I use when posting here... so, don't ban me please! It is probably the best exercise I could get :D )


reader Luboš Motl said...

The best answer I can give you is No, there can't be such a universal restriction, but I am afraid that, for certain reasons I don't understand, this answer doesn't satisfy you, does it? ;-)


reader and said...

Nope, it doesn't... some proof would be nice... I may agree with an answer like "no" with some arguments around it... I will try to prove my ideas too, as well as I can, promise! :)

P.S. there were some old papers about "extensions of the Heisenberg uncertainty principle", like this one: http://arxiv.org/pdf/hep-th/9301067v1.pdf

what is your opinion about them?


reader Luboš Motl said...

Dear and, I am slightly sympathetic to attempts to deform uncertainty principles but 1) those in this paper are really among the less motivated ones, and 2) this has nothing whatsoever to do with your hypothetical restrictions on the density matrices.


I don't know how to nontrivially disprove a possibility you haven't even described. What you write is impossible by definition. The density matrix is *defined* as the most general quantum counterpart of a probabilistic distribution. So of course it is unrestricted.


What you propose is the exact analogue of a classical "idea" that the probability distribution on the phase space is "restricted". What? WTF? Restricted by whom? What goes wrong if one violates the restrictions?


It is positive, integrates to one, and is real (the density matrix is Hermitian), but otherwise there are no general restrictions. It's really the point of probability distributions that they can deal with all possibilities and their weighting. The density matrix case is completely analogous.
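Those defining properties are easy to see in a toy numerical form (a sketch of mine with a random mixture in a 4-dimensional Hilbert space): any convex mixture of pure-state projectors automatically satisfies them, and nothing beyond them is imposed.

import numpy as np

rng = np.random.default_rng(1)
d, k = 4, 8                                 # toy Hilbert space dimension, number of mixed-in pure states

p = rng.random(k); p /= p.sum()             # classical probabilities summing to one
kets = rng.normal(size=(k, d)) + 1j * rng.normal(size=(k, d))
kets /= np.linalg.norm(kets, axis=1, keepdims=True)

# the general density matrix: rho = sum_i p_i |psi_i><psi_i|
rho = sum(pi * np.outer(ket, ket.conj()) for pi, ket in zip(p, kets))

print(np.allclose(rho, rho.conj().T))             # Hermitian
print(np.all(np.linalg.eigvalsh(rho) > -1e-12))   # positive semidefinite
print(np.isclose(np.trace(rho).real, 1.0))        # unit trace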


reader and said...

you are unfortunately again misinterpreting my statement: I was talking about limitations of our knowledge. As one can see from quantum mechanics, this does not imply a restriction on the probability distribution... quite the opposite: the uncertainty principle has its origin in the non-commutativity of some operators, and this carries over to the construction of the path integral formalism, where the non-commutation is encoded in the lack of convergence of a limit, etc., and the two formalisms (canonical "commutator" quantization and path integral quantization) become equivalent if done right. You see that I speak about restrictions *on our knowledge* and not on the probability distribution. A fundamental restriction on our knowledge would essentially increase the domain where the distribution exists, in the same way in which quantum mechanics brings us beyond the classical path... Anyhow, thanks for pointing out this ambiguity. Now I hope everything is clearer...


reader and said...

Unfortunately you are again misinterpreting my statements. I was talking about fundamental limitations on our knowledge, not on the distribution function. In fact, as we see in quantum mechanics, an uncertainty principle appears from the fact that some operators do not commute, and this is what the canonical quantization procedure encodes. If one goes to a path integral formulation, one incorporates this non-commutation in a subtle way in the convergence criterion of a limit. If path integral quantization is done correctly, the canonical and path-integral quantizations are equivalent. Our lack of knowledge in that case does not restrict the distribution, quite the opposite, it extends it... Thank you anyway for pointing this out. I hope that now everything is clearer.


reader Luboš Motl said...

Dear and, what is clearer now is that your comments are wrong.


Our knowledge and the probability distribution are the *same thing* in classical physics. Our knowledge is mathematically quantified by the probability distribution. So you can't have a restriction on one and not the other. The same holds in QM if you replace the probability distribution on a phase space by the density matrix.


reader and said...

I have the impression that mainly because of me this discussion became rather philosophical. What we share is probably the interest in extending or deforming uncertainty principles and the role played by the holographic principle in this "affair". I am afraid this set of comments has become rather long and few people will read them all, but I will follow your other contributions in the hope that something related to this subject will re-emerge... about me being wrong... maybe just a bit and in a controlled way ;)