Tuesday, March 08, 2005

Littlest Higgs model & deconstruction

The main article about Nima Arkani-Hamed on this blog: click on his name...
This note is gonna be about the concept of deconstruction and its different applications. Itay Yavin, a grad student from Harvard, just spoke about their littlest Higgs model, and I will mention some features of their model at the end of this post.

What is deconstruction?

In philosophy, deconstruction is one of the characteristic methods associated with postmodernism and with the name of Jacques Derrida. Deconstruction does not attempt to read a text and judge it by its content; instead, it tries to interpret every sentence as a result of social and political conflicts between the author and her or his cultural environment. The goal of this kind of "critical" thinking is to show that the content does not make any sense that could be permanent; the categories and terms are neither objectively well-defined nor separable and they only exist within the given context.

Those who are interested in Feynman and deconstruction (in philosophy) may want to open a very recent article at Mormon Philosophy and Theology. My father and sister were baptized by mormons several years ago ;-) although their current "mormonity" is very limited - but nevertheless I hope that the link is not inappropriate.

Because The Reference Frame does not believe that all methods of using the human brain are equally valuable, the paragraph above is everything we're gonna say about deconstruction in philosophy, which we're gonna interpret as a generic example of flawed thinking from now on. We're going to use the term "deconstruction" in the particle physics sense. ;-) In this context, deconstruction is a particular method to construct new physical theories. It was pioneered by the following influential paper
Because Andrew Cohen is from Boston University, Sheldon Glashow rightfully identified deconstruction as an important contribution of Boston University to particle physics in the last 5 years - and an argument that Boston University's approach to theoretical particle physics may be superior to Harvard University's approach. That's an excellent argument - because deconstruction is definitely one of the most influential ideas in phenomenology in the last 5 years - except that Andrew did the work with Nima Arkani-Hamed from Harvard and Howard Georgi from Harvard while he was a visiting scholar at Harvard. ;-)

OK, so let's ask again, what is deconstruction? It is a procedure to construct theories that behave as theories with extra dimensions (something that the authors considered mostly because of the inspiration coming from string theory), but these dimensions are fake or discrete ones. In some very rough sense, deconstruction is similar to latticization. Imagine that you latticize three dimensions. At every link of the lattice - an edge connecting two nodes - there is an object U taking values in the group manifold - for example a unitary matrix of determinant 1 in the case in which we latticize SU(3) QCD.

You see that the manifold of values of U is a non-linear manifold, and correspondingly, the action will have a non-linear, "non-renormalizable" form. It will contain, for example, the plaquettes
• Tr (U1 U2 U3 U4)
where U1, U2, U3, U4 are the unitary matrices associated with the edges of a minimal square inside the lattice. These plaquettes represent the action from the magnetic field F_{ij} associated with two latticized dimensions - assuming that there are at least two of them.
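As a toy check (my own illustration, not from the original discussion), one can verify numerically that the plaquette trace Tr (U1 U2 U3 U4) is invariant when an independent SU(3) rotation g_i acts at each of the four corners of the square:

```python
# Toy check: the plaquette trace Tr(U1 U2 U3 U4) is gauge invariant
# under independent SU(3) rotations g_i at the four lattice sites.
import numpy as np

rng = np.random.default_rng(0)

def random_su3():
    """A random SU(3) matrix: QR-orthogonalize a complex Gaussian, fix phases."""
    z = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
    q, r = np.linalg.qr(z)
    q = q @ np.diag(np.diag(r) / np.abs(np.diag(r)))  # Haar-like unitary
    return q / np.linalg.det(q) ** (1 / 3)            # rescale so det = 1

U = [random_su3() for _ in range(4)]        # link variables around one plaquette
plaq = np.trace(U[0] @ U[1] @ U[2] @ U[3])

# Gauge transformation: the link from site i to site i+1 maps to g_i U_i g_{i+1}^+
g = [random_su3() for _ in range(4)]
U_new = [g[i] @ U[i] @ g[(i + 1) % 4].conj().T for i in range(4)]
plaq_new = np.trace(U_new[0] @ U_new[1] @ U_new[2] @ U_new[3])

assert abs(plaq - plaq_new) < 1e-9          # the trace is gauge invariant
```

The g_i matrices cancel pairwise around the closed loop, which is exactly why only closed-contour quantities like plaquettes appear in the lattice action.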

Whatever coordinates on the SU(3) group manifold you choose, you will see that the trace above will be a highly non-linear function of the coordinates. In the case of deconstruction, you want to replace the SU(3) group manifold by its Lie algebra, and the corresponding terms in the action are replaced by polynomial terms.
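A sketch of this linearization, with toy numbers of my own: the group-valued link U = exp(i a A) is traded for the algebra-valued field A, and for a small lattice spacing a the non-linear U reduces to the polynomial expression 1 + i a A + O(a^2):

```python
# The group element U = exp(i a A) built from a Hermitian traceless A
# is an SU(3) matrix, and for small a it linearizes to 1 + i a A.
import numpy as np

a = 1e-3                                  # toy "lattice spacing"
A = np.array([[0, 1 - 1j, 0],
              [1 + 1j, 0, 2],
              [0, 2, 0]], dtype=complex)  # Hermitian, traceless: an su(3) element

# Exponentiate via the eigen-decomposition of the Hermitian matrix A
w, v = np.linalg.eigh(A)
U = v @ np.diag(np.exp(1j * a * w)) @ v.conj().T

assert np.allclose(U @ U.conj().T, np.eye(3))             # U is unitary ...
assert np.isclose(np.linalg.det(U), 1)                    # ... with det = 1
assert np.allclose(U, np.eye(3) + 1j * a * A, atol=1e-5)  # and ~linear in A
```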

There are several more changes you must make. The first one is that you don't want to call the picture containing the lattice "new dimensions". Instead, you call the counterpart of the lattice a "theory space" - this is partially a terminological change only. The theory space is something that is capable of behaving as a continuous space in the continuum limit. However, you only count the truly continuous dimensions as "dimensions". Consequently, you must also modify your interpretation of the gauge group. It's no longer one SU(3) group you have: the group living at each node (formerly known as a lattice site) is treated as an independent SU(3) group.

What about the links U? They used to live in the group manifold. Deconstruction linearizes this space, and so its counterparts of U live in the Lie algebra generating the given group. A Lie algebra of SU(M) coincides with the adjoint representation, which is the tensor product of the fundamental and the anti-fundamental representation (with the trace removed, which is a comment I had to add because of a picky reader who forced me to write SU properly instead of U). However, the degrees of freedom U live on the link that connects two different SU(M) groups, as we said, and therefore this U in theories of deconstruction will be living in the tensor product of the fundamental representation M of the first SU(M) group living at the first node, and the anti-fundamental Mbar representation of the second SU(M) group living at the second node. This tensor product is called the bi-fundamental representation.

So the kind of fields we want to work with include the gauge fields living at the nodes only (they can still be defined in additional continuous dimensions), and various fields transforming in the bi-fundamental representations associated with various links in your "theory space". The "theory space" including the links is a good diagrammatic description of the field content. This diagram - being identified with the theory space - is called a "moose diagram" by the phenomenologists and a "quiver diagram" by the string theorists and mathematicians. It includes
• nodes, each of them representing a factor SU(M) in the gauge group that lives in the continuous dimensions. This gauge group contributes the gauge field and perhaps its superpartners to the field content. In supersymmetric theories, the nodes "carry" a vector multiplet...
• links, each of them representing a field transforming in the bi-fundamental representation (M,Mbar) under the two SU(M) groups that the link connects. These links contribute new matter fields to the field content - in a supersymmetric theory they're typically chiral multiplets (for N=1) or hypermultiplets (for N=2)...
A more general quiver diagram can also describe factors SU(M_i) with different values of the rank M_i. We also add various potential terms for the matter fields - and construct a kind of generic or less generic theory with a gauge group SU(M)^k where "k" is the number of nodes in the quiver diagram. OK, what's the physics of these models?

Because of the analogy with the latticized gauge theories, the physics of these models can look like a higher-dimensional gauge theory, and in many cases one can prove that it is so. Nima, Andrew, and Howard originally also constructed models whose quiver diagrams contained two alternating types of nodes, SU(k1) and SU(k2). The two types of groups became strongly coupled at different scales, and one of them could be forgotten once it confined.
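The bookkeeping of such a quiver's field content is simple enough to sketch in code (my own toy illustration; the function name and the example quiver are invented):

```python
# Count the 4d field content of a deconstructed theory from its moose diagram.
def quiver_content(ranks, links):
    """ranks: M_i for each node SU(M_i); links: (i, j) pairs, each carrying
    one bi-fundamental (M_i, Mbar_j) field."""
    gauge = sum(M * M - 1 for M in ranks)                # one adjoint per node
    matter = sum(ranks[i] * ranks[j] for i, j in links)  # complex link components
    return gauge, matter

# A cyclic moose with alternating SU(3) and SU(2) nodes, in the spirit of the
# alternating construction mentioned above
ranks = [3, 2, 3, 2]
links = [(i, (i + 1) % 4) for i in range(4)]
print(quiver_content(ranks, links))  # (22, 24)
```

For a uniform cyclic quiver with N nodes of SU(M), the same counting gives N·(M²−1) gauge bosons and N·M² complex link components, exactly the degrees of freedom of a latticized extra dimension.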

Is there some relation of these deconstructed models to string theory? Yes! Gauge theories are low energy limits of the worldvolume theories defined on D-branes. And when you put the D-branes on orbifolds, you will obtain exactly this type of quiver theories with product gauge groups, as analyzed by
For the simplest and most supersymmetric quiver diagrams, one can prove by T-duality that deconstruction leads, in the continuum limit, to the higher-dimensional gauge theories. In fact, you can start with 3+1 continuous dimensions and add 1 or 2 "discrete" dimensions with a particular quiver diagram (lattice), and you can show that you obtain
in the continuum limit. Although these six-dimensional theories were always very mysterious and no simple "Lagrangian" description is available, we can define them as pretty ordinary four-dimensional gauge theories with a lattice-like quiver diagram in the limit in which the vevs and the density of the quiver diagram go to infinity in the right ways. (These theories can also be studied using their holographic duals, e.g. the 11D supergravity in AdS7, and by matrix models.) The proof that the stringy six-dimensional theories may be obtained in this way is based on
• T-duality
• and the fact that the cylinder can locally be obtained from a cone whose opening angle is sent to zero (orbifold C^2/Z_N for large N, which produces a quiver diagram with N nodes, to mention the basic example), and the distance of the D3-branes from the tip of the cone is sent to infinity at the same moment (the distance equals the vev)
Deconstruction can therefore define many of the nice theories we like as particular limits of lower-dimensional theories that are easier to understand. This has led to various ramifications.

Deconstruction in phenomenology

However, the most popular application of deconstruction is in particle phenomenology. What do we gain if we build models based on deconstruction? There are two major things we get:
• The models can have many nice properties associated with extra dimensions without really having them (you can have your cake and eat it too) - and these properties already appear for a small number of lattice points
• We may solve the small hierarchy problem
Concerning the first point, let me say that there have always been many positive features of superstringy models that looked "purely stringy". For example, one of the nice technical tools that string theory gives us is the symmetry breaking via the Wilson lines. In field theory, you normally need a lot of Higgs fields to break a large gauge symmetry, such as SO(10), to a smaller realistic group, such as the Standard Model. (More generally, string theory can give Standard-Model-like models with a small gauge group whose fermionic spectrum nevertheless organizes into representations of a grand unified group - something that the experiments definitely want us to want.)

In string theory, you can break the grand unified symmetry by Wilson lines: you identify some "cycle", a topologically non-trivial closed contour inside your manifold of extra dimensions (a torsion cycle), and you postulate that the gauge field has a monodromy around this closed contour - the monodromy M is an element of the gauge group. The n-th power of this monodromy must be the identity if the n-th multiple of the contour is topologically trivial.

This breaks the gauge group to the subgroup of it that commutes with M. Note that this breaking of grand unified symmetries is a very popular tool. It was, for example, used in the recent heterotic standard model where the Wilson lines equal a particular Z_3 x Z_3 subgroup of the grand unified group. This tool is also efficient: you don't need many Higgses. For large grand unified groups, the exact representations for the Higgses would typically be huge, and full mechanisms to break the symmetry are hard to find. Note that the symmetry is only broken because of the existence of at least one extra dimension.
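The counting can be checked explicitly. Here is a toy computation of my own (the specific Z_3 monodromy diag(w, w, w, 1, 1) is an illustrative assumption, not taken from any particular model): counting the su(5) generators that commute with it reproduces the Standard Model dimension 8 + 3 + 1 = 12.

```python
# Count the su(5) generators commuting with a Z_3 monodromy in SU(5).
import numpy as np

w = np.exp(2j * np.pi / 3)
Mon = np.diag([w, w, w, 1, 1])
assert np.allclose(np.linalg.matrix_power(Mon, 3), np.eye(5))  # Z_3: Mon^3 = 1

# A basis of su(5): 24 traceless Hermitian 5x5 matrices
basis = []
for a in range(5):
    for b in range(a + 1, 5):
        S = np.zeros((5, 5), complex); S[a, b] = S[b, a] = 1       # symmetric
        A = np.zeros((5, 5), complex); A[a, b] = -1j; A[b, a] = 1j  # antisymmetric
        basis += [S, A]
for k in range(1, 5):                      # diagonal (Cartan) generators
    D = np.zeros((5, 5), complex)
    D[:k, :k] = np.eye(k); D[k, k] = -k
    basis.append(D)
assert len(basis) == 24

unbroken = sum(np.allclose(Mon @ T, T @ Mon) for T in basis)
print(unbroken)  # 12 = dim S(U(3) x U(2)), the Standard Model gauge group
```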

If we want to use deconstruction, we want to imagine that the number of dimensions continues to be 3+1. Is there some way to obtain an efficient breaking of the grand unified group via the Wilson lines, even though there are no Wilson lines if there are no extra dimensions? Yes, there is. We create artificial, discrete dimensions via deconstruction. See for example the paper
in which the gauge group is taken to be SU(5) x SU(5). These are two copies of a grand unified group, and you may imagine that it is the same group defined at two points - and these two points are the simplest approximation of an extra dimension. (Witten also finds an analogous model in terms of M-theory on a G_2 manifold.) Consequently, such a SU(5) x SU(5) model in 3+1 dimensions may behave much like an SU(5) grand unified model which allows one to break the gauge symmetry down to the Standard Model in a way that looks almost like Wilson lines!

As you can see, one can almost always imitate string theory by a sufficiently sophisticated effective field theory. It's not shocking: string theory is supposed to be, at low energies, a kind of field theory anyway. So one should not be surprised that we're able to imitate string theoretical tricks by a sufficiently powerful quantum field theory - one that we can consider to be a "truncated string field theory" that is almost in the same "universality class" as string theory in the sense that some of the purely stringy effects may be mimicked.

This is why deconstruction in quantum field theory may lead to many models that have similar nice properties as models that used to rely on stringy physics. Deconstruction is, together with holography, another example of the recent reconciliation of the phenomenological and string-theoretical cultures. Some ideas in string theory are obviously good and one can apply their counterparts in field theory - but it's hard to imagine that the ideas would be found without string theory.

Now we want to focus on the little Higgs models, keeping the paper by
as an example. Deconstruction may be relevant for the hierarchy problem. What's this problem? The Higgs boson is the ultimate God particle that gives the bare masses to all other massive elementary particles (much like God herself, it has not been seen yet). By dimensional analysis, there must exist new physics at the Planck scale, 10^19 GeV (quantum gravity), or at least at some higher energy scales than those available at the current collider(s).

If you want to study physical phenomena at these higher energies, you need to allow the particles in the loops of your Feynman diagrams to have comparably high energies. For the computation of the loop corrections to the Higgs boson mass (self-energy), this leads to quadratically divergent contributions: the result of the integrals goes like
• Lambda^2
where Lambda is the cutoff, the maximal energy we allow, that can be as large as the Planck energy. But the total Higgs mass should be much smaller, below 1 TeV, and therefore the bare mass must be very finely tuned so that it cancels the loop corrections with an amazing accuracy - the terms like +-(10^19 GeV)^2 are cancelled and the remainder is as small as (115 GeV)^2. Who is responsible for this amazing and almost exact cancellation that looks so unnatural? There have been several dominant answers proposed:
• God or extraterrestrial superintelligent civilizations that engineered our world
• The anthropic principle that does not allow life to exist unless the cancellation occurs; some technicalities are similar to those in the first solution, some of them are different
• Technicolor in which the Higgs is not an elementary particle, but is composed of techniquarks in a theory analogous to QCD (disfavored by high-precision experiments as well as the observed absence of flavor-changing neutral currents)
• Supersymmetry that automatically guarantees cancellation between bosons and their superpartner fermions, and therefore the Higgs mass is tied to the supersymmetry breaking scale - it remains the technically and aesthetically preferred solution for most physicists
• Randall and Sundrum who are able to create exponentially large hierarchies from the "warp factor" of anti de Sitter space (well, actually only the paper under "Randall" solves the hierarchy problem)
• Deconstruction
You see, deconstruction is the last option. We should be more accurate: the models based on deconstruction, the little Higgs models, usually don't quite solve the "full" hierarchy problem - the gap between 115 GeV and 10^19 GeV. Usually they only solve the "little hierarchy problem". What is it?
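(As for the severity of that "full" cancellation, a trivial numerical check of my own shows how precisely the bare mass squared must cancel a Planck-scale ~Lambda^2 loop correction to leave a 115 GeV Higgs:)

```python
# Required relative accuracy of the bare-mass / loop-correction cancellation.
Lambda = 1e19    # cutoff in GeV (Planck scale)
m_h = 115.0      # target Higgs mass in GeV

tuning = m_h**2 / Lambda**2   # remainder (115 GeV)^2 over the (10^19 GeV)^2 terms
print(f"{tuning:.1e}")        # 1.3e-34: a cancellation to one part in 10^34
```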

Even with the nice solutions to the hierarchy problem listed above - which means supersymmetry in particular - we already know that some "reasonable" amount of tuning is taking place. If the Higgs mass were completely naturally explained by supersymmetry, the superpartners would already be seen (directly or indirectly). The experimental bounds show that the superpartners must be a bit heavier, and therefore the "truly" natural, protected value of the Higgs mass is calculated to be higher than what is needed for a weakly-coupled Standard Model and what is expected from the experiment.

Consequently, the true value of the Higgs mass is a bit smaller anyway - like 10 percent of the "natural" value - and therefore it is slightly tuned. It's a purely philosophical question whether you worry about this modest fine-tuning. I personally don't. Numbers of order 10 or 0.1 are fine with me: 10 is less than 4π, for example. But let's now believe that even this minor tuning is a problem. What does it mean to solve this small hierarchy problem?
• It means to find a quantum field theory that is well-behaved up to 10 TeV or so - perhaps even perturbatively calculable - in which the Higgs mass is naturally gonna be much smaller than 10 TeV even though all coupling constants are chosen to be "of order one"
This task is exactly solved by the little Higgs models.
These models have the property that the little Higgs boson is a pseudo-Goldstone boson associated with an approximate global symmetry. The symmetry breaking is collective - many degrees of freedom must participate to break the symmetry. Quadratic divergences in its self-energy only occur at higher loops because the relevant Feynman diagrams must contain many types of vertices. The masslessness would be preserved by the vertices separately. See the paper of Itay and Jesse for more comments and references.
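Schematically (my own paraphrase of the standard little Higgs counting, with order-one couplings g_1, g_2 assumed): because each coupling alone preserves a symmetry that keeps the Higgs an exact Goldstone boson, a quadratically divergent mass can first arise at two loops,

```latex
\delta m_h^2 \;\sim\; \frac{g_1^2\, g_2^2}{(16\pi^2)^2}\,\Lambda^2
\;\approx\; (60~\mathrm{GeV})^2
\qquad \text{for } g_{1,2}\sim 1,\ \Lambda \sim 10~\mathrm{TeV},
```

so the Higgs stays naturally light even though the cutoff sits at 10 TeV.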

Tuning to a 10% accuracy is a slightly ugly thing, but adding lots of new fields is ugly, too. So we want to look at the simplest possible little Higgs model that already has the good features of the little Higgs model - namely the littlest Higgs model. ;-) This has an SU(5) group to start with. It's broken to SO(5) in the infrared (which is geometrically defined as the position of the IR brane in the AdS context of Itay and Jesse - in analogy with the RS model but with different boundary conditions). The remaining parameters in the coset SU(5) / SO(5) include the Higgs boson.
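The Goldstone counting behind that coset is standard group theory (the one-liner below is my own, not from the paper): breaking the approximate global SU(5) down to SO(5) leaves one Goldstone boson per broken generator, and the Higgs doublet sits among them.

```python
# Dimension counting for the littlest Higgs coset SU(5)/SO(5).
def dim_su(n): return n * n - 1          # dimension of SU(n)
def dim_so(n): return n * (n - 1) // 2   # dimension of SO(n)

goldstones = dim_su(5) - dim_so(5)
print(goldstones)  # 24 - 10 = 14 (pseudo-)Goldstone bosons in SU(5)/SO(5)
```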

I don't plan to copy their whole paper but nevertheless, this kind of model is expected to become
• a conformal field theory at very high energies, above 10 TeV, which means that the holographically dual geometry approaches Anti de Sitter space at infinity. Only an SU(2) x SU(2) subgroup of SU(5) is equipped with gauge bosons, but nevertheless SU(5) is a pretty decent approximate global symmetry. The diagonal SU(2) group becomes the electroweak SU(2) factor - again, a deconstruction involving two nodes, much like in Witten's SU(5) x SU(5) case discussed above
• a window between 1 TeV and 10 TeV in which SU(5) is broken down to SO(5) by the strong dynamics, and the Higgs lives in the coset. The Higgs also has a lot of partners etc.
• the Standard Model below 1 TeV with a Higgs whose mass is naturally light...

Also, Jesse Thaler gave another nice and related talk on Wednesday (postdoc journal club) about "little technicolor".

A technical note: I strongly encourage the readers who are not interested in a particular article - or a class of articles - whatever their reason is - to ignore it instead of writing negative feedback in the "comments". There exists a limited feedback mechanism - the topics and formats that have their happy readers and that lead to a meaningful or at least lively discussion are most likely to be repeated in a mutated edition. However, I also have other independent criteria that decide the composition of the articles, and please be aware that the feedback "I am not interested in it" is considered to be unconstructive - it is absolutely obvious that the readers can't be interested in everything or appreciate everything. Thanks for your consideration.

snail feedback (31):

> In some very rough sense, deconstruction
> is similar to latticization.
[..]
> one that we can consider to be a
> "truncated string field theory" that is
> almost in the same "universality class"
> as string theory
Sounds familiar 8-)

In related news, the introduction of an "artificial" 5th dimension solves the fermion problem for lattice QCD (domain wall fermions).
I guess I am not the only one wondering if this is more than just a "nice trick".

Hi Wolfgang!

Thanks for your comment. If it sounds familiar to you because of latticization of gravity ;-), it has been tried as well, of course.

It has led to a more direct understanding of massive spin 2 fields and massive gravity, and sometimes it allowed one to raise the cutoff a bit, but don't expect a UV complete theory of gravity from such things!

All the best
Lubos

Hi Lubos,

As you are a defender of freedom of speech, before you pass out propaganda, please make sure you understand what you are criticising! Do not make the same mistake as critics of Larry Summers!

I will try to explain (impossible with such a complex subject!) what 'Deconstruction' really is.

In the social sciences, there is a field called Linguistics. This field is about interpreting texts. However, it ran up against a problem. Evolution.

For example: Take some DNA. Now the task in front of you is to try to figure out the various representations of DNA. It turns out that there are many different representations as proteins fold in different ways! This is a difficult problem, which has led to many cladistic arguments!

Another example is the Law. Now when the American constitution was written, educated people at the time had a good idea of what the framers of the constitution meant! However, as time goes by, communities change and what was 'common sense' becomes problematical.

These problems do not have DEFINITIVE SCIENTIFIC ANSWERS as we cannot go back in time and see how these systems evolve! This is at the heart of Deconstruction!

I will say one thing. J Derrida and his minions basically went insane in the 1980's and decided to attack science! Why I do not know! The methods of 'Deconstruction' are TOTALLY inappropriate to science!

An Amateur Mathematician.

There's no U(M) symmetry group. There's an SU(M) gauge group associated with each node and there is a link connecting node N to node N+1. The representation under the relevant subgroup, namely SU(M)× SU(M) is (M,\bar{M}).

And no, when only one dimension is discretized, we don't have plaquettes because we still have to take the derivatives with respect to the continuous dimensions in the action.

I know you apply the word deconstruction and to me I sense it is still particle reductionism?

Second your theory space had to have certain inclinations and adoptions and I seemed to have covered them, while reductionism is taking place, to graviton entities being produced. Is this wrong?

Daniel Kabat is speaking to your moose and quiverings?

I know I should leave it to better educated minds like some of you fellows, but it is hard to ignore the distance measures here and here (Dvali's moon measure).

Was it wrong to apply such a wide sweeping analogy to the diversity of the energies needed in particle reduction, to current GR solutions of the cosmological scale?

Lubos, most phenomenologists work with a nonlinear sigma model with the coset space SU(M)×SU(M)/SU(M)_diag and only worry about its UV completion later. A linear (M,\bar{M}) scalar field would usually lead to unwanted light scalar bosons.

Dear Zelah,

"...Linguistics. This field is about interpreting texts..."

No, linguistics is not a field about interpreting texts but a study of human languages - a field composed of phonology, phonetics, morphology, syntax, semantics, stylistics, and pragmatics. Only contextual linguistics is really what you say (and close to literary criticism), and it is a small part of linguistics.

Concerning what you seem to consider your defense of deconstruction, it is not easy to agree. The U.S. constitution is a valuable document exactly because it can be used even centuries after it was written, in a kind of literary way - exactly because the assumptions of deconstruction are wrong.

It is true that other things may become ambiguous and incomprehensible after some time, but in the case of law, this is definitely a sign of lower quality.

Proteins can be encoded as different triplets of amino acids, or how it goes, but this multiplicity of the "reverse problem" is well understood and there is no reason to create myths about it. Moreover, in many cases we can directly see the exact DNA code of organisms from the past, and the uncertainties can go away completely.

There are also many other, indirect ways we can "go" back in time. This is what evolution, cosmology, and other fields of science are all about.

All the best
Lubos

The reason why many quantum field theories are similar to some string theory models is because string theories predict an astronomical number of models corresponding to different compactifications over various manifolds/orbifolds/conifolds/etc. with various brane properties. Basically, phenomenological string theory models are scattered all over the parameter space of quantum field theory. And universality classes restrict the size of the parameter space.

Wilson line breaking is hardly a stringy effect. Generic gauge theories over a spacetime which isn't simply connected can exhibit Wilson line breakings if the radiative corrections have the right sign. Spacetimes with nontrivial higher homotopy groups can have gauge breakings associated with the higher homotopy groups. Of course, since the four dimensions we all know and love are noncompact, at least not up to the Hubble radius and 15 billion years anyway, and probably more, such mechanisms can only work in higher dimensions.

And the analogue of Wilson line breaking in deconstruction is nothing other than an ordinary Higgs mechanism. Nothing stringy about it. String theorists like to claim a property is stringy if a string theory model has that property even if there are ordinary quantum field theory models which also have that property, making it independent of string theory.

There is no convincing reason why the Planck scale has to be associated with new physics due to quantum gravity. Sure, if we run the coupling strength of the Einstein-Hilbert action as an effective field theory to higher and higher scale, we find it is around unity at the Planck scale, but what reason do we have to assume that the Einstein-Hilbert action holds right up to the Planck scale? Heck, we don't even know if it is right at large scales, at astronomical scales. We can't measure torsion, for example. And there are all sorts of anomalies like the accelerating universe with an "unusually small cosmological constant", but only according to the current paradigm and the rotation rate of spiral galaxies which are fudged away with dark matter which in turn has to be fudged away with who knows what. We can't even completely rule out the Brans-Dicke theory or theories of conformal gravity. Or what if gravity is not fundamental at all but is an emergent property? There are many papers out there proposing various mechanisms by which gravity can emerge out of low scale particle physics. By low scale, I mean far below the Planck scale.

So why should we be bothered about miraculous cancellations of the quadratic divergence? We don't even have the slightest clue of what the Higgs sector is supposed to be. The Standard Model Higgs is merely the simplest possibility and most definitely not the only one. It's nice that people are now starting to break out of the paradigm of assuming that the Higgs is a fundamental scalar boson which exists all the way up to the Planck scale. Probably the reason why they have been blinded to other possibilities for so long is because SUSY manages to stabilize the Higgs mass. At least the authors of the little Higgs models are open to other alternatives.

Lubos:

Still remember the bird shit on the telescope that caused astronomers to record two false images of a galaxy 10 billion light years away, which falsely proved the string theory?

See this:
http://motls.blogspot.com/2005/02/mark-jackson-cosmic-strings.html

And this:
http://motls.blogspot.com/2004/12/astronomers-prove-string-theory.html

It's been a few weeks since the exact coordinates of CSL-1 were released. Why is there no more noise about the cosmic string thing? I presume they could easily aim their telescope towards your given coordinates, and it would take just a few hours to get another data set and verify whether the two images are real or not.

Since there hasn't been any independent verification so far, I guess it was really just bird shit causing the original telescope to record a slightly distorted image :-)

Quantoken

Quantoken is off-topic as usual, but timely for once:
http://arxiv.org/abs/astro-ph/0503120

On-topic: I agree with Lubos that the deconstruction stuff is just a re-hash of things one can do with string theory. That being the case, what's the point?

First of all, thanks to the last contributor for mentioning the cosmic string signatures, answering Quantoken's "friendly" comment and moreover updating other people than Quantoken!

http://arxiv.org/abs/astro-ph/0503120

Let's not argue about philosophy, and let me offer you a friendly compromise that Derrida's contributions to lit crit may be interesting, and they only fail and become nonsensical - one of the key authors attacked by Alan Sokal - when applied to science.

(Another question is whether I really think that the first part is meaningful, but you don't want to hear my opinion!)

Then, at the beginning, there has been a real phenomenologist - thanks for giving an upside kick to the quality requirements. OK, I reflected the fact that one can't ever have a full U(N) (actually, in the conformal cases, this was a goal). One must always separate SU(N) only, and I had to add a sentence about removing a trace from the product, and a few more details.

"Phenomenologists only care about the UV completion later..." - is there some observable difference between the word "later" and "never"? ;-)

This is actually related to the person who questioned the meaning of the high scale. Why do we insist that physics must work at the high scales?

It's simply because we can, at least in principle, build accelerators that will accelerate a particle to huge energies. (Incidentally, in practice, the Planck accelerator would stretch across the observable Universe.)

Therefore physics must have answers to these questions. The only reason why we're allowed to stop at the Planck scale is that the collision of particles with a Planckian energy starts to produce Planckian black holes (and then bigger ones, much like if you classically create them by colliding heavy stars), and we understand physics at superhigh energies above the Planck scale from classical GR, and there is most likely "nothing qualitatively new". It's a natural place where we can stop. Spacetime itself becomes jittery, and so on.

This fundamental Planck scale, the size of the smallest meaningful black holes whose lifetime is comparable to their size, may be reduced in models with large and warped extra dimensions. We can't say for sure what the true fundamental scale is, but of course, there are many constraints. The low energy Planck scale - the braneworld situation in which even the LHC may start to produce 10-dimensional or 5-dimensional black holes at 10 TeV - is a fashionable and newer model, but most theorists (and probably also phenomenologists, if they're candid) would tell you that a fundamental scale close to the Planck scale is more likely and better motivated.

Finally, even if there were a field-theory rehash of string theory, I think that we should know about it, because eventually we are probably going to describe real physics by an effective QFT in practice.

Another person posted several contributions claiming that string theory's predictions are 100% ambiguous and that any model can be incorporated. No. String theory gives very particular hints about what sort of ideas are natural and should be looked at. It expands our imagination. If people without string theory were asking what classes of theories we should study and expect new interesting physics from, they would probably not come up with such things. They would be considering one more Z' boson, two more Z' bosons, as the "infinite family" of models (I exaggerate a bit) - but string theory is an inspiration for completely new mechanisms etc.

I was treating the Wilson lines as a stringy effect because they required extra dimensions. Sure, at low energies, any symmetry breaking must proceed via the Higgs mechanism, but deconstruction gives a very particular picture of how the Higgs must look.

Hi Wolfgang

In related news, the introduction of an "artificial" 5th dimension solves the fermion problem for lattice QCD (domain wall fermions).
"Solves" may be a bit strong in the lattice QCD context. It's true that if you use domain wall fermions with an infinitely large 5th dimension, you solve the problem exactly. However, that can't be done on a computer.

In the simulations, the 5th dimension is taken at some finite value, so there is residual chiral symmetry breaking. You can take the 5th dimension large enough to make that effect small, but it's very expensive.

Still, the approach is promising, and there are lots of people working on it (and its close cousin, overlap fermions). There's lots of stuff on hep-lat about it; hep-lat/0411006 is a large summary of the current state of the art. It's still behind the largest simulations using staggered fermions, but they are starting to get results.

Interesting stuff. I always liked the Wilson loop idea.

Lumo said:
I was treating the Wilson lines as a stringy effect because it required extra dimensions. Sure, at low energies, any symmetry breaking must be via Higgs mechanism,

I wonder if Wilson lines could be useful even in 4D, in the early universe when energies were high?

Fyodor U.

matthew,

thank you for the response.
My question was if this is "just a neat trick" or if this indicates something about real physics.

My question was if this is "just a neat trick" or if this indicates something about real physics.
That's a loaded question :) so I'll start by dodging it. In terms of solving QCD using lattice simulations, it's a neat trick. A potentially very useful one, but not fundamental to QCD.

The idea of sticking fermions on extra-dimensional walls and so forth as a "beyond the standard model" theory, on the other hand, is an interesting idea. Personally, I doubt it's right, but many of these models make predictions, and should be testable. That's always a good thing.

The only reason why we're allowed to stop at the Planck scale is that the collision of particles with a Planckian energy starts to produce Planckian black holes (and then bigger ones, much like if you classically create them by colliding heavy stars), and we understand physics at superhigh energies above the Planck scale from classical GR, and there is most likely "nothing qualitatively new". It's a natural place where we can stop. So this is the basis for the problems facing reductionism with high energy particles, and the resulting idea of energy going off into those extra dimensions?

It would be like saying: you stop at 4-dimensional spacetime, hit the plate like Dvali does, and know full well that energy is unaccounted for.

This would be the fifth dimensional attribute given to those theoretical spaces where we begin to imagine all sorts of things?:)

Yes we know it's theoretical and it has abstract maths involved, and for some, they do not like to play in that realm.

In reference to Alan Sokal: he exploited the media, using his "Hermeneutics of Quantum Gravity" to expose the ability of the media to corrupt sound scientific processes.

The internet forum can be used, as it is being used here, to counter attacks on that "method" by imparting sound information to the public.

Thanks, Professor Lubos Motl, for your time and effort to that end.

Lubos said: "They observe the discontinuity of the microwave background temperature near the cosmic string with *2 sigma* significance which may or may not be a sign of the cosmic string. The whole CSL-1 double galaxy fits into one pixel of WMAP or so, so they have to use 4 pixels around, and the precision is not great. PLANCK is expected to be twice as sensitive. The cosmic string would have to move by velocity higher than 0.94c to fit the data."

That's another ghost-chasing story! The so-called CSL-1 feature, being 10 billion light years away, spans only 2 arcseconds. The WMAP resolution is not very high at all. The paper listed 13 arc minutes as the pixel resolution. See the paper:

http://arxiv.org/abs/astro-ph/0503120

I know this guy Edward Wright and had read his web site. He is a complete idiot unable to even count numbers using fingers!

Top of page 6, he writes that at 13 arc-minute resolution, the sky is divided into "12x4^9" pixels. That looks very odd to me. Calculating from a square of 13 arc-minutes per pixel, you would have thought that for a whole sky of 4*PI solid angle there are about 9x10^5 pixels, not "12x4^9". It is also an odd, non-scientific way of expressing a quantity; the scientific way would have been X * 10^Y.

There are 10^11 to 10^12 galaxies in the whole universe. Divide them among 9x10^5 pixels for the whole sky, and each pixel will contain 10^5 to 10^6 galaxies, each as big as CSL-1.

So CSL-1 is just one out of 10^5 to 10^6 galaxies whose photons fall onto the same pixel in the WMAP dataset. Even if there is the so-called cosmic string on CSL-1, it probably wouldn't result in any recognizable feature after all the photons from all 10^5 to 10^6 galaxies mix into the data of just one pixel.
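For what it's worth, the two pixel counts can be checked directly. "12x4^9" is presumably just the pixel count of the HEALPix pixelization (which WMAP uses) at resolution level 9, i.e. N_side = 512, written in HEALPix's native base-4 form; the 13 arc-minute figure is a resolution scale, not the exact pixel side. A quick arithmetic check (my own, not from the paper):

```python
import math

# HEALPix pixel count at resolution level 9: N = 12 * N_side^2 with N_side = 512
n_healpix = 12 * 4**9            # = 12 * 512^2 = 3,145,728 pixels

# naive estimate: tile the full sky (4*pi steradians) with square
# 13-arcminute pixels
sky_deg2 = 4 * math.pi * (180 / math.pi)**2   # ~ 41,253 square degrees
pix_deg2 = (13 / 60)**2                       # one 13' x 13' pixel
n_naive = sky_deg2 / pix_deg2                 # ~ 8.8e5 pixels
```

So the naive tiling indeed gives roughly 9x10^5, while the HEALPix count is a few times larger because the actual HEALPix pixels at N_side = 512 are smaller (about 7 arcminutes across) than the quoted 13-arcminute resolution.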

Plus, how significant is a 2 sigma random fluctuation? The first letter of Lubos's last name happens to be the same as "M-theory"'s. That's probably a 3.5 sigma significance :-)

Quantoken

Lubos, you obviously don't think like a phenomenologist, but phenomenologically speaking, in SUSY gauge theories, a Wilson line around a noncontractible curve is a moduli (er... modulus). This leads to all sorts of phenomenological problems.

The same reasoning applies to deconstruction models based upon Wilson-line breakings.

Dear anonymous,

interesting. You may be the first person who thinks that I am not thinking as a phenomenologist - others say just the opposite. It's not clear to me which answer is more flattering. ;-)

You are making basically two errors. One of them is that you think that the Wilson lines in the relevant superstring models are continuous degrees of freedom.

They are not. Take any model of this kind, such as the heterotic Standard Model. The nontrivial Wilson lines are those on the torsion cycles - and the torsion is a Z3 x Z3. So these Wilson lines over any of these cycles must have a third power equal to one, and such a condition determines them uniquely up to a conjugation (which is a gauge transformation).

Similar "frozen" moduli are found in "triples, fluxes etc.".

In other words, there is no way you can continuously deform the Wilson lines on the torsion cycles, and there are no moduli generated with this choice.

The second point is about the Wilson lines that could be continuous - for example, if they were breaking the group to many U(1)'s etc.

In N=1 theories, such Wilson lines are only moduli classically. In deconstruction, the analogue of the Wilson line - like the field PHI in the little Higgs model that transforms as (2,2) under the SU(2) x SU(2) - the deconstructed electroweak SU(2) - and generates the quartic coupling - is explicitly massive; in fact, its mass is close to the Planck or GUT scale. Very far from moduli.

All the best
Lubos

Erm, I need to be more precise in my wording. I mean Wilson-line breakings associated with a smooth manifold compactification with one extra dimension, not an orbifold compactification. The little Higgs model corresponds to a deconstructed orbifold model.

No, anonymous. I assure you that what you write is rubbish.

The little Higgs model has explicit mass terms for "phi" and these degrees of freedom certainly don't produce any moduli. Just open the first elementary paper on the little Higgs model to see that you're not right.

It is also not true that the manifolds with the torsion I talked about - e.g. the heterotic standard model - are orbifolds. You apparently think that if someone writes a Z3 x Z3, it must be an orbifold. Nope. The manifold is completely smooth (no fixed points or orbifold singularities) and can't be obtained in the way you think.

One more comment: in deconstruction, one is of course more free to put the potentials etc. for the scalars that would become components of the higher-dimensional gauge fields in the continuum limit.

This may also be interpreted as a toy model for a non-trivial torsion in a more complicated stringy compactification. In both cases, the problem "new moduli" you talk about is absent in the models under consideration.

One more comment. It's certainly not guaranteed by gauge invariance that the mass term is absent.

The Wilson line P exp(i.int A) is gauge invariant, and any function of this object is therefore gauge-invariant, too. Such a mass term arises generically in N=1 from string theory, and it can also be added explicitly to a deconstructed model such as the little Higgs - it's there in the model.
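Since this gauge invariance is the crux of the argument, here is a toy numerical check (my own sketch, unrelated to the specific models above): put random SU(2) link variables on a closed chain of sites, apply an arbitrary gauge transformation at every site, and verify that the traced Wilson loop is unchanged.

```python
import numpy as np

def random_su2(rng):
    # random SU(2) matrix built from a normalized quaternion
    a = rng.normal(size=4)
    a /= np.linalg.norm(a)
    return np.array([[a[0] + 1j*a[1],  a[2] + 1j*a[3]],
                     [-a[2] + 1j*a[3], a[0] - 1j*a[1]]])

rng = np.random.default_rng(0)
N = 5                                    # links around a closed loop
U = [random_su2(rng) for _ in range(N)]  # link variables U_n
g = [random_su2(rng) for _ in range(N)]  # gauge transformation g_n at each site

def wilson_loop_trace(links):
    # trace of the ordered product of links around the loop
    P = np.eye(2, dtype=complex)
    for L in links:
        P = P @ L
    return np.trace(P)

# gauge-transformed links: U_n -> g_n U_n g_{n+1}^dagger (indices mod N)
Ut = [g[n] @ U[n] @ g[(n + 1) % N].conj().T for n in range(N)]

w0 = wilson_loop_trace(U)
w1 = wilson_loop_trace(Ut)
# w0 and w1 agree: the g's telescope away inside the trace
```

The product of transformed links collapses to g_0 (prod U) g_0^dagger, so the trace is invariant; any function of this trace (e.g. a mass term) is gauge invariant too.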

Um, right. I was really thinking of tori, T^d where the fundamental group is Z^d. I know there are manifolds whose fundamental groups contain elements of finite order, but I had deconstruction models in mind and most of them deconstruct S^1 or S^1/Z_2. Witten's paper is really about the deconstruction of an orbifolded 2D disc (the quotient of a 2D disc by the Z_n rotational group about the center) which was why he could get Z_n as the fundamental group. Most little Higgs models are deconstructions of the S^1/Z_2 orbifold, which wasn't what I had in mind either.

The model that I had in mind was your example of a circle of SU(M) nodes, and also a circle with alternating SU(k_1) and SU(k_2) nodes. A SUSY version of such a model does have moduli.

Not every higher-dimensional model is deconstructible. To take a simple example, one which crops up in most models: a Dirac spinor field in five dimensions decomposes into a left-handed Weyl spinor and a right-handed one. So far, we have no problems. But phenomenologically speaking, we want chiral fermions. So, what do we do? We orbifold the extra dimension, turning it into S^1/Z_2. We automatically get chiral fermions out of orbifolded bulk fermions. Most phenomenological models make use of this trick. But what happens when someone tries to deconstruct it? They find they can't get chiral fermions - and why is that? Because of fermion doubling, the well-known nightmare of QCD lattice simulators.
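The doubling just mentioned is easy to exhibit in the simplest setting (a sketch of the standard textbook fact, in one dimension): the naive symmetric lattice derivative gives a fermion dispersion proportional to sin(k) in lattice units, which vanishes not only at k = 0 (the physical light mode) but also at k = pi, producing a spurious second light fermion, the "doubler".

```python
import numpy as np

N = 64
k = 2 * np.pi * np.arange(N) / N   # Brillouin-zone momenta on an N-site lattice

# naive lattice fermion in 1D: the symmetric difference operator gives
# a dispersion E(k) ~ |sin k| instead of the continuum |k|
disp = np.abs(np.sin(k))

zeros = k[disp < 1e-9]             # momenta where the fermion is massless
# two zeros: k = 0 (the physical mode) and k = pi (the doubler)
```

Domain wall and overlap fermions are precisely ways of evading this (at the cost Matthew describes above), consistent with the Nielsen-Ninomiya theorem, which forbids a local, doubler-free, chirally symmetric lattice action.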