Saturday, April 30, 2005

Filibustering

The filibuster - an extra-constitutional obstructionist tactic of speaking about irrelevant things for hours in the Senate in order to delay a decision - has been used by both parties throughout U.S. history. In the 1950s and the 1960s, it was used to block new civil rights bills.

Recently, at least 10 conservative judicial nominees have been filibustered by the Democrats in the Senate - an unprecedentedly large number. Princeton's alumnus, the Senate majority leader William Frist, proposed the "nuclear option" based on the paradigm that it should be enough to debate a candidate for 100 hours - and a vote should follow afterwards. Today it takes 60 votes to stop a meaningless debate; according to Frist's new rules, it would take just 51 votes in the case of judicial nominees.

Most Democrats and other left-wing forces - which also include 95 percent of the intellectually diverse Princeton University - vehemently disagree. Everyone should be allowed to speak for hundreds of hours and maybe for years. It is vital for democracy to obstruct and delay nominees that the correct people do not like - much like it is important for bureaucrats to slow everything down as much as possible (these slowing procedures are usually extremely efficient and in many cases more annoying than a "no" vote). For example, it is important to read random pages from Introduction to Elementary Particles by David Griffiths for more than 50 hours.

Edward Witten and Chiara Nappi are not the only ones - Frank Wilczek is having a great filibustering time in Princeton, too. ;-) See also the filibuster webcam and program in Princeton. I was told that the last sentence was "unnecessary".

Stringy Baby Universes

Robbert Dijkgraaf, Rajesh Gopakumar, Hiroši Ooguri, and Cumrun Vafa (DGOV) have extended their previous work about the relations between topological string theory, two-dimensional Yang-Mills theory, and Hartle-Hawking states

to non-perturbative effects in Yang-Mills theory. The most relevant previous blog article about the topic is

Note that Savas Dimopoulos has used this term with an incorrect meaning (the anthropic haystack), but we obviously mean the more correct one. ;-) Everyone who wants to read about the Baby Universes advertised in the title is encouraged to be extremely patient. Although the new work is very interesting, let me be rather brief. Imagine that you want to count the index of a (3+1-dimensional) black hole which is really a D4-brane wrapped on some 4-cycle of a six-dimensional Calabi-Yau space - a manifold which is nothing else than a four-dimensional fiber bundle over the two-torus. If the word "index" sounds too abstract, replace it by "the number of microstates counted with some minus signs".

If you accept the word "index" anyway, you are counting the supersymmetric (BPS) sector of this theory, and it is a usual story that the BPS sector of a higher-dimensional theory may be described by a non-supersymmetric lower-dimensional theory. In this particular case, the relevant lower-dimensional theory is nothing else than two-dimensional Yang-Mills theory compactified on the same two-torus.

Looking at two-dimensional Yang-Mills

Now, you might think that two-dimensional gauge theory must be terribly boring. The number of transverse physical excitations of a photon (or a gluon) is "D-2=0", for example. Unlike matrix string theory - a two-dimensional gauge theory with matter that can describe infinitely many states and their interactions (in fact, the whole type IIA string theory) - it has no other fields. Nevertheless, you may still compute its partition sum as a function of the number of colors "N", the coupling constant, and the area of the torus. Don't forget that this partition sum computes an index of the higher-dimensional black hole.

It was shown roughly a decade ago that, as far as this partition function goes, two-dimensional Yang-Mills theory is equivalent to a system of free fermions that fill a band of states with energies between "-N/2" and "+N/2" (let me ignore the integrality vs. half-integrality properties of "N"). This band has two Fermi surfaces: one of them is up (near "+N/2") and one of them is down (near "-N/2"). The partition function is really a sum over possible excitations of these two Fermi surfaces.

Note that if "N" is large, these two Fermi surfaces are very far apart and almost decoupled. Consequently, the partition sum of the free fermions factorizes into a product

  • Z_{Yang-Mills} = Z_{up} Z_{down}.

Moreover, "Z_{up}" and "Z_{down}" are very similar and essentially complex conjugates to each other. That's not the end of the "entropic principle" story: Z_{Yang-Mills} may be interpreted, for large "N", as the black hole entropy, while "Z_{up}" and "Z_{down}" are the partition sums "Z_{top}" of topological string theory on the Calabi-Yau manifold that describes our black hole and its (the partition sum's) complex conjugate. This was the essential point of the work by Ooguri, Strominger, and Vafa: the exponentiated black hole entropy may be computed as the squared absolute value of the partition sum of topological string theory.

In terms of the two-dimensional Yang-Mills variables, the black hole partition function becomes the Yang-Mills partition function. The partition sum of two-dimensional Yang-Mills theory may be written not only using free fermions, but more generally also as a sum over irreducible representations "Rep"

  • Z_{Yang-Mills} = Sum_{Rep} exp[-C_2(Rep) A g^2]

where "C_2" is the second Casimir of the representation, "A" is the area of the two-torus, "g" is the coupling constant (whose dimension is "mass"), and "N" is the number of colors. For non-toroidal topologies, an extra factor "dim(Rep)^{chi}" with the exponent "chi" being the Euler character of the surface would have to be added to the sum. Nevertheless, for large "N", most irreducible representations have a huge Casimir that kills their contribution to the sum. The "small" Casimir irreps of "SU(N)" can be obtained from the tensor product of a "small" representation constructed by tensoring (drawing a Young diagram) from the fundamental representation, and another "small" representation obtained from the antifundamental representation in the same way. The Casimir is then a sum of two pieces, the summation over "Rep" factorizes into a summation over "Rep_{fun}" and an independent summation over "Rep_{antifun}". Finally, the partition sum itself factorizes in such a way that the last two displayed formulae agree.

Non-decoupling of two surfaces

Nevertheless, the two Fermi surfaces are not quite decoupled for finite "N" and there are correlations. For example, if you fix the total number of fermions "N", a missing group of fermions near "+N/2" must be accompanied by added fermions near "-N/2". These correlations modify the partition sum by "exp(-N)" effects - non-perturbative effects with respect to the "1/N" expansion that can be neglected for large "N". DGOV have the full expression for the partition sum, and therefore they can evaluate it including these tiny effects. The partition sum of Yang-Mills then contains not only terms of the type "Z_{top}^2" but also higher powers of "Z_{top}", so to say. The exponential suppression "exp(-kN)" arises for the same reason that makes the total exponentiated entropy of several black holes negligible compared to that of a single black hole with the "total" mass: a single black hole is entropically preferred, and splitting it into pieces is exponentially suppressed by the entropy counting.

If we avoid the term "Baby Universe", the black hole partition sum may be visualized as a sum of partition sums of "K" black holes, where higher values of "K" are discouraged exponentially. However, every single black hole among these "K" objects has its own near-horizon geometry which is an independent "AdS2 x S2 x Calabi-Yau" universe. Consequently, the partition sum of Yang-Mills theory may be viewed as a gauge-theoretical dual of a system of many "AdS2 x S2" universes - the baby Universes. Andy Strominger and his collaborators also liked to play with these "AdS2 x S2 x Calabi-Yau" disconnected backgrounds. DGOV explain that this multiplicity of Universes does not destroy the coherence in a single Universe.

Another interesting subtlety is that the term in the partition sum coming from "K" disconnected Universes is weighted by a rather unusual factor - the Catalan number "C_K" (1, 1, 2, 5, 14, 42, 132, ... if we start from "K=0") that measures the number of planar trees whose endpoints are the given Universes. (They can be written as "(2K)! / (K! (K+1)!)".) For every tree like that, one can construct a corresponding "tree-like" solution of supergravity, really generated by multi-centered black hole solutions. The appearance of this Catalan number may be interpreted as some new obscure kind of "statistics" that remembers the "origin": for Bose-Einstein or Fermi-Dirac statistics of the Universes, we would obtain simpler factors.
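Since both the sequence and the closed form are quoted above, a quick sanity check costs nothing (a minimal sketch; the recursion below is the standard one counting planar rooted trees):

```python
from math import factorial

def catalan_closed(K):
    # C_K = (2K)! / (K! (K+1)!)
    return factorial(2 * K) // (factorial(K) * factorial(K + 1))

def catalan_recursive(n_terms):
    # C_0 = 1 and C_{K+1} = sum_{i=0..K} C_i C_{K-i}: gluing two smaller
    # planar trees at the root reproduces every planar tree exactly once.
    C = [1]
    for K in range(n_terms - 1):
        C.append(sum(C[i] * C[K - i] for i in range(K + 1)))
    return C

print(catalan_recursive(8))                   # [1, 1, 2, 5, 14, 42, 132, 429]
print([catalan_closed(K) for K in range(8)])  # the same sequence
```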

I am still confused about some interpretational issues. These Baby Universe effects only become important for small values of "N", which is exactly where the geometry (and even the counting of the number of universes) is fuzzy. I don't know how one could ever extract the information about multiple large independent universes from the partition sum - and its generalizations.

Friday, April 29, 2005

New Scientist on TOE

The new issue of New Scientist discusses the current situation of our field. It does so relatively honestly, and the spectrum of topics is consequently not the most optimistic one: the conjectured large number of solutions of string theory, which it calls "the worst embarrassment of riches ever known"; loop quantum gravity; taming the multiverse in various ways; and so forth.

Concerning the anthropic haystack, Susskind obviously supports it, referring to eternal inflation, while Witten says:

  • "More work has always given more possibilities - far more than anyone wanted ... I hope that current discussion of the string landscape isn't on the right track. But I have no convincing counter-arguments."

Thursday, April 28, 2005

Generalized geometry

The book on the left contains almost everything you need to know about algebraic geometry and Calabi-Yau manifolds in the context of string theory and closely related fields...

Andy Neitzke was leading the postdoc journal club, and it was exciting.

Hitchin, a famous mathematician, decided to understand the following question:

  • What the heck is the B-field?
And he answered the question by the phrase "generalized geometry" and the associated equations and concepts that I will mention below. Consequently, 20 physicists at Harvard had to spend 2 hours tonight trying to answer the following question:
  • What the heck is generalized geometry?
What's the answer? Well, surprisingly, it seems that it is a crazy mathematical construction that is supposed to incorporate the B-field. :-)

OK, let's start more seriously. When you talk about complex manifolds or something like that, it is useful to imagine the tangent bundle T at every point of the manifold. And there is a group like SO(d) in the real case - or more precisely GL(d), because you're not forced to preserve any metric - acting at each point.

Hitchin makes it more complicated and tells you that you should replace
  • T ... by ... T (+) T*
where T* is the cotangent bundle. There is a natural contraction between the vectors and covectors that is preserved by an SO(d,d,R) group. It's a contraction mathematically analogous to the contraction of the momentum and winding, although the latter two quantities are discrete; similarly, the SO(d,d,R) group is analogous to the discrete T-duality group SO(d,d,Z) that occurs for string theory on tori.
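To see concretely why the B-field fits into this structure, here is a small numerical sketch (using numpy; the block-matrix conventions are the standard ones of the generalized-geometry literature, though the normalizations here should be treated as my assumptions). A B-field acts on T (+) T* as the shear (x, xi) -> (x, xi + Bx) with B antisymmetric, and the point is that this shear preserves the natural contraction, i.e. it is an element of SO(d,d,R):

```python
import numpy as np

d = 3
rng = np.random.default_rng(0)

# The natural pairing on T (+) T*: <(x, xi), (y, eta)> = xi(y) + eta(x),
# a split-signature metric preserved by SO(d,d,R).
pairing = np.block([[np.zeros((d, d)), np.eye(d)],
                    [np.eye(d), np.zeros((d, d))]])

# A B-field shear: (x, xi) -> (x, xi + B x) with B antisymmetric.
B = rng.standard_normal((d, d))
B = B - B.T                                   # antisymmetrize
shear = np.block([[np.eye(d), np.zeros((d, d))],
                  [B, np.eye(d)]])

# The shear preserves the pairing and has unit determinant.
print(np.allclose(shear.T @ pairing @ shear, pairing))   # True
print(np.isclose(np.linalg.det(shear), 1.0))             # True
```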

Tuesday, April 26, 2005

RHIC produces quark-gluon plasma

Just links for those who are interested - thanks to David Goss for reminding me of this news:

Of course, the dual description involving black holes is mentioned, too.

LHC: Gigabyte per second transfer works

The Large Hadron Collider will create a huge amount of data - and one of the big tasks is to transfer the data to other labs where they can be effectively investigated. LHC is expected to produce 1,500 megabytes per second for ten years or so, or, according to other sources, 15,000 terabytes per year. At any rate, it will be the most intense source of data in the world.



Figure 1: Canterbury Cathedral is big enough to fit the LHC's Compact Muon Solenoid (CMS), as argued here.

It's a pleasure to inform you that the GridPP project (60 million dollars) has passed an important test, the "Service challenge 2". For a period of 10 days, eight labs (Brookhaven and places in the EU) were receiving 600 megabytes per second from CERN (yes, it's not 1 GB/s yet, as announced in the title, but it will be). It would take at least 2,500 years for my modem ;-) to transfer the total amount of data, namely 500 terabytes.
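A back-of-the-envelope check of these numbers (the 56 kbit/s dial-up speed below is my assumption; the actual speed of the author's modem isn't stated):

```python
total_bytes = 500e12          # 500 terabytes moved during the challenge
grid_rate   = 600e6           # 600 megabytes per second sustained from CERN
modem_rate  = 56e3 / 8        # 56 kbit/s dial-up modem, in bytes per second

day  = 86400.0
year = 365.25 * day

print("GridPP: %.1f days" % (total_bytes / grid_rate / day))     # ~9.6 days
print("modem:  %.0f years" % (total_bytes / modem_rate / year))  # ~2264 years
```

The GridPP figure indeed comes out to roughly ten days, and the modem figure lands in the multi-millennium ballpark quoted above.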

The current acceptance rate is only 70 MB per second, and in a series of steps, they plan to increase it roughly to 400 MB per second. Further reading:


For comments about the support from IBM and their breakthrough performance & storage visualization software, click here or here.

Monday, April 25, 2005

The volume of the haystack

We've been asking various questions to Frederik Denef and Michael Douglas who are visiting us.

One of the things one typically imagines is that the volume of the haystack (formerly known as the landscape) is very large. How large is large?

Take the quintic hypersurfaces in CP^4. They have 101 complex structure moduli. Construct the 101-dimensional moduli space, determine its Kahler metric from the kinetic terms in type II string theory, and measure its volume. What will you get? Something like

  • 1 / 5^24 times...
well, that's a pretty small number, but it's not the worst factor, so let me continue:
  • 1 / (5^24 times 120!)
Yes, it's the factorial of 120 in the denominator. That's a wicked small number, something like 10^{-216}. A typical example of my thesis that the "very interior" of the haystack (or the "configuration space") has a small volume. Nevertheless, in this small volume, one is supposed to find googols of vacua. That's because the estimated density of the vacua
  • det (R - omega)
where "R" is the curvature and "omega" is the Kahler form (in dimensionless units) really does not contain the factor (1/120!). But still, don't you find it a bit strange that there is a density of 10^{350} metastable vacua per unit volume? We don't have a real emotional intuition how "density" in a very-high-dimensional space should behave, but we should probably try to learn it. I feel that these (especially the de Sitter) vacua cannot be quite isolated. There are just many other vacua nearby (virtually all of them) into which one should be able to decay. KKLT only consider one Coleman-DeLuccia instanton, without an enhancement, and I feel it can't be the whole story.

Friday, April 22, 2005

Kennedy's landscape

Frederik Denef (Rutgers U.) was explaining how to build a better racetrack (with Bogdan Florea and Michael Douglas), i.e. how to construct particular examples of the numerous KKLT anti de Sitter vacua - the mathematical constructions that are used to argue that the anthropic principle is needed in string theory. The talk today was actually based on a newer paper with Douglas, Florea, Grassi, and Kachru; sorry for the incorrect reference, and thanks to Frederik for the correction. Nevertheless, I will keep examples from the older paper, too. This stuff is impressive geometry - really high-brow mathematics, even if it happens to be just recreational mathematics.

Nevertheless, the most illuminating idea was the following variation of Kennedy's famous quote due to Abdus Salam:

  • My fellow physicists, ask not what your theory can do for you: ask what you can do for your theory.
This could become the motto of the landscape research. Suddenly it's not too important whether a theory teaches us something new about the real world - whether it predicts new unknown phenomena or previously unknown links between the known phenomena and objects. It's more important that such an unpredictive scenario might be true, and we should all work hard to show that the scenario is plausible because we should like this scenario, for reasons that are not clear to me.

It's slowly becoming a heresy not to believe the anthropic principle - but it already is a heresy to think that the question whether the anthropic reasoning explains the details of our universe is not even the most interesting question, at least among the scientific ones. Even if some numbers in Nature - such as the particle masses - are random historical coincidences, we will never know for sure.

Let me remind you about the basic framework of the Kachru-Kallosh-Linde-Trivedi (KKLT) construction - the most frequently mentioned technical result used to justify the anthropic principle in string theory. String theory often predicts many massless scalar fields, which are unacceptable because they would violate the equivalence principle and we would already have detected them.

They must be destroyed - i.e. they must acquire masses. The potential energy as a function of the scalar fields must have a finite or countable number of minima. The scalar fields then sit at these minima - we say that the moduli (scalar fields) are stabilized, which is a good thing and one of the unavoidable tasks. Moduli stabilization was the main goal of Frederik's talk.

KKLT start with F-theory (a formally 12-dimensional theory due to Cumrun Vafa) compactified on an elliptically fibered Calabi-Yau four-fold (an eight-dimensional manifold; elliptically fibered = interpretable as an elliptic curve, i.e. a two-torus, attached to every point of a lower-dimensional base space) to give you a four-dimensional theory with a negative cosmological constant and all moduli stabilized. Then they add some non-supersymmetric objects (anti-D3-branes) to create a de Sitter space (with the observationally correct, positive cosmological constant and broken supersymmetry) out of the original anti de Sitter space (AdS).

The talk today focused on the AdS, supersymmetric part of the task.

The F-theory vacuum on a four-fold may be re-interpreted as a type IIB vacuum with orientifold planes (both O3 and O7 where 3 and 7 count the spatial dimensions along the fixed planes). Moreover, there are some fluxes of the three-forms over three-cycles (both the NS-NS as well as the R-R field strengths). The integral
  • int (H3 wedge F3) + #(D3)
must equal "chi(X)/24" due to a tadpole cancellation condition, which constrains the fluxes H3 and F3 (numerical constants ignored). In terms of the four-fold, the same condition may be written as
  • L = 1/2 (int G4 wedge G4) = chi (X) / 24 - #(D3)
where you may think about M-theory on a four-fold instead of F-theory (a dual description for finite areas of the elliptic fiber), and G4 is the standard M-theoretical four-form field strength (its integrals over the two 1-cycles of the toroidal fiber give you the NS-NS and R-R three-form field strengths, respectively). Such a cancellation condition still allows for a huge spectrum of possible choices of the integer-valued fluxes: as Bousso and Polchinski estimated 5 years ago, if there are 300 three-cycles and each of them can carry a flux roughly between 0 and 30, then there are 30^{300} or so possible universes. The light scalar fields that we need to stabilize are
  • the dilaton/axion
  • the complex structure moduli, the shape parameters of the four-fold
  • the Kahler moduli, the areas of topologically non-trivial two-dimensional manifolds (2-cycles)
The former two categories are stabilized perturbatively by the Gukov-Vafa-Witten superpotential
  • W = int (Omega wedge G3)
where Omega is the holomorphic three-form and G3 is the complexified three-form field strength that includes both the NS-NS and R-R components (with "tau" as the relative coefficient, which makes "tau" stabilized, too). This perturbative superpotential handily stabilizes the dilaton/axion and the complex structure moduli at values that are in principle calculable. Well, I should really write the 8-dimensional integral "int (Omega4 wedge G4)" from the M-theory or F-theory picture.
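As an aside on the Bousso-Polchinski counting mentioned above, here is a sketch of the two standard estimates - the naive "independent ranges" count from the text, and the refinement that counts flux lattice points inside the ball (1/2) Sum_i f_i^2 <= L, whose volume is (2 pi L)^{K/2} / (K/2)!. The values K = 300 and L = 972 are purely illustrative (in a real model, L would come from chi(X)/24):

```python
import math

# Naive box estimate: 300 three-cycles, each flux taking ~30 values
print("box estimate:  10^%.0f vacua" % (300 * math.log10(30)))   # ~10^443

# Ball estimate: lattice points with (1/2) sum_i f_i^2 <= L in K dimensions,
# approximated by the volume (2 pi L)^(K/2) / (K/2)! of the ball
K, L = 300, 972    # illustrative values only
log10_ball = (K / 2) * math.log10(2 * math.pi * L) \
             - math.lgamma(K / 2 + 1) / math.log(10)
print("ball estimate: 10^%.0f vacua" % log10_ball)               # ~10^305
```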

However, the Kahler moduli (the sizes of the two-cycles) are not stabilized by any perturbative effects. Such a fact is also known from other types of stringy models of reality, the so-called "no-scale supergravities" obtained e.g. by compactifying the heterotic strings on Calabi-Yau three-folds. These moduli are, however, stabilized by M5 (or "F5")-brane instantons wrapped on six-cycles of the four-fold. This can either be interpreted as D3-brane instantons in type IIB, or condensation of gauginos living on the D7-branes.

Note that we want to add new terms to the superpotential W that stabilize all the moduli. The precise value of the Kahler potential (not to be confused with the Kahler moduli, although Mr. Kahler is of course identical in both cases; the Kahler potential is another function that determines the physics of four-dimensional supersymmetric theories) is not protected, and it's always a source of controversies.

OK, these are the general rules - everything else amounts to looking for more exact, particular examples. A goal is to stabilize the Kahler moduli at sufficiently large volumes of the internal space, whatever the space exactly is. This (large volume) is something that can be marginally achieved (if you think that the number 20 is large), but the 2-cycles are never really large at the end. Instead, they are comparable to the string size.

The anthropic strategy is to pick Calabi-Yau manifolds as complicated as possible, to guarantee that there will be a lot of mess, confusion, and possibilities, and that no predictions will ever be obtained - as long as all the physicists and their computers fit into the observed Universe (which is an encouraging prediction that Frederik has also mentioned).

This means that you don't want to start with Calabi-Yaus whose Betti numbers are of order 3. You want to start, if one follows the 2004 paper, with something like F_{18}, a toric Fano three-fold. That's a 3-complex-dimensional manifold that is analogous, in a sense, to the two-complex-dimensional del Pezzo surfaces. But you don't want just this simple F_{18}. You take a quadric Z in a projective space constructed from this F_{18} and its canonical bundle. OK, finally the Euler character of the four-fold X is 13,248. A great number, and one can probably estimate the probability that such a construction has something to do with the real world. It becomes a philosophical question whether we should be distinguishing this probability from the number "zero" and how much this "zero" differs from the probability that loop quantum gravity describes quantum gravity at the Planck scale. One can also estimate the values of the scalar fields at the minima of the potential, and the number of vacua (some of their models only had a trillion, others have 10^{300} - of course, the Kennedy rule is that the more ambiguous and unpredictive a set of vacua is, the more attention physics should pay to it).

The example today, from the 2005 paper, was the resolved orbifold "T^6 / Z_2 x Z_2", which has 51 Kahler moduli and 3 complex structure moduli. The singularities were analyzed by a local model, and various toric diagrams shown were related by a flop (or a flip, as the now more popular terminology goes). Sorry for neglecting the real model of this talk in the first version of this article.

Cumrun - who is not exactly a fan of the anthropic principle (unlike Nima, who tried to counter him) - was extremely active during the talk, and he argued for the existence of many new effects that were neglected. For example, there is new physics near a high-codimension singularity that is needed in one of these models. Cumrun argued that the fivebrane instantons could get destabilized - kind of unwrap from the singularity; that a lot of instanton corrections could arise from various cycles; and so forth. The expansions are never quite under control because they rely on some "small" numbers that can be as large as (4 pi / flux) where the "flux" is of order ten or one hundred. Most estimates for the Kahler potential are unjustified, and so forth.

Their calculations required drawing a lot of toric diagrams (a representation of a manifold where toroidal fibers are attached to a region with boundaries on which some of the circles of the tori shrink to zero); determining various cycles and their triple intersection numbers (it's like counting how many holes a doughnut has, but in a more difficult 8-dimensional setup), which are needed for the volume; and a lot of computer time. Do we really believe that by studying the orientifold of the weighted projective space CP^{4}_{[1,1,1,6,9]}, we will find something that will assure us (and others - and maybe even Shelly Glashow) that string theory is on the right track? I believe that the simplest compactifications, whatever the exact counting is, should be studied before the convoluted ones. If we deliberately try to paint the string-theoretical image of the real world as the most ambiguous and uncalculable one, I kind of feel that it's not quite honest.

When we study the harmonic oscillator and the Hydrogen atom, we want to understand their ground states (and low-lying states) first - where the numbers are of order one. Someone could study the "n=1836th" excited level of the Hydrogen atom, hoping that it is messy enough that it could explain why the proton mass is so much larger than the electron mass. But is it a well-motivated approach? Some people used to blame string theorists for only looking for the keys (to the correct full theory) under the lamppost. It's unfortunately not the case anymore: most of the search for the keys is now being done somewhere in the middle of the ocean (on the surface). Maybe someone will eventually show that the keys can't stay on the surface of the ocean, and we will return to the search for the keys in less insane contexts. But it's not easy to prove something about the middle of the ocean, especially if we don't yet understand the shape of the Manhattan island.

Jim Peebles & formation

Yesterday, Jim Peebles gave a nice talk about possible anomalies in the standard model of structure formation and possible remedies in the dark sector. He showed many pictures of colliding and other galaxies, and so forth. The main technical hypothesis was that there is an equivalence-principle-violating fifth force caused by a massive scalar that only couples to the dark sector. The inverse mass is comparable to 1 megaparsec. Such a new force would make it possible to empty the voids more efficiently. I've already described these ideas after Steve Gubser's talk.

Thursday, April 21, 2005

NY Times about Kavli

The New York Times - Dennis Overbye, more precisely - writes about Fred Kavli (and his brother):

Wednesday, April 20, 2005

A celebration of Richard Feynman

Comment: a longer blog article about Richard Feynman is here.

A small announcement for everyone in the Boston area:

Wednesday, April 20, 2005, 6:30 pm (tonight)

  • Boston Public Library, Johnson Building, 700 Boylston Street
  • Mezzanine Conference Room, 1 level up
A Celebration of Richard Feynman:
  • Alan Guth, Massachusetts Institute of Technology
  • Robert Kirshner, Harvard-Smithsonian Center for Astrophysics
  • Stephen Wolfram, Wolfram Research
Free and open to the public...

Gregory Gabadadze & UV Lorentz violations

Gregory Gabadadze was just speaking about the infrared modifications of gravity, one of his recent favorite topics. This includes various theories of massive gravity and spontaneous Lorentz symmetry breaking. In the latter case, for example, they found theories

in which the massive graviton only has two polarizations. This can only occur because there is neither a Lorentz symmetry nor a Galilean symmetry with which you could go to its rest frame in such a way that the rotational symmetry would be preserved: if it were preserved, the spin "j=2" particle would have to have "(2j+1)=5" polarizations.

There has been a discussion about the Lorentz symmetry restoration in the UV. One may imagine that the diffeomorphism symmetry group is always preserved - even Newton's theory may be written in a diff invariant fashion - but the real physical issue is whether the spontaneous Lorentz symmetry breaking is undone at high energies.

In normal theories where you break the Lorentz symmetry spontaneously - e.g. by Coke in a bottle - the original causality (the speed limit "v less than c") is guaranteed to be preserved. This is a consequence of the Lorentz symmetry restoration in the UV regime. However, one may construct theories - at least UV incomplete theories - that violate this property. I am a bit uncertain whether such theories that allow the Lorentz symmetry to be broken spontaneously in the deep UV may be UV-consistent.

Cottrell & Pope

Just a few sentences. Our fellow string theorist Billy Cottrell was sentenced to 100 months (more than 8 years) for his unusual treatment of the SUVs - a topic that was discussed last year here on this blog. He should also pay $3.5 million - well, one may also call it "infinity".

The cardinals have chosen a new pope - Joseph Ratzinger (78) of Germany - one of the frontrunners at the betting companies. Yes, a German is probably not as controversial a choice (in comparison with the Italian guys) as some pundits tried to claim, because he was elected by one of the fastest conclaves in history.

He became Benedict XVI. While the previous Benedict XV (about 100 years ago) was a rather liberal Pope, Ratzinger is very conservative. He's one of the leading theologians in the Catholic Church and he's already been very powerful under John Paul II. Don't expect him to legalize gay marriage or something like that. ;-) No doubt, those who will criticize him in the future will be happy to learn that much like many other kids, Ratzinger was a member of Hitlerjugend at the age of 12 or 14. ;-)

Ratzinger looks like a rather impressive guy. As a former professor (who taught dogma), he is also an accomplished pianist (who prefers Beethoven and Mozart), speaks 10 languages, and dislikes communism, relativism, homosexuality, and other things. He will prefer a smaller but purer Church. Although I don't share most of their dogmas, his approach is appealing in many ways.

Sunday, April 17, 2005

Japanese textbooks

Anti-Japan riots spread in China. The Chinese are not satisfied with the attempts of Japan to become a permanent member of the U.N. Security Council (this bid is supported by the U.S., while China not only opposes it but probably wants India to become a permanent member), and with the new Japanese history textbooks that seem to downplay the evils of the Japanese aggressions against its Asian neighbors.

For example, the textbooks in the past used to talk about the "comfort women" i.e. the employees of the military brothels (a part of the Japanese war policy at that time). Most of them were Japanese, but some of them were Korean or Chinese. Most of the new textbooks fail to mention the "comfort women" and other war topics. These textbooks may be viewed as a victory of the Japanese nationalist groups that have fought against the "masochist" education that was undermining the national identity.

In Shanghai, 20,000 protesters attacked cars and restaurants that had something to do with Japan. There were also 10,000 protesters doing similar things in Hangzhou where Andy Strominger, his family, and his collaborators are having a great time. Let's pray - or do a rational equivalent of it if there is one - that the situation won't become dangerous for them. Japan demands an official apology from China - a country that apparently failed to prevent the violence; China seems to blame Japan's "wrong attitude" for these protests.

Thursday, April 14, 2005

Behind the horizon

Steve Shenker (Stanford University) has reviewed his (and his collaborators') work

  • Lost behind the horizon
While the AdS/CFT correspondence contains black holes, the boundary CFT only easily describes their exterior. It is hard to see behind the horizon.

Steve showed that for the eternal BTZ black holes one has two (entangled) boundaries, as described by Kraus, Ooguri, and Shenker, and the correlators include contributions from the geodesics through the bulk. For a particular choice of the points on the boundary, one expects a singularity from geodesics that become null and reflect from the future BTZ singularity.

Unfortunately, one can't see this singularity in the perturbative expansion. Nevertheless, there are a lot of interesting questions that one may start to answer once some uncertainties get resolved. Steve worked on these questions with Lukasz Fidkowski, Veronika Hubeny, and Matt Kleban, and perhaps someone else whom I will add if necessary.

Another development was the description of inflation inside AdS/CFT. Steve needed to assume the Landscape conjecture. He wanted to create a bubble of a false dS vacuum inside the AdS space. This dS bubble inflates for a long time, and the question is what it should look like in the CFT. Note that this is an example of the "universe in the bottle". Guth and Farhi have shown that such a universe in the bottle is unlikely to exist because when you trace it back, you are likely to encounter a singularity in the past. Also, some people would say that such a large universe in the bottle contradicts the Bekenstein bounds or holography.

Steve drew a lot of non-trivial Penrose diagrams, and he was trying to figure out which boundaries should be associated with conformal field theories living on them, and how the observers at the boundary could "measure" physics inside the inflating portion of the Universe. Many questions remain open. Some of them are related to the "decoding of the hologram". We have had discussions about how difficult it is to decode the hologram or the Hawking radiation; whether the complexity of decoding the local physics in the bulk outside the black hole is very different from that inside the black hole; and also about the role of analytic continuation in this whole story.

The questions are exciting - and probably critical for the understanding of quantum cosmology - but the answers (or the lack of answers) to many of these questions are rather frustrating, and that's a reason to stop at this moment.

Monday, April 11, 2005

Savas' colloquium



Savas Dimopoulos of Stanford University is not only an entertaining and pleasant (modern, not ancient) Greek physicist, but arguably one of the most important persons behind the theories of phenomenology beyond the Standard Model. Nima summarized his achievements. The most striking - and perhaps a bit exaggerated - description was that "the joke is, whatever happens at the LHC after April 2007, Savas will be in good shape. The only question is which set of his collaborators will get to join in the fun - [Nima] hopes it's [him]!" There are several choices

  • a pure Standard Model with a single Higgs - it's a nightmare scenario for particle physics because Savas did not discover this one
  • the Minimal Supersymmetric Standard Model, co-authored by Savas (well, yes, there has been a pre-history)
  • the old large dimensions by Savas, Nima, Gia
  • the warped large dimensions by Randall and Sundrum
  • the (extended) technicolor by Savas and Lenny
  • the Little Higgs model etc.
  • the landscape, as proved by split supersymmetry of Savas and Nima
  • a question mark - a new possibility which may or may not have been written down by Savas

Recently, the Landscape has unfortunately become Savas' favorite scenario. I've discussed split supersymmetry in a text about Gia's paper here, and the related works about the friendly landscape here and here. The previous article about the anthropic reasoning described Vilenkin's seminar. Savas compared the conjectured huge multiplicity of the vacua to the statements of Giordano Bruno:

  • There are many stars and many planets like the Earth, and our civilization is not a center of the Universe.

These statements were somewhat controversial, Savas said, and he supported this statement with a picture of Bruno burning at the stake. However, the good news is that Bruno's ideas, applied to the Universes themselves - the idea of the landscape - are now supported by physicists all of whom are tenured professors. Therefore, the young people should be careful if they don't want to get fired like Bruno. Incidentally, that would really be an excellent joke if it were a joke. ;-) Savas explained the anthropic principle and the "entropic principle". Unfortunately, he did not use the term "entropic principle" along the lines of the "entropic principle" of Ooguri, Vafa, and Verlinde, but rather as Michael Douglas' demographic approach to the statistics of vacua.

Saturday, April 09, 2005

Greene, Einstein & uncertainty

When I learned that the publication of Brian Greene's article about Albert Einstein and quantum mechanics was imminent, I rationally waited for 12 hours. Then I opened

found the "Most e-mailed" articles in the right column, and clicked at number one:

Recommended. Well, it's just 100 years of relativity and photons, but it's a long enough time anyway.

Friday, April 08, 2005

Critical dimension: anything goes?

After Lisa's talk, we had an interesting discussion with Eva Silverstein of Stanford University, one of the most insightful and powerful young string theorists. Her statement that provoked the discussion was:

  • It's completely dishonest to say that 10 or 11 are preferred dimensions predicted by string theory because everything works in other dimensions, too. For example, I can construct AdS_{d} for any "d" with constant dilaton, and all such backgrounds exist in string theory. The dimensions "10" or "11" are not distinguished in any invariant way.

Those who know that I find it dangerous when a field of science starts to say that "anything goes" - especially if there is not enough evidence for such a potentially postmodern approach - can predict that we were not exactly in full agreement, especially if you notice some strong words in Eva's assertion. ;-)

So let me say a couple of basic statements. In perturbative string theory, the condition "D=10" comes from the Weyl invariance on the worldsheet. The beta-function for the dilaton "Phi" (the classical values of the fields like "Phi(X)" play the role of "coupling constants" that define the two-dimensional theory on the worldsheet, and the beta function measures how much these couplings depend on the scale if you perform a renormalization group flow) contains terms like

  • beta_{Phi} = #.(D-10) + #.(Nabla Phi)^2 + #.(Box Phi) + ...
where "#" are unimportant constants. The term "-10" comes from the contribution of conformal and superconformal Faddeev-Popov ghosts. This beta function must be zero for the theory to be conformal - which is necessary for our ability to gauge-fix the metric to the conformal gauge and obtain meaningful finite-dimensional integrals defining the loop amplitudes (and it's necessary for the unphysical modes of gauge bosons and graviton to decouple in spacetime, among many other things).

So how do we guarantee it's zero? Well, the simplest solution is that we set "D=10" and the dilaton to a constant. This is the canonical way to cancel the leading terms in the beta function for the dilaton. Are there other possibilities? Yes, you can set "D" to any other number, as long as your dilaton "Phi" is a linear function of the spacetime coordinates such that the "(Nabla Phi)^2" term cancels the "(D-10)" term. For a linear dilaton, the "(Box Phi)" term is still zero. Of course, one can also add a mildly non-constant dilaton, e.g. one that satisfies the equation "Box Phi = 0 + #.(Nabla Phi)^2 + ...". I wrote the term "zero" for the main idea of the equation to remain transparent.
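To make the linear-dilaton bookkeeping explicit, here is the schematic calculation with the unimportant "#" constants set to one (signs and normalizations depend on conventions):

  • Phi = V_mu X^mu implies (Nabla Phi)^2 = V_mu V^mu = V^2 and Box Phi = 0,
  • beta_{Phi} = (D-10) + V^2 = 0, i.e. V^2 = 10 - D,

so for any "D" different from ten one can solve for the gradient "V", at the price of a string coupling "g_s = exp(Phi)" that varies across spacetime.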

OK, Eva now claims that she can keep the dilaton "Phi" constant, and still allow "D" to be different from ten. These tricks are described in her papers

The second was written with Alex Maloney and Andy Strominger. How is it supposed to work? You re-interpret the requirement of the vanishing beta-function beta_{Phi} as an equation of motion in an effective field theory whose action contains terms like

  • S_{eff} = ... + e^{-2 Phi} (Nabla Phi)^2 + ...
Now they argue that there should be other Phi-dependent terms in the action arising from fluxes, proportional to other powers of "exp(Phi)". There are at least two of them. For constant "Phi" the action reduces to "-V", i.e. minus the potential energy, and with these three competing terms, the potential energy can have another minimum at a non-zero value of "exp(-Phi)": draw a graph of "V(Phi) vs. Phi" that first increases (1), then decreases (2), and then increases again (3), and you will find the minimum between the regions 2 and 3. Stationary points of the potential energy for scalar fields represent solutions in which "Phi" is constant.
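Here is a toy numerical version of that graph (the quartic potential and its coefficients below are made up purely to reproduce the increase-decrease-increase shape; x stands for exp(-Phi)):

```python
# Toy potential V(x) = a x^2 - b x^3 + c x^4 with x = exp(-Phi) > 0.
# With a, b, c = 1, 2, 1 one finds V'(x) = 2x (2x - 1)(x - 1), so V rises
# up to the local maximum at x = 1/2, falls to the local minimum at x = 1,
# and rises again for larger x; the dilaton can sit at the stationary
# point x = 1, i.e. at a finite value of the string coupling.
a, b, c = 1.0, 2.0, 1.0

def V(x):
    return a * x**2 - b * x**3 + c * x**4

for x in (0.25, 0.50, 0.75, 1.00, 1.25):
    print("x = %.2f   V = %+.4f" % (x, V(x)))
```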

Lisa Randall's talk

Lisa Randall just spoke about the "inverse Brandenberger-Vafa" mechanism. According to Brandenberger and Vafa, the world has 3+1=4 large dimensions because 2+2=4. This may sound like a childish statement, so let me say a few more words about it.

The dimension of the worldsheet of a fundamental string is two. The maximal spacetime dimension in which two worldsheets generically cross - allowing the wrapped strings to annihilate (and unwind), which allows the space to expand - is "2+2=4", which is why we are supposed to live in four large dimensions. A larger number of dimensions can't expand because the strings would not have enough chance to annihilate (there is too much space and the strings have too small a dimension), and the remaining wrapped strings would prevent the small dimensions from expanding to astronomical sizes.

You might object that this argument also allows a lower-dimensional spacetime to develop, but 3+1 dimensions are preferred: if nothing prevents 3 spatial dimensions from expanding, the expansion will occur.

There is a huge number of small problems and subtleties about this proposal, but it is undoubtedly attractive to imagine that a cosmological mechanism explains things such as the dimensionality of the Universe. Recently, in the context of the second superstring revolution, the Brandenberger-Vafa framework was upgraded to the "brane gas cosmology" which includes not only strings but also higher-dimensional branes.

Lisa's approach is the opposite one: she wants to consider the mutual annihilation of branes in a higher-dimensional Universe. Consider some simplified type IIB string theory. The branes with low dimensions won't annihilate too much, but the p-brane energy density will simply decay as

  • a^{p-n}
where "n" is the total number of spatial dimensions (nine). The higher dimension the brane has, the less rapidly its energy decays because its total volume scales like "a^p" where "a" is the linear size of the Universe. However, for 4-branes and higher, it is very likely that they will annihilate with their antibranes rapidly, and you must use a different power law. As Lisa combines the pieces, she argues that 3-branes and 7-branes are those that survive. A pretty good starting point for phenomenology. Except that one also needs 3+1-dimensional gravity, and Lisa argues that it is also possible to localize 3+1-dimensional gravity on a triple intersection of three stacks of some kind of 7-branes stretched in the usual SUSY way,
  • ++++__++++
  • ++++++__++
  • ++++++++__,
assuming that the bulk is something like AdS_{10} (a hard background to realize in type IIB). There may be many bugs and stringy subtleties that are neglected, but I definitely think that it's useful if Lisa provokes others to think about these issues.
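A trivial numerical rendering of the dilution law "a^{p-n}" discussed above (n = 9; the rapid annihilation of 4-branes and higher is ignored here, which is exactly the effect that changes their power law):

```python
n = 9          # total number of spatial dimensions in (simplified) type IIB

# Frozen p-branes: energy grows like the wrapped volume a^p while space
# grows like a^n, so the energy density falls off as a^(p - n).
a = 10.0       # the universe has expanded tenfold
for p in (1, 3, 5, 7):
    print("p = %d: density diluted by a factor of %g" % (p, a ** (p - n)))
```

As expected, the higher-dimensional branes dilute the slowest, which is why the branes surviving annihilation tend to dominate the late-time energy budget.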

Many formulae looked more concrete than the last time I was collaborating on this line of reasoning, and, as Lisa says, this sort of Brandenberger-Vafa reasoning may be useful for identifying a cosmological selection mechanism that will find some preferred vacua (in this case: braneworlds) within the landscape of possible vacua.

Wednesday, April 06, 2005

Anthropic world: Vilenkin

Alexander Vilenkin just spoke about

  • Probabilities in the landscape
Very nice work - and Vilenkin is a smart gentleman - except that I am not the only one who feels very discouraged after the talk. Colleagues are asking how far the boundaries of science can be pushed.

The goal: distributions

The goal of the talk, as I understood it, was to make the first steps to determine some probability distributions for various quantities using the hardcore anthropic reasoning. These distributions are the only numbers we should be trying to figure out, according to the anthropic reasoning. Unlike quantum mechanics where the distributions may be exactly checked by repeating the experiment many times, we only have one Universe to measure.



The first choice is to fix the parameters of the Standard Model and low-energy physics, and only vary the cosmological constant "Lambda" and perhaps "Q". That's of course unjustifiable, but let's not stop at this point. For our purposes, the quantity "Q" is defined as "delta rho / rho", the typical relative fluctuation of the temperature of the cosmic microwave background (CMB), and it is equal to "10^-5" in our Universe. Its value depends on the energy scales and other parameters of the inflaton potential, which is "very high-energy physics" that does not affect local life.

The probability distributions are written, in analogy with Drake's equation for the number of telephone contacts with the extraterrestrial aliens, as products of many quantities - the simplest example is
  • P (Lambda) = P_{prior} (Lambda) P_{formation} (Lambda)
where P_{prior} is an a priori distribution coming from the fundamental (string) theory (in the most optimistic picture, it could arise from the Hartle-Hawking state in some way; in less optimistic cases, it is given by "counting vacua" or another unjustifiable method), and it is completely unknown at present.
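Schematically, the whole pipeline is just Bayesian reweighting. A toy sketch with entirely made-up ingredients - a flat prior and an arbitrary Gaussian suppression of galaxy formation at large Lambda, chosen only to show how the two factors combine (Lambda is measured in units of the observed value):

```python
import math

def P_prior(lam):
    # Flat prior: the completely unknown input from the fundamental theory.
    return 1.0

def P_formation(lam):
    # Made-up anthropic weight: no galaxies for lam < -1 (early Big Crunch),
    # Gaussian suppression of galaxy formation at large positive lam.
    if lam < -1.0:
        return 0.0
    return math.exp(-(lam / 50.0) ** 2)

lams = [-1.0 + 0.01 * i for i in range(30001)]     # scan lam in [-1, 299]
weights = [P_prior(l) * P_formation(l) for l in lams]
norm = sum(weights)
print("toy <Lambda> = %.0f x observed" %
      (sum(l * w for l, w in zip(lams, weights)) / norm))   # ~ 28
```

The toy posterior prefers a Lambda a few dozen times the observed value - a cartoon of the two-orders-of-magnitude overshoot in Weinberg's argument discussed below.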

Density of galaxies

On the other hand, P_{formation} is taken to be proportional to the number of observers generated in a given Universe. Vilenkin argued that it can be taken to be proportional to the number of galaxies in the Universe. This particular quantity, the number of galaxies, is pretty much calculable, as many of Vilenkin's simulations and graphs of the volume of various regions (the three regions are, roughly speaking: quantum diffusion near the top of the potential; slow roll in the middle; thermalization near the bottom) in the inflating "pocket Universe" showed, but it is much more controversial whether this is the right number that should become a factor in the probability distribution.

As you see, Vilenkin considers one galaxy to be a unit of life. P_{formation} should really measure the number of observers that will exist (or be born) in a given Universe - the total amount of intelligent life, so to say. The precise definition of P_{formation} - which is the quantity affected by the details of the anthropic (non)rules - belongs to the humanities, not to science, I think. For example, Nima kept on asking "what is the moment at which you measure the number of observers", and he suggested that the integrated number of observers in the whole spacetime should be relevant. Vilenkin disagreed and only wanted to count the number of observers at one moment (I am still not sure which one, and I was not the only person confused by the rules for choosing the "right" slice). Nima's proposal seems the more plausible one to me, but neither of them is justifiable scientifically.

Intermezzo: defining and predicting intelligent life

But imagine that you would like to create an argument of this sort that is more than just hand-waving, and where some other parameters would be varied. You would have to decide who is an intelligent observer. For example, a Universe with different parameters could produce small planets with weaker gravity. This would probably affect the typical size of the animals, the size of their brains, and consequently (politically correct people, please forgive me) their expected average intelligence. Should we consider the "bugs" living in such a Universe to be equally good intelligent observers as larger animals like us? This can change the distribution by 10 orders of magnitude, or parametrically by huge additional power laws.

Note that in order to count these things realistically, we really need to know virtually all factors in the real equation due to Drake. In other words, physics behind the Standard Model is now supposed to be less solid than the search for extraterrestrial civilizations (SETI) because we need all answers from SETI and some additional answers to figure out what the new physics beyond the Standard Model should look like.

Moreover, should we just count the total number of observers that live in the Universe, or should we also take their lifetime into account? The number of words they can say per life? If the lifetime of humans were 1000 years, like in the Bible, should they increase the probability of a given Universe by an order of magnitude? Were the long-lived heroes of the Old Testament ten times as good people as we are? If the typical velocity of the humans in some civilization is 1000 times faster than in ours and if these people have 1000 times richer lives, does it increase the probability of their Universe and the quantity P_{formation} by three orders of magnitude? Shouldn't we count the number of cells as opposed to the number of people which would add 9 orders of magnitude for our life? Or should we calculate the number of villages or nations as the independent units of intelligent life which could subtract 9 orders of magnitude?

Fortunately, I am not the only one who is convinced that these questions will never be resolved and cannot be resolved scientifically. There are hundreds of similar questions, and we can answer them in many different ways that reflect our prejudices. All different answers lead to different outcomes, and by "properly" adjusting all of our prejudices, we can obtain almost any answers we want. No doubt, there will be people who will argue that only one choice (theirs) is correct - but they will never be able to show why their choice is better than the others. Moreover, there are many ways to tune the prejudices to obtain one desired set of answers. The prejudices are untestable. They are not a subject of scientific verification. There is no natural "one-dimensional" scalar function of the properties of animals that could tell us how much they contribute to the probability distribution.

Input vs. output

Someone can try to intimidate other people with the conjectured large number of vacua in string theory. But the number of different versions of the anthropic principle - the number of formulae through which we determine "the amount of intelligent life in the Universe", times the number of ways to count the "number/volume of Universes of a certain kind" - is much higher even than the worst existing finite estimates of the number of vacua. In other words, we can always fine-tune our anthropic "principle" in such a way that virtually any kind of a Universe can be chosen as the preferred one. Is that science? Even if there exists a Universe that cannot be obtained as the most likely one by adjusting the anthropic (non)laws, we can't really eliminate it because we don't know whether our fantasy about the possible anthropic (non)laws was complete.

Weinberg's success (?) story

If I return to slightly more technical topics, one of them was the distribution of the cosmological constant. You know, Steven Weinberg showed that it should be between -1 and +100 times the currently observed value, roughly speaking, for the galaxies to be able to form. If the cosmological constant were too negative, the Universe would approach the Big Crunch too early; if it were too large and positive, it would expand and dilute before the matter could clump into galaxies. This calculation of Weinberg's is a source of pride and inspiration for the people who have switched to the anthropic mode because it was an actual "prediction" of a positive cosmological constant. Weinberg's predicted value differs from the right one by two orders of magnitude - which is 30 times better than the naturally predicted value of "Lambda" after SUSY breaking, which differs from the observed one by 60 orders of magnitude. The price for this factor of 30 in the exponent is that the methods of physics are supposed to be replaced by the methods of humanities.

Note that Weinberg's argument implies a rough solution to the coincidence problem ("why now") - the problem of why we live in the era in which the cosmological constant is comparable to the density of regular matter. Weinberg's answer is that the probability distribution for Lambda is naturally peaked near the density of matter at the time when galaxies are formed (which is comparable to the era in which we live), as his calculation shows. This simply comes from the requirement that galaxies can still form (the density is sufficient) when the cosmological constant starts to dominate - this is roughly where his allowed interval ends.

Can we make Weinberg's computation more accurate? Depending on your assumptions, you can achieve any distribution you want. The total distribution may be nearly constant near "Lambda=0", but it can also behave as "exp(C/Lambda)" which hugely prefers tiny positive values of Lambda. Moreover, the constant "C" is very important to get some details. Note that the "constant distribution" has absolutely no invariant meaning because distributions depend on the coordinates that we use to parameterize the parameter space.

Trying to falsify the anthropic framework

Another thing. Banks, Dine, and Gorbatov argued that if we allow not only "Lambda" to vary but also "Q", then we obtain wrong predictions for "Lambda" and "Q" because Universes with correlated higher values of "Lambda" and higher values of "Q" will be preferred. The matter density will be less uniform if "Q" is larger, and the density of galaxies increases. This will allow us to increase "Lambda" because the galaxies won't dilute so quickly anyway. In my opinion, this is the right approach to these questions - try to falsify them instead of adapting them.

Vilenkin used a trick that led him, once again, to a factorized probability distribution for "Lambda" and "Q", so that any change of "Q" is irrelevant for the statistical predictions of "Lambda". I forgot what the trick was, and I am not sure whether I should be sad about it. A lot of extra discussion focused on the bounds on "Q". Of course, some "mainstream" bounds tell you that "Q" is not that far from the observed value "10^-5". Concretely, it should be true, they say, that
  • 10^-6 is smaller than Q is smaller than 10^-4.
Nima asked what the upper bound for "Q" is, beyond which galaxies don't form because matter collapses into black holes before they can start to form. No clear answer appeared, but people guessed "10^-2". More generally, I find inequalities like the one displayed above completely useless. We already know "Q" from the observations. It's "10^-5". And it's clear that if we varied the values of the parameters of our Universe too much, we would obtain a very different Universe. It's a tautology. The only remaining question is how different from ours we allow the other Universes to be while still admitting that they can contain enough "intelligent life", and how small a probability for the formation of life in such a Universe we consider realistic. This is almost the same type of question as "how much can we tolerate racism" or, more drastically, "how many angels are there on the tip of the needle".

Googol vs. googolplex

Regardless of the value of "Lambda", it's always possible that a Universe appears as a gigantic fluctuation. The probability of such a fluctuation is not the inverse googol (10^{-100}) but rather the inverse googolplex (10^{-10^{100}}). Obviously, as Nima likes to say, in the anthropic framework we need to allow unlikely events whose probabilities are of the order of the inverse googol, because that is necessary for picking the Universe with the unnaturally tiny Lambda; but we should not allow phenomena whose probabilities are of the order of the inverse googolplex, because this would also allow us to say that the Universe (including the fossils) was created 6,000 years ago, in agreement with Genesis.
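To see how different the two suppressions are in practice, here is a small sketch (my own illustration): the inverse googolplex cannot even be stored in a floating-point number, so such probabilities can only be compared through their logarithms.

```python
import math

log10_p_googol     = -100.0   # log10 of the inverse googol, 10^-100
log10_p_googolplex = -1e100   # log10 of the inverse googolplex, 10^-10^100

print(10.0 ** log10_p_googol)      # 1e-100: still a representable float
print(10.0 ** log10_p_googolplex)  # underflows to exactly 0.0

# Many independent "tries" shift the log-probability only additively:
# with N galaxies, P(at least one success) ~ N * p for tiny p.
n_galaxies = 1e11                  # a hypothetical number of galaxies
print(log10_p_googol + math.log10(n_galaxies))   # -89.0: tries barely help
```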

The only good reason why they want to allow googol-like probabilities but not googolplex-like probabilities is that they're not Christians, and they only want to support the weak form of the religion (the anthropic principle) but not the strong version (a literal belief in the Bible). You may think that I am joking, but I am not. The choice of these bounds of acceptability, once again, influences the conclusions dramatically. Moreover, it is not quite clear how the probabilities should be calculated. We can always imagine that there is a googolplex of different galaxies in a Big Universe, and in one of them, life can arise as a gigantic fluctuation. In fact, I believe that one could arrange various fluctuations whose probability would be much closer to one (much greater) than the inverse googolplex.

Let me mention a very different example showing why it is important for conclusions to be stable with respect to perturbations of the "conventional" parameters. If the people in the 1970s had looked at the temperature records from 1850, they could have seen a mild warming trend. If they had looked at the temperature records starting from the 1940s, which is what they actually liked to do, they could have deduced that a new ice age was getting started. Both conclusions were unjustified. If the conclusion depends on the choice of the year where you start to measure, or on the probability which you call the least acceptable probability, then your conclusion is unstable and scientifically irrelevant - nothing better than guessing.

Summary

These ideas based on infinite possibilities and an infinite Universe can never lead to convincing, i.e. new quantitative, results. The number of bits that we insert as parameters - the answers to various "subtle" questions about how the anthropic principle should work (plus, in reality, the assumptions about the behavior of the fundamental theory that we don't yet know fully) - is hugely larger than the amount of (fuzzy) information that we can derive. A scientific theory should generate more predictions than the number of assumptions we insert into it.

The only way such things can work scientifically is that we understand the full theory at the fundamental scale well enough - including its unique Hartle-Hawking-like framework, which must be free of all assumptions and can only use the standard path integral of the theory, or an equivalent of it, to calculate any probabilities - to calculate the initial state of the Universe, and we then take this initial state to calculate the probabilities of the subsequent evolution. Another possibility is to ignore the cosmological selection considerations altogether and simply continue to try to identify the correct "vacuum" by matching the properties of particle physics, leaving the early cosmological questions to the very end.

Finally, I am sure that various people who have a similar opinion about anthropic thinking will use this admitted frustration as a weapon against string theory. Unfortunately, I must assure you that the expansion of the anthropic principle is a problem of the whole of theoretical physics, not just string theory - and this talk was not a string theory talk after all. Incidentally, one of the authors of the entertaining article Supersplit Supersymmetry just explained to me that it was making fun primarily of the anthropic approach, not of supersymmetry.

Political bias and science

Update: an article very similar to mine, containing the very same points (the anti-science acts of the Left in the "Summers controversy", their tendency to reduce the political diversity on campus, but also some other topics such as the "Sokal affair"), was written by Prof. James D. Miller. I admit that his article is better than mine.

Sean Carroll from The Preposterous Universe asked why academics tend to be left-wing. His answers (to be discussed at the end of this article) do not seem terribly deep to me: they're the kind of cheap stuff for simple readers who need to be assured that being left-wing is enough to make them great people - but let's try to answer his question anyway, in a slightly more reasonable way. So why is academia so predominantly left-wing?

Different departments

First of all, the large concentration of left-wing scholars is especially the case in the humanities and some social sciences. In some examples, this cannot be surprising. Some of the fields at these departments are left-wing almost by definition. For example, what can the political opinions of a professor of (feminist) gender studies look like? Many jobs - and probably many departments at many universities - have been deliberately created to support a certain type of political thinking and suppress the opposite type of thinking. Perhaps the motivation may have been good in some cases, whatever that means. Therefore, the question "why are there so many such people in some fields" may be reduced to the question "why were these departments created in the first place, and why do the taxpayers and others continue to fund them?".

Monday, April 04, 2005 ... Français/Deutsch/Español/Česky/Japanese/Related posts from blogosphere

Semiconductor visions by Eli Yablonovitch

Eli Yablonovitch (UCLA) just completed his first Loeb lecture at Harvard, and it's been pretty fascinating. The lecture was somewhat similar to the lecture by Samsung Electronics' CEO Dr. Hwang, but it was much more informative on the physics issues. (Also, we have not received any USB flash drives this time.)



Yablonovitch ("yabloko" is a Slavic word for "apple", but there is no link to Steve Jobs!) is one of the people associated with the first chips and microprocessors ever built, but he has quite a lot to say about the present and the future, too.

His talk had many dimensions: sociological ones, economic ones, and a great deal of physics. The chip industry has many levels - semiconductor companies, chips, software, hardware, digital content - and the broadest categories produce about 3-4 trillion dollars per year.

A simple model of a transistor

Yablonovitch focused on the interplay between physics and economics, but he also presented a very nice simplified caricature of a transistor. It has a gate that can store electric charge. If there is some electric charge on it, it repels the electrons from a nearby wire, so that the resistance between the "source" and the "sink" increases a lot. With this picture in mind, you can see that it is not difficult for various memory chips to store information for a year - it's simply about storing a small amount of electric charge. Also, if you look at transistors in this way, you might almost be surprised that you need any semiconductors in the first place. ;-)
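Here is a toy implementation of this caricature (my own sketch of the simplified picture above, certainly not Yablonovitch's model; all numbers are made up): the transistor is just a switch whose channel resistance depends on the charge sitting on the gate.

```python
def channel_resistance(gate_charge, threshold=1.0, r_on=1e3, r_off=1e12):
    """Source-to-sink resistance (ohms) as a function of gate charge (a.u.).

    Following the caricature above: charge on the gate repels electrons
    from the nearby wire, so above the threshold the channel is depleted
    and the resistance jumps by many orders of magnitude.
    """
    return r_off if gate_charge > threshold else r_on

for q in (0.0, 0.5, 2.0):
    print(f"gate charge {q:.1f} -> resistance {channel_resistance(q):.0e} ohm")
```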

The exponential laws

Yablonovitch discussed various exponential laws, especially Moore's law. Things are getting smaller (he cited Feynman as the originator of various visions - "there's plenty of room at the bottom", for example). They're getting faster. We're able to produce more of them - the planet is producing 10^18 transistors per year, and this number will soon approach Avogadro's number. ;-)
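A quick back-of-the-envelope check of that claim (my own arithmetic; the 10^18-per-year figure is from the talk, while the two-year doubling time is the usual Moore's-law assumption):

```python
import math

rate_2005      = 1e18       # transistors produced per year (from the talk)
avogadro       = 6.022e23   # Avogadro's number, the "chemical" benchmark
doubling_years = 2.0        # assumed Moore's-law doubling time

doublings = math.log2(avogadro / rate_2005)   # ~19.2 doublings needed
print(f"~{doublings:.1f} doublings, i.e. roughly "
      f"{doublings * doubling_years:.0f} years to reach Avogadro's number")
```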

Saturday, April 02, 2005 ... Français/Deutsch/Español/Česky/Japanese/Related posts from blogosphere

A search for the new Pope

Update: An hour after this message was posted, very sadly, Pope John Paul II returned to his home, which is unfortunately much farther away than Australia.

Karol Wojtyla (1920-2005) has been a visible Pope, and I think that he has been a clearly positive figure. He's been loved by Catholics and others. He considered abortion to be on par with the Holocaust, which is an example of his clearly conservative approach, and he rehabilitated some of our old colleagues who had been terrorized by the Church half a millennium ago, which is an example of his progressive thinking (I don't mean the U.S. English meaning of the word "progressive", where it means a "far left-wing nutcase"). His Church has not died in this modern world. In fact, it has expanded in many regions of the world, and it helped to tear down Communism. And he has personally been a source of peace and a moral authority.

Millions of people, including pagans like me, wished him good health. But because his condition did not look too optimistic on Saturday - in fact, it made us sad - and because 85 years is not such an unexpected age for the last day of one's life, the College of Cardinals was undoubtedly already thinking about a new Pope.



The papacy of John Paul II has been a pretty impressive era, and it will be hard for a new pope to match Wojtyla.

Because I am Czech, it is natural for me to mention Miloslav Vlk (*1932), the head of the Czech Catholic Church and the Archbishop of Prague. (Christoph von Schönborn was also born in Czechoslovakia, in 1945, but I have no idea who he is except that he is a very strong candidate from Austria.) For a discussion of candidates as seen in 1999, click here. For an update from 2001, click here. For a recent discussion about this topic, click here. (Incidentally, the latter article suggests that John Paul II was partly elected because of his nice lecture at Harvard University in 1976.)



Because I am not a Catholic and my experience with Christianity has had both signs, my comments may be viewed as impartial or ignorant, depending on your viewpoint. Miloslav Vlk (*1932) has many virtues:

  • he's pretty bright
  • he has a good record of having struggled with the socialist regime in Czechoslovakia to become a priest - and he worked as a window-washer for one period
  • he has received many awards, and he holds many important positions in the European Catholic Church
  • he is a theologian, and he is popular among his colleagues in Western Europe as well as Eastern Europe
  • he is a Slav, and after 500 years of Italian Popes, the very recent experience with a Slavic Pope has been very good, I think
  • he speaks many languages
  • his focus is on movements - John Paul II liked them, too - especially the Focolare movement, whatever it is - and this implies a certain feeling of continuity
  • more generally, according to The Washington Times, Vlk would most likely be Wojtyla's choice; see also a February message from the Pope to Vlk
Vlk as a candidate has several disadvantages, too:
  • he is a Slav from Central and Eastern Europe (possibly the only serious candidate from that region), and it may seem unlikely or even awkward to elect a second Pope from this group in a row
  • the Church may want to choose a younger person (Vlk is 72+)
  • his focus on the movements may be viewed as too narrow by some
  • the Church may want to extend its diversity and choose a non-European candidate (perhaps even an African); this would be a disadvantage for all European candidates; I think that a non-European pope is unlikely, but it is not impossible to imagine
  • his last name (Vlk=Wolf) contains no vowels (although "r" and "l" are treated as vowels in similar Czech words) which may be a problem for the stupid people in the Church

Despite the candidates from Africa, Latin America, Eastern Europe, and Central Europe, the experts on Vaticanology estimate that the next Pope will be Italian once again because the "Italian nationality does not irritate anyone and an Italian candidate is smooth sailing, which would not be the case for French, German, or American candidates," the experts say. More precisely, no one is offended by the nation with the highest corruption in the Western world - and the nation that invented the mafia. From my viewpoint, this argument is actually another argument for candidates like Vlk.

It's time for the cardinals to isolate themselves from the real world in the Sistine Chapel. They will have to chat until special smoke signals prove that the choice has been made. If they're unable to choose Wojtyla's successor within three days, they may only eat bread and wine. After five days, they may only fill their plate once. These rules of starvation have been tested for many centuries, and they guarantee that someone is eventually chosen. They must write their choice on 5-centimeter-wide paper ballots because Jesus Christ has not approved the use of computers yet; consequently, the cardinals are instructed to change their handwriting for the sake of secrecy. A fascinating procedure.

Other candidates

Let me list a couple of candidates according to their country:

  • Africa: Nigeria - Francis Arinze (72) - he is experienced in Christian-Muslim relations and could be able either to push the religions closer together, or - if it does not work out - to upgrade the war on terrorism to a universal war against the Muslims. Because some evil commentators deliberately misinterpret what I wrote, let me clarify that this description of Arinze is a reason why I personally think that it is inappropriate to choose him. Islam can't be brought closer to Christianity, and the attempts to do so are dangerous.
  • Europe: Italy - Angelo Scola (63) - a leading and young priest from Venice - the Popes in 1958 and 1978 were from Venice
  • Europe: Italy - Carlo Maria Martini (78) - a guy from Milan who has always been against the conservatives, but he's been a candidate for too long
  • Europe: Italy - Giovanni Battista Re (71) - a moderate guy "from the establishment", which is a disadvantage
  • Europe: Italy - Dionigi Tettamanzi (71) - once a leading Italian candidate - "one saved African HIV kid is more valuable than the Universe"
  • Europe: Italy - Angelo Sodano (77) - the man #2 in Vatican, a conservative diplomat who may have been too close to Pinochet while he was in Chile
  • Europe: Austria - Christoph Schönborn (60) - worked on reconciliation with the Orthodox Church; too young; born in Skalsko, Czechoslovakia
  • Europe: Germany - Joseph Ratzinger (77) - a hard conservative who has been discussed as a candidate every time the blocs could not agree. He will celebrate his 78th birthday, and then he will probably be elected as Pope Benedict XVI. The first German Pope since 1055-1057. An accomplished pianist who speaks ten languages, dislikes relativism, communism, and homosexuality, and prefers the fundamental truth. He prefers a smaller but purer Church.
  • Europe: Belgium - Godfried Danneels (71) - a frontrunner of the reform forces supporting the role of women and the rights of divorced people; questionable health after a heart attack
  • Europe: France - Jean-Marie Lustiger (78) - too old; pro-Israeli (risky); born Jewish; archbishop of Paris; anti-racist
  • Europe: Czechia - Miloslav Vlk (72) - a popular theologian and window-washer described in this article
  • Asia: India - Ivan Dias (68) - Church diplomat who traveled everywhere, defender of conservative Vatican viewpoints, 5 languages
  • Latin America: Argentina - Jorge Mario Bergoglio (68) - successfully managed the 2001 synod in Rome; lives in an apartment, cooks his own food, and travels by bus
  • Latin America: Honduras - Óscar Andrés Rodríguez Maradiaga (62) - he may be too young (after a long papacy, they prefer a shorter one, i.e. older candidates) - but he is a star of the Church in Latin America who knows languages etc.
  • Latin America: Mexico - Norberto Rivera Carrera (62) - also young - fights for egalitarianism, but is religiously conservative
  • Latin America: Colombia - Darío Castrillón Hoyos (74) - against drug-trafficking, against poverty, against liberation theology
  • Latin America: Dominican Republic - Nicolás de Jesús López Rodríguez (68) - a critic of his local government and military, socially left-wing, religiously conservative (against abortions, sterilization etc.)
  • Latin America: Brazil - Cláudio Hummes (70) - a German Brazilian - an interesting anti-war, anti-condom candidate

Supersplit supersymmetry

This paper by eight phenomenologists

is well done. You have to read the abstract - or the paper - quite carefully to become sure that it is an April Fools' Day hoax. Alternatively, you must know that some of the eight authors are known to dislike supersymmetry.

What do these guys do? They take the models of split supersymmetry and improve them a little bit so that all remaining superpartners are sent to the Planck scale. This solves about 15 different problems of SUSY breaking - such as the gaugino decay problem, the flavor-changing neutral currents, and so on. :-) The resulting model may resemble the model of Glashow, Salam, and Weinberg at low (sub-Planckian) energies, but their motivation was not quite correct, the present authors say.

I think that this form of criticism is healthy, and despite my belief that SUSY is beautiful, realistic, and worth considering, I sympathize with many points of their paper. Let's hope that Nima and Savas won't be too upset. ;-) Let's also emphasize that supersplit supersymmetry fails to reproduce some successes of split (and other) supersymmetry - at least one of them, namely gauge coupling unification.

Friday, April 01, 2005 ... Français/Deutsch/Español/Česky/Japanese/Related posts from blogosphere

Glashow finds the correct stringy vacuum

Today, the most interesting paper on the arXiv is undoubtedly the paper by Shelly Glashow:

It's a pretty long one, but I could not resist reading the whole text. How exciting! Apparently, he identifies the correct vacuum of string theory and checks it numerically. He obtains various masses of quarks and leptons that seem to agree with observations. The accuracy of his calculations is thrilling. I was particularly impressed by the 6-loop correction to the muon Yukawa coupling and the D3-brane instanton that modifies the Weinberg angle at low energies.



There are many funny points about this paper. For example, in the acknowledgements, the author thanks Peter Woit for emphasizing the importance of the Dirac operator on the moduli space of Calabi-Yau four-folds and the importance of string theory to him. (F-theory on the Glashow Calabi-Yau four-fold is the picture in which he decided to calculate the neutrino mass matrix.)

This may be a pretty important paper, perhaps a second Nobel prize paper. Some of Glashow's points that go beyond the analysis of the particular (correct) model are the following: