Thursday, August 07, 2008

Dixon law firm: CyberSUSY

John Dixon from the Dixon Law Firm in Calgary submitted the first, 40-page-long preprint about CyberSUSY. He plans to write and submit four papers and they should have hundreds of pages in total.

His law firm sounds even better than a patent office and because he claims to calculate the fermion masses and solve the cosmological constant problem, among other things, I couldn't resist looking at the paper. It would be pretty easy for me to overlook all kinds of idiosyncrasies if he had something to say.

Unfortunately, five minutes is enough to transform the eager expectations into laughter. What does he claim to do?

He claims to have a framework that breaks SUSY at the same moment when the electroweak symmetry is broken. That surely sounds impossible but you don't want to give up too early. So you read how he intends to realize such a heroic tour-de-force.

Dixon modifies the supersymmetry transformations by adding new terms that have a simple impact on some particular composite operators in the theory. After going through 10 pages, you may finally see that this is his strategy.

Cracks appear soon

However, when you want to know what exactly happens, several explosive surprises are waiting for you. First of all, the modified transformations are BRST transformations. That's not what you would expect because the BRST transformation is a technical tool to deal with local symmetries, not with global symmetries such as supersymmetry in the supersymmetric standard model.

Such an observation will shock you a bit, so in order to check whether you're on the same frequency, you return to his definition of the "normal" BRST transformation at the beginning of the paper. And the surprises continue. His "BRST" transformation is schematically equal to "C" times "Q". Now, I suppose that "Q" is the supercharge because of its spinor index and declared dimension. Clearly, "C" must be the parameter of the supersymmetry transformation for the expression to be a variation at all: it has dimension "-1/2" and a spinor index, too.

Now, it's clear that Dixon has confused the supersymmetry parameters with the BRST ghosts "C". As a result, he has also confused supersymmetry transformations and BRST transformations. He still requires this (inherently) supersymmetric transformation to be nilpotent. ;-)
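To see why nilpotency is the wrong requirement here, compare the standard textbook relations (schematic conventions, not Dixon's notation): a BRST charge squares to zero by construction, whereas global supercharges anticommute to a translation, so a supersymmetry variation can never be nilpotent.

```latex
% BRST charge: nilpotent by construction
Q_B^2 = 0
% Global N=1 SUSY algebra: supercharges close on a translation,
% so {Q,Q} cannot vanish identically
\{ Q_\alpha, \bar{Q}_{\dot\beta} \} = 2\,\sigma^\mu_{\alpha\dot\beta}\, P_\mu
% SUSY variation with an anticommuting parameter epsilon (not a ghost):
\delta_\epsilon \Phi = \left( \epsilon Q + \bar\epsilon \bar{Q} \right) \Phi
```

Demanding "nilpotency" of the second structure is demanding that the momentum operator vanish, which is why the confusion of the parameter with a ghost is fatal from the start.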

That's not an excessively promising mistake at the beginning of a paper that tries to derive a new kind of supersymmetry breaking from deformations of SUSY or BRST transformations. But such bizarre things continue. For example, even though Dixon claims to link SUSY breaking to gauge symmetry breaking, you will see no gauge fields or gauge transformations anywhere in the paper. ;-)

Also, the additional terms for the transformation rules are only written down for some composite fields, not for all (elementary) fields. The actions don't depend on the ghosts "C" at all which makes it less surprising that he manages to make his "BRST charge" nilpotent.

The cosmological constant problem is "solved" by looking at leptons only: gravity, gauge fields, and other fields seem to be "irrelevant" in Dixon's opinion. After 10 minutes, you see that the paper is complete balderdash and you stop reading. But there are several universal incredible patterns of papers written by these amateur scientists (a category that also includes lots of professional amateur physicists) that I want to describe explicitly.

Physics vs formalism

In real physics, one has to have an idea that can be described in physical words. The idea has to refer to actual physical objects - particles, black holes, fields, other measurable quantities, and physically verifiable symmetries and phase transitions, among other things. Once the idea or the problem is formulated, it must be refined and investigated by mathematical tools (such as the BRST formalism or others) that take over.

These mathematical tools are not "canonically" connected with the physical statements and questions but they nevertheless decide whether various hypotheses are correct and what the results are. The mathematical tools to find the answers are usually not unique (there typically exist many equivalent ways to obtain a result) but they are always essential. Sometimes you are led to an interesting equation that determines an observable quantity.

In amateur physics, at least the subtype of amateur physics that doesn't avoid equations, it is the other way around. They start with a program that is defined in terms of a mathematical object that would normally be just a tool to study physical assertions: a slave becomes the master. I am talking about the BRST transformation, a superconnection, etc. The physical propositions, physical starting points, and links of their new work to the existing laws of physics are completely missing. You can also see that simple mathematical objects such as algebraic equations or numerological identities are at the very center of their work while the other "calculations" are mostly added as a sort of decoration.

They do something with these mathematical objects that makes no sense and finally they "arrive" at grandiose statements such as "we have found a theory of everything" or "we have calculated the fermion masses" and "solved the cosmological constant" etc. You can see that most of these assertions were formulated long before they wrote the equations - the decoration. ;-) But it is not possible, not even in principle, to solve any of these problems by following a similar path.

To do anything sensible in theoretical physics, one must actually begin with a well-defined physical framework, modify some of its components or assume something about its parameters and initial conditions according to rules that can be motivated in physical terms, use mathematical tools to calculate the results as carefully as we can, and end up with some conclusions that couldn't be known at the beginning.

Consistency checks inside the text

Another thing that the amateur scientists seem to misunderstand is that it typically takes a few minutes to see that their paper is balderdash. They seem to believe that another physicist has to read every word of the paper, evaluate it for the same long period of time that they spent writing the paper, and only afterwards does the physicist have a chance to have an opinion about the paper.

That's not how it works in reality. The meaningful papers in theoretical physics are actually not composed 100% of completely new information. Quite the contrary. Something like 1/2 of the sentences in a typical paper are actually consistency checks in which the reader can verify that the author of the paper is not on a completely wrong track: the reader may check that what the author is saying is consistent with some of the previous, limited knowledge of the reader (either standard knowledge or other results in the same new paper), while the author can also add something new.

For example, when a new, more universal expression is given for a physical quantity (a function), the paper typically checks that the quantity is well-behaved in some special regime that should have been known before the paper was written. The author is supposed to be "a few steps ahead" of the reader and to help him to get through (imagine two mountaineers). When they don't share anything from the previous trip, that's already too bad.

Even more importantly, whenever there is something that seems to be a contradiction, a meaningful and well-written paper typically explains the subtlety that actually resolves (or might resolve) the contradiction. The author and the reader don't have an "identical" understanding what is surprising and what is expected about science but because they live in the same world with the same existing knowledge about the discipline, their expectations shouldn't be terribly different. If they're too different, it becomes impossible for the reader to read the paper.

But of course, the author is not obliged to pedagogically explain the fate of apparent contradictions etc. I wouldn't call the resulting paper "well-written" if no attention is paid to important cases and possible contradictions but it could still be a "correct" paper, in some sense.

However, the amateur physicists are using the physical concepts in such a wrong way that it becomes immediately clear that they don't know what they're talking about. They don't actually omit the checksums. They include them and virtually all of them are wrong. For example, Dixon

  • shows that he thinks that the BRST operator is a physical object in a global SUSY model; in reality, it shouldn't be there at all, and even when one uses it, it is just a technical tool to deal with other symmetries and physical objects, not a "primary" object to build a paper upon
  • confuses the BRST operator with the supersymmetry transformations
  • in doing so, he checks the "nilpotency" of the supersymmetry transformations, instead of the correct (nonzero) anticommutators
  • links supersymmetry breaking to gauge symmetry breaking but gauge fields don't seem to appear in the paper at all
  • claims to solve the cosmological constant problem without talking about lots of objects (such as gravity and loops of particles) that are clearly necessary to say something about the value of the cosmological constant (electrons only are not enough)
and so on and on and on. So a meaningful scientific paper has 1 byte of new information followed by 1 byte of checksums. And the checksums must work correctly in 90+ percent of the cases, otherwise the paper is thrown away as gibberish. However, in Dixon's case, about 90 percent of these checksums are wrong.

Moreover, I believe that e.g. Dixon must completely realize that he has no clue what e.g. the BRST operator is. When most people are shown the BRST formalism for the first time, they don't understand why the details were chosen in this way. At the beginning, the BRST framework looks like a mysterious gift from the aliens. (See some motivation starting with the Abelian case.)

So I would personally never use the BRST formalism if I didn't know why the terms are what they are, why the operator should be nilpotent, and why the physical states should be the cohomologies. Dixon clearly doesn't understand most of these things. So why does he use this technical tool in his preprint? Does he really believe that it is possible to end up with anything meaningful if one uses tools that he completely misunderstands? It's like a native of a cargo cult tribe who pilots an airplane. The results simply can't be good.
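For the record, here is the standard logic that the formalism rests upon (a schematic sketch of the textbook construction): nilpotency is exactly what makes the cohomology well-defined, and the physical states are the cohomology classes.

```latex
% Nilpotency guarantees that exact states are closed:
Q_B^2 = 0 \quad\Rightarrow\quad \operatorname{Im} Q_B \subseteq \operatorname{Ker} Q_B
% Physical states = closed states modulo exact states (the cohomology):
\mathcal{H}_{\mathrm{phys}} \;=\; \operatorname{Ker} Q_B \,/\, \operatorname{Im} Q_B
```

If one doesn't know why each of these steps is there, "using" the formalism reduces to copying its symbols.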

Building a TOE from scratch

In principle, you could imagine that an ingenious author writes an important paper that is completely disconnected from (almost) all the previous knowledge about science. You could imagine that the new genius would need no shoulders of giants to stand upon. It has never happened in the history of physics but if such a thing occurred, the genius would still have to offer his own replacement for all the physical laws that have been verified and that he ignores or rejects.

Such a paper would have to be much longer - thousands of pages - and it would have to refer to a lot of experiments. Why? When you claim that a theory properly reproduces certain phenomena in physics, you must link the theory to experiments. It either means that you link it directly or, more typically, that you link it to some other papers and approximate theories and principles that have already been verified to agree with the relevant experiments: you normally link your new theories to the observations indirectly.

But if you want to avoid the existing state-of-the-art theories, the latter option evaporates. Your paper would have to talk about the experiments themselves and it would look very different from Dixon's paper.


snail feedback (2) :


reader Alejandro Rivero said...

Perhaps you are confusing this J. Dixon with G. Dixon.

In any case, I think your description about modern papers being composed in more than a 50% of "consistency checks" is accurate. And perhaps damaging. Basically any paper for publication, if not a letter, contains a lot of unneeded information, used only to show the referees that you know (or eventually not) the topic and techniques you are speaking about. Worse, a referee in hurry could be guided by this information instead of by the real content of the paper. Worse, the authors will keep putting more and more of this cut-and-paste stuff in order to be sure the paper does not hit this low referral barrier.


reader Lumo said...

Dear Alejandro,

I am surely not confusing Dixon from the law firm with G. Dixon, whom I have never heard of. Why exactly do you think I am confusing them?

(Let me tell you in advance that I am also not confusing him with Lance Dixon, if you wanted to invent another silly accusation haha.)

Also, I completely disagree that one could write papers, especially good scientific papers, without what you called "unneeded information".

This "unneeded information" is a critical part of scientific communication. It helps the reader in error-correction and with all other things that are necessary to swallow the paper.

Moreover, the understanding of various "backgrounds" is not identical among scientists - at least it shouldn't be - so it is often important for an author to present the background in his or her way and organize it in a way that is suitable for his purposes.

But even if you erased all backgrounds, introduction, general cliches, consistency checks etc. etc. - all these inevitable components of what I call "good papers" and what you call "unneeded information", it wouldn't be possible to erase all the "checksums" from papers altogether.

The scientific terms can only be combined in a very limited number of combinations. These rules are much more strict than grammar of an ordinary language. The longer and deeper a paper is, the more "redundant" or even "holographic" its content is and must be.

I understand that this fact is very hard to see for someone who is not capable of producing a nontrivial yet coherent idea ;-) but real scientists surely know what I am talking about.

Also, if a paper contains a very complex calculation or derivation that must be a part of it, a good enough reader would be able to reproduce pretty much the whole (almost identical) calculation without reading the paper. In this sense, the whole paper - or the sections with the derivations - are redundant. There usually exists someone else in the world who could easily write the same paper which doesn't mean that all papers are "redundant" and shouldn't be written.

Serious scientists surely can't live without these things.

And I even think that it's correct that referees are checking, using these things, that the author knows what he or she is talking about. Refereeing may be time-consuming and this is a good enough proxy because the quality of the aspects that can be easily checked is highly correlated with the quality of the "new" information. I am not saying that it's the only thing that the referees should look at but I am saying that it is completely natural and correct if the referees look at these things whenever it is a good and efficient method to get some perspective on the paper.

If authors are copying and pasting pieces of text, I think that a good enough referee can identify this rather quickly, too. Just look at the author's previous papers about the same topic and make a fast comparison, assuming that the authors are not committing downright plagiarism using other people's papers :-) - which should almost be left to the police, not referees. It usually takes a few minutes to see which parts have been copied from previous papers; I've written such observations in many reports myself. ;-)

I don't think that one percentage or another of the background, pedagogy or consistency checks is "universally better" than others. I only know that it can't be zero percent, because of the reasons explained above, and it can't be 100%, because the referee should be able to see that the paper has no new results at all. ;-)

This percentage is subject to the market of ideas. Clearly, if the percentage of background etc. is too low, many papers will be rejected as too unreadable and missing context. If the percentage were too high on average, many papers would be rejected as redundant and vacuous. Again, this percentage is not something that should be social-engineered.

All the best
Lubos