Peter Woit wrote a text showing that if someone doesn't learn the formalism well, he always gets stuck with a technicality that prevents him from ever getting to real physical questions. The topics I will discuss are:
- Non-perturbative BRST
- Non-perturbative chiral gauge theory
- Euclideanized fermions
His confusion is permeated by several misconceptions that many laymen, physics newbies, and weak physicists often believe. In particular, I mean the following myths:
- All field theories should be defined on a lattice; "lattice" and "non-perturbative" mean the same thing; problems with a lattice formulation are problems of the whole physical theory
- The BRST formalism itself is physics and its limitations imply a breakdown of a physical theory
- The signature of spacetime and various other spaces has to be kept at the "observed" values throughout the calculations
- Much more generally, all intermediate results and computational steps have to coincide with our intuition and prejudices about the reality
- A lattice description is just one possible realization of a field theory among many; for a theory to be non-perturbatively consistent or well-defined, it's in no way necessary to find a lattice description; the existence of a lattice description may improve the "utility" of a theory but not its "validity": in fact, the importance of many field theories is connected exactly with the difficulties with their lattice description (chiral gauge theory will be discussed later)
- The BRST formalism is just a modern and smooth method to deal with theories with gauge symmetries; the specifically BRST objects such as the Faddeev-Popov ghosts or unphysical states are not connected with any observable quantities, so they're just a part of the auxiliary formalism, not a part of testable physics; the BRST formalism is meant to be a weapon to efficiently deal with the possible gauge symmetries, transformations, and gauge-fixing choices in the vicinity of a fixed transformation or a configuration or a chosen gauge-fixing rule; globally, there can exist difficulties but they're just problems of a particular method to organize the calculations, not genuine inconsistencies of a physical theory
- The unphysical signatures are not only allowed in the calculations: they're extremely useful; in particular, the Euclidean spacetime and the Euclidean versions of various moduli spaces and configuration spaces tend to be more compact and the integrals are more convergent and more well-behaved; the oscillating complex exponentials are replaced by the convergent decreasing ones; only the final results are compared to the experiments
- Much more generally, the assumptions, basic concepts, and intermediate results of a calculation may strikingly differ from the things we're used to empirically; only the final results are compared to the experiments and have to agree; the counter-intuitiveness of the initial concepts and intermediate computational steps doesn't spoil the validity of a theory; in fact, as authors as unexpected as Milton Friedman have argued, if a theory makes correct predictions, the counter-intuitiveness of its initial assumptions and intermediate results is an advantage that strengthens, rather than weakens, our rational belief in its validity
Peter Woit has never gotten to physics itself because, just like other hopeless students, he has always been confused by many rudimentary technicalities that prevented him from thinking about serious, genuinely physical topics. But let's admit that even Jacques Distler seems to be profoundly confused about the last two points above. Most of the criticism concerning Euclideanized fermions will be addressed to Distler. We've had similar, long discussions in 2005 under my introduction to the Wick rotation.
But let's start to discuss the problems one by one.
Non-perturbative BRST formalism
It may be helpful to discuss the origin and meaning of gauge invariance and BRST invariance. Since 1905, we have known that the laws of physics have to respect the Lorentz symmetry. However, it doesn't mean that all fields in spacetime have to be scalars (that don't transform under the Lorentz transformations at all).
Quite on the contrary, Lorentz vectors, tensors, spinors, and spintensors are possible, too - as long as they properly transform and the equations constraining them are Lorentz-covariant. In fact, we know many reasons why fields with indices - or "spin" - are not only allowed but desirable. After all, we know that the electric and magnetic fields have "arrows" so fields with indices are pretty much necessary in a realistic theory of anything.
However, when you promote the theory to the quantum realm and when you create quanta - particles - by fields with indices, you will find out that the inner products of the one-particle states must be
<0| a_u(k) a†_v(k') |0> = C(k,k') g_{uv}

The inner products must be proportional to the metric "g" - otherwise the inner product itself would break the Lorentz symmetry. However, the metric tensor has a negative time-time entry. That would lead to a state that has a negative probability to occur. Such a state can't be allowed in the physical spectrum because the probabilities of physical outcomes can't be negative. Probabilities always determine the relative number of events that satisfy a certain condition - and numbers of events are never negative.
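A trivial numerical illustration of the sign problem above, assuming the mostly-plus convention with "g_00 = -1" used in the text and setting the overall factor "C(k,k')" to one (both choices are just for the sketch):

```python
import numpy as np

# If <0| a_u a†_v |0> is proportional to the metric g_{uv}, then the state
# created by the time component of the field has the opposite sign of norm
# compared to the spatial ones.  Mostly-plus convention, g_00 = -1:
g = np.diag([-1.0, 1.0, 1.0, 1.0])

norms = np.diag(g)  # "norms" of the four one-particle polarization states
print(norms[0])     # -1.0: a negative-probability state -> must be removed
print(norms[1:])    # [1. 1. 1.]: healthy positive-norm states
```

The single negative entry is exactly the state that gauge symmetry must render harmless.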
So whenever you have vectors in your quantum theory, something must make them harmless. The "something" is a gauge symmetry. For every time component of a vector field (and similarly tensor fields with higher spins), there must exist "one unit" of gauge symmetry. For electromagnetism, it's the U(1) symmetry. Non-Abelian generalizations are relevant in Yang-Mills theory while diffeomorphisms appear in general relativity and remove the negative probability modes in the metric tensor.
How do the wrong components become harmless?
Well, the 4-component U(1) gauge field "A_u" and the corresponding particle, the photon, don't have 4 physical polarizations but just two (e.g. two transverse linear polarizations; or two circular ones). One of them disappears because of Gauss's law,
div D = ρ [pronounce "rho"].

This equation of motion contains no time derivatives in it. It therefore constrains the initial conditions of the fields. Such an equation without time derivatives always arises when there are gauge symmetries. You may get the equation from the variation of the fields that also happens to be a gauge symmetry transformation, but the action has to be stationary under this variation, anyway.
So the generator producing this transformation is the charge density, including the compensating term from the gauge field. This Gauss' constraint kills one polarization of the photon - the polarization for which "k.epsilon", the inner product of the momentum and the polarization vectors, would be nonzero.
There is another polarization that disappears because some transformations are "pure gauge". If you write "A_u" as
A_u = ∂_u λ,

then the gauge field is the difference between two infinitesimally close configurations that are gauge transformations of one another. However, two such configurations are physically equivalent, so their difference has to be equivalent to zero. The quanta with "A" defined above have to produce "null states" whose norm must be zero and that must be orthogonal to all physically allowed states. Such states must be fully equivalent to zero - and that's how they "decouple" and become harmless.
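For a plane-wave gauge parameter "λ ∝ exp(ik.x)" (a hypothetical choice made just for this illustration), "A_u = ∂_u λ" gives a polarization vector "ε_u" proportional to "k_u". A few lines of numpy confirm that for a lightlike "k" this pure-gauge polarization has zero norm and is orthogonal to the transverse polarizations, exactly as the decoupling argument requires:

```python
import numpy as np

g = np.diag([-1.0, 1.0, 1.0, 1.0])       # mostly-plus Minkowski metric
dot = lambda u, v: u @ g @ v             # Lorentz inner product u.v

k = np.array([1.0, 0.0, 0.0, 1.0])       # lightlike photon momentum, k.k = 0
eps_gauge = k                            # pure-gauge polarization, eps_u ~ k_u
eps_T1 = np.array([0.0, 1.0, 0.0, 0.0])  # transverse (physical) polarizations
eps_T2 = np.array([0.0, 0.0, 1.0, 0.0])

print(dot(eps_gauge, eps_gauge))  # 0.0: the pure-gauge state is null
print(dot(eps_gauge, eps_T1))     # 0.0: orthogonal to the physical states
print(dot(eps_gauge, eps_T2))     # 0.0
```

Zero norm plus orthogonality to everything physical is what makes such states "fully equivalent to zero".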
While we get rid of the two dangerous polarizations, the original gauge symmetry still affects physics. When we gauge-fix it, i.e. when we determine additional conditions for the gauge field that pick a preferred configuration for each "orbit" of the gauge-equivalent choices, the quantum mechanical path integral must be given the right Jacobian so that it really remains unchanged at one-loop level, and so on. It may become tough to remember what all the Jacobians are and how they depend on the gauge-fixing. Moreover, some unwanted terms may appear in the commutators of the symmetry generators. It's a messy formalism.
Fortunately, there exists a modern solution to deal with all these things elegantly, the BRST formalism. It adds some new unphysical states (of both kinds - the forbidden ones as well as the null ones) but it makes the separation of the states to physical and unphysical ones much more elegant. All the Jacobians arise "automatically", independently of your gauge-fixing choices, while the symmetry generators preserve their "naive", classical commutation rules. And the physical states are simply the cohomologies of a BRST operator, Q.
We're interested in fully physical states that are annihilated by "Q", i.e. "Q psi = 0", but that can't be written as "psi = Q lambda", which would also guarantee "Q psi = 0" because "Q^2 = 0" (nilpotency).
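In finite dimensions, the statement "physical states = cohomology of Q" can be checked with plain linear algebra. Below is a toy nilpotent operator (not taken from any actual gauge theory) on a 3-dimensional state space: one basis state is forbidden (not annihilated by Q), one is null (Q-exact), and one closed-but-not-exact state survives as "physics":

```python
import numpy as np

# Toy nilpotent "BRST operator": Q e1 = e2, Q e2 = 0, Q e3 = 0.
Q = np.array([[0, 0, 0],
              [1, 0, 0],
              [0, 0, 0]], dtype=float)

assert np.allclose(Q @ Q, 0)          # nilpotency: Q^2 = 0

rank = np.linalg.matrix_rank(Q)       # dim(im Q): null states of the form Q|lambda>
dim_ker = Q.shape[0] - rank           # dim(ker Q): states with Q|psi> = 0
dim_cohomology = dim_ker - rank       # physical states: ker Q / im Q

print(dim_cohomology)  # 1: only e3 is closed but not exact, i.e. physical
```

Here e1 is unphysical (Q e1 ≠ 0), e2 = Q e1 is a null state, and e3 represents the one-dimensional cohomology.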
This BRST machinery - also known as the modern covariant quantization - is clearly superior over the old covariant quantization. But it's still just a method to describe which "infinitesimal" variations of the configurations are physical and which of them are not. It is an efficient tool to deal with the "local" behavior of the configuration space and the gauge group. But its simplest version may produce misleading results when the configuration space or the gauge group has nontrivial global topology.
One such problem - with Gribov copies - was pointed out by Herbert Neuberger in 1987 who was thinking about these matters from a lattice perspective. With its 67 citations, it's not a terribly well-known paper, and I think that the number is sensible.
Also, some of the follow-ups actually solve the problem - e.g. Testa 1998 which has 32 citations itself. Testa replaces the delta-function by a sum of delta-functions over points located in a "lattice" and employs the periodic character of the gauge potential. By his modification of the BRST machinery, he solves Neuberger's problem. As you can see, no one really cares. (See the fast comments for links to newer papers addressing Neuberger's point.)
All such problems are purely technical. They are only obstacles of a person who tries to use a particular methodology - e.g. the BRST machinery, especially in a lattice framework - but they're not problems of the underlying theory itself. It may be difficult to get all the gold from a local bank using a children's toy revolver but that doesn't mean that there's something wrong with the gold. You may or may not improve the toy to do the job but your success or failure has nothing to do with the quality of the gold.
The BRST formalism is good to study the local aspects of the configuration space - but the global ones may require some extra work.
In fact, all of gauge symmetry is a technicality in a similar sense. This statement is known to be true - even though Mr Woit tries to deny this modern knowledge. What is really known is that gauge theories may be formulated without any gauge symmetry to start with so the very presence of gauge symmetry is just a convenience (that makes the Lorentz symmetry in the presence of fields with "arrows" more manifest than in "gauge-fixed" formulations), not a necessity; that all the gauge-dependent states and objects are just auxiliary objects, not parts of the physical prediction; and that physically completely equivalent systems of equations may be based on very different principles of gauge symmetries.
S-dualities and Seiberg dualities are well-known examples of the latter phenomenon. But the AdS/CFT correspondence and Matrix theory are actually examples, too. In AdS/CFT, a gauge theory with a "natural" SU(N) gauge symmetry is fully equivalent to a gravitational theory whose gauge symmetry includes the bulk diffeomorphisms, among other things. Something similar happens in Matrix theory, too. Also, theories on noncommutative geometries with the corresponding gauge symmetries may be fully equivalent to theories on commutative geometries, with different, non-Abelian but commutative, gauge symmetries.
One of my additional favorite examples is heterotic string theory on a circle whose Wilson lines manage to interpolate between two 496-dimensional gauge groups, E8 x E8 and SO(32), but you can never say that one of them is more fundamental than the other. They're equally non-fundamental. Gauge symmetries become a useful part of the description if there exist the corresponding light gauge bosons. But "useful" doesn't mean "inevitable".
Simply speaking, the character of the gauge symmetry is not uniquely given by a particular physical system - by a particular set of possible or actual observations. Gauge symmetries, and certainly the BRST approach, are just convenient tools to study physics of certain interesting physical systems. They may have advantages, aside from disadvantages. But they're not essential or canonical components of these systems.
And make no mistake about it: all the "research" that is sometimes mentioned by Mr Woit about the new interpretations of the BRST formalism etc. is just a testimony about his elementary misunderstandings what the formalism actually means. Similar comments apply to his ideas about supersymmetry: he really doesn't understand the difference between the supersymmetry and the BRST symmetry, either - among many other related confusions.
A student who fails to learn these things would simply fail the course; but a formerly fashionable critic of the "evil empire of string theory" may present his elementary misunderstandings of the BRST formalism as "research". Much like the warriors against global warming and the evil oil industry empire, the warriors against string theory can be (and usually are) complete imbeciles but it is politically incorrect to point this self-evident fact out.
Chiral gauge theory on a lattice
We have seen that Neuberger's problem was inspired by the thinking confined in a "lattice".
Latticization is a method to define - and calculate - a field theory that is optimized for a digital computer. Lattice gauge theorists like to use computers to calculate various things. Their results are nontrivial and encouraging but they're not too impressive, either. Many people think that this "computerization" is so essential that they can't even think about the field theories without these psychological crutches. For example, Mr Woit insanely identifies the adjectives "lattice" and "non-perturbative". It's silly because a lattice formulation is just one among infinitely many ways to define physics of a field theory non-perturbatively.
This way is arguably "friendly" for a computer. But that doesn't make it more physically valid or more natural; see Discrete physics. Quite on the contrary, it makes it less "natural". After all, computers are "man-made", which is a pretty good antonym of "natural". Digital computers may have trouble directly calculating with uncountably, continuously many points in spacetime. But Nature has no such limitations. The people who can't distinguish Nature from its simulations on computers are not quite sane. The simulations may be pretty accurate but if there are some technical problems with the simulations, it doesn't yet mean that there are problems with Nature.
(Because various "numerical instabilities" can be taken as examples of "problems of software which are not problems of Nature", it is also possible to view the "dangerous climate change" predicted by climate models as an example of this incorrect identification of computer models and reality.)
Once again, the fact that a theory may be hardly accessible by a particular computer-assisted strategy doesn't mean anything wrong for the theory itself. It is just an obstacle for someone who wants to use a computer to study the theory.
For similar reasons, lattices are hugely overestimated by many people. Lattice gauge theories are just some special theories that flow to the gauge theory in the continuum limit - in the limit where the number of the lattice sites is sent to infinity. But there are infinitely many non-lattice theories with the same property.
Even when you focus on theories that may be "modeled" on a computer, there are infinitely many competitors of a conventional lattice field theory. You may choose the lattice sites dynamically, you may choose different ways to parameterize the gauge transformations - e.g. by piecewise linear functions - and so on. Most of these methods haven't been tried - but only because people wanted to agree with others. There are no rational, objective reasons. For gauge theories, the latticization is pretty good because it preserves the basic gauge symmetry "exactly" (the group-valued link variables are associated with the "links") - but there are still many ways to place the links between the sites, and so on.
The lattice approach to field theories has many disadvantages, too.
A lattice - the set of the lattice sites - is not invariant under continuous translations. Consequently, the momentum conservation - or the freedom of the momentum to grow arbitrarily large - seems subtle or broken in the lattice context. Also, the supersymmetry generators anticommute to translations. Because translations are broken by the lattice, (most of the) supersymmetries have to be broken by the lattice, too.
Deconstruction is a method to keep some of the supersymmetries exact, even on the lattice, but fine-tuning of the UV theory is usually needed for the long-distance limit to restore all the supersymmetries so it's still fair to say that supersymmetry and lattice gauge theory don't co-exist quite peacefully.
Also, lattices are left-right symmetric. This simple fact is actually the primary reason why chiral, or left-right-asymmetric, theories may be hard on the lattice. Various problems with the fermion doubling may occur etc. However, left-right asymmetry is an important part of the Standard Model - and especially the weak nuclear force underlying the beta decay. This important part of the Standard Model is therefore in a direct "moral conflict" with a natural expectation of lattice gauge theories (the left-right symmetry).
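A minimal sketch of the doubling phenomenon: the naive symmetric lattice derivative "(psi[n+1] - psi[n-1])/2" becomes "i sin(k)" in momentum space (lattice spacing set to one), and unlike the continuum operator "i k" it vanishes twice in the Brillouin zone, producing an extra fermion species - which, by the Nielsen-Ninomiya-type arguments, pairs up with the original one to spoil the chirality:

```python
import numpy as np

# Continuum 1d Dirac operator in momentum space: i*k, a single zero at k = 0.
# Naive lattice version (symmetric difference, spacing a = 1): i*sin(k).
k = np.linspace(0.0, np.pi, 1001)
lattice_dirac = np.sin(k)

# Find the zeros of the lattice operator on [0, pi]:
zeros = k[np.isclose(lattice_dirac, 0.0, atol=1e-12)]
print(zeros)  # zeros at k = 0 and k = pi: two species instead of one
```

The second zero at the edge of the Brillouin zone is the "doubler" that makes naive chiral fermions on the lattice problematic.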
Some of these problems may be fully solved or partly solved, while the solutions have various disadvantages; others are unsolvable at this moment or proven to remain unsolvable forever. But whatever their status is, it's important to realize that all these problems are just technical problems with one particular strategy to define - and calculate - the field theories, with the lattice. It would be ludicrous to promote these technical problems to one of the most important open problems in physics because they're not really about physics.
There are many other methods to calculate quantities in field theories - both in theory as well as in practice. Various perturbative calculations expanding in various variables, various dualities with completely different systems that either have new expansions or new full non-perturbative definitions etc. It's just wrong to say that the lattice formulation is vital.
I don't think it's too important to find a way to treat chiral fermions by the lattice. I don't believe that there really exists any "canonical" solution that is so cool that it's waiting to be discovered by a great physicist. And I have doubts whether a full solution to this problem exists at all. Of course, the people who work on it want to believe that such a thing is waiting for them - but it's just a belief. At any rate, I don't think that such a solution for the chiral latticized theories is needed in physics.
Chiral gauge theories "exist" even if their lattice versions don't. There are many other ways to organize the calculations in field theory or string theory and the people's task is to learn most of them or all of them, instead of trying to force the Nature to obey the people's narrow-minded expectations.
Transferring spinors to different signatures
Peter Woit is wrong if he thinks that the "Euclideanized fermions" belong among the most important problems in physics. But most of the criticism in this section will be directed against Jacques Distler who got really confused about the continuation of spinors to different signatures - and its purpose in theoretical physics - and who has irrationally used this general confusion against an unlikely target - namely Berkovits' pure spinor approach to perturbative string theory.
As I mentioned at the beginning, only the final results of calculations in physics have to be connected with the physical observations. The basic concepts, assumptions, and all intermediate results may be arbitrarily "detached" from our intuition about the real world. And in reality, they often are. They are "detached" more often than they are "aligned" with our intuition. In particular, such a "detachment" is increasingly frequent in modern and deep physical theories that simply have to use "unusual maths" to do the calculations.
Incredibly probabilistic wave functions, invisible quarks, arbitrarily huge Lorentz contractions, or BRST cohomologies in QCD are just four random examples of concepts that look more mathematical, more abstract, and simply different than the things we directly observe but that are needed for us to calculate things properly.
One of the "unusual mathematical tricks" is the Wick rotation. This trick means that you should imagine that the signature of spacetime is different, e.g. that it is 4+0-dimensional instead of 3+1-dimensional. Only after you calculate the final results do you treat them as analytic functions of complex momenta and continue (extrapolate) them back to the values that are relevant for the 3+1-dimensional physics.
Morally speaking, this method works simply because the continuation "there and back" does nothing to the result, as long as the relevant functions behave nicely (and analytically) as functions of complex momenta and complex energies. They often do. In fact, they have to behave nicely if unitarity, locality, and other consistency conditions hold. Every deviation from analyticity has to be linked to additional physical states (or inconsistencies). Much is known about these things - and much may be unknown. And I personally guess that the analytical continuation will become even more important in quantum gravity that admits new spacetime topologies. In my opinion, it's an understudied issue in quantum gravity.
So the functions in physics are nice, especially if they are functions of momenta and energies. It makes sense to continue these quantities to imaginary values. Such imaginary values are connected with the purely Euclidean spacetime etc. The whole "intermediate calculation" is often done in the Euclidean spacetime because all the issues are much more transparent and well-defined over there.
Only the final result has to be continued back to the Minkowski spacetime. And if it agrees with the observations and with the consistency conditions we demand, no one should ask you whether the method by which you calculated the result was "philosophically pleasing" or "matching our perceptions". In fact, even if he asks you, you should never be ashamed of the Wick rotation because there's nothing to be ashamed of.
Jacques seems to be confused about some very basic issues of these procedures. For example, he wrote that the trace of the metric tensor is "d-2", rather than "d", in the Minkowski space. That would differ from "d" in the Euclidean space. He used this "argument" to emphasize that one shouldn't be allowed to do the calculation in a different signature. In particular, he didn't want to allow Berkovits to do his pure spinor calculations with spinors in 10+0 dimensions.
However, Jacques' proposition is self-evident nonsense. The trace of the metric tensor is "d" in any signature. It's an analytic (constant) function of the momenta, so whatever continuation we do, it always has to be "d". Technically, when you calculate the trace of the metric tensor, you shouldn't forget to lower or raise one index, which converts the "-1" entries of the metric into the "+1" entries of the Kronecker delta. The trace of the Kronecker delta is always "d".
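The index bookkeeping takes three lines of numpy to verify: the invariant trace "g^{u rho} g_{rho v} = delta^u_v" gives "d" in the Minkowski signature, while the naive, index-abusing sum of the diagonal entries of "g_{uv}" is the source of the wrong "d-2":

```python
import numpy as np

d = 4
g_lower = np.diag([-1.0, 1.0, 1.0, 1.0])  # Minkowski metric g_{uv}
g_upper = np.linalg.inv(g_lower)          # inverse metric g^{uv}

# The Lorentz-invariant trace contracts one upper and one lower index:
mixed = g_upper @ g_lower                 # delta^u_v, the Kronecker delta
print(np.trace(mixed))                    # 4.0, i.e. d, in any signature

# The naive sum over the diagonal of g_{uv} - NOT an invariant trace:
print(np.trace(g_lower))                  # 2.0, i.e. d - 2: the wrong object
```

Replacing the Minkowski metric by the Euclidean one changes the second number to "d" but leaves the first, the only meaningful one, untouched.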
But Jacques has made fundamentally erroneous statements about pretty much every step in the Wick rotation, too. For example, he claims that the Wick rotation and continuation should only affect the momenta but not the polarizations. That's, of course, another misunderstanding.
Only the objects that are "fully continued" to the other signature are equally well-behaved. Unless and until an additional equivalence is proven, one is never allowed to treat different degrees of freedom differently. In particular, the dependence on the space may determine the "orbital angular momentum" while the internal indices of the fields may determine the "spin". However, the "orbital angular momentum" and the "spin" are just two terms contributing to the "total angular momentum", and only the total one has an invariant physical meaning.
It's just completely wrong to treat the two terms "differently" and to expect that the Wick rotation works despite this "discrimination". The Wick rotation is done properly if it is just a subtle refinement of the most complete and most universal continuation of your physical theory to imaginary values of time and energy. Clearly, you need to rotate everything you can.
In most physical theories, there are reality conditions and we must be ready that their character changes under the Wick rotation. For example, spinors in "d" dimensions admit different reality projections depending on the signature of the d-dimensional spacetime. There are 2 real 2-dimensional spinor representations in 2+2 dimensions, 2 pseudoreal 2-dimensional spinor representations in 4+0 dimensions, and 2 complex, mutually complex conjugate, 2-dimensional spinor representations in 3+1 dimensions.
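These representation-theory facts can be verified with explicit Pauli matrices; a toy check (the particular bases below are my choice, made just for the illustration). In 4+0 dimensions, "Spin(4) = SU(2) x SU(2)" and each 2-dimensional spinor representation is pseudoreal: its complex conjugate is equivalent to itself via an antisymmetric matrix "S" with "S S* = -1". In 2+2 dimensions, "Spin(2,2) = SL(2,R) x SL(2,R)" and the generators can be chosen entirely real:

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

# 4+0: su(2) generators X_j = i sigma_j / 2.  Pseudoreality: the conjugate
# rep is equivalent to the rep via S = i sigma_2, and S S* = -1.
S = 1j * s2
for s in (s1, s2, s3):
    X = 1j * s / 2
    assert np.allclose(S @ X.conj() @ np.linalg.inv(S), X)
assert np.allclose(S @ S.conj(), -np.eye(2))  # hallmark of a pseudoreal rep

# 2+2: sigma_1, sigma_3, and the real matrix i*sigma_2 span sl(2,R),
# so the generators are manifestly REAL matrices - a real representation.
for s in (s1, 1j * s2, s3):
    assert np.allclose(s.imag, 0)

print("pseudoreal in 4+0, real in 2+2")
```

The 3+1-dimensional case, "SL(2,C)", has no such intertwiner or real basis: the two Weyl representations are complex conjugates of each other, as stated above.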
But this fact doesn't make the Wick rotation impossible. Clearly, nothing like that can make the Wick rotation impossible. The Wick rotation is just about the right continuation of everything to imaginary values of time and energy. Because the functions are smooth, no one can prevent you from doing so. In principle, the Wick rotation is a straightforward procedure.
What Jacques misunderstands is that the Wick rotation changes the character of the reality conditions in general. For example, if two observables satisfy
A(t) = B†(t),

i.e. if "A, B" are Hermitean conjugates of each other at the same point of the Minkowski space, something else happens in the Euclidean space.
A(t_E) = B†(-t_E)

So they're no longer related to the other field at the same point: you must revert the sign of the Euclidean time, too. It's because the Hermitean conjugates of the evolution operators behave differently in the Minkowski space and in the Euclidean space. In the Minkowski case,

exp(iHt)† = exp(-iHt),

the sign of the exponent is changed. In the Euclidean case,

exp(Ht)† = exp(Ht),

it is not. This time reversal is exactly what combines with the complex or Hermitean conjugation to give you the right type of the spinor in each signature.
Effectively, you may say that when you switch from the Minkowski space to the Euclidean space, the Dirac spinor and its conjugate will no longer be related to each other by the "actual" complex or Hermitean or Dirac conjugation. But in fact, the equation above shows you that these two fields may still be linked to each other, but you must link them to the other at the opposite value of "t_E". The Euclidean time reversal has to be added to the complex conjugation.
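The two conjugation rules above are a one-liner to check numerically for any Hermitean "H" (a random toy Hamiltonian here, standing in for the actual field-theory one):

```python
import numpy as np
from scipy.linalg import expm

# Random Hermitean "Hamiltonian" as a stand-in:
rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
H = (A + A.conj().T) / 2
t = 0.7

U_mink = expm(1j * H * t)  # Minkowski evolution operator
U_eucl = expm(H * t)       # Euclidean evolution operator

# Conjugation flips the sign of the exponent only in the Minkowski case:
print(np.allclose(U_mink.conj().T, expm(-1j * H * t)))  # True
print(np.allclose(U_eucl.conj().T, expm(H * t)))        # True
```

So the Euclidean operator is Hermitean rather than unitary, and the extra time reversal in the conjugation of Euclidean fields compensates for exactly this difference.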
Because the Wick rotation is often used when you calculate various integrals and they're contour integrals in the complex plane that may involve e.g. the real axis as the contour, the reality conditions may often become "non-manifest" (and physically inconsequential) in the Euclidean spacetime. So you should never be too hasty when you try to impose your reality conditions in a space of a new signature. It's OK to work with the complex or complexified objects and only determine the right contours at the very end. Jacques is clearly too hasty.
Jacques Distler chose the pure spinor formalism as an example of a formalism that uses an "unphysical signature" to do the physical calculations, before they're continued to the physical signature. But we have many examples like that. For example, perturbative string theory amplitudes are conveniently written as integrals over the moduli spaces of Riemann surfaces. These Riemann surfaces have the compact, Euclidean signature - unlike the physical world sheets in spacetime whose signature should be Minkowskian!
In principle, one could also write the amplitudes as integrals over the Minkowskian worldsheets, and that's what one "instantly" gets in the light cone gauge, anyway. However, the conversion to the compact Euclidean world sheets makes things smoother and nicer. The Minkowskian world sheets wouldn't be smooth everywhere, and so on. The moduli spaces of the Euclidean surfaces are nearly compact - all their asymptotic regimes are understood. There's nothing wrong whatsoever with doing most of the calculation in the unphysical signature - as long as the results may be continued to the physical one and they have all the desired properties.
Jacques also tries to find the "Minkowski version" of the space of pure spinors: he incorrectly asserts that Nathan Berkovits is "obliged" to find such a "Minkowski version". Berkovits is surely not obliged to do so because it's perfectly fine to do the bulk of the calculation in a different signature. But even when he studies the possible "Minkowski versions", Distler makes new and completely incorrect assumptions. The pure spinor in 10+0 dimensions is a special kind of a spinor with 16 complex components. Because the minimum spinor in the 9+1-dimensional space has 16 real components, Jacques immediately assumes that one is required to start with 16 real components, which is why the pure spinor space has to be a quotient of SO(9,1). Once again, he's too hasty.
It's just not the case. Even in the 9+1-dimensional space, one should carefully start with the 16 complex components. Different pure spinors in this set may fail to be SO(9,1,R) transformations of each other. In a striking contradiction with Jacques' expectations, the Wick rotation doesn't guarantee that the "minimum" spinors with "maximal" reality constraints in one signature get continued to "minimum" spinors with "maximal" reality constraints in another signature.
In general, they surely don't. You must be very careful about what the "new reality conditions" are in the new signature. And the "most constraining and straightforward" conditions simply don't get rotated to the "most constraining and straightforward" conditions elsewhere.
So Jacques surely hasn't listed all the candidate spaces that could be the 9+1-dimensional counterpart of Berkovits' 10+0-dimensional quotient, SO(10)/U(5), simply because the continuation doesn't have to be a quotient of SO(9,1). And even if there were a continuation of the pure spinor space that is a quotient of SO(9,1), the denominator wouldn't have to be as simple as a version of U(5). He has only demonstrated that he didn't do his own homework correctly. He hasn't demonstrated that no natural space directly relevant for the other signature exists. But even if it didn't exist, it doesn't mean that there's anything wrong with Nathan's method to calculate the amplitudes. It's just completely OK to use an unphysical signature to find all the intermediate results.
Also, Jacques claims that it's difficult or impossible to compute the scattering of the Ramond-Ramond quanta in the pure spinor formalism. However, that's a complete misunderstanding of the very meaning of Berkovits' formalism. Berkovits' approach makes the spacetime supersymmetry manifest so the calculation involving the Ramond-Ramond quanta is equally complex or equally simple as a calculation involving Neveu-Schwarz-Neveu-Schwarz or other quanta. That's what the manifest supersymmetry means. You can expand the physical fields, R-R or NS-NS, as components of a superfield.
Summary: formalism vs physics
This discussion has many technical aspects but one "philosophical" observation is much more general. Some people can barely understand the formalism - and they misinterpret technical difficulties one needs to overcome when using a particular strategy to calculate a result as physical problems of the very theory they're using to calculate. Effectively, they're promoting their own personal limitations as deep problems of the whole physics as a science.
It's just too bad.
Of course, I have encountered hundreds of people in my life who have been confused about these two remarkably different things. In sociological terms, some of the problems are the problems of "those who invented, proposed, or discovered a theory" while others are just problems of "those who want to use the theory in a particular way to calculate a particular situation".
Only the former are about the "truth"; the latter problems are about the "utility" or applications of the truth in new context, possibly also in the search for "new truths". The truth and the utility have to be carefully distinguished. If a theory has been "unfriendly" to you because your particular method to organize the calculations - which should have been very efficient in your opinion - has been shown difficult, subtle, or downright inconsistent, you may be upset about the theory but it's just your problem: you made a wrong guess.
And you should never expect that all calculations in physics have to follow exactly the same steps as a calculation that you may have done yourself in the past: such an opinion is almost always just an artifact of your narrow-mindedness, laziness, lack of will to learn new things, and the limited intuition and experience of yours. More experienced and powerful physicists have no trouble with the formalism and they can almost instantly penetrate it and get to the actual physics. They can actually calculate all the things. You may dislike the "image" of their intermediate steps but it's your problem, not theirs.
In particular, you shouldn't be surprised to encounter lots of these problems if you're using latticizations of field theories in contexts where they're unlikely to work (including chiral and supersymmetric theories); if you're calculating things in different signatures than the signatures where the calculations are well-behaved and naturally formulated (e.g. if you're afraid of the Euclidean versions of the objects and of the Wick rotation); if you're overestimating the importance of the BRST formalism and if you try to use it to solve some global issues that have nothing to do with the purpose and advantages of the BRST formalism.
Whatever problem of this kind you experience, it's still plausible - and, in the case of robust theories such as (UV complete) gauge theories or string theory, almost guaranteed - that the full theory predicting the whole desired class of observables arbitrarily accurately exists. It's just you who have made a wrong plan how to organize the calculation. While you may be upset, it's irrational to blame this mistake of yours on the theory itself and to expect the rest of the scientific world to worship your personal problems.
And that's the memo.