There has been some confusion about whether we want theories to be as special as possible or as general as possible.
Costs and benefits
Well, profound theories should ideally maximize the number of diverse yet correct and accurate predictions they make, while minimizing the number of independent, a priori arbitrary and unjustifiable assumptions and other adjustable parameters. Such freedom allows a theory or framework to adjust its answers to agree with reality, which makes any agreement, even when it occurs, less spectacular and less convincing.
You can view this counting as an analogue of a cost-benefit analysis, although the precise quantitative character of the calculation is governed by something like Bayes' theorem.
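The Bayesian bookkeeping alluded to above can be sketched with a toy computation. All numbers below are invented for illustration: a "rigid" theory that uniquely predicts a measured quantity is compared against a "flexible" theory whose free parameter lets it predict almost anything, so that marginalizing over the parameter spreads its probability thin (the Occam factor).

```python
import math

# Toy Bayesian comparison of a rigid theory vs. a flexible one.
# All numbers are invented for illustration only.

x_obs, sigma = 5.1, 0.1   # hypothetical measurement and its error

def gauss(x, mu, s):
    """Gaussian likelihood density."""
    return math.exp(-0.5 * ((x - mu) / s) ** 2) / (s * math.sqrt(2 * math.pi))

# Rigid theory: uniquely predicts x = 5.0, no free parameters.
evidence_rigid = gauss(x_obs, 5.0, sigma)

# Flexible theory: a free parameter lets it predict any x in [0, 10].
# Marginalizing a sharp likelihood over a uniform prior of width 10
# gives an evidence of roughly 1/10 -- the Occam penalty for freedom.
evidence_flexible = 1.0 / 10.0

bayes_factor = evidence_rigid / evidence_flexible
print(f"Bayes factor (rigid vs flexible): {bayes_factor:.1f}")
```

With these made-up numbers the Bayes factor comes out roughly 24 in favor of the rigid theory, even though both theories "agree" with the data: the agreement of the theory that could not have adjusted itself is worth far more.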
Lower costs and higher benefits
The more obvious consequence of the rules above is that if you have two theories with the same number of assumptions and parameters and the same degree of arbitrariness, the more accurate and more generally valid one will be preferred. That shouldn't be controversial.
However, the rules above also make it clear that if two theories predict equally large classes of phenomena with the same accuracy, the theory with fewer independent assumptions, objects, basic concepts, and parameters will be preferred. The situation may look complex, but we should realize that parameters, assumptions, and independent building blocks all play the same role: they are the "costs" in the cost-benefit analysis.
Independence of the assumptions and inevitable assumptions
Also, we should notice that it is only the independent assumptions that should be counted as costs. If two assumptions can be proven to be equivalent (or at least almost equivalent, or almost certainly equivalent), they will be counted as less than two assumptions. If an assumption can be proven to be mathematically inevitable, it is not a real assumption and it contributes nothing to the "costs".
On the other hand, there surely exist many examples of assumptions that are not inevitable. Some people may believe that all theories should be consistent with the validity of the Old Testament. Other people may believe that theories should confirm egalitarianism, feminism, political correctness, special role of the white race, or an increasing role of the United Nations in the future. A third group of people may believe that a text encoding the theory should look like pi. A fourth group may think that all theories should be fundamentally discrete in character and there should be no way to understand the theories as continuous ones.
None of these assumptions that people want to impose upon theories has been rationally justified. None of them is inevitable. These assumptions simply pick a random subspace of a larger space of theories that are equally reasonable given the facts we know today.
Instead, these assumptions are just random guesses. They are arbitrary and were probably formulated for unscientific reasons. That doesn't mean the assumptions must be incorrect. But they are not inevitably correct either. In the counting of costs and benefits, all of them are certainly "costs".
For example, consider the discreteness assumption: it is not hard to see that purely logical and rigorous arguments are unable to prove that a discrete theory is better, more consistent, or more valid than a theory with a tiny contribution of continuous physics. That is why the assumption is arbitrary, and why it is a liability: you are picking a very small subset of possible theories.
On the other hand, the situation is completely different if you think about theories with a small number N of adjustable parameters. In this case, it is possible to prove, logically and/or rigorously, that theories outside the N-dimensional space don't have the same degree of consistency (which may include some kind of essential symmetry).
For example, only the relevant and marginal deformations of a local quantum field theory should be considered elements of the same narrow class of theories. This characteristic feature of such a theory makes it much more likely that it will be abruptly falsified if it is wrong. Because falsification is expected to be easier, if such a theory survives, its survival is more spectacular and convincing evidence that the theory is on the right track. If there is no room for curve fitting, a detailed agreement with a measured curve is a huge argument in favor of the theory's validity. Even if the amount of data you can use to check your theory is relatively small, it may often be enough to provide extremely strong evidence for the theory, simply because it can be extremely unlikely that even a small amount of data is reproduced by chance by a truly compact and rigid theory.
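The flip side, that enough adjustable parameters can fit anything, is easy to demonstrate numerically. The sketch below, with made-up data, fits six points of pure noise exactly with a six-coefficient polynomial: a perfect fit carries no evidential weight when the model has as many knobs as there are data points.

```python
import numpy as np

# Illustration with invented data: a model with enough adjustable
# parameters can fit *any* data set, so its "agreement" proves nothing.
rng = np.random.default_rng(0)
x = np.arange(6, dtype=float)
y = rng.normal(size=6)            # pure noise, no underlying law at all

# A degree-5 polynomial has 6 free coefficients: an exact fit to
# 6 points is guaranteed regardless of what the data are.
coeffs = np.polyfit(x, y, deg=5)
residual = np.max(np.abs(np.polyval(coeffs, x) - y))
print(f"max residual with 6 free parameters: {residual:.2e}")

# A theory with no adjustable parameters cannot play this game:
# its curve either matches the data or the theory is falsified outright.
```

The residual comes out at machine-precision level, i.e. the noise is fit "perfectly". This is exactly the freedom that the counting of costs is designed to penalize.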
String theory has no adjustable continuous parameters. Instead, it has a large configuration space. The existence of this configuration space is not an independent assumption. It is an inevitable conclusion. The whole "landscape" exists somewhere in the multiverse or at least in the abstract space of allowed solutions.
But you would surely protest that a huge, dense landscape is effectively equivalent to a large number of parameters. Is it?
First of all, it is not true that all conceivable theories can be well approximated by string-theoretical vacua. Despite their large number, string-theoretical vacua imply certain general predictions. Predictions of this kind are discussed in the swampland program.
Even the predictions about the quantities that are relatively adjustable by a choice of the vacuum may be highly non-trivial simply because the set of vacua is far from being dense as far as some parameters go.
But imagine that the discrete space of solutions is approximately dense and covers the same set of possibilities that may be more or less covered by a less sophisticated theory. Once you could determine the exact vacuum and make much more accurate predictions than you could with the effective field theory, there would be no reasonable doubt that the theory predicting discrete possibilities is more complete and closer to the truth. But should you believe that it is better before you determine whether string theory can give the accurate numbers?
My answer is: probably not. If string theory were only able to cover a similar parameter space with discrete points, I would not consider it progress a priori, before checking whether the more detailed predictions are valid. Of course, even before you make the test, the stakes become higher if you have a more predictive theory (string theory with a discrete subset of the parameter space, in this case).
Reducing independent assumptions
But the structure of the parameter spaces is not the only difference between string theory and effective field theories. String theory reduces the arbitrariness - it reduces the costs - in many different directions. In Lagrangian effective field theories, the existence of different fields with different values of spin represents a lot of independent assumptions. In string theory, these fields and particles - including gravity - are unified which reduces the number of independent assumptions.
You might argue that the existence of the typical gauge theories with spin-0 and spin-1/2 particles is inevitable because one can prove, in some sense, that nice, interacting, but weakly-coupled theories must always look like that at low energies. You would be right. Indeed, at the level of effective field theories, we know what the right description is. The gauge theories with matter are kind of inevitable and some experimental tests have been sufficient to determine the fields and interactions.
But the question really is: What is there beyond these effective theories? Any step to reduce the arbitrariness must be considered seriously. One possible answer is that there could be another local quantum field theory. But except for grand unification and supersymmetry, we don't know any method to reduce the degree of complexity and arbitrariness that would still agree with the Standard Model. Moreover, the framework of effective field theory almost certainly breaks down in the case of gravity.
String theory is the only structure we know that is not equivalent to local quantum field theory but that is nevertheless able to reproduce all of its physics (including gravity at the quantum level).
A priori, we shouldn't have expected that a randomly chosen framework would correctly predict the existence of spin-0, spin-1/2, and spin-1 particles together with spin-2 gravity, the right interactions, and the right incorporation of quantum effects. However, string theory is able to do that (and no other theory can), besides dozens of other successes. I think that this result itself is already such a non-trivial confirmation of the theory that it is unreasonable to believe that string theory could be wrong.
Of course, there exists much more evidence in favor of string theory, and physicists would prefer to have much more still. But imagine that you are not an aggressive, fanatical crackpot who only wants to criticize - like the readers of an infamous blog in Manhattan - but instead a balanced person who is trying to evaluate, as accurately as you can, the probability that a given set of ideas is right or wrong. I am convinced that even with the very basic list of predictions above, you would have to conclude that it is very unlikely that string theory could have passed these tests by chance and still be wrong.
Other items in the list of top twelve results further decrease the probability that string theory is wrong.
It is hard to design a quantitatively accurate scheme to calculate the confidence levels in such complex situations. But if you correctly compare the ability of string theory to confirm the well-established principles and objects of physics with the ability of a generic theory at a comparable level of complexity, I am sure that you must conclude that the probability that string theory is wrong is very low.
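The kind of estimate gestured at above can at least be caricatured. In the sketch below, every probability is invented: each entry is a guess at how likely a *generic* competing framework would be to pass a given qualitative test by chance, and the independent chances are simply multiplied.

```python
# Toy aggregation of independent qualitative tests.
# Every probability below is an invented, illustrative guess at how
# likely a *generic* framework passes that test by accident.
p_pass_by_chance = {
    "spin-2 graviton emerges automatically": 0.1,
    "gauge fields and chiral fermions appear": 0.2,
    "quantum corrections stay consistent": 0.1,
    "black hole entropy is reproduced": 0.05,
}

# Assuming the tests are independent, the chances multiply.
chance = 1.0
for test, p in p_pass_by_chance.items():
    chance *= p

print(f"chance a generic theory passes all tests: {chance:.0e}")
```

With these made-up inputs, the chance of a generic theory passing all four tests is one in ten thousand. The point is not the particular number but the structure of the argument: independent non-trivial successes multiply, so even a handful of them can make "right by accident" very improbable.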
I am afraid that too many people are making serious mistakes in this general kind of appraisal. Some people, for example, confuse inevitable facts with random assumptions. Random assumptions are costs; inevitable facts are not. The fact that quantum mechanics can't be modified or deconstructed in various ways is a fact; the opinion that all degrees of freedom should be discrete is not.
When we look at the benefits and successes, people make a lot of similar mistakes, too. Some people confuse direct or indirect verification against well-established experimental facts with the agreement of their favorite theory with some other random, unjustified preconceptions. These are, once again, very different things.
Real physics vs constantly repeated propaganda
The source of our confidence that string theory is correct is its ability to reproduce, with a much smaller number of independent assumptions, the outcomes of real experiments that depend on quantum field theory and gauge theory; interactions of spin-1/2 fermions and related phenomena including running couplings, confinement, chiral symmetry breaking, and spontaneous symmetry breaking; tested results of general relativity; and largely inevitable conclusions of semiclassical gravity such as black hole thermodynamics.
All of these criteria are positive because the statements that have been reproduced are settled experimentally, or settled by a combination of experiments with very robust reasoning and calculations. Such agreement is thus a huge benefit.
On the other hand, if someone assumes that a theory should be discrete or spacetime-free, he or she is increasing the costs, because these adjectives, much like dozens of others, are a matter of unverified religion and ideology, not facts. As long as we are rational, there is nothing a priori good about a theory that agrees with these adjectives. Restricting your attention to a small class of theories that are consistent with all these arbitrary assumptions makes the work of a scientist more likely to be worthless. Indeed, if a scientist is unable to transcend prejudices, some of which (or all of which) are almost certainly incorrect, he may be viewed as a narrow-minded zealot.
I am convinced that this wrong approach is absent from string theory, because the theory builds on assumptions that are either demonstrably true or inevitable given other assumptions that have been shown to be extremely likely to be true. There is no circular loop here and, more importantly, the individual steps can be trusted. Conclusions are inevitable or almost inevitable consequences of each other.
This situation strikingly differs from the situation in various kinds of sloppy thinking such as loop quantum gravity where every new work adds new arbitrary assumptions and makes it increasingly unlikely that the conclusions of a paper are right.
Of course I can still imagine that despite its rigidity and its non-trivial agreement with the basic well-established principles of physics, string theory will be shown to be wrong. But such a moment would be similar to the destruction of the World Trade Center on 9/11. It would be a spectacular event if such a robust structure could be destroyed.
On the other hand, a collapse of loop quantum gravity and hundreds of other similar proposals that would like to claim to be competitors is more similar to the erosion of those sand houses in North Africa that must be rebuilt twice a year. It doesn't really make sense to ask whether such buildings can stand, because the identity of these buildings is not well-defined.
And that's the memo.