When you compute quantum loop corrections to the Higgs mass, you obtain quadratically divergent graphs. The result is therefore quadratically sensitive to the cutoff scale, and the Higgs mass is naturally predicted to be huge - unless we fine-tune the bare mass of the Higgs. On the other hand, reality forces us to believe that the Higgs is at most about as heavy as two W bosons; otherwise its quartic coupling would be far too large and the effective quantum field theory would break down.
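To get a feel for the numbers, here is a rough order-of-magnitude sketch with illustrative inputs, not a precise calculation; it uses the standard one-loop top-quark estimate of the quadratic divergence, roughly 3y_t²Λ²/8π².

```python
import math

# Ball-park size of the quadratically divergent top-loop contribution
# to the Higgs mass-squared: delta m^2 ~ 3 y_t^2 / (8 pi^2) * Lambda^2.
# All inputs are illustrative order-of-magnitude values.
y_t = 1.0        # top Yukawa coupling, roughly one
m_h = 150.0      # GeV, a Higgs mass in the range quoted below
cutoff = 1e16    # GeV, a GUT-scale cutoff

delta_m2 = 3 * y_t**2 / (8 * math.pi**2) * cutoff**2   # GeV^2
tuning = m_h**2 / delta_m2   # how precisely the bare mass must cancel it

print(f"loop correction ~ {delta_m2:.2e} GeV^2")
print(f"required fine-tuning ~ 1 part in {1/tuning:.1e}")
```

With a GUT-scale cutoff, the bare mass has to cancel the correction to a couple dozen decimal places, which is the quantitative face of the problem described above.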
This is called the (big) hierarchy problem. It is "big" because we usually want to assume that the effective theory remains valid up to the GUT scale or even the Planck scale.
Some people may say that they don't care about these high scales and that they're perfectly happy with completely new - and perhaps non-field-theoretical - physics kicking in already at a few TeV. Even these people have a problem: the little hierarchy problem.
According to precision measurements, the Standard Model is incredibly successful - more successful, it seems, than a mere theory of physics below the 100 GeV scale would need to be. If you imagine that there is new physics at "M = 3 TeV" or so, it will generate new non-renormalizable terms (operators whose dimensions exceed four) in the low-energy effective action, suppressed by powers of "1/M", with coefficients of order one. You can estimate the effect of these small corrections on the measured data. In reality, no such effects are seen, and the precision we have today implies that "M" must be greater than 3 TeV or something like that.
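As a rough consistency check (illustrative numbers only; v is the electroweak vev, and the generic statement is that precision electroweak data reach roughly per-mille accuracy), a dimension-six operator suppressed by 1/M² typically shifts a low-energy observable by a relative amount of order v²/M²:

```python
v = 246.0    # GeV, electroweak vev
M = 3000.0   # GeV, assumed scale of the new physics

# Generic relative shift of a precision observable induced by a
# dimension-six operator with an order-one coefficient:
shift = (v / M) ** 2
print(f"typical relative correction ~ {shift:.3%}")
```

With per-mille-level data, order-one coefficients start to be excluded once M drops much below a few TeV, which is the origin of the bound quoted above.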
The same logic applies to new physics that is supposed to stabilize the Higgs mass - such as supersymmetry, though not necessarily supersymmetry alone. The observation about the higher-dimension operators would therefore suggest that the mass of the Higgs should be around 3 TeV, too. Of course, theoretical considerations show that it should be somewhere in the 115-200 GeV range - and perhaps up to 700 GeV in the non-supersymmetric case. You see a certain discrepancy between 150 GeV and 3 TeV - a factor of order 20 or so - which is called the little hierarchy problem.
I personally don't call it a real problem. There may be cancellations that drive the Higgs mass to 5% of its "natural" value. The coefficients of order one are never exactly one, and 0.05 is an example of a number of order one. What a big deal. We have more serious problems. However, if you're a low-energy phenomenologist, this detail may be one of a very small number of problems that you still have :-) and therefore you study it most of the time.
Yesterday, Giacomo Cacciapaglia from Cornell - yes, the Italians are taking over phenomenology - presented their model of the Higgs. The electroweak SU(2) is enhanced to an SU(3); you still need another independent U(1) to generate the hypercharge with the correct Weinberg angle. Such a construction creates extra SU(2) doublets inside the adjoint of the weak SU(3). These new fields would normally transform as vectors under the Lorentz group, but if you imagine that space is five-dimensional, the gauge field also has a fifth component, which behaves as a four-dimensional scalar. You then play for a little while, trying to reproduce the Standard Model.
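Schematically - this is the standard group-theory decomposition behind such "gauge-Higgs unification" models, not a detail specific to the talk, and the hypercharge labels use a conventional normalization - the five-dimensional SU(3) gauge field supplies the doublet:

```latex
% 5D gauge field: A_M = (A_\mu, A_5).  Under SU(3) -> SU(2) x U(1),
% the adjoint decomposes as
A_M = (A_\mu,\, A_5), \qquad
\mathbf{8} \;\to\; \mathbf{3}_0 \,\oplus\, \mathbf{1}_0 \,\oplus\,
\mathbf{2}_{+1/2} \,\oplus\, \mathbf{2}_{-1/2},
% so A_5 contains an SU(2) doublet scalar - the Higgs candidate.
```

The SU(3) embedding by itself fixes the wrong tree-level Weinberg angle (sin²θ_W = 3/4), which is exactly why the extra independent U(1) mentioned above is needed.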
In their particular construction, it takes some amount of work to guarantee that there will be light fermions. In the end, however, it is more important to get the heavy top quark, because its loop effects are responsible for obtaining the correct Higgs mass, including its sign. Their construction achieves this goal by introducing new large representations of the weak SU(3) group, namely the rank-four symmetric tensor "15".
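As a quick dimension check (assuming, as is standard for SU(3), that the "15" is the totally symmetric rank-four tensor; the helper function below is mine):

```python
from math import comb

def sym_dim(n: int, k: int) -> int:
    """Dimension of the rank-k totally symmetric tensor representation
    of SU(n): the number of degree-k monomials in n variables,
    i.e. C(n + k - 1, k)."""
    return comb(n + k - 1, k)

print(sym_dim(3, 4))  # symmetric rank-four tensor of SU(3)
```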
Such new objects increase the couplings they needed to increase, but they also lower the cutoff below which the theory is usable. The calculated cutoff is just a few times the compactification scale "1/R". This means that the terms violating five-dimensional Lorentz invariance may be generated with relatively large coefficients, and it also means that only a few Kaluza-Klein modes can be trusted. Consequently, the set of rules that you find makes this class of models equivalent to deconstruction and the little Higgs models, in which the fifth dimension is discretized and replaced by a couple of nodes in a quiver diagram. In this context, five-dimensional Lorentz invariance does not really constrain you, and you may invent many justifications why the terms violating this invariance may be freely added to your Lagrangian whenever you find them useful.
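A toy count makes the "only a few trustable modes" statement concrete (purely illustrative numbers; the circle spectrum m_n = n/R and a cutoff of about 5/R are assumptions, not values from the paper):

```python
# Work in units of the compactification scale 1/R.
cutoff = 5.0   # assumed cutoff, a few times 1/R

# Kaluza-Klein masses on a circle: m_n = n / R
trusted_modes = [n for n in range(1, 100) if n < cutoff]
print(trusted_modes)  # only a handful of modes lie below the cutoff
```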
Democracy between solutions of the little hierarchy problem
This means that the moral content of all known solutions to the little hierarchy problem is isomorphic; moreover, the factor of 20 is just moved to some other unnatural feature of your theory that must be adjusted. For example, adding an otherwise unjustified large representation whose dimension D turns out to be at least 15 is, I think, about as bad as fine-tuning a continuous parameter with 1/20 accuracy. Consequently, you may ask whether the problems and unnaturalness that you have added exceed the problems that you have solved.
Supersymmetry only solves the big hierarchy problem (the little hierarchy problem remains, because we know that superpartners are absent below 200 GeV or so), but it does so in a very satisfactory way. It allows us to believe that quantum field theory will be valid up to very high scales, which I guess will ultimately be the conclusion of any experiments that people will ever construct. At the same time, it allows you to exactly cancel the quadratically divergent loop corrections to the Higgs mass. The nonzero contributions that remain are governed by the supersymmetry-breaking scale.
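Schematically - a textbook one-loop sketch, not a derivation specific to this post - a fermion loop and its scalar superpartner's loop enter with opposite signs, supersymmetry ties the couplings together (λ_S = y_f²), and the Λ² pieces cancel, leaving a remainder set by the superpartner mass splitting:

```latex
\delta m_H^2 \;\sim\;
  \frac{\lambda_S - y_f^2}{8\pi^2}\,\Lambda^2
  \;+\; \frac{y_f^2}{8\pi^2}\,\bigl(m_S^2 - m_f^2\bigr)\ln\frac{\Lambda}{m_S}
  \;\xrightarrow[\;\lambda_S \,=\, y_f^2\;]{\text{SUSY}}\;
  \frac{y_f^2}{8\pi^2}\,\bigl(m_S^2 - m_f^2\bigr)\ln\frac{\Lambda}{m_S}.
```

Exact supersymmetry would give m_S = m_f and no correction at all; the surviving logarithm is proportional to the mass splitting, which is why the residual contribution is controlled by the supersymmetry-breaking scale.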
I am too conservative to abandon the notion of naturalness. On the other hand, it is obvious to all of us that a sharp and well-defined definition of naturalness can only be given once we have a complete enough theory.
Natural estimates of the size of a quantity are nothing else than an incomplete, approximate calculation based on a theory that is pretty close to the full theory, and they should eventually be replaced by an exact analytical calculation of that quantity. This has been the case in atomic physics and many other contexts, and it is the only interpretation I can imagine that makes the question "which model is more satisfactory" relatively well-defined. A more satisfactory model is, of course, a model that is closer to the exact full theory of everything, whose existence must be assumed.