Tuesday, July 17, 2007

Market of ideas

In this essay, I would like to meditate on the analogies between markets and the value of ideas in pure science. The musings will be somewhat similar to the text about the depth of ideas but the goal will be more quantitative because we will really try to determine something like a monetary value of ideas, including those without any practical consequences (which is the main reason why the quantification is hard). Such a different goal inevitably brings a different, complementary perspective, although some sentiments will overlap with the "deep" article.

Value: subjectivity and objectivity

The actual financial appraisal of an object depends on the person or persons who make the estimate. You might think that such an estimate is a purely subjective matter, but you shouldn't forget that there are always objective circumstances that significantly influence the way an individual or a group views the value of a product or an idea. She may have particular needs or goals in mind - influenced both by her anatomy and by cultural and other traditions - and the products or ideas that are instrumental in satisfying her needs and realizing her goals will be assigned a higher value. The people who view a certain product as very valuable are the most likely to buy it.

Nevertheless, it is clear that the value will depend on the subject. Once we accept this fact, the whole discussion could turn into a debate in the humanities: a very soft description of the number of people who have various needs and who respect various cultural and other values, together with their time evolution, which is primarily controlled by social pressures. It is not surprising that people who don't think that theoretical physics is valuable at all - and most of them don't - usually think that the value of every individual idea in theoretical physics is low. It is equally unsurprising that crackpots make all sorts of crazy judgments: they may conclude that loop quantum gravity is more valuable than e.g. Matrix theory, and they are ready to overwhelm you with thousands of similarly insane conclusions.

In this text, I don't want to analyze what various groups of crazy and stupid people think about various ideas that vastly exceed their ability to understand the real world. However, you may be puzzled: what else can we study? Well, I will try to describe the methods by which an ideal observer determines how valuable various ideas are. The assumption here is that a working mechanism would give the opinions of these semi-ideal observers much greater weight in determining the values, much like skillful speculators on the stock market influence prices more strongly.

We may doubt whether this task is meaningful at all. Is there such a thing as an idealized observer? The algorithms to evaluate ideas are ideas themselves. They keep on evolving, much like other categories of ideas: the ideal observer keeps on improving, too. The best we can hope for is to describe some relationships between ideas that many people in 2007 don't appreciate even though they should. The list of rules will probably be far from complete, but it may be much more comprehensive than some other lists you could be offered elsewhere.

Why do we care about the value of ideas? Well, if a sponsor of sciences wants to invest 100 million dollars, he should probably decide to fund the field where the investment leads to products of the highest value. Similar decisions are being made by individual thinkers and their managers. Once again, different people will end up with different numbers. But the comments below capture some principles that people should be aware of. Note that this application of the calculated value doesn't care much about the overall normalization of the value of all ideas. You may choose a normalization factor that makes the net value of all ideas produced within a year equal to the total investment in science and thinking in the same year.

Players: object and relationships

Thousands of years ago, markets were rather simple. You could buy or sell an animal, a slave, or a chunk of gold. Each of them had a certain price. At the beginning, you had to exchange things for one another. Eventually, money was introduced. It allowed the players to buy or sell products whose price is a fractional multiple of a cow. It also allowed them to decide whether they want to own many cows or the money instead.

Capital was found to be able to generate new capital. People realized that it makes sense to borrow money. Others realized that they may want to lend money. Many kinds of direct and indirect ownership of various things and many kinds of contracts materialized over the last centuries, and people introduced or learned about interest rates, taxes, bonds, stocks, options, patents, copyrights, funds, and higher derivatives, among dozens of similar concepts. We may argue that these concepts have become too numerous and complicated and that transactions with hot air and tricks have unfortunately become more relevant financially than actual objects with a value, but let us leave this topic for another essay.

Just like the relationships between assets and money, the relationships between ideas have gotten much more complex, too. Scientific papers have many more relations with each other than they have ever had in the past. They depend on a wide spectrum of work that was done with similar or different methods, under similar or different funding schemes, work that was shared by fewer people or more people. Is it still possible to view an idea as a counterpart of a cow that can be sold?

Well, it's complicated because the ideas are no longer pure objects. Most of them are better visualized as relationships between objects. Ideas don't have to be cows, slaves, or coats: many of them are bridges, trade routes, confidence, trade secrets. Other ideas are bridges in between bridges, methods to use different interest rates along trade routes, algorithms to influence perceived confidence, or wise tricks to exchange trade secrets. We don't want to get too deeply into the world of speculators which is why we will try to avoid their daily life and focus on the pristine universe of pure ideas.

Important aspect: probability of validity of an idea

In this section, I would like to argue that the value of a package of theoretical ideas is essentially proportional to the probability that the package is correct.

Consider two such packages, A and B, whose internal values happen to be equal, but A is one million times more likely to be true than B. If you assume that untrue ideas have no value, it is not hard to see that the expectation value of the worth of A is one million times higher than that of B. For example, if you believe me that loop quantum gravity is more than one million times less likely to be true than string theory, then - even if you generously assume that the internal value of loop quantum gravity is comparable to the internal value of string theory - it is completely crazy for the global society, from a quantitative viewpoint, to afford more than one milliresearcher of loop quantum gravity.
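To make the arithmetic explicit, here is a minimal sketch of the comparison. The probabilities below are hypothetical - the text only fixes their ratio at one million:

```python
# Expected value of an idea package, assuming (as above) that an untrue
# package is worthless: E[value] = P(correct) * internal value.

def expected_value(internal_value, probability_correct):
    return internal_value * probability_correct

# Hypothetical probabilities whose ratio is one million; equal internal values.
value_A = expected_value(1.0, 1e-3)
value_B = expected_value(1.0, 1e-9)

ratio = value_A / value_B
print(round(ratio))  # 1000000: A deserves a million times more resources
```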

You may object that invalid ideas may also have a positive value. I agree, and the algorithm above can easily deal with this fact. If an idea has a value even if it is incorrect, you should view such an idea as a conglomerate of two ideas: one of them assumes that the package is true and the other assumes that the package is not true. The total value of the conglomerate is more or less equal to the sum of the values of the two parts. It may a priori be questionable whether an assertion should be presented as "A is true" or "A is false": there is a symmetry between assertions and their negations. However, the calculation of the value breaks the symmetry. One of the two packages mentioned above leads to a higher value. By definition, the assumptions behind this package are described as "key assumptions being valid" while the assumptions behind the cheaper package are described as "key assumptions being invalid".
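As a sketch of this decomposition, with all numbers hypothetical, price the conglomerate as an expectation over the two mutually exclusive scenarios and label the branch carrying more value as the one with "key assumptions being valid":

```python
# Value of a conglomerate idea = P(valid) * value-if-valid
#                              + P(invalid) * value-if-invalid.
def conglomerate_value(p_valid, value_if_valid, value_if_invalid):
    return p_valid * value_if_valid + (1 - p_valid) * value_if_invalid

# Hypothetical package: 20% likely valid, worth 100 if valid, 5 otherwise.
v = conglomerate_value(0.2, 100.0, 5.0)   # 0.2*100 + 0.8*5 = 24

# The symmetry between "A is true" and "A is false" is broken by which
# branch contributes more expected value: here 0.2*100 > 0.8*5, so the
# "A is true" branch defines the key assumptions.
print(v)
```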

As our knowledge increases, we may refine our estimates of the validity of ideas. Just like the stock price of a bankrupt company converges to zero, the value of an idea that seems increasingly clearly incorrect may converge to zero, too. Ideas, much like operating systems, may also lose value when they are superseded by better ones. The useful content of the older ideas is "recycled" and used in a more complete, more unified, better, new framework. As Murray Gell-Mann says in his commercial for Enron, we have inherited some ideas that are unnecessary. We have to jettison that excess baggage in order to make progress.

Important aspect: internal rigidity of a system of ideas

As we have mentioned above, ideas can often be thought of as bridges. They are relationships between existing objects and concepts in the real world and/or the world of other ideas and theories.

In the real world, bridges should have high enough capacity and they should be robust. If the probability that a bridge collapses under a car exceeded 0.0001% or so, the bridge would clearly be useless. How do you get such numbers? Well, a driver is only willing to pay 1 dollar for crossing the bridge. If you assume that the people in the car who would be killed cost 1 million dollars, it is not hard to see that the probability of collapse had better be smaller than 0.0001% for the toll to exceed the expectation value of the damages. In reality, we have much higher expectations of a bridge because only a very small portion of that 1 dollar may be viewed as profit, while the indirect consequences of a collapse are way bigger than the 1 million dollars from that single car. You should really count not only the car but also the material costs of the bridge itself. Once you do so, you shouldn't subtract the mortgage from the 1 dollar toll because it would be double-counting, but let's not discuss these details.
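The break-even arithmetic here is a one-liner; the $1 toll and $1 million of damages are the figures used above:

```python
# Break-even collapse probability: the toll only exceeds the expected
# damages if p * damages < toll, i.e. p < toll / damages.

toll = 1.0              # dollars collected per crossing
damages = 1_000_000.0   # dollars of damages if the bridge collapses

break_even_probability = toll / damages
print(f"{break_even_probability:.4%}")  # 0.0001%
```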

Anyway, the message is that the bridges must be solid.

The same conclusion holds for theories that connect objects from the real world with each other or with theoretical concepts. This rigidity is actually the main aspect that determines the internal value of a package of ideas. Mathematicians are the first ones who should understand this conclusion. A proof of a mathematical theorem is a bridge - or a sequence of bridges - that connects the assumptions with the final statement. This sequence must be completely reliable for it to have any significant value for an idealized mathematician. Outside mathematics, even unreliable bridges may have a nonzero value because the requirements are not as strict as they are in mathematics: they follow something like fuzzy logic. In fact, a quantitative calculation similar to the one for the real-world bridge discussed previously may be applied to figure out the approximate critical "probability of failure" above which a theoretical bridge becomes useless or worse. At any rate, it is clear that bridges made out of fog - like the content of virtually any paper by Lee Smolin - don't have any significant value.

Important aspect: relevance for other ideas that have already been rated as valuable

Houses in Manhattan are expensive. Why is it so? Well, it's because they can be used to generate a lot of profit, for example through rents. Why do people pay high rents in Manhattan? Well, it's because they can afford it: there are many ways for them to earn a lot of money in Manhattan. The competition between the people who need to live there or who need offices for their businesses elevates the prices. The value of a house can't be estimated if you don't know the location, the context, or the environment.

Is the same thing true in the case of ideas? You bet.

If one can build a reliable bridge between a new idea and some old ideas whose high importance has already been established, such a fact obviously increases the value of the new idea. How could it be otherwise? The economic considerations are analogous to those in Manhattan.

Let me give you an example. In the media and on the blogosphere, you often read that it "doesn't matter" whether the equations of string theory are relevant for the description of heavy ion physics or confinement, or whether one can give a stringy geometric description of other important physical processes such as the Higgs mechanism or chiral symmetry breaking. You also often read that it is irrelevant whether the mathematics behind the theory is tightly connected with portions of mathematics that have been recognized as fundamental, such as mirror symmetry, enumerative geometry, and many others.

The people who write these "doesn't matter" things are as sensible as people who say that houses in Manhattan should cost the same as houses in Montana. More concretely, they are complete nutcases. While it is natural to expect that there exist many more people with a rudimentary understanding of the real estate market than people who understand the basic facts about the workings of theoretical physics, I am always flabbergasted whenever these imbeciles such as Peter Woit are being read not only by readers who are expected to be ignorant but even by some people who are paid as professional mathematicians or physicists.

Someone may make huge investments in Vanuatu, assuming that it will become the next Manhattan, but he shouldn't expect that everyone else will buy his noiseless houses for octopi on that island for billions of dollars: other rational people usually realize that his guess is unlikely because Vanuatu is not too connected with other places where land is highly valuable. And as we have explained previously, bridges made out of fog don't count.

The general principle that the price of an object increases when it is connected to other expensive objects may be identified not only in the real estate market, other markets studied by economists, and the market of ideas but at many other places, too. This principle underlies Google's PageRank algorithm that determines the importance of web pages on the Internet: a page with incoming links from many other important pages becomes important, too. Similar algorithms have been suggested to measure the importance of scientific papers: a paper A that is cited by another paper B that becomes important is more relevant than a paper that is only cited by an obscure paper C.
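A minimal power-iteration sketch of a PageRank-style score illustrates the principle; the four-node link graph below is invented for illustration:

```python
# PageRank by power iteration: importance flows along links, so a node
# linked from important nodes becomes important itself.

def pagerank(links, damping=0.85, iterations=50):
    nodes = list(links)
    n = len(nodes)
    rank = {node: 1.0 / n for node in nodes}
    for _ in range(iterations):
        # Every node gets a base share; the rest flows through links.
        new_rank = {node: (1 - damping) / n for node in nodes}
        for node, targets in links.items():
            share = rank[node] / len(targets)
            for target in targets:
                new_rank[target] += damping * share
        rank = new_rank
    return rank

# Hypothetical citation graph: C is cited by both A and B, D only by C.
links = {"A": ["C"], "B": ["C"], "C": ["D"], "D": ["A", "B"]}
ranks = pagerank(links)
print(max(ranks, key=ranks.get))  # C - cited by two other pages
```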

It can't be otherwise. However, you might protest that the resulting value may be a consequence of groupthink and historical coincidences. Couldn't the network of skyscrapers have been built in Montana instead? Well, the location in Montana is not as strategic as one in Manhattan because of rivers and oceans. While you might correctly argue that these characteristics are secondary and one could almost certainly find another equally good place where New York City could have been built, you should realize that the geographic limitations of the world of mathematical and physical ideas are far more constraining. Extraterrestrial civilizations would probably have to find very similar concepts and theories as our civilization has: for example, they would end up with the same list of simple compact Lie groups. In such a severely constrained virtual world, the value of diverse ideas and bridges is far more objective in character than in the real estate market on the East Coast, which is why the proximity of ideas is much more important and fundamental than it is in the real world.

Just like the price of houses may increase because of this effect, it may decrease, too. Several thriving Czech towns became unimportant as soon as a new superhighway that replaced an older road bypassed them. The same thing occurs in the world of ideas. If a package of ideas C has been important partly because of its links to another package D and this other package D turned out to be wrong or unimportant, the value of the package C will decrease, too. Note that this link is not the only feature that determines the value of C, so you should avoid absolutist verdicts. But some influence does exist here. It can't be otherwise.

A technical detail: sharing of the value of an idea, priority, and reusability of ideas

In reality, almost no idea is quite new. Most of them are applications of older ideas, mutated variations of previous ideas, or they are at least inspired by some other ideas. These inclusive relationships should never be forgotten. They are similar to the relationships already taken into account in the section about the "expensive neighborhoods". But in this section, we talk about a somewhat different situation, one in which a new idea overlaps with certain old ideas so that they shouldn't quite be counted as different entities.

If you use music by Madonna in your new movie, you may be forced to pay her royalties. She effectively owns a part of your movie. A similar situation frequently occurs in the world of thinking. A new discovery E is often so powerful that it allows many similar discoveries to be made much more easily. Needless to say, the author of E may take credit for a part of the newer discoveries. Also, if E is discovered independently by several authors, they share the credit. The more authors there are, the less credit each of them receives.

A discovery may also be made by several authors who can't quite claim to be independent but who are effectively independent. Christopher Columbus repeated some of the Vikings' discoveries but, in a given context, he was effectively the discoverer of America. In this example, the role of society is unquestionable because the importance of Columbus' re-discovery would be much lower if it had not allowed the Spaniards, Portuguese, and Englishmen to do what they did. In this text, I am trying to focus on objective measures and avoid appraisals that are social constructs as much as I can.

Ideas and concepts may be used and reused. An idea often has applications in vast areas of human activity and knowledge. The shares that belong to this idea are distributed over a highly fragmented region of the multi-dimensional space of human knowledge. But you should always be able to look at this multi-dimensional space from the right angle that reveals the idea as a compact object.

Subtlety: discount rate for ideas

In economics, profit in the far future is less relevant than the same profit that occurs right now. The decrease of importance may be thought of as an exponential one whose rate may be pegged to the interest rate or a similar quantity. Recall that with a 3% discount rate, the perceived value of resources available, lost, or created in 2057 is about four times lower than their present value.
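The "about four times" figure is easy to check: compounding 3% over the 50 years between 2007 and 2057 gives the discount factor directly.

```python
# Discount factor between 2007 and 2057 at a 3% annual rate.
rate = 0.03
years = 2057 - 2007
discount_factor = (1 + rate) ** years
print(round(discount_factor, 2))  # 4.38, i.e. "about four times lower"
```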

The same discounting obviously works in science, too. A discovery that can be made quickly is more joyful and useful than a discovery that will be made in 2057. At the beginning of the text, we indicated that the "overall value of all new ideas" may be pegged to a percentage of the overall GDP. With a roughly constant world population, this implicitly confirms the expectation that a scientist's salary is going to increase, too. If a particular discovery D requires the same expected number of man-hours regardless of when it is made, it is clearly cheaper to make the discovery as soon as possible. However, you should discount the value of the money paid to the future scientists, which could cancel the effect. Because of the exponential dependence, small discrepancies in the annual rates may dramatically change the conclusions about anything in 2057. Planning 50 years into the future is a shaky and largely irrational enterprise; even communists only had five-year plans.

The intermediate conclusion we have made - that it doesn't matter when the scientists make the discovery - doesn't yet include the discount rate for the value of the discovery itself. When you include it, it becomes true once again that the discoveries should be made as soon as possible.

That's a likable conclusion except that it is simply impossible to speed up the rate of discoveries too much. Certain discoveries are only made by a certain "top" of the workers in the field. For example, if you decided to increase the number of theoretical physicists ten-fold, the rate of important discoveries would probably not increase much. We may argue that the number of people who work in some subfields is already well above the level where the discovery rate approaches a plateau as a function of the number of people. But this conclusion holds for other human activities - even outside science - too. It especially holds for bureaucracy so scientists certainly have no reason to feel a special guilt.

With a sufficient number of thinkers in a certain field, the expected discovery rate per person, D, is pretty much at its maximum, determined by the world population and the distribution curves of various abilities. If you multiply the discovery rate D per person by the average value V of a discovery, the product DV should be close to a universal, field-independent constant in a hypothetical social equilibrium.

Funding: a hypothetical market of generous sponsors

It would be fun to write down more detailed algorithms that evaluate the value of various ideas and discoveries and to run them. You might also imagine multiplying Mike Lazaridis by 100 and letting these folks compete with each other in managing the production of ideas. The result of such a market approach would obviously differ from the opinions of an idealized theorist, but it might provide us with a semi-realistic benchmark to quantify how meaningful various investments in scientific fields are and which of them seem much wiser than others.

And that's the memo.

