Saturday, January 01, 2005

Happy New Year 2005

During the day, I hope to be able to post some pictures from New Year's Eve. Oh, I also wanted to write an article about Crichton's speech about the cancer called "consensus science". I've already written it once, but the internet connection collapsed before I could submit the changes. ;-)

Richard A. Posner, whom I recently defended at Sean Carroll's blog, studies various catastrophes in the New York Times:

He evaluates whether it makes sense to try to prevent various catastrophes, and his most serious examples of catastrophes are the following:

  • a high-energy accelerator seems to be the biggest disaster ;-) because it would kill 6 billion people with a probability of "1 in 5 million per year", as the leading U.S. newspapers argue
  • global warming
  • collision with an asteroid
  • bioterrorism

Let me assume that Prof. Posner's idea is to multiply the probability that a catastrophe happens by the cost of this catastrophe, to get the expected cost, which should then be compared with the cost of prevention. One can argue that it is better to be insured against something, and therefore it may be useful to pay a little bit more than the amount calculated in the previous sentence - to protect life against major disasters. Obviously, there is no objective calculation that would tell you how well insured you should be.
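
Just to make the bookkeeping explicit, here is a minimal sketch of this comparison in Python; the function names and all the numbers in it are my own illustrative placeholders, not figures taken from Posner:

    # A minimal sketch of the expected-cost reasoning described above.
    # All numbers are invented placeholders, not Posner's actual estimates.
    def expected_cost(annual_probability, cost_if_it_happens):
        """Expected annual cost of a catastrophe: probability times cost."""
        return annual_probability * cost_if_it_happens

    def worth_preventing(annual_probability, cost_if_it_happens,
                         annual_prevention_cost, risk_aversion=1.0):
        """Compare the prevention cost with the (risk-weighted) expected cost.

        risk_aversion > 1 encodes the 'insurance' idea above: paying a bit
        more than the bare expected cost to be protected against major,
        irreversible disasters. There is no objective way to fix this number.
        """
        return annual_prevention_cost < risk_aversion * expected_cost(
            annual_probability, cost_if_it_happens)

    # Invented example: a 1-in-10-million annual risk of a 100-trillion-dollar
    # loss versus a prevention program costing 5 million dollars per year.
    print(expected_cost(1e-7, 1e14))          # 10 million dollars per year
    print(worth_preventing(1e-7, 1e14, 5e6))  # True: prevention is cheaper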

Well, this was the easy part of the story. The hard part of the story is to actually calculate the probabilities of the various catastrophes and their total costs. I apologize, but Prof. Posner seems to believe that this is the less intellectual part of the task. The reality is that this is more or less the whole task, and all the answers he obtained seem to be flawed, at least partially. If someone from Brookhaven told him that the annual probability that they would create "strange matter" that would eat up the planet is 2 x 10^-7, then they were either joking, or they had lost their minds. This would be statistically equivalent to murdering 1,200 people a year! ;-)
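
And just to spell out the arithmetic behind the 1,200 figure, using only the numbers quoted above:

    # The "statistical murder rate" arithmetic from the paragraph above,
    # using the quoted annual probability and roughly 6 billion people.
    annual_probability = 2e-7
    world_population = 6e9

    expected_deaths_per_year = annual_probability * world_population
    print(expected_deaths_per_year)   # 1200.0 statistical deaths per year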

The probability can be argued to be smaller than 10^-15 for the whole lifetime of our civilization, just by observing that the stars around us don't seem to have been eaten; the only explanation consistent with the predicted catastrophe would be that the "strange matter core" actually stops growing once it's as large as the Earth. But even if this matter existed, such a conspiracy would require fine-tuning equivalent to the ultrasmall probability mentioned previously.

Frank Wilczek et al. argued that the process is most likely excluded because otherwise the high-energy cosmic rays would have already destroyed the Moon and other objects:

The probability of a collision with a superlarge asteroid is also infinitesimal - roughly one in tens of millions of years. I have wasted more time than appropriate on the "problem" called global warming, so let me not open that topic again. But Prof. Posner also wants to impose new censorship laws on science to fight bioterrorism. I can imagine circumstances under which this would be a very reasonable proposal; they just do not seem to hold right now.

He only wants to investigate the big questions, and therefore he does not intend to waste his time with "details" such as 9/11 or - apparently - the recent earthquake in Asia that has probably killed as many as 500,000 people: Indonesia now estimates 400,000 casualties in that country alone.

12 comments:

  1. hey there! saw that your blog marked me on BE ... thanks for that!

    i'll admit i don't know a lot about science ... i do, however, have a friend named dan out in dc who is a physicist. i think he works at georgetown ...

    robyn.
    spiritofone.typepad.com

    ReplyDelete
  2. I am curious as to who is on that Brookhaven risk assessment panel, how they reached their conclusions, and why this apparently very high risk was so cavalierly disregarded.

    About strange stars though - how would we recognize them? What would their formation process look like? How sure are we that they don't exist or might not even be common?

    A lot of catastrophes are things we can't do much about - a nearby supernova or a really big comet with our name on it, for example.

    It is pretty interesting to look at human-caused catastrophes though. We have plenty of examples: the extinction of megafauna caused by human colonization of the Americas and Australia (to mention two places), the catastrophic plagues that killed most Native Americans after the arrival of Columbus, the (still continuing) creation of epidemic diseases due to animal domestication and ecological destruction of the environment, such as occurred in the ancient Near East from irrigation agriculture. These were all beyond the comprehension of the people involved, so we can hardly blame them. That is less true of the catastrophes that we are preparing today, and those who dismiss the threat without bothering to understand it deserve to be blamed.

    ReplyDelete
  3. Someone at Brookhaven or elsewhere just made a wrong calculation and got this ridiculously huge probability, and this judge shamelessly picks the largest estimate because it's the most interesting one for him.

    You don't need to study all stars. If such a process of "strangification" could be initiated at Brookhaven, it would have happened inside our Sun - 5 billion years would be enough. This "threat" simply contradicts the existence of the Sun and other things in the Universe. It's an academic topic for long evenings.

    In all these cases, the real threat was many orders of magnitude smaller than what would justify those expensive preventive programs - or what could justify stopping the accelerators.

    ReplyDelete
  4. I don't see how the Sun is a good model, Lubos, as core temperatures and densities are way too low to form a high-temperature quark-gluon plasma. There are doubtless very high energy cosmic rays in the neighborhood, but they again would not encounter high densities before they thermalize.

    ReplyDelete
  5. Hi Capitalist,

    OK, I am not able to calculate the rates exactly right now. If the temperature somewhere in the Sun is of order 10 million kelvins, that corresponds to roughly a keV (k_B T ≈ 0.9 keV for T = 10^7 K). Then the Boltzmann suppression may be insufficient to make the Sun less "luminous" than RHIC.

    Wilczek et al. emphasized that the high-energy cosmic rays would have already eaten the Moon if the mechanism were possible:

    http://arxiv.org/abs/hep-ph/9910333

    More generally, I think that it is not reasonable to imagine that *anything* that humans are doing is a completely unprecedented physical process in the Universe. The exotic particles we generate are rare in the Universe today, but they still exist.

    All the best
    Lubos

    ReplyDelete
  6. This comment has been removed by a blog administrator.

    ReplyDelete
  7. You'd better look at
    http://arxiv.org/abs/hep-ph/0009204

    To be honest, the benefits gained from running RHIC seem to me [as a theoretical physicist] rather small. I might be willing to risk the world to find out whether string theory is true, but to find out some minor details about nuclear physics? No!

    ReplyDelete
  8. I never said whether the benefits of RHIC were large, and I probably agree that they're not. The topic here was whether RHIC carried a significant risk of destruction, and the answer is No.

    All the best, Lubos

    ReplyDelete
  9. And as far as I am concerned, hep-ph/0009204 looks like pure politics. I just don't see any physics in the article. One can always claim that some risks are underestimated, but if the author does not give any rational arguments, then such claims are uninteresting. Incidentally, if you click on "cited by", you will see that they were also uninteresting to everyone else.

    He criticizes some people for having argued that the probability was small because a particular model with some parameters led to this result. I understand this comment, but on the other hand, Wilczek et al., for example, don't rely on any obscure models, only on some rather general observations.

    ReplyDelete
  10. I'm not sure that the expectation value is a very useful measure in such low probability/high payoff situations (e.g., the Saint Petersburg Paradox).

    ReplyDelete
  11. One could say "I am not sure whether it's useful to study these highly unlikely and 'big' situations at all". But once you study them, the expectation value is the only natural, rationally justifiable way to count.

    For others who also did not know what the Leningrad paradox was:

    http://mathworld.wolfram.com/SaintPetersburgParadox.html

    I am not sure why you think that this game - a gedanken experiment - reduces the interest in studying expectation values. If you read the page above, you will see that all the discussions in this context *are* about the average yield etc. (A small simulation of the game appears below, after the comments.)

    ReplyDelete
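
For the record, here is a tiny simulation of the Saint Petersburg game discussed in the last two comments. It is only my own sketch, using the common convention that the payout is 2^n dollars if the first head appears on the n-th toss; the point is simply that the empirical average payout does not settle down but tends to keep drifting upward as more games are played, reflecting the formally infinite expectation value:

    import random

    # Saint Petersburg game: toss a fair coin until heads; if the first head
    # appears on toss n, the payout is 2**n dollars. The expected payout is
    # the sum over n of (1/2)**n * 2**n = 1 + 1 + 1 + ..., which diverges.
    def play_once():
        n = 1
        while random.random() < 0.5:   # tails with probability 1/2
            n += 1
        return 2 ** n

    random.seed(0)
    for games in (10**3, 10**5, 10**6):
        average = sum(play_once() for _ in range(games)) / games
        # the empirical average typically keeps growing with the sample size
        # (roughly like log2 of the number of games) instead of converging
        print(games, average)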