The cataclysm of December 21st, 2012 is less than a month away, and I am regularly asked by people in real life as well as on the Internet whether a particular doomsday scenario they read about will actually happen. They are just being polite when they ask; of course, even if I explain to them that they don't have to worry, they keep on $hitting their pants anyway. ;-)
Nude Socialist, Fox News, BBC, AP, and the rest of the pack told us about CSER.ORG, a center founded by Lord [Martin] Rees of Ludlow, among others (including a co-founder of Skype), that will study the huge one-time risks that could make us extinct and that everyone allegedly underestimates. What are they? Well, they are:
- robot uprising
- Hiroshimas all over the world
- artificial germs making all of us sick and die
- global warming
I am always amazed by how disproportionate an impact various crazy people decorated by the queen may have on society.
A senile woman who has lived a materially wealthy life – greetings, Elizabeth – attaches a medal to a Martin. He goes to the pub, gets really high, talks to his friends about the four greatest threats to humanity, and once he has collected the answers from his hopelessly drunk buddies, they establish a center that instantly attracts at least millions of dollars to study the four phrases pronounced in the pub.
Note that the identity of the four most dangerous one-time threats to mankind is "inserted" as a defining description of the new center. So if you found out that there existed a much more serious or much more likely threat that could exterminate life on Earth, you wouldn't be welcome. Sorry, but this is not the scientific approach. It's a corrupt scheme to use money and influence to promote and strengthen predetermined memes, fears, and prejudices.
In hundreds of articles, this website – and many others – has demonstrated that the idea of a threatening "climate change" is a preposterous delusion believed by the uneducated and promoted by ideologically and financially motivated people who don't really believe what they're saying. What about the other three threats?
Concerning the nuclear holocaust, I think that there is a very limited number of countries possessing a nuclear arsenal capable of "truly global" destruction. And to activate such an arsenal in this global way, the active and deliberate collaboration of many people would be needed. It can't quite be excluded that weapons could be activated so that almost all of Russia is flattened. But with apologies to our Eastern Slavic readers, even this would still be far from a threat of human extinction. I believe that there are no real plans that would detonate the weapons "everywhere", which is what would be needed for mankind to go away, and it wouldn't be easy for a group of outsiders to launch such a process. And even if you could explode nuclear warheads in every square mile of the Earth's surface, many people and nations would probably still have and apply tricks to save their skin.
We may see some local use of nuclear weapons in the foreseeable future, but if we do, we will be reminded how extremely far a single nuclear warhead (or three of them) is from human extinction. It's powerful, but it's just a somewhat stronger weapon, not a button able to destroy a planet.
There are various germs, and new ones will be produced both by Mother Nature's evolutionary processes and by biologists. I am actually not sure which of them represents a more genuine threat to us at this moment, although I know which of the two threats is growing more quickly. Again, it's hard to imagine how new viruses or bacteria could bring about global extinction. Being local, they can't be everywhere at once. If the new germs act too strongly or too quickly, people and nations will immediately introduce harsh measures to protect themselves against the infection (and the infected ones).
Nature is making progress in improving the resilience of the evil germs, but this progress has arguably not sped up much. Our ability to artificially engineer viruses or bacteria has improved dramatically and will improve even more quickly in the future, I guess. But the ability of biologists to do "good things" – to detect and kill the germs and diseases – is improving equally quickly. So even though the threats may have become more sophisticated, our ability to resist has improved by an even larger increment. The net result is, I believe, that mankind has become and is still becoming more resilient to infectious diseases, including the (hypothetical) man-made ones.
I am much more worried about "gradual" negative developments in our physical, intellectual, and moral qualities.
Machines can do lots of things, and they're already more intelligent than we are according to many somewhat useful measures of intelligence (though clearly not all of them). However, we're still in the regime in which the machines are our slaves.
We must realize that, unlike animals, machines haven't evolved to egotistically protect their own interests. They have "evolved" (in engineering labs) to serve the interests of some humans ever more efficiently – those who built them or those who paid for their construction.
Despite the immense technological progress I am expecting, I don't see a reason why anything should change about the previous paragraphs. Machines are, by definition, man-made objects, and the reason why people build them is that these machines should bring something good to people, at least some of them. The logic is watertight.
So even though the power of humans is already immensely amplified by technology – and this amplification will get even stronger in the future – people are still ultimately in charge of things, because that's why and how the technology was built and improved.
Of course, it's plausible that there is already a lab building robots "who" are trained to protect their own interests – rather than the interests of [some] humans – and to prepare some kind of "robot uprising". That's great, but these robots are still tools belonging to the crazy engineer who is building such a thing. So this person and his assets may be considered the "true enemy".
The intent ultimately comes from a human or humans. I can't imagine how it could be otherwise. As long as we are not worried about human rights for robots, we shouldn't be worried about anthropomorphic threats posed by robots, either. And if we ever wake up in the future to find that robots are (at least) our peers because their artificial intelligence resembles ours, our logic will be transformed and our feelings about our identity will be blurred, too. We won't think of robots as someone "completely alien".
In fact, I am sure that "discrimination against robots" will be viewed as a bad thing by many people – in the same sense that many people fight against "discrimination against other races" and similar things. When artificial intelligence gets this advanced, the problem of "how to resist a robot uprising" will be transmuted into the moral problem of "should we try to suppress the robots' free behavior", anyway. Martin Rees' center will be viewed as a controversial center defending some kind of "anti-robot racism" and will surely lose its (now undisputed) label of a "center helping every human fight certain threats". After all, all of us – Americans, Chinese, men, women, and Hyundai robots – will be neighbors and fellowmen who deserve dignity. If robots ever take over, the reason won't be our lack of knowledge about how to stop the revolt but our lack of will to do so.
Interdisciplinary centers produce babbling, not hard science
While doomsday scenarios are lots of fun to think about, I have explained some of the reasons why I think that a center actively investigating these risks is an irrational enterprise. But even if the threats were genuine, I would have serious doubts that Martin Rees' center would attract the most relevant nuclear experts, microbiologists, atmospheric physicists, and artificial-intelligence experts to lead the "fight against these threats". The center looks like an insanely multidisciplinary institution, and I simply don't believe that the most relevant, advanced, and reliable insights on bacteria, nuclear weapons, artificial intelligence, or atmospheric physics would be born in such an environment, one full of other distractions.
It seems much more likely that the most important discoveries relevant to the four threats – and to other threats – will be made by scientists who focus intensely on their own field and who are trying to find important truths and mechanisms, not necessarily constrained by the predetermined motivation to "save mankind".
This was my last piece of evidence that Martin Rees' center is a waste of money.