## Friday, March 11, 2016

### Computers' dominance in Go was inevitable

Which activities will preserve the "human advantage"?

This topic has hijacked a thread on quantum gravity, so let me dedicate a special blog post to the issue. A computer (Google's AlphaGo) has repeatedly beaten Lee Sedol, the world's best human Go player of the last decade, denied him his \$1 million bounty, and some people can't believe their eyes.

Well, I congratulate the programmer(s). But I feel vindicated, too. I have always considered the idea that "Go is so spectacularly human and complex that computers wouldn't become the champions for centuries, if ever" to be an idiotic religion. For some background on this "Mystery of Go" religion, check e.g. this 2014 article in Wired.

The religion started with a "Mystery of Go" article in Nude Socialist in 1965 and is primarily justified by the high branching factor of Go. You only have about 35 possible moves in chess on average; the number is about 250 in Go. So old-fashioned tree-search algorithms – e.g. Minimax – become costly much more quickly in Go than in chess.
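To see why the branching factor matters so much for Minimax-style search, here is a minimal sketch. The branching factors 35 and 250 are the averages quoted above; the depths are arbitrary illustrative values:

```python
# Growth of a naive full game tree: b**d positions at depth d, where b is
# the average branching factor (35 for chess, 250 for Go, as quoted above).
def tree_size(branching_factor, depth):
    return branching_factor ** depth

for depth in (2, 4, 6):
    chess, go = tree_size(35, depth), tree_size(250, depth)
    print(f"depth {depth}: chess ~{chess:.1e}, Go ~{go:.1e}, "
          f"Go/chess ratio ~{go / chess:,.0f}x")
```

Every extra ply multiplies the Go tree by roughly seven times more than the chess tree (250/35 ≈ 7), so a search that is merely expensive in chess becomes hopeless in Go.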

You know, the problem with the religion is that Minimax isn't the only possible algorithm. The human player isn't testing a vigintillion of possible future scenarios. He's basically looking for some patterns to nicely evaluate whether a given arrangement of the board is promising. But there is nothing "intrinsically human" about that approach. It's still some strategy, a type of computation, and a computer can surely learn to do such things as well – and with superior brute force, it will become better.

Those of you who follow the contests at Kaggle.com know that there are hundreds or thousands of amateurs – and a similar number of professionals – who can teach computers to recognize all kinds of patterns and solve problems that others would consider "human".

I think that the "Mystery of Go" religion has the same roots as the anti-quantum zeal of many people. Just like people incorrectly think that the ultimate theory of Nature "should" be realist (i.e. classical), they believe that the algorithms used by computers "must" be old-fashioned, such as the mechanical analysis of search trees.

But neither of these assumptions is correct. The fundamental theory of Nature is known to be non-realist i.e. non-classical i.e. quantum. Similarly, many intelligent modern algorithms use very different strategies than the mechanical check of every possible scenario in a tree. They are much less "mechanistic", or perhaps much less "classical mechanistic", methods. In these methods, we can't quite describe what's going on. Lots of things are going on and they conspire in such a way that they may end up producing a clever answer to a difficult problem. I chose the quantum-neural analogy because the synergy between the processes in a neural network is "morally" analogous to quantum interference. Both are likely to produce the right answers by some "synergy" even though you can't divide the process into sharply well-defined classical steps.

OK, again: We're primarily talking about things like neural networks as the class of "modern" algorithms.
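As a toy illustration of the idea – a network learning a pattern from examples rather than following a hand-written rule – here is a minimal single-neuron (perceptron) sketch. The task (recognizing a logical AND pattern), the learning rate, and the epoch count are purely illustrative choices of mine, not anything resembling AlphaGo's actual networks:

```python
# A single artificial neuron (perceptron) learning a toy pattern from
# examples. The AND task, learning rate, and epoch count are illustrative
# choices only -- a sketch of the principle, not AlphaGo's architecture.
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]   # weights, one per input
    b = 0.0          # bias
    for _ in range(epochs):
        for x, target in samples:
            out = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - out            # the perceptron update rule
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

# "Pattern": the output fires only when both inputs are present (AND).
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(samples)

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

print([predict(x) for x, _ in samples])  # the learned pattern: [0, 0, 0, 1]
```

The point is that nobody writes down an explicit rule for the pattern; the weights organize themselves during training. The same principle, scaled up enormously, underlies the evaluation networks used in modern Go programs.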

The "Mystery of Go" religious people tend to think that humans have some monopoly over those algorithms, that those algorithms are perhaps impossible to replicate, not accessible to scientific analysis, and things like that. Needless to say, all these pillars of their belief are absolute rubbish. Neural networks are perfectly accessible to scientific analysis, they may be replicated, and they are being replicated on silicon chips all over the world.

Neural networks aren't even a "new" development. If you look at some history of neural networks, you will see that it starts in the 1940s – pretty much just like the von Neumann architecture of computers. In the 1970s, people already had usable and useful neural-network programs.

Sorry, believers in particular and humans in general, but humans don't have any monopoly here.

It's simply untrue that the advantage that allows Lee Sedol to play Go well is some "totally magic, technologically unreachable emotional divine intuition". Whether you like it or not, what Lee Sedol is good at is some kind of brute force! And contemporary computers are already better at brute-force tasks. The search for many kinds of patterns of somewhat well-defined types in a large ensemble of possibilities is a discipline in which computers, not humans, have a clear advantage.

When we say the word "computer", the "Mystery of Go" religious people are actually imagining some "naive, stupid computer performing only some old-fashioned, easily understandable, mechanistic algorithms". But that's not equivalent to the word "computer". A computer doesn't have to be stupid, it doesn't have to perform merely straightforward, readable tasks. It doesn't even have to act deterministically, and so on. It can do all the things that the human brain can do. To talk about "intuition" – and to expect that it is enough to keep these things mysterious – means to be prejudiced against scientific and technological progress. People may use methods that deserve to be called "intuition" but even when it's so, it doesn't mean that this "intuition" cannot be analyzed in detail and replicated. It surely can be. And if some aspect of their intuitive approach can't be measured or replicated exactly, it's probably because the exact details don't matter for the functionality. An approximate replication will be capable of solving the tasks just as well.

While both the von Neumann architecture of computers and neural networks have been discussed by scholars since the 1940s, the von Neumann computers have been vastly more widespread. Why was it so? Was it because there's something "intuitive" or "purely human" or "divine" about the features of humans that technology simply cannot mimic?

No. One reason was that it took some time for computer scientists and programmers to get used to this new "not quite transparent" paradigm. But another reason is simple yet surprising for many: neural-network programs weren't widespread because computers lacked the required brute force – and humans were better when it came to the kind of brute force that neural networks need.

This is another point that is widely misunderstood. When you say "brute force", some people automatically conclude that computers must have been superior for a very long time. But that's simply not the case. Biologists like to quantify the memory of the human brain as 2.5 petabytes. That is roughly a million times the RAM of your Windows 10 desktop computer; or 3,000 times its hard disk.
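A back-of-envelope check of these ratios, assuming a desktop with 2.5 GB of RAM (my illustrative figure; the hard-disk size is simply whatever the "3,000 times" ratio implies):

```python
# Back-of-envelope check of the quoted ratios. The 2.5 GB of desktop RAM
# is an assumed illustrative figure; the disk size is whatever the quoted
# "3,000 times" ratio implies.
PB, GB, TB = 10**15, 10**9, 10**12
brain = 2.5 * PB              # the biologists' estimate quoted above
ram = 2.5 * GB                # assumed desktop RAM
disk = brain / 3000           # implied hard-disk size
print(brain / ram)            # -> 1000000.0 (a million times the RAM)
print(round(disk / TB, 2))    # -> 0.83 (i.e. roughly a one-terabyte disk)
```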

I don't believe that this estimate should be taken too seriously. I don't even know how the memory stored in all these vague ways could be exactly quantified – and whether it's well-defined. And I guess that this capacity is some "raw limit" which is used inefficiently by the biological brain. But one thing is true: you need something like petabytes if you want to replicate the behavior of the human brain with ease.

It's not hard to see why your Commodore 64 would have had problems beating human Go players 30 years ago. It had 64 kilobytes of RAM – and just 40 kilobytes were available to BASIC – and most of us had to load additional data from tapes; it took minutes to load the 64 kilobytes into RAM. Such a small computer could do many accurate tasks in ways that surpassed humans. But when it comes to dealing with a large number of potential patterns and things like that, the brute force of the Commodore 64 was negligible relative to the human brain.

We have entered the era in which computers are becoming better in all these brute-force parameters. A petabyte isn't impossible for one computer. A computer may surely use it more efficiently than the human brain. And I am pretty sure that a computer can perform a much higher number of "operations" per second than the human brain. The brute force is available and the tasks that required "human-like" algorithms have become accessible to computers. Computers have beaten people in Go, in locating photographs on the map of the Earth according to their content, and many other things.

All the tasks that are qualitatively similar to Go will surely fall, sooner rather than later. Computers may possibly beat humans in many occupations. I agree with this article in Science claiming that the victory in Go wasn't such a big deal because these constant improvements of computers are inevitable and Go isn't the "grandest task", after all.

I agree with Dana Mackenzie that things like Cortana (this is why I picked Cortana and not a competitor) are ultimately a more nontrivial battlefield for artificial intelligence. The task to "be as helpful as a human assistant for the owner of the smartphone" doesn't deal with as high a number of mathematically well-defined patterns as Go; but it depends much more strongly on algorithms that have so far been considered "human". And these personal assistants (at least Siri) surely help some companies earn greater amounts of money than the Go software does.

Various simple occupations – such as janitors, writers of loop quantum gravity papers, and so on – may be replaced by computer programs soon. I think that there are typical occupations for which humans will remain paramount.

One reason is that we need humans because of the human touch. Two days ago, I watched Spielberg's wonderful and touching movie "A.I. Artificial Intelligence" about a little boy (looking like a perfectly normal biological boy) who was man-made and silicon-based but who fell in love with his mother. She had to abandon him but he survived, dreamed about becoming human (because that's how his mother could love him), pursued a fairy, got frozen in an ocean, and 2,000 years later, some advanced A.I. community found him and told him that he was de facto human because he was the only robot who remembered when humans were around. So they reconstructed his mother from the DNA in her hair for one day (the time limit was due to a technical glitch in their spacetime-based DNA reconstruction), the happiest day in his life. I am still crying even when I describe this plot.

(Super 8, another Spielberg-produced movie, which I watched the previous day, was also very good. I hadn't seen either movie before. In Super 8, a semi-artificial intelligent underground monster was present as well but it was ultimately the pure love between two children – whose fathers had hated each other – that caught my heart.)

So when people want to deal with real humans, computers will probably be at a disadvantage for a while. At least equally importantly, I believe that human string theorists, the human programmers at Google (I mean only those at the top), etc. will be needed for quite some time. I hope that their work will become much more effective once they team up with computer power in completely new ways (a direct communication between the brain and a computer etc.). And yes, I can imagine that in some distant future, computers will be much better at those "ultimate creative tasks" as well.

But yes, I do think that in coming decades, lots of people will see how trivial their occupations actually are – how they can be replaced by AI. Drivers? Teachers? Hairdressers? Clerks? Soldiers? Cops?

CapitalistImperialistPig has proposed five ambitious "Beyond Go" tasks that intelligent computers should conquer in the coming years. (Please feel free to ignore the last, environmentalist, nonsense.) I find it plausible that computers will soon be good enough to "actually" solve these problems.

But there's one more hurdle, the hurdle of responsibility. In some contexts, we won't be ready to allow the computer to decide because we don't want to be led by a "different species" or because we don't know how the computer could be held accountable. But once we get rid of this fear and the anti-computer prejudices, it's plausible that we will allow the computers to be our bosses and prime ministers, too.

As I wrote in previous blog entries on this topic, I am not afraid that the Earth will be conquered by computers against the human will. I think that computers will only become political leaders – and "powerful" in similar respects – when humans allow them to do so. If this transition takes place prematurely, I am sure that it will be the fault of some humans, not the computers themselves.

In the end, the transition to a new world in which computers have "human rights" and similar things will be analogous to many social transitions we have witnessed – e.g. the new paradigm that a woman is basically like a man and should also be allowed to do manly things like driving and voting. ;-) There will be people who will prefer to preserve the old world in which computers are obliged to be just slaves and maids; and there will be others who will think that this is silly and that computers must be allowed to do everything at which they can be as good as humans. Because the computers' preparedness to do all these "formerly human" tasks and responsibilities will keep on increasing, I think it's just a matter of time before society allows the silicon-based and mixed (etc.) "citizens" to do everything that humans do these days.