If you missed it, in recent years it became clear that Artificial Intelligence (AI) is capable of beating humans in seemingly "human-like", creative games such as chess and Go. Yesterday, the media were full of the story that DeepMind, a sister company of Google (both are owned by Alphabet), has developed "AlphaStar", which just humiliated the world's best human players of StarCraft 2.
Well, I think it must have been hard and tedious engineering to make the computers play such a game at all – the employees of DeepMind must be extremely smart and skillful – and then to make them practice "reinforcement learning". That's an internal AI model of capitalism in which numerous algorithms or units of AI compete against each other and collectively develop the best strategy to master a problem.
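The "competition of algorithms" described above can be sketched as a toy population-based self-play loop. This is a crude caricature, not DeepMind's actual league training: the game (rock-paper-scissors), the population size, and the mutation scheme are all made up for illustration. A population of strategies plays round-robin matches; losers are replaced by mutated copies of the winners.

```python
import random

MOVES = 3  # 0 = rock, 1 = paper, 2 = scissors

def play(p, q, rng, rounds=100):
    """Return p's average payoff against q (+1 win, -1 loss, 0 draw)."""
    score = 0
    for _ in range(rounds):
        a = rng.choices(range(MOVES), weights=p)[0]
        b = rng.choices(range(MOVES), weights=q)[0]
        score += (a - b) % 3 == 1  # a beats b
        score -= (b - a) % 3 == 1  # b beats a
    return score / rounds

def mutate(p, rng, eps=0.1):
    """Perturb a strategy's move probabilities and renormalize."""
    q = [max(1e-3, w + rng.uniform(-eps, eps)) for w in p]
    s = sum(q)
    return [w / s for w in q]

def train(generations=25, pop_size=8, seed=0):
    rng = random.Random(seed)
    # Start from arbitrary, easily exploitable near-pure strategies.
    pop = [mutate([1.0, 0.0, 0.0], rng, eps=0.5) for _ in range(pop_size)]
    for _ in range(generations):
        # Round-robin tournament: each strategy plays every other one.
        totals = [sum(play(p, q, rng) for q in pop if q is not p) for p in pop]
        ranked = [p for _, p in sorted(zip(totals, pop), key=lambda t: -t[0])]
        # Keep the top half, refill with mutated copies of the winners.
        half = pop_size // 2
        pop = ranked[:half] + [mutate(p, rng) for p in ranked[:half]]
    return pop[0]

best = train()
```

No single agent is told the right answer; the population as a whole drifts away from exploitable strategies, which is the essence of the "internal capitalism" – at a scale trillions of times smaller than what AlphaStar did.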
Once they do it, I find it totally unsurprising that such machines simply have to beat the humans. StarCraft 2 and most similar games are just hard for humans. Humans can barely watch everything that is going on, barely master all the rules, and barely understand what all those objects and moves mean. Thinking about the best ways to play the game is just the cherry on top – and it's obvious that the human brain has limitations that prevent it from doing that too well.
Lots of mental brute force is clearly helpful and "AlphaStar" has lots of it. Even if the computer only had as much brute force as three human brains combined, that would be quite an advantage. The humans had to be turned into losers very soon. This downgrade of Homo sapiens had to be faster than with chess and Go – and I am not certain which of those two games actually had a better chance to be "more optimized for humans".
If you think about it, the AI can do pretty much everything: press buttons, repeat patterns, react quickly, analyze patterns, find correlations, compare correlations. It may also memorize well-defined strategies of human players, compare which of two strategies does better in some medium-term internal contest, and many other things. It can remember the lessons from all this experience and evaluate them quantitatively to draw solid enough conclusions. I am pretty sure that if you teach the AI to play computer or smartphone games, it will end up better than humans at every single game that humans have ever played.
Are we, humans, worthless inferior junk? Maybe. But we still have the power. Despite their higher mental skills, the AI engines may still be destroyed or killed by us and we can get away with it. ;-) If most of us want this situation to continue, it will probably continue. The problem is that at some point, the artificial humans will look and sound too cute, too real, too impressive. Humans will be touched, they will fall in love, they will be charmed, enchanted, and impressed by the AI quasi-humans and their behavior, and they will demand human rights for the AI. After that stage, the artificial intelligence may get the opportunity to supersede most of the humans or turn into the master race. And it won't be an unquestionable catastrophe – lots of humans will surely think that this is the right direction of progress at that point.
You know, these AI alarmists are as stupid as almost any other kind of alarmist. Almost none of them understand that some humans are in control of most things on Earth that matter. We are almost never facing some "objective catastrophes" we could all agree upon. So even if some ongoing trend is considered terribly negative, there are probably influential enough humans who consider it positive – otherwise it wouldn't be happening. The recent decay of the human species caused by political correctness is just the latest example. It's a terrible thing according to us but it's taking place because some people who are on the evil side haven't been stripped of their power. My point is that most of the dangers and misery are deliberately created by some humans.
When these machines are this smart, can they be better than humans in all occupations? What about politics and populism? Mining tens of billions of dollars from impressionable investors' pockets? I think that an essential condition for success is the well-definedness of the task. These algorithms are great but they need to be told "what the purpose of their life is" – how exactly success is calculated. Humans sort of choose the purpose of their life freely – at least some of us do – although all of us do so after some inspiration by others. And it's this kind of meta-decision that the AI is probably bad at. Or at least it could look bad to us – and we might be wrong from a more refined perspective.
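The point about well-definedness can be made concrete with a minimal sketch. A generic optimizer is trivial to write – but it does literally nothing until a human hands it an explicit reward function, the "purpose of its life". The optimizer, the step sizes, and the example goal below are all invented for illustration:

```python
import random

def hill_climb(reward, x0, steps=2000, step_size=0.1, seed=0):
    """Maximize `reward` by random local search starting from x0."""
    rng = random.Random(seed)
    x, best = x0, reward(x0)
    for _ in range(steps):
        cand = x + rng.uniform(-step_size, step_size)
        r = reward(cand)
        if r > best:            # keep only improvements
            x, best = cand, r
    return x

# The algorithm is indifferent to what it optimizes; a human must choose.
goal = lambda x: -(x - 3.0) ** 2    # "success" defined as being near 3
result = hill_climb(goal, x0=0.0)   # converges toward 3.0
```

Swap in a different `goal` and the very same loop pursues a completely different "life purpose" – which is exactly the meta-decision the algorithm never makes for itself.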
If the goal were to adjust the speech and appearance and get the maximum number of human votes in the next elections, or to maximize Tesla's stock price days before the company's bankruptcy, I think that the computers could beat the humans, too. I do think that populism is a much simpler strategy game than StarCraft 2. In particular, I think that most voters choose whom to support according to embarrassingly primitive criteria. We are usually disgusted enough that we don't even talk about it much. But the AI could think about these matters. These algorithms would know how they should look or smell or speak to win an extra 10% of voters.
Can the AI beat humans at science? Once again, we should define the rules of the game. For example, we could make the algorithms write papers for the arXiv and fight to earn the highest number of citations from Edward Witten and 50 other pre-selected big shots. Well, I think that even in that case, the computers would probably find some weakness of Witten – and the others – and they could exploit some secret sexual or other desire to circumvent the whole physics process. So they would win anyway, probably in some easy way. You could try to demand that the algorithms remain ethical but ethics is hard to quantify and you might fail. The algorithms would push the rules to the limit.
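The loophole-hunting above is what the machine-learning literature calls specification gaming, and a three-line toy shows the mechanism. All the "papers", scores, and weights here are hypothetical numbers: an optimizer told to maximize a proxy (citations from judges) picks the action that games the judges, because true quality never enters its objective.

```python
# Each candidate "paper": (true_quality, flattery_of_judges) -- made-up data.
papers = {
    "deep result":       (0.9, 0.1),
    "solid incremental": (0.6, 0.3),
    "judge flattery":    (0.2, 1.0),
}

def proxy_citations(quality, flattery):
    # Judges are only partly driven by quality; the rest is exploitable.
    return 0.4 * quality + 0.6 * flattery

# What the algorithm maximizes vs. what "we" actually wanted maximized:
best_for_algorithm = max(papers, key=lambda k: proxy_citations(*papers[k]))
best_for_science = max(papers, key=lambda k: papers[k][0])
```

As long as the exploitable term has any weight in the proxy, making the optimizer stronger makes the gap between the two answers worse, not better – which is why "just demand ethics" fails when ethics isn't in the formula.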
With such predetermined rules, even if they involve human judges, the computers would probably be better – but that's not really what "we" want. We want the computers to write really revolutionary scientific papers – those that could pass the test of time. But it's hard to operationally define what a revolutionary scientific paper is. A good physicist "feels" it when he sees it. At least up to the moment when even the good physicist is distracted by something else that also excites him. ;-)
These discussions are subtle or vague in the case of the quality of science. And that's why I believe that humans won't lose their key role here. But in other occupations, including those that are often sold as very "human" or even "humanitarian", computers could easily be much better than the humans because those activities are rather automatic and trivial in character. They lack the multi-layered, structured abstraction.
As I mentioned, I do believe that the computers still lack real "dreams" – something that they spontaneously "see" in their mind and that shapes their "purpose of life". In this respect, computers are – at least so far – as bad as the bad people without creativity, curiosity, and excitement, e.g. fake scientists such as Sabine Hossenfelder. Those individuals can be trained to memorize a few things and to operate in a certain way as long as they're told what "purpose of life" they should maximize. But they will never find their own purpose and – these two drawbacks are clearly related to one another – they will never be excited by things such as "the learning of some deep and beautiful logic operating in Nature".