Sunday, July 19, 2020

Research mathematics will only disappear if the civilization is over



So far, an artificial "average Russian woman" has only been allowed to work as a secretary in the Urals. A key point is that she hasn't been allowed to run for Putin's job yet.

Various people have promoted their ideas about the "end of history" and the "end of science", but also seemingly less ambitious ends such as the "end of string theory" and the "end of experimental particle physics", and more far-reaching ones such as the "end of fossil fuels". None of those has taken place yet – despite the hundreds of thousands of activists who are trying to destroy one or many items on that list. Tim Gowers, a 1998 Fields Medal winner, responded to a question about his most controversial opinion:


Some details and justifications were written in this thread. OK, we are told that mathematicians will cease to exist when the current youngest postdocs start to die en masse. Why? Gowers believes that the expansion of artificial intelligence into mathematics will make sure that this is the case.

I think that his view is a corollary of a mistaken understanding of the relationship between people and machines.



I think that the key point he doesn't understand is that a machine isn't a superior competitor to those who actually matter; a machine is an asset or a slave of those who matter! What do I mean?



What I mean is that the prediction is analogous to the statement that "the cotton industry will die once many workers are brought from Africa to America". That did take place, but the industry didn't die. It really expanded, became more important, and used the workers in a productive way. Just like ancient Greece and Rome, the U.S. owes a lot to slavery, whether dishonest SJWs like to admit this important fact or not. (Although many cotton bosses admit that "if they had known what profound mess would be brought by the slavery-enhanced harvest in 2020, they would have harvested the cotton themselves".) In particular, white people didn't disappear from the cotton industry; they kept their power.

Now, artificial intelligence (AI) may really get powerful enough to try many possible proofs etc. within years or decades. It will be able to replace mathematicians in a great variety of tasks. But while the skills of AI may become impressive, this replacement may still be analogous to the replacement of white workers with black ones. It didn't really change the essence of the industry and it didn't really change the owners of the plantations, either.

Just like the slaves, AI is capable of doing a lot. But this capability doesn't determine what these workers actually do in the end. It is their owner who directs them. So even some very capable AI that trumps the Fields Medal winners in the ability to search for proofs (including very clever proofs connecting faraway portions of mathematics or envisioning mathematical structures of new types) will only replace some relatively "technical" parts of the research while the rest remains unchanged. The funny thing is that there will never be any "totally objective, existing in a vacuum" criterion telling the AI what kind of activity is worth spending its time and energy on.

It's determined by the owners. There will always be people with some mathematical or related curiosity or passion who will want to use the AI to find the answers to questions they care about. And the AI will be the body of slaves that will calculate for many years and produce the answers... such as 42. But the right problems always come from the passion of the owners, those who have the power to decide. It's these people who want to know whether the number of primes is finite or infinite – and thousands of similar, seemingly abstract facts.
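To make the division of labor concrete, here is a minimal Lean 4 sketch (assuming Mathlib is available; the lemma name Nat.exists_infinite_primes is the relevant Mathlib fact, and the theorem name primes_are_unbounded is just a label I chose): the human picks and states the question – are the primes unbounded? – while the machine's contribution is confined to finding or verifying the technical proof.

import Mathlib

-- The human poses the question: is the set of primes infinite?
-- Here it is stated as "above every natural number there is a prime".
theorem primes_are_unbounded (n : ℕ) : ∃ p, n ≤ p ∧ p.Prime :=
  -- The machine's (or library's) role is purely technical: it supplies
  -- the proof of a statement that a human chose to care about.
  Nat.exists_infinite_primes n

Whether the proof term is supplied by a library lemma, an automated tactic, or some future AI engine, the shape of the interaction stays the same: the statement of the theorem is where the human taste enters.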

If the AI is useful for mathematical research at all, its contribution will be framed as a technical, slave-like activity within a broader, more ambitious project determined by a human – one with true imagination, passion, and a sense of beauty. In this sense, the usage of the AI will be qualitatively analogous to the usage of calculators or graduate students. Tools of both kinds are exploited for limited tasks of various types, but someone must still be in charge.

Note that something similar is true in the experimental natural sciences, too. You may have large teams, but the true boss is someone who actually organizes what the people beneath him or her are supposed to do. This activity requires some knowledge of the experimental field. If the boss is a genuine one (and not just a puppet playing purely political roles), we may say that the boss – while looking like a manager and being called the spokesperson or something else – actually needs the most important skills and knowledge about the field. It's the boss who could in principle learn the work of everyone else – but he uses others to do the limited tasks because the big-picture leadership is the most essential job.

Also, in mathematical research, just as in theoretical physics, one needs some good taste for the "valuable problems" or "beautiful structures". The latter requires a sense of beauty. Is it important? You bet. But only a "good" sense of beauty is useful; a "bad" taste may be harmful. Which is which? There is no universally applicable procedure to tell the difference. But as Jesus said, you will know them by their fruits. The people who have a "good" taste for beauty in mathematics and physics are those who end up getting somewhere. With hindsight, we may see which people had the right intuition or the good taste. In most cases, those are unsurprisingly the people who would also be capable of doing some other tasks that depend on creativity and penetrating intelligence.

Because of the limited role of the AI in the grand scheme of the mathematical work (which will be qualitatively analogous to calculators or grad students), nothing substantial will change about the work of a research mathematician. (I think that we still seem to be very far from an AI that really thinks independently – one with decent self-created opinions about "what questions are worth thinking about".) The only change could come when mankind really starts to reverse the progress (e.g. if mankind fails to liquidate the growing tumor of the SJWs of assorted, increasingly crazy subvarieties); or when the AI engines actually get political power or human rights, so that they may become principal investigators themselves (and they get the power to overrule the opinions of people like me about "which sense of beauty is the right one") – and such PIs could soon push the human PIs out of the field due to the machines' technical superiority, the excess of brute force (which will be a big advantage even in many activities that look extremely fine, aesthetic, or human now).

But if the latter scenario happens, then, well, it will simply mean that the AI computers have been given civil rights and are on par with humans. In this sense, research mathematics will still exist; it will just be done by a genetically different bunch of researchers, in this case a silicon race (or will another element dominate?). From a broader viewpoint, this transition wouldn't be too much more dramatic than the proliferation of black cotton workers or Chinese producers of smartphones.

Again, my basic argument is very simple. As long as some people keep the power over the big-picture limits of what the AI is doing and why, the AI will be just a bunch of powerful calculators. And as soon as these engines are allowed to redefine the limits, as soon as they get the actual power, they will really be indistinguishable from humans in a deeper way and it will be meaningless to consider them "less than human".
