Saturday, October 09, 2021

So far, "Artificial Intelligence" comments are just a preprocessed message picked by a "coach"

AI bots and climate models aren't unique, important, trustworthy, independent sources of information. They are puppets designed to deceive gullible consumers.

In the last debate between the eight chosen party leaders broadcast by the Czech Public TV (which was widely considered a terribly chaotic debate, with no real opportunity to convey ideas, little time dedicated to each politician, and an excessively dominant or arrogant host, Ms Světlana Witowská), there was an unusual twist. The organizers included Matilda, an Artificial Intelligence bot with a female face, who asked a question of each politician while a video of the artificial female face was shown in the middle of the screen.



I think that this small farcical event was a good example that helps answer some questions about "what the AI subjects think", "who may bear responsibility for them", "whether the AI should have human rights", and so on.



The main problem with this usage of the AI is that the viewers, potentially millions if not (globally) billions of people, are led to the utterly stupid view that "there exists some very smart new way of looking at things" and "the questions that this technology produces must therefore be unique and important". But that is a complete lie. Artificial Intelligence is just some package of software that ends up looking like it thinks analogously to humans. But the catch is that there is absolutely no "unique way in which it should be done", let alone a "unique way in which it will think and what it will care about or ask". The class of programs that "resemble human thinking" is extraordinarily fuzzy and ill-defined.



So Matilda unavoidably ends up being a puppet whose behavior is heavily affected, if not completely determined, by the humans who coded "her" or trained "her". There are really infinitely many ways in which "some computer code seemingly resembling human intelligence" may be programmed, and each of them may be fed with "input" in infinitely many ways. The input may consist of long sequences of training sessions that may include random numbers from many sources (including the random generators of computers). Someone has to pick the sources that were poured into the AI bot. The spectrum of possible results is at least "infinity squared".

In practice, Matilda was coded in such a way that her "choice of some questions" was one of the skills demanded from Matilda from the very beginning. And while the code can hopefully do much more than just parrot sentences, it is software that was trained by pouring in lots of sentences from a certain ensemble. The coder probably knew in advance "what rough class of questions" would come from such a Matilda if she were "allowed" to ask questions. "She" could have been politically pushed to ask more political questions or fewer political ones etc. by adjusting some parameters in the code or by adjusting the choice of the "corpus" that was inserted into "her" as the training data.
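A purely hypothetical toy sketch of that point follows; it has nothing to do with the actual software behind Matilda, and every name, corpus, and knob in it is invented. It only illustrates that the human who picks the training "corpus" and a single weight effectively decides what kind of questions the bot ends up asking.

```python
import random

# Two invented "corpora" a coder may pour into the bot; the sentences are placeholders.
POLITICAL_CORPUS = [
    "What will you do about the national debt?",
    "Will you raise or lower taxes?",
]
APOLITICAL_CORPUS = [
    "What book changed your life?",
    "What do you do to relax after work?",
]

def train_question_bot(political_weight):
    """Return a 'bot' whose questions reflect the human-chosen corpus mixture."""
    def ask():
        corpus = POLITICAL_CORPUS if random.random() < political_weight else APOLITICAL_CORPUS
        return random.choice(corpus)
    return ask

random.seed(1)
# The same "architecture", two different puppet masters: one knob steers the behavior.
tame_bot = train_question_bot(political_weight=0.1)
sharp_bot = train_question_bot(political_weight=0.9)
print(tame_bot())
print(sharp_bot())
```

Nothing in this toy "thinks"; the only thing that distinguishes the two bots is a number chosen by the human who built them, which is exactly the sense in which such a bot is a puppet.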

The diversity of possible outcomes is therefore even larger than e.g. the diversity of possible DNA molecules (and corresponding organisms) that have ever lived on Earth. When someone is ordered to prepare such a Matilda and/or her questions for several politicians, he must choose one of the infinitely many ways to solve this problem. The choice may be partly random; much of it will reflect the coder's or software user's own personality. If Matilda were considered "human", it would still be true that "she" would be absolutely brainwashed. "Her" dependence on "her" puppet masters could even trump the dependence of many sheep on their puppet masters – and millions of people are much more sheep-like than the actual sheep today, indeed. But AI may be even worse, partly because the people "totally enslaving and manipulating" such AI bots face no sanctions for their dictatorial or slaveowner-like behavior.

Equivalently, my point is that the "identity of a human and/or his or her human rights" unavoidably includes all the events that have shaped the human life because those things were encoded in his identity and his way of thinking. Each of us (and certainly each AI bot) has a very specific sequence of such (to one extent or another) formative events. None of us is the same as someone else. Even for twins, the perfect symmetry is broken soon after the egg divides into two organisms. Once the twins get out of their mother, they will look extremely similar but their identities will diverge due to the different events in their lives, anyway.

People are led to worship similar things including "AI" and they are led to believe in some kind of "uniqueness" of these entities which just doesn't exist at all. As a Slavic speaker, I have no intuition for the existence of the word "the" before Artificial Intelligence. But indeed, the point is that it is completely wrong to place "the" in front of AI! There are "very many of them" instead. If you imagine that some "artificial humans" will be produced at some moment, people will ask whether such humans should be given various human or political rights, whether they should be liquidated, whether it's OK to terminate them, change them, clone them, and do many other things.

The answer is that there just cannot exist any objective, unique answer to these questions. First, science never answers questions like "should you do this or that" (e.g. should you grant human rights to a robot); such decisions always depend on people's values and interests (and on random numbers fundamentally produced by quantum mechanics). Second, even if you specified the people's values and interests, the answer would still depend on the exact identity of those AI beings, on how their identity and behavior will have been developed, trained, or selected, and on what can be expected from one decision or another. Many people may get emotional if these AI bots behave and speak just like sensitive humans. But that is already the case with pets or cars, too; people may already get emotional about those. Whether they "humanize" or "dehumanize" a given AI bot will still depend on all the details that I mentioned and many more.

As long as the AI bots are completely controlled or owned by some humans, it will be completely stupid to imagine that they have their own moral or legal responsibility for their behavior, like humans usually do. The complexity of the software or the large size of their memory or storage is not what should give them the "human independence", "human rights", or "accountability".

Instead, it is their freedom itself, the proven ability to "survive" in the free world, and the society's respect for that freedom and its consequences that allows us to treat these AI bots on par with humans! But even if such conditions were met, many people may still choose to consider some AI bots to be their enemies, just like other humans may be enemies.

As long as the AI bot is completely controlled by John, the AI bot is just a tool on par with a hammer or anything else. It just doesn't matter that the AI subject has a lot of knowledge or mental abilities, probably greater than John's own. The decision whether we allow the AI bot to do whatever it "wants" – whether we ever allow particular classes of AI bots to move freely and manipulate our virtual and then real world – is not a function merely of the internal architecture of the bot. It will also depend on the precise way in which the bot was trained. And people will ultimately allow those things that they believe are beneficial for their (the people's) lives! It just can't be otherwise. All these decisions are fundamentally political, not scientific. There cannot be a scientific way to determine such answers. Many scenarios are possible. AI bots may become human-like and get lots of freedom which will be self-enforcing. Or humans may prevent them from getting that freedom, too. At various places and at various moments, distinct scenarios may materialize.

When I mentioned that the AI bot is only a fancy decoration reflecting the desires of some human puppet masters (plus some random influences that were allowed to stay random), I must mention that almost the same thing holds for the climate models. They are also often presented as "unique" and as giving us "useful information" (this is surely a statement included in the 2021 Nobel Prize in physics) but it is a complete lie. They are extremely non-unique, non-robust pieces of software that depend on many coding and approximation... choices, plus some explicit extra parameters and switches that must be picked by humans, and that is why they may be described just as sophisticated tools to convey a message that the "humans in control" want to convey!

So of course all models that produce the prediction of 4 °C of warming per century are really just lipstick on a pig that was ordered by a human, a message that the human wanted to convey. The human decided to allow some large feedbacks that were almost guaranteed to overstate the sensitivity, some instabilities in the numerical calculation that were likely to make the whole evolution dramatic, and some parameters may have been chosen "large" by hand. Even more importantly, the person responsible for the coding and usage of the climate model and its results has chosen not to throw away a particular climate model even though, as a scientist, he should, because 4 °C of warming is already rather safely incompatible with the directly measured temperature data (no empirical measurements indicate a trend that is safely higher than 2 °C per century)! In science, "if it disagrees with the observations, it is wrong", but the people in charge of these models decided to ignore this most fundamental dictum of all sciences. In practice, almost all the people working on this stuff are corrupt parasites who are primarily thinking about their own income and the influence that may help to preserve that income. It is trivial for these people to adjust the software and/or its usage in order to get whatever they want. They may "murder" or "abort" as many Matildas and climate models or runs as they want and they may pick the survivors.
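Here is a deliberately oversimplified toy, assuming nothing about any real climate model's code; every number and name in it is invented. It only illustrates the mechanism described above, namely that a hand-picked feedback knob largely dictates the projected warming trend, and that whoever runs the ensemble is also free to decide which runs survive.

```python
import random

def toy_warming_trend(feedback, noise_scale=0.3):
    """Return a fake 'warming per century' in °C for a chosen feedback knob (toy values only)."""
    base_trend = 1.0                         # arbitrary no-feedback trend, invented for the toy
    amplification = 1.0 / (1.0 - feedback)   # schematic positive-feedback amplification
    return base_trend * amplification + random.gauss(0.0, noise_scale)

random.seed(0)
# The "coder" picks the feedback strength; a value close to 1 guarantees dramatic output.
ensemble = [toy_warming_trend(feedback=0.75) for _ in range(20)]

# The "coder" also picks the survivors; here only dramatic runs are kept, regardless of
# the observed trends (roughly at or below 2 °C per century) mentioned in the text.
survivors = [trend for trend in ensemble if trend > 3.5]
if survivors:
    print(f"kept {len(survivors)} of {len(ensemble)} runs, "
          f"mean = {sum(survivors) / len(survivors):.1f} °C per century")
```

With the knob set to 0.75, the toy amplifies the 1 °C baseline fourfold, so the "ensemble" clusters around 4 °C no matter what the data say; changing a single argument, or the survivor filter, changes the headline.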

Just like Matilda, the complex climate models are therefore not really "independent sources of useful information" at all, and whoever is behind them and pretends that they are independent sources of useful information is a dishonest manipulator. And whoever buys this message is stupid. The "independence of these entities' messages" may only emerge (in a distant enough future) once the AI bots or the climate models start to live full, free lives which are subject to some selection that favors "the true and useful things" and/or eliminates the "false and useless ones". In this sense, there is just no human-like intelligence and there are no independent sources of useful information without freedom!

And that's the memo.
