Tuesday, December 08, 2020

On the departure of an AI ethics researcher from Google



Synchlavier has told us about the removal of Timnit Gebru (Wikipedia, Heavy.com, Twitter) from Google's AI ethics research team:

We read the paper that forced Timnit Gebru out of Google. Here’s what it says
– MIT Technology Review
She was born in Addis Ababa, Ethiopia, and her mother is black African. Both parents moved from Eritrea (a small country that gained independence from Ethiopia in 1991-1993). Her father, an electrical engineer, died when she was a kid. My understanding is that he was white, but I am not sure. Even with a half-white heritage, you won't find too many Eritrea-born Silicon Valley researchers, I think! She has done AI research at Apple, Microsoft, and Google. Most recently, she was de facto fired from Google.



The first assumption that I am reasonably certain about – feel free to argue if you disagree – is that she was fired because her bosses at Google had both the power and the desire to fire her. Whatever they used as an excuse to fire her (dates of vacations, a ban on disagreement with XY, whatever) seems irrelevant to me. The second assumption, which I am almost certain about, is that they fired her because the content of some of her papers was inconvenient – and probably an inconvenient truth.



My significant uncertainty begins once we ask which papers, or which statements in them, were actually inconvenient, and what the character of the inconvenience was. Concerning the character of the inconvenience, I see two main hypotheses:

* Her findings were politically incorrect (despite her being a "black female")
* Her findings were dangerous for Google's profits

Of course, some combination is possible. Her former colleagues wrote a protest letter, and it is all about her being a black female – such people must never be fired, these far-left PC scumbags basically claim, while entirely missing the point.

Now, the papers and their claims that may actually be inconvenient are mainly the following:

* She was one of the two authors of a paper showing that facial recognition software is more successful with whites and/or males
* Large language model paper: the huge systems that deal with natural language consume too much energy
* Large language model paper: the huge systems behave like parrots and reduce the creativity and originality of new texts

Now, the 2018 paper about sex and race in facial recognition (which has over 1,000 citations by now) may be too old to be the reason for her dismissal, but I find it plausible that it was the actual reason anyway. Such a finding – which seems obviously correct to me, building just on the data that I have collected by eyeballing the world throughout my life – is inconvenient for both reasons, PC and profits.
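By the way, the kind of audit behind that paper is easy to sketch. Below is a minimal sketch in Python of a disaggregated accuracy check in the spirit of the 2018 paper; the function name and the toy records are my own illustrative assumptions, not anything taken from the paper itself:

```python
# Minimal sketch of a disaggregated accuracy audit in the style of the
# 2018 paper. All names and the toy data are hypothetical; a real audit
# would load a labeled benchmark of face images.
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, true_label, predicted_label) tuples."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for group, truth, prediction in records:
        totals[group] += 1
        if prediction == truth:
            hits[group] += 1
    return {g: hits[g] / totals[g] for g in totals}

# Toy example: a classifier guesses the sex of a face, results split by group.
records = [
    ("white male", "M", "M"), ("white male", "M", "M"),
    ("black female", "F", "M"), ("black female", "F", "F"),
]
print(accuracy_by_group(records))
# {'white male': 1.0, 'black female': 0.5} -- the gap is the finding
```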

It's inconvenient because it says that "whites" and "males" are "better" in some respect. In this case, the relevant point is that they display a greater variability (of facial appearance), something that is demonstrated even by the AI's ability to identify individuals: a higher success rate for whites and/or men, where "and/or" means that the success rate is higher for whites separately and for males separately, but it is even higher for white males.

For men, the statement is nothing but the usual "men have a greater variability of everything than women" (IQ, body size, everything; it's because women can't afford experiments and deviations as large as men's, since they need to spend 9 months with each pregnancy). For whites, I still feel almost certain that it is the case. Just think about the most trivial traits such as eye color and hair color. Whites can have blue, green, brown, and other eye colors; and brown, blonde, or red hair. All blacks have basically the same hair and eye color as other blacks, and the same holds for East Asians. But the colors are just the most easily quantifiable traits. The variability of the "shape-like" traits is even greater among whites.
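For what it's worth, the variability claim is, at least in principle, testable on the numerical "embeddings" that face-recognition systems assign to faces. A minimal sketch, with random stand-in data and names of my own invention, of how one could compare within-group variance:

```python
# Minimal sketch: compare within-group variability of face embeddings.
# Hypothetical setup -- real embeddings would come from a face-recognition
# model; here they are random stand-ins just to make the script runnable.
import numpy as np

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(200, 128))          # 200 faces, 128-dim vectors
groups = np.array(["A"] * 100 + ["B"] * 100)      # two hypothetical groups

def within_group_variance(embeddings, groups):
    """Mean squared distance from each group's centroid, per group."""
    out = {}
    for g in np.unique(groups):
        sub = embeddings[groups == g]
        centroid = sub.mean(axis=0)
        out[g] = float(((sub - centroid) ** 2).sum(axis=1).mean())
    return out

print(within_group_variance(embeddings, groups))
# If the variability hypothesis held for real data, one group's number
# would come out visibly larger than the other's.
```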

OK, so this finding is politically incorrect because the only politically correct conclusion is that the sexes and races are totally equivalent, and must therefore also be equally recognizable by AI. Too bad, PC snowflakes, reality contradicts your lies. I am sure that you will prefer to uphold your lies instead of the facts.

Second, the relative inability of AI to distinguish the faces of women and people of color is also bad news for the profits that come from this whole segment of Google's business. I am pretty sure that Google – which abandoned the slogan "don't be evil" many years ago – has switched to the approach "it's normal to produce lies about all these things". They just want to pretend that everything works equally well for women and blacks, among others!

Now, she had the newer paper about the "huge language models". Very recently, I was amazed by the translations into Czech that Google Translate gave me – they were even more impressive than a year ago. The percentage of flawless Czech sentences has gone up. Sometimes, the wow factor is amplified by incredible sequences of words such as "Enyaqův úložní prostor" (the Enyaq's boot space). It really looks like Google Translate has mastered declension and various "words derived from each other". The word "Enyaqův" is bizarre (maybe not a single human has ever used it) but linguistically refined and basically correct (although some may claim that this -ův ending should be reserved for animate nouns only).
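Just to illustrate how mechanical the morphology is in the regular case: a toy sketch (mine, not anything Google uses) that forms the Czech masculine possessive by appending -ův; real Czech has stem alternations such as otec → otcův that this naive rule ignores, which is part of why Translate's mastery is impressive:

```python
# Toy sketch of the Czech masculine possessive adjective in -ův.
# It handles only the regular case (append -ův to the bare noun) and
# ignores stem alternations like otec -> otcův; real morphology is messier.
def possessive_uv(noun: str) -> str:
    """Naive masculine possessive: Enyaq -> Enyaqův."""
    return noun + "ův"

for noun in ["Enyaq", "bratr", "inženýr"]:
    print(noun, "->", possessive_uv(noun))
# Enyaq -> Enyaqův, bratr -> bratrův, inženýr -> inženýrův
```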

The overwhelming part of this success is almost certainly brute force. The Google software (and similar programs from other companies) has been fed an incredible amount of real-world text as training data. So it can really produce a "new text" that looks almost perfectly human. Of course, the problem is that increasingly long sequences of words may be copied verbatim. This has ethical consequences, which Gebru discussed in her paper.
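One could quantify this verbatim copying quite directly. Here is a minimal sketch, with toy strings and function names of my own choosing (not from Gebru's paper), that measures what fraction of the n-grams in a "new" text already appear in the training corpus:

```python
# Minimal sketch: how much of a "new" text is copied verbatim from the
# training corpus? Slide an n-gram window over the generated text and
# check each window against the corpus. Toy strings; a real check would
# index terabytes of training data, e.g. with a suffix array.
def verbatim_fraction(corpus: str, generated: str, n: int = 8) -> float:
    corpus_words = corpus.split()
    gen_words = generated.split()
    corpus_ngrams = {
        tuple(corpus_words[i:i + n]) for i in range(len(corpus_words) - n + 1)
    }
    windows = [
        tuple(gen_words[i:i + n]) for i in range(len(gen_words) - n + 1)
    ]
    if not windows:
        return 0.0
    copied = sum(w in corpus_ngrams for w in windows)
    return copied / len(windows)

corpus = "the quick brown fox jumps over the lazy dog every single day"
generated = "we saw that the quick brown fox jumps over the lazy dog today"
print(verbatim_fraction(corpus, generated, n=5))  # 0.5: half the 5-grams are copied
```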

Well, if some software produces "new texts" because it's been trained by reading and emulating a staggering amount of "old texts", I think it is fair to say that the description "new texts" is fraudulent. The program really generates some rearrangement of the "old texts" instead! Needless to say, the same comment applies to human beings, not just computer programs. Lots of people and programs are just parrots. A possibly inconvenient truth is that it is intellectually bad to be a parrot. She makes this correct point at least in the title, and this may already be enough to trigger some people at Google (because, according to their wishes, a parrot is a role model that the human race should be retrained to resemble).

The other problem she mentions is that these programs manipulating massive bodies of text (and learning from them) consume a huge amount of energy. In the climate-alarmist jargon, "they produce a huge amount of CO2", the gas that we call life. I think that mankind can afford it and that CO2 is harmless. However, I am shocked by the immense hypocrisy – Google is one of the companies that constantly flood everyone's minds with the anti-CO2 hysteria. But for them, it's just OK to emit huge amounts of the gas, for heavily "non-essential" reasons, right? The same comments apply to cryptocurrency mining, which is a 100% useless (and certainly "non-essential") activity, yet Bitcoin mining alone already consumes more energy than the Czech Republic! And it's this Czech Republic that produces some 1.5 million cars per year – and even that is less than 10% of my homeland's GDP!

But we are increasingly living in a system where "those who have the power" can effectively do everything while most others are impoverished. The profits of the big Silicon Valley companies are so disproportionately high that they can easily afford even the insane carbon taxes etc. For this reason, the carbon taxes must be understood primarily as a tool to impoverish everyone else – something that the Silicon Valley companies are incentivized to support in order to increase their relative power.
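To attach rough numbers to the energy claim: a back-of-envelope sketch in which every figure is my own order-of-magnitude assumption, not a number from her paper (published estimates for training a GPT-3-class model are on the order of a gigawatt-hour, but they vary widely by hardware and by grid):

```python
# Back-of-envelope energy/CO2 estimate for training one large language
# model. Every constant below is a rough assumption of mine, not a figure
# from Gebru's paper; published estimates vary widely.
TRAIN_ENERGY_KWH = 1_200_000     # ~1.2 GWh, order of magnitude for a GPT-3-class run
GRID_KG_CO2_PER_KWH = 0.4        # rough world-average grid carbon intensity
HOUSEHOLD_KWH_PER_YEAR = 4_000   # rough annual electricity use of one household

co2_tonnes = TRAIN_ENERGY_KWH * GRID_KG_CO2_PER_KWH / 1000
households = TRAIN_ENERGY_KWH / HOUSEHOLD_KWH_PER_YEAR

print(f"~{co2_tonnes:,.0f} t CO2, like the yearly electricity of ~{households:,.0f} households")
# ~480 t CO2, like the yearly electricity of ~300 households
```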

Some of these observations were almost certainly "inconvenient truths", and that is the real reason why she was fired. A big-picture problem here concerns the "academic freedom of researchers". She is an AI ethics researcher, and even Google has pretended to allow her to do science – which includes going in whatever direction the evidence points. I think that a company has the right to pay its thinkers, a research and development team. In most cases, we are thinking about some "self-evidently applied science" that is being done by corporate research and development departments. But when a company pretends that a team – and this AI ethics team is surely an example – is doing some "universal science" that has value as pure science, not just as a method to increase the profits of a particular company, then it should actually be so.

My statement is the following: if this company (or any other company, or even a public-sector body) pays someone as a "scientist", and it claims that this is what the employee is doing (which improves the image of the company or body that is paying for the scientist, because "it's nice to pay someone for science"), then the company simply cannot have the right to fire the person merely because she ends up with findings that are inconvenient for the company's plans or finances! To use the word "scientist" for someone who is removed as soon as she becomes inconvenient is just a fraudulent practice – and I think that it should become a crime. People who are employed under these rules should be honestly called "hired guns doing something to make Google even wealthier and more evil than ever before". The name of the research department should be obliged to make it clear to everyone that Google is an evil company that we had better try to liquidate before it is too late.

At the end, I must say that I have just defended her despite the fact that I obviously disagree with much of Gebru's politics. I think that her two key ideological claims in the two papers are "facial recognition is unethical because it helps to make whites and males even more successful" and "large language models are unethical because they reinforce the power of those who already own the language". Of course, I don't think that there is anything unethical about those things, and I don't believe that facial recognition or language models should be banned because "they highlight the differences between groups of people". But she has the right to express her moral opinions, and the papers weren't just about opinions. She has also (and especially) done the hard work to establish, e.g., that the facial recognition software doesn't work equally well for all the groups. And if a company uses this software, it shouldn't be able to claim that it maintains equality between the groups.


