Thursday, June 12, 2014

Barack Obama passes the Turing test, too

The famous computer science pioneer Alan Turing decided to define "artificial intelligence" as a machine's ability to converse in such a way that it fools people into thinking that he or she or it is an actual human being. I don't think that this definition of intelligence is deep – this will be discussed later.



Barack Obama and his Japanese friend

But let's first cover the story. As the chatbot's namesake Eugene S told us, the media have been full of hype about a chatbot pretending to be a 13-year-old Ukrainian boy, Eugene Goostman (see his or her or its website where you may chat with Eugene), which has tricked 1/3 of a London committee into believing its words were produced by a human. The programmer of the chatbot remained modest and would probably agree that his program isn't dramatically more advanced than Eliza, which was created half a century ago.
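Eliza-style programs are little more than regular-expression pattern matching with canned response templates. A minimal sketch in Python (the rules below are invented for illustration; they are not Eliza's or Goostman's actual rule set):

```python
import re

# A few Eliza-style rules: a regex pattern and a template response.
# The placeholder {0} is filled with the first captured group.
RULES = [
    (re.compile(r"\bI am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bI feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bbecause (.+)", re.I), "Is that the real reason?"),
]
DEFAULT = "Please tell me more."

def eliza_reply(utterance: str) -> str:
    """Return the first matching rule's response, else a stock phrase."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return DEFAULT
```

The stock fallback phrase is the whole trick: whenever no pattern fires, the program stalls with something noncommittal, exactly the behavior the interview below parodies.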

(I still remember my encounter with a 130-cm robot that came up to me and shook my hand at the Rutgers Busch Campus Cafeteria sometime in 1999. The discussion with this robot – about Czechia, Werner Heisenberg, and other things – was much more inspiring than similar talks one may have with 99% of people. For a day or so, I was stunned: had artificial intelligence improved so much? Beware spoilers: After that day, I assured myself that the robot had cameras, microphones, and speakers converting a human voice to a funny robotic noise, and this "artificial personality" was controlled remotely from a location about 50 meters away.)

Here's my interview with another one that has tricked almost all Americans, and people around the world, into believing that his sentences are genuine human creations rather than decorated rhetorical patterns invented by semi-automatic politically correct speechwriters.

Motl: Did you know about the policy of selective targeting of conservative groups by the Internal Revenue Service?

Obama: Let me make sure that I answer your specific question. I can assure you that I certainly did not know anything about the IG report before the IG report had been leaked through the press.

Motl: But that wasn't my question. I was asking generally about the harassment of right-wingers, not about a report of yours.

Obama: Let me be clear. Now, could you tell me where you live?

Motl: Hmm. What about the relationships with Eastern Europe? Don't you think that America should support the independently working prosperity of countries such as Poland instead of their obsession with permanently viewing Russia as the culprit behind all their failures?

Obama: Let me make sure: Poland is one of our strongest and closest allies. Using a phrase from boxing, Poland punches above its weight. ;-D




Motl: My question wasn't specifically about Poland, it could have been about Estonia or any other country in the region. But I am asking about the principle and about the real and desired American attitude.

Obama: Let me make sure: Estonia is one of our strongest and closest allies. Using a phrase from boxing, Estonia punches above its weight. ;-D




Motl: That's a disappointing response. Maybe you are just misunderstanding my Pilsner, nearly Hungarian, accent. But it seems that it was easier for a chatbot pretending to be a politically correct politician to pass the Turing test because people got used to the absence of nontrivial ideas and, instead, the constant repetition of dull clichés. Millions of people are not only willing to accept the human origin of these clichés but even to vote for politicians who don't say anything else. Did you understand that my question was focusing on America and its states?

Obama: Now, make no mistake: Over the last fifteen months, we've traveled to every corner of the United States. I've now been to fifty-seven states. So, could you tell me where you live?

Motl: Let me return to the frustrating events in Ukraine. Is the situation improving yet?

Obama: Let me be clear. Change isn't easy. It won't happen overnight. There will be setbacks and false starts. So much of America needs to be rebuilt. We have crumbling roads and bridges.

Motl: I have already heard it somewhere. At least you're not repeating that most of the girls in your class are either fat or pimpled.

OK, let me stop this silly conversation, which was inspired by a virtual dialogue of Scott Aaronson with another virtual chatbot. I didn't vote for Obama and (probably) wouldn't vote for Obama. But most American voters did – both in 2008 and 2012 – so Obama has passed the Turing test.

Instead of continuing the conversation that isn't too fruitful, let me offer you some serious words.

Summary

I think that a human judge who is not sufficiently attentive and clever may easily be fooled into believing that a computer program is controlled by a human. After all, humans – and masses of them – get manipulated all the time and are often made to believe things that are much less likely than the existence of a computer program fully indistinguishable from a human.

A cleverer human judge will be able to maneuver any existing program into corners where its differences from regular human behavior are amplified. As Scott's example shows, the simplest questions involving everyday life experience or kindergarten knowledge are enough to unmask the artificial origin of (almost?) all existing programs that emulate humans.

Concerning Eugene Goostman, it was much easier for the program to fool a committee because committees are stupider than individual human beings – and perhaps stupider than most chatbots in the world, too. The program has some really easy-to-fix defects that should have been repaired a long time ago. In particular, its verbatim repetition of long phrases is utterly inhuman (the pimpled girls are the best example here). Humans sometimes also repeat things verbatim, but those segments are usually shorter and humans get bored by such perfect repetition quickly.

Also, the program's decision to speak about topics that almost certainly cannot be relevant to the question – because the chatbot misunderstood the question in some detail – deviates from the expected behavior of humans. (Eugene the chatbot began to talk about wealth and architecture in Russia after he or it was asked a clearly unrelated question involving the post-Soviet realm.)

If a human misunderstands a question, he or she either gives up and makes it clear, or tries to comprehend what the question meant. The latter approach is much harder, of course: in it, you may see the human potential to learn and ultimately understand what needs to be understood. Existing computer programs really lack this ability. You can easily predict that you won't be able to teach them everything that needs to be taught to understand a certain question, and that's how you identify that they're not human beings, at least not intelligent ones.

In other words, computer programs of the usual type can only exhibit behavior within a certain "class of responses" that is already envisioned when the program is written down. Intelligent humans, on the other hand, have the potential to increasingly deepen, filter, and crystallize their knowledge and to offer complicated responses whose content and organization weren't clear when Nature wrote the self-improving program for the first time (which was really when it created the first RNA/DNA/protein molecule!).
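The "punches above its weight" exchanges earlier in this post are a caricature of exactly such a pre-envisioned response class: one template with a single free slot. A toy Python sketch (all names and the country list are hypothetical):

```python
# A fixed "class of responses": one pre-written template with one slot,
# plus a stalling phrase for everything the author didn't envision.
COUNTRIES = {"Poland", "Estonia", "Latvia", "Ukraine"}
TEMPLATE = ("{0} is one of our strongest and closest allies. "
            "Using a phrase from boxing, {0} punches above its weight.")

def canned_reply(question: str) -> str:
    """Fill the single template with whatever country name appears in
    the question; any other question falls through to the stall."""
    for word in question.replace("?", " ").replace(".", " ").split():
        if word in COUNTRIES:
            return TEMPLATE.format(word)
    return "Let me be clear. Now, could you tell me where you live?"
```

No question outside the envisioned slot can ever produce a new kind of answer, which is the point of the paragraph above.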

Scott Aaronson asked whether the excessive hype about the Eugene chatbot boils down to a defect of the Turing test as a profound paradigm, or just to the journalists' misinterpretation of the deep ideas that Alan Turing brought us.

Nothing against Alan Turing, but I think it is the former. Turing was trying to make the concept of "artificial intelligence" better defined. He made it slightly better defined – the ability to imitate human beings can be decided, by an operational procedure, to be present or absent more clearly – but the price he paid was that the content of "artificial intelligence" became shallower at the same moment.

It is simply not hard to fool many people – and perhaps most people. Politicians know it, much like the authors of various spam e-mails pretending that the writer is someone who needs your help, or chatbots enhanced by videos of nude women supposedly waiting for sex with you on web servers, and so on. I think that computer programs are already able to emulate the behavior of some stupider organisms and perhaps stupider human beings, too. Artificial insects may fly. Spambots sometimes fill physics blogs with incoherent, worthless, repetitive rubbish about the unfalsifiability of a theory, or any theory. And some human beings add their own comments because these individuals are exactly as stupid as the spambots – and as obnoxious as some insects, too.

The real problem is for a machine to imitate an intelligent human being, one that has the capacity to learn new things and deepen his or her understanding of a subject matter – to a depth and breadth that isn't incorporated or envisioned or pre-planned or thought about at the very beginning, at the moment of conception (or programming). And this ability to deepen knowledge – and especially the coherence, structure, and inner organization of that knowledge – is what makes intelligent people intelligent.

This ability – the real artificial intelligence – has very little to do with the much more superficial ability to fool human judges or committees that may themselves be insufficiently attentive or clever.

Everyone understands what the adjective "artificial" is supposed to mean: the behavior doesn't result from the activity of DNA-powered biological neural (and other) cells. The hard part of the phrase "artificial intelligence" is "intelligence", and that's exactly the weakness of the Turing test as a criterion, too. Human intelligence is a wonderful thing – but only when it's deep enough. The Turing test rates programs according to average human judges, and because average humans are probably stupider than they were 50 years ago (or at least, to be certain, not significantly smarter), it shouldn't be shocking that the programs passing the Turing test in 2014 may be stupider than (or at least not much more advanced than) the programs that passed the same test (with different judges, however) half a century ago.

A more valuable definition of "artificial intelligence" must be independent of the quality and depth of intelligence of undefined groups of people. Human-like intelligence may look remarkable, but if you look sufficiently closely, even many – if not most – people are really shallow, repetitive, dumb, and uninteresting, which is why the programs emulating their behavior are inevitably uninteresting, too! In fact, really average humans may be emulated by recording terabytes of generic human responses and dialogues and choosing the most appropriate one in a given context. That strategy – exploiting the fact that an artificial agent's memory may be larger than human memory – may be sort of enough from many judges' viewpoint.
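The record-and-retrieve strategy amounts to a nearest-neighbor lookup over stored dialogue. A toy Python illustration, with an invented three-line corpus standing in for the terabytes:

```python
# Hypothetical corpus of recorded (prompt, response) pairs; a real system
# would store vastly more dialogue, but the lookup works the same way.
CORPUS = [
    ("how is the weather today", "It is sunny and warm."),
    ("what is your favorite food", "I like pancakes."),
    ("where do you live", "I live in a small town."),
]

def retrieve_reply(query: str) -> str:
    """Return the response whose recorded prompt shares the most words
    with the query -- no understanding, just nearest-neighbor retrieval."""
    words = set(query.lower().replace("?", " ").split())
    best = max(CORPUS, key=lambda pair: len(words & set(pair[0].split())))
    return best[1]
```

The program never understands anything; it only measures word overlap, which is why it can look convincing exactly as long as the judge stays within the recorded material.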

The fascinating challenge that remains largely open – and may remain open for quite some time, if not forever – is for a program to emulate some of the most creative, intelligent humans in history. Computers and people are converging – computers are getting smarter while people are getting stupider. But only the former component of this convergence may impress us.

The general character of the human, self-improving algorithm differs from the usual classical computer algorithms that are also behind Eliza or Eugene Goostman. But I believe that biological material is in no way necessary for this biological-like intelligence to arise, and at some moment, silicon-based engines using the same fuzzy, self-improving algorithms to become more intelligent will be produced and programmed, too.


snail feedback (19):


reader lukelea said...

A computer could only act like a human if it had true human experiences. That means feelings and emotions.


reader sahan said...

I think Mr Zeman is ignorant, or he hides his intellectual awareness that the Holocaust happened near his country and even inside it. At the time, Jews were living in complete harmony with Arabs. Extremism is everywhere; it's included even in his speech. Everything is open to interpretation, including my comment. I will advise Mr Zeman – since I find that these days even kids are mature enough to advise politicians – that he should not dehumanize Arabs because they lost track of how to be part of modern culture; sadly, they suffer a multidimensional crisis. Rather, as a politician, he should appreciate the great contribution this part of the world might add to the course of human maturation.
Cheers Mr President


reader Rathnakumar said...

Dr. Motl,

You may like this:

http://www.youtube.com/watch?v=f36ZbzL-9Yo


reader Luboš Motl said...

Dear Sahan, your comment is a good example of the incompatibility of the Islamofascism you mindlessly believe in with modern Western values.


The Holocaust indeed happened in our lands as well, and it is actually one of the many reasons that make it so important for Zeman, me, and many of us to defend the attitudes we are defending.


The fact that the Holocaust happened in our country as well doesn't make Mr Zeman or me guilty because we follow the Western rules where individuals are responsible for their deeds. But even if we adopted a principle of collective guilt, we as Czechs are not really guilty because all the Holocaust policies were organized by our German overlords.


reader Eugene S said...

Regarding the "complete harmony" between Arabs and Jews, you are poorly informed. I recommend going to amazon dot com and ordering one of the many fine books by Bat Ye'Or (Islam and Dhimmitude would be my recommendation).

Long story short, in the Islamic world -- with the notable exception of Turkey -- Jews were not second but third-class citizens, below the poorest Arab and below the Christian. There was a short time during Arab rule over Andalusia when their regime was tolerant and benevolent; it was the exception to the rule.

Arabs were not a world apart from the European holocaust. For one thing, the Nazis carried on their genocidal actions in North Africa, as well. In this, some Arabs collaborated with the Nazis. Some Arabs courageously aided their Jewish neighbors, for which they deserve to be remembered and honored.

A sordid chapter is how Haj Amin Al-Husaini, the Mufti of Jerusalem, collaborated with Hitler and egged him on constantly to kill more Jews. The Mufti provided fighters to the Nazis and was dead set on continuing the genocide of the Jews in Palestine immediately after a hoped-for Nazi victory in the Middle East.

This chapter in history is told in a new book co-authored by the late Barry Rubin and a German historian: http://www.amazon.com/Nazis-Islamists-Making-Modern-Middle/dp/0300140908

The solution to your troubles lies not in a return to the past. Arab supremacy for the most part was brutally oppressive and you will never get its victims to return to its rule voluntarily.


reader Mikael said...

Dear Lubos,
I think that in order to build intelligent computers we will have to understand what consciousness is. Or, if we manage to do it nevertheless, we will learn what consciousness is along the way. We will also need to reconsider our moral values: Is pulling the plug on an intelligent computer and deleting its memory a similar thing to killing a human?


reader rsala said...

Hilarious bit on Obama as chatbot!!!

I have always thought that the Turing test wasn't very useful on either count. We are already seeing chatbots – to which we don't ascribe any intelligence – that can fool some people. With enough effort they could be scaled up so that they could fool most people, but they wouldn't really be any more intelligent in any profound way. Conversely, I would suspect that when actual artificial intelligence is developed, it will be easily recognized as different from our own.


reader scooby said...

"We" began the Arab Spring?


reader John Archer said...

I knew an Arab once.

We had a great time winding him up when Israel won the Six-Day War. What fun that was! Oh, happy days!

On occasion we used to sing the following at him, if he entered the room unexpectedly for example. For those of you who are unfamiliar with it, it is sung to the tune of the Eton Boating Song only much, much louder. And badly.

♬♩
The sexual life of the camel
Is stranger than anyone thinks
At the height of the mating season
It tries to bugger the Sphinx
But the Sphinx's posterior sphincter
Is blocked by the sands of the Nile
Which accounts for the hump on the camel
And Sphinx's inscrutable smile.
♩♬♩

He took it all in good spirits though. Actually he was my friend, and a very nice fellow. But then he was a Christian, as were we all ... well, nominally anyway.

The problem with Arabs is that most of them follow a creed that is wholly inimical to psychological, sexual, social, intellectual, cultural and political development. A very severe impediment indeed.

Moreover they don't have any decent flags.

BTW ours is lovely. It looks great on a shield.

http://oi61.tinypic.com/os4w76.jpg

P.S. I don't know what Zeman's thinking in that snap, but he won't get a tune out of that — it doesn't even look like a piano!


reader Gene Day said...

Isn’t that possible? I can’t think of a single reason to doubt it.


reader QsaTheory said...

Enjoy your nonsense.

http://www.bbc.com/future/story/20131125-why-the-stupid-say-theyre-smart


reader Eugene S said...

My caption for the first picture:

One is a smiling robot, the other a pioneer from a long-maligned minority group

Great story about getting fooled by the Rutgers Chess Turk talking robot... wish I could have been there to witness it :)


reader mr. critic said...

Excellent review of this particular problem. Obviously, many people are fooled by chatbots, which suggests that a natural intelligence must be developed first, before the artificial one. The artificial part of AI we already have, actually.

The depth of real intelligence comes from the connection of every abstract concept to the "pixel" level, with all the intermediate levels. Convoluted philosophical thoughts can be simulated by Markov chain (Monte Carlo) generators, but simple common sense cannot. Every little kid is able to answer the question "can you walk on the wall?", but all of the "sophisticated" Turing passers are far from even remotely reaching this goal, because they are completely disconnected from the universe.


AI must first learn to recognize and build taxonomies of images, sounds and forces before reaching the level of reasoning. Bottom-up is the way of a self-organizing universe. The top-down paradigm is the human way of ruining it, much like a government loaded with regulations.
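[The Markov-chain text generation mentioned in the comment above can be sketched with a toy bigram model – a generic illustration, not any specific chatbot's code:

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    follows = defaultdict(list)
    for a, b in zip(words, words[1:]):
        follows[a].append(b)
    return follows

def generate(follows, start, length=10, seed=0):
    """Walk the chain: pick a random recorded successor at each step."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        successors = follows.get(out[-1])
        if not successors:
            break
        out.append(rng.choice(successors))
    return " ".join(out)
```

Such a generator produces locally plausible word sequences with no global meaning, which is exactly why it can mimic convoluted verbiage but not common sense. – Ed.]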


reader John Archer said...

You reckon I should put the autographed CD on hold then?


reader cynholt said...

The whole point of this thing is so that they can completely fire everyone in call centers and replace them with AI and still make their customers feel like the corporation gives a crap about humanity. Then again, I'd take a computer voice over an indecipherable Indian voice any day!


reader cynholt said...

Big deal convincing people a computer is a person! People have been convinced corporations are people for how long now?


reader cynholt said...

The challenge of leadership is to effect one's goals while placing the blame for their consequences on others. Obama and friends have been as good at this as just about anyone. We thought Clinton was Teflon, but Obama has an extra black layer of protection to ensure nothing will stick.


reader Gordon said...

https://www.youtube.com/watch?v=rFOeF9FUshg


reader Jim Z said...

Lubos,

The photo at the head of your post is a failure.

There is a photo, from the same encounter, of Obama *bowing* to the robot, as he *shakes hands* with the robot!!!