Thursday, July 11, 2019

Roboticist: self-simulation yields self-awareness

An amusingly specific attack on the mystery of consciousness...

John Pavlus wrote an article for Quanta Magazine yesterday. I initially ignored it – as the commenters did; there are still zero comments there – but it looks very interesting now:

Curious About Consciousness? Ask the Self-Aware Machines
The hero of the article is Hod Lipson, a robot expert at Columbia University. He has played with similar robots for over a decade.

A starfish robot had some arms that could touch things and, using machine-learning software, create a geometric model of its own body. It's cute because it's both a task modest enough for human-like machine learning to fully conquer and a problem that is close enough to being equivalent to consciousness.

Like a small baby learning its first steps, the machine is learning to use its body. I still don't understand what "utility function" either the baby or the machine is trying to maximize – how they learn which movements are good and which are bad. Who is giving them the rating? But I suppose he has some answers to that.
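The article doesn't spell out the algorithm, but one natural answer to the "who gives the rating" question is prediction error: the robot's reward for a candidate self-model is simply how well that model predicts what the sensors will report. Here is a toy sketch of the idea – not Lipson's actual setup; the two-link arm, the link lengths, and the least-squares fit are my own illustrative assumptions:

```python
# A toy sketch (NOT Lipson's actual method): a two-link planar arm
# "discovers" its own geometry purely from motor/sensor data.
# The implicit "utility function" is negative prediction error:
# the self-model that best predicts the fingertip position wins.
import numpy as np

rng = np.random.default_rng(0)
L1, L2 = 0.7, 0.4          # true (hidden) link lengths the robot must infer

def fingertip(t1, t2):
    """Ground-truth forward kinematics; the robot can only sample it."""
    x = L1 * np.cos(t1) + L2 * np.cos(t1 + t2)
    y = L1 * np.sin(t1) + L2 * np.sin(t1 + t2)
    return np.stack([x, y], axis=-1)

# "Motor babbling": try random joint angles, record where the fingertip lands.
t1 = rng.uniform(-np.pi, np.pi, 200)
t2 = rng.uniform(-np.pi, np.pi, 200)
targets = fingertip(t1, t2)

# Self-model: the fingertip position is linear in these trig features,
# so minimizing squared prediction error recovers the body's geometry.
features = np.stack([np.cos(t1), np.sin(t1),
                     np.cos(t1 + t2), np.sin(t1 + t2)], axis=1)
coef, *_ = np.linalg.lstsq(features, targets, rcond=None)

est_L1 = coef[0, 0]        # learned length of the first link
est_L2 = coef[2, 0]        # learned length of the second link
print(round(est_L1, 3), round(est_L2, 3))   # ≈ 0.7 0.4
```

The point is that no external teacher is needed: "good" movements are just the ones whose outcomes the current self-model failed to predict, because they carry the most information about the body.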

People have some self-awareness and it starts with realizing that they have control over a geometric object of a somewhat flexible shape, the body. I agree with his brilliant observation that this creation of a self-model of the body is basically the first, simple enough part of self-awareness, and that the path towards full self-awareness and/or human-like thinking is just a matter of quantitative evolution from there.

More generally, I agree that all these self-referring self-things are going in the direction of the truly breathtaking, self-organizing, human-like thinking. Do you remember the poem "I wonder why. I wonder why I wonder. I wonder why I wonder why I wonder..."? This poem was written by a kid whose name was Richard Feynman. By looking at oneself and one's thinking, and even looking at one's looking at one's thinking, and so on, one can make the thinking increasingly abstract, structured, and meta-, which is a trend away from primitive, automatic, robotic thinking and behavior towards the most ingenious and creative human thinking.

Note that the article also mentions that consciousness is taboo – a C-word – in robotics and AI circles. It's said to be "fluffy", so people can't study it! Note that some very unwise people would love to impose the same blasphemy laws on basic physics words such as the "multiverse", as I discussed in the previous blog post.

Anki, a robot with a brain, connected to smartphones etc. Don't buy it for $144! It's for kids but there's no law saying that the kids can't be 85 years old, like the average TRF reader.

But the claim that everything related to consciousness is "fluffy" is just a prejudice. Perhaps it is an extrapolation of some work from the past but there's no guarantee that this unflattering description will hold for any future work on consciousness! In particular, his robotics research seems more well-defined and tangible than the work of most of his colleagues, to say the least, which suggests that the prejudice – the justification for the speech code – is invalid. So the people who are imposing bans on the AI and robotics fields – like the ban on any discussion of consciousness – are ultimately harmful 21st-century Inquisitors, too.

The interview hasn't attracted any reactions from the Quanta Magazine readers – building on my experience with the comments over there, I would say that it's because almost all of them are superficial cranks who prefer to attack science over thinking about deep problems such as consciousness.

