Alex Wissner-Gross' thoughts are probably too good to be true
Óscar Gómez asked me about a 12-minute November 2013 TED talk by Alex Wissner-Gross. In it, Wissner-Gross talks about an equation, or principle, that allegedly produces all intelligent behavior.
I know Alex very well. We would often talk about his great inventions at Harvard, where he was a PhD student. He has degrees in physics, computer science, nanoscience, electrical engineering, and similar fields, and is an ambitious inventor and entrepreneur. These days he is affiliated with some artificial-intelligence-related institutes at Harvard and MIT.
Some of his self-promotional paragraphs say that he is also trying to find a way "to make people aware of climate change". That's of course distasteful, Alex. What about looking for a way to "make people understand that the climate change activists are a dangerous, terrorist, pseudoscientific organization fighting a non-existent problem and trying to control the rest of mankind"?
He starts with a quote – one he endorses, and one that was used to criticize the early computer scientists, including Alan Turing – that asking whether a machine can be intelligent is as lame as asking whether a submarine can swim. I thought he would discuss the intrinsic ill-definedness of intelligence. What is intelligence? It depends; it has many layers, components, and manifestations, and every definition will end up giving slightly different results. And this ill-defined intelligence may exist to various degrees, too.
But Alex went in exactly the opposite direction. Intelligence is very well-defined, he says, and he can design a program (he claims to have actually designed one, called Entropica) that guarantees purely intelligent behavior in every situation. The equation it follows is\[
F = T\cdot \nabla S_\tau
\] Intelligence is the force \(F\) that acts, in the space of possible "actions", in the direction that tries to increase (or maximize) the number of options \(S_\tau\) (a quantity that Alex misleadingly interprets as entropy) that we will have at a future time \(\tau\). Here, \(T\) is an unspecified coefficient, much like \(\tau\) itself.
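To make the recipe concrete, here is a minimal toy sketch of my own – this is not Entropica's actual algorithm, just one naive way to read "maximize \(S_\tau\)": an agent on a bounded 1-D grid picks the move that leaves the largest number of distinct positions reachable within \(\tau\) steps.

```python
# Toy illustration (my own, not Entropica): an agent on a 1-D grid picks
# the move that maximizes the number of distinct states reachable within
# tau steps -- a crude stand-in for maximizing the "future freedom" S_tau.

def reachable_states(x, tau, lo=0, hi=10):
    """All positions reachable from x in tau steps of -1/0/+1 moves,
    clipped to [lo, hi] (a wall at each end)."""
    states = {x}
    for _ in range(tau):
        states = {min(hi, max(lo, s + d)) for s in states for d in (-1, 0, 1)}
    return states

def entropic_move(x, tau=3, lo=0, hi=10):
    """Choose the move whose successor state has the most reachable futures."""
    moves = [-1, 0, 1]
    successor = lambda d: min(hi, max(lo, x + d))
    return max(moves, key=lambda d: len(reachable_states(successor(d), tau, lo, hi)))

# An agent at a wall drifts toward the interior, where more futures stay open:
print(entropic_move(0))   # 1: moving right opens more future positions
print(entropic_move(10))  # -1: moving left, away from the right wall
print(entropic_move(5))   # -1 (a tie: every interior move leaves 7 futures open)
```

Note that even this trivial agent must simulate its own future exactly – the function `reachable_states` already contains a perfect model of the world, which is the point I return to below.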
He marvelously claims that when this program is connected – in a way he doesn't specify – to information about stock prices, it starts to trade stocks and produce growing profits without being told the "right goal". When it's connected to a small gadget that can balance on water, it starts balancing on water. It plays ping-pong and does lots of other things that look intelligent.
If true, it's amazing. But I just don't understand how the wonderful program could possibly work. In some sense, Alex, with his inventor's mind, is looking at things in a synthetic way. Quite generally, it seems to me that he is not decomposing the issues into their elementary building blocks at all.
First, he doesn't say what the timescale \(\tau\) is or what the coefficient \(T\) is. More importantly, the equation, which tries to resemble classical physics, is analogous to Newton's inverse-square law \(F=Gm_1m_2/r^2\). But that law would be meaningless without some \(F=ma\) – a law that allows the force to be identified with the acceleration (second time derivative) of a coordinate. Concerning this "missing \(F=ma\) problem", let me assume that Alex's intelligent behavior does include such a law. Perhaps the force is linked to the first time derivative of the coordinates instead? It had better be linked to something.
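If one guesses such a closure – and this is purely my guess, not anything Alex states – the simplest option linking the force to a first time derivative is overdamped dynamics:\[
\gamma\,\dot{x} = F = T\,\nabla S_\tau
\quad\Rightarrow\quad
\dot{x} = \frac{T}{\gamma}\,\nabla_x S_\tau(x),
\] so the state would simply drift up the gradient of the "future freedom", with the equally unspecified ratio \(T/\gamma\) setting the rate of the drift.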
That's surely not where my problems end. As the letter \(S\) and his wording indicate, he wants to interpret the "future freedom" or "future number of options" as some kind of entropy. But he clearly doesn't mean the full entropy – which is dominated by the entropy stored in the atoms' chaotic thermal motion, even when we talk about the rather intelligent life on Earth. (A way to maximize the total future entropy is to burn lots of coal, which may be intelligent but isn't automatically so, because true intelligence has other, finer dimensions.) He must mean only some "part of the entropy" carried by the relevant macroscopic degrees of freedom. But how does one precisely isolate them? Even if I knew the exact description of a physical system (and most intelligent agents can't know it), it seems impossible to separate the relevant degrees of freedom.
To determine the right "intelligent behavior", the gadget must know how many "future options" different decisions made right now will produce. But calculating \(S_\tau\) at some future time \(\tau\) as a function of a present decision is an example of a prediction. And animals or machines simply cannot make predictions without intelligence, can they? So the definition is circular, in a way. I think that a major part of intelligence is precisely what allows people (or others) to invent the rules for predicting the future; much of their intelligent behavior follows from that. But here this aspect is treated as a "trivial input" that has to be computed externally.
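The circularity becomes obvious the moment one tries to write the recipe down: the entropic rule only runs if we hand it a forward model of the world, because estimating \(S_\tau\) *is* prediction. A sketch, with the model as an explicit (hypothetical) parameter:

```python
# The entropic recipe parameterized by a forward model: all the hard work
# hides inside `model`, which must enumerate the successors of a state.
# Without it, the "intelligent" rule cannot even start.

def future_options(state, tau, model):
    """Count distinct states reachable in tau steps under a given model."""
    frontier = {state}
    for _ in range(tau):
        frontier = {s2 for s in frontier for s2 in model(s)}
    return len(frontier)

def entropic_choice(state, actions, step, tau, model):
    """Pick the action whose outcome leaves the most modeled futures open."""
    return max(actions, key=lambda a: future_options(step(state, a), tau, model))

# Tiny demo: a counter that saturates at 5; states near the ceiling have
# fewer futures, so the agent backs away from it.
model = lambda s: {min(s + 1, 5), s}          # exact dynamics, known to us
step = lambda s, a: min(max(s + a, 0), 5)
print(entropic_choice(4, [-1, 0, 1], step, tau=3, model=model))  # -1
```

The demo "works" only because we supplied `model` as an exact oracle of the dynamics – i.e. because we solved the prediction problem ourselves before the entropic principle was ever invoked.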
But the only way to correctly predict the future is to use some form of the laws of physics or Nature. It's crazy to think that all intelligent subjects in the world are automatically equipped with the full knowledge of string theory. They really know almost nothing about the future behavior of physical systems and its relationship to past facts; they have to learn how to predict such things (except for things that are "hardwired into their brains", but those hardwired things are arguably straightforward and may easily be emulated by computers; only the intelligence that one "adds" during one's life is mysterious).
So most of these things are undefined in some way, but the concept of intelligence as the "ability/desire to maximize future freedom" is an attractive meme. I largely agree with it – I think that more intelligent people care about their freedom (the multiplicity of options) more than less intelligent people do, for example. But I don't believe it is a generally valid formula for intelligent behavior. And I think that in many cases where it is fine, it is vacuous.
By the previous sentences, I mean that in many cases, intelligent behavior is behavior that reduces the number of options, or the uncertainty about the future. Intelligence is needed for a NASA space probe to move to the right place, the one where we want to have it. When we play chess, we want to reduce the freedom of the opponent. These are examples of how the "opposite law" sometimes seems more true than Alex's original law.
But I am really confused about how his principle may produce clever behavior, e.g. in the case of stock trading. What does it mean to maximize the future freedom? Even animals want to survive. Survival improves the future freedom because when you die at time \(\tau\), your options for times greater than \(\tau\) are reduced to one: lie in the grave, or be dispersed over India, or whatever your preferred funeral format is. So yes, the instinct to survive is a simple example of the desire to increase the number of future options.
Being rich also increases your freedom. You may choose any expensive hotel, send rockets to outer space, and so on; you may surely add 500 other things that you could do if you had a billion dollars. So it's sort of trivial that people want to have enough money when they trade stocks. But how does that tell the program to buy or sell stocks? The program must be equipped with the function \(S_\tau\) that calculates the freedom (or, well, money) at time \(\tau\), mustn't it? But that's probably impossible without predicting the future motion of the stocks. However, predicting the motion of stocks is the "bulk" of the problem we wanted to solve in the first place. I can't see how Alex's equation has helped, or may help, to solve that problem. I would probably need to see more of Entropica's inner guts – but I am afraid that such a view into the interior would reveal that it doesn't really work as advertised.
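One can make the trading version of the complaint quantitative with a toy of my own (again, not Entropica, and the "random walk" price process below is a pure assumption): if "future freedom" is measured by the spread of possible wealth at time \(\tau\), the entropic trader's answer depends entirely on the price model it is handed.

```python
# If "freedom" at time tau is the spread of possible future wealth, the
# entropic trader needs a price model to compute it. Here the model is an
# assumed +-1% random walk -- but guessing that model correctly is the
# whole open problem of stock trading.
import random

def wealth_spread(fraction_in_stock, tau=30, trials=2000, seed=0):
    """Std. dev. of final wealth when `fraction_in_stock` of a unit
    portfolio rides a +-1% random-walk price for tau steps (toy model)."""
    rng = random.Random(seed)
    outcomes = []
    for _ in range(trials):
        wealth = 1.0
        for _ in range(tau):
            move = rng.choice([-0.01, 0.01])
            wealth *= 1.0 + fraction_in_stock * move
        outcomes.append(wealth)
    mean = sum(outcomes) / trials
    return (sum((w - mean) ** 2 for w in outcomes) / trials) ** 0.5

# Under this assumed model, holding more stock always widens the spread of
# futures, so the naive entropic rule says "buy everything" -- hardly a
# trading strategy; the real content sits in the price model itself.
print(wealth_spread(1.0) > wealth_spread(0.5) > wealth_spread(0.0))  # True
```

So the "maximize future options" rule either degenerates into a trivial prescription (maximize exposure, or maximize money) or silently delegates everything to the forecasting model – which is where the intelligence actually resides.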
Well, the very notion that the "most intelligent behavior" is objectively calculable sounds implausible to me, too. As I said, intelligence has many aspects and dimensions. But it also has many uses. I don't believe that science can really tell us what we should do. The issue is related to the claim that science cannot answer moral questions.
For the reasons above and others, I am skeptical. Despite his physics PhD, Alex's whole way of thinking feels sort of incompatible with the theoretical physicist's understanding of the world as a consequence of impersonal laws of Nature – a perspective that seemingly unavoidably renders most of the concepts interesting to humans, such as intelligence, ill-defined. Still, I find semi-mathematical principles like this one, applied to assorted philosophically appealing human concepts (intelligence, beauty, diversity, whatever), intriguing, because I am sometimes worried that there could be an entirely different, yet scientifically robust, way to classify all events in our Universe. And even if the world is as sensible as I expect and none of these "laws" can be fundamentally true, I am still interested in possible applications of such paradigms, because some of them could profoundly change the way we live.