## Thursday, September 09, 2021

### Worshiping of complexity is junk physics, junk computer science, junk neuroscience

...and junk climate science, not to mention other disciplines of junk science...

After I read this Quanta Magazine article, How Computationally Complex Is a Single Neuron?, discussing whether individual neurons are "complex", I realized that there is a whole industry of antiscience that has made it into numerous scientific disciplines, including fundamental physics, computer science, and neuroscience. The article compares individual neurons in real biological brains on one side with the artificial neurons used in simulations that are meant to mimic natural neural networks on the other side. The researchers found that they needed at least 5-8 layers of artificial neurons to fit the behavior of a single biological neuron, and this kind of result is claimed to be an insight about neurons and brains, and progress. They indicate that they want to be paid as neuroscientists for X more years in order to find out whether the right number is 5 or 6 or 7 or 8 or something like that.
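To make concrete what this kind of "fitting" amounts to, here is a minimal numpy sketch: a deliberately made-up `toy_neuron` function (not a biophysical model, just an illustrative assumption) is fit by a generic one-hidden-layer network trained with plain gradient descent. Every name and parameter below is invented for illustration; the only point is that a deep net is a generic function-fitting device.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a neuron's input-output map (purely illustrative,
# NOT a biophysical model): a saturating nonlinearity plus a wiggle.
def toy_neuron(x):
    return np.tanh(2.0 * x) + 0.3 * np.sin(5.0 * x)

X = rng.uniform(-1, 1, size=(512, 1))
y = toy_neuron(X)

# One-hidden-layer network fit by plain gradient descent -- the generic
# "function fitting device" the quoted comment refers to.
H = 32
W1 = rng.normal(0, 1, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 1, (H, 1)); b2 = np.zeros(1)
lr = 0.05

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

_, pred0 = forward(X)
loss0 = np.mean((pred0 - y) ** 2)   # mean squared error before training

for _ in range(2000):
    h, pred = forward(X)
    err = 2 * (pred - y) / len(X)          # dLoss/dpred
    gW2 = h.T @ err; gb2 = err.sum(0)
    dh = (err @ W2.T) * (1 - h ** 2)       # backprop through tanh
    gW1 = X.T @ dh; gb1 = dh.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, pred = forward(X)
loss = np.mean((pred - y) ** 2)
print(loss0, loss)  # the fit improves; nothing about neurons was learned
```

The loss goes down, but the same code would "fit" any sufficiently smooth curve; the number of layers or units needed says something about the Ansatz, not about neurons.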

Well, it's not progress. It's not cool. And neither is a simple algorithm that produces an irregular, chaotic pattern; nor the fact that we don't have a polynomially fast or otherwise effective algorithm for a problem X or Y. And it's bad when you use a complicated computer simulation (whose behavior you can't replicate in your brain, not even roughly) to answer a simple question about the climate. All these situations have something in common – and I called it the worshiping of complexity. These people think that their worshiping of complexity is a sign of good science or scientific progress; in reality, it is really a sign of bad science, failing science, or the absence of any scientific progress.

What the Quanta Magazine describes is nicely captured in a comment by Starigite that only has one "like", namely mine:

> Artificial neural networks including deep nets are general purpose function fitting devices. Fitting such a function to the input-output relation of a biological neuron, and then examining the "complexity" of the fit as defined by a particular parameter of ill-understood status (number of layers in the DN), teaches us practically nothing about how the capabilities of a neuron lead to the capabilities of brains, as this article would imply.
>
> Even a single ion channel is subject to multiple inputs (ligands, transmembrane voltages, including time dependence) and shows complex, stochastic behavior. Fitting such an input-output relationship would almost certainly reveal that the ion channel is "computationally complex" as defined by this article. What does one learn from that? One could play the same game with almost any complicated biological or non-biological system, and it would simply amount to a characterization of the particular function fit, provided by a particular functional form.
>
> This is ironic science.
Right. This commenter basically elaborated upon the comment (or slogan) by Richard Silliker, "All complexities are simplicities in waiting."

What are these people doing? They want, or should want, to understand how a single neuron works. They decide that it should be thought of as "equivalent" to an artificial neural network belonging to a particular Ansatz. And then they determine some parameter (the number of layers in their Ansatz artificial neural network) that seems OK enough to them (and even this criterion is muddy: things may look OK to you simply because you didn't check them carefully enough). Has some real neuroscience been done?

The answer is obviously No. A neuron is a cell whose diameter is between 4 and 100 microns (0.004-0.1 mm). It is pretty large and messy, and every person with any scientific intuition must find it unsurprising that the behavior of such a cell is not exactly equivalent to a simple transistor in our chips or something like that (not even ten transistors). There is no reason why they should be equivalent. A transistor was designed by humans with the purpose of doing a very specific operation according to strict rules that agree with the mechanical arithmetic operations you can do on paper; and it is supposed to perform a discrete operation. A neuron hasn't been designed to behave in a mathematically simple way, and it is an analog ingredient, not a "discrete" one. It is obviously a collection of trillions or quadrillions of atoms that do rather messy things, like other collections of trillions of atoms; they were only selected to be produced by some other cells and to be a part of an engine that allows some DNA code to spread and to capture and use the resources that allow the DNA to spread. It wasn't designed by an engineer who wanted to know "what it precisely does"; and it was never selected for any "mathematical simplicity of its behavior". So it is obvious that a large, structured, irregular collection of atoms (a neuron) just doesn't behave in a mathematically simple way.

OK, when they try to replace one real neuron by a collection of artificial neurons, they are just looking at a particular hypothesis, a model for thinking about the real neuron. But in science, having a hypothesis is not a victory yet. Hypotheses may be right or wrong, insightful or stupid, successful or unsuccessful. And if they can't get the right behavior of the neuron even when they greatly increase the "complexity" of their "equivalent artificial construction", it strongly suggests that their Ansatz or model is not very good. It seems likely that it is fundamentally wrong, that they are looking in a wrong direction. It is really stupid to "impose" on Mother Nature the idea that a biological cell is equivalent to a bunch of discrete transistors. It is not equivalent. They are clearly different things, and there is no reason why they should be "equivalent". A computer program may be a "simulation", but a "simulation" is not the same thing as a (good) scientific theory about the real world object! Whether a simulation is realistic depends primarily on the validity or usability of the "theory" or "Ansatz" that the simulation was built upon. It is the theory that distinguishes correct science from wrong science; and they completely ignore this (main) layer of science, namely whether the Ansatz is a good one.

The way they fit the complex behavior of the real world cell may be conceptually and deeply wrong, much like if you tried to replace a qubit in a quantum computer with a collection of classical bits (or, more importantly and more generally, if you tried to replace any quantum theory by a classical one). No bunch of classical bits is a good replacement for a quantum bit! If you imagine a collection of classical bits when you are exposed to a qubit, you are clearly doing something very stupid and you are showing that you are not smart enough to do modern physics, because your whole way of thinking is completely wrong and stuck in the 17th century dogmas (when physics was conceptually simpler for a regular person, or an animal). To some extent, the same is almost certainly true when someone tries to replace a real world neuron by a bunch of "sterilized artificial ones" (although I don't claim that something as vital and universal as quantum mechanics is hiding behind the behavior of real world neurons). It is simply not a good way to proceed.
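A ten-line numpy sketch shows why a classical surrogate fails: applying the Hadamard gate twice returns the state to $$\ket 0$$ with certainty thanks to interference, while the closest classical imitation, a 50/50 stochastic flip, stays random forever. (Purely illustrative; the stochastic matrix is one natural classical stand-in, not the only one.)

```python
import numpy as np

# A qubit state is a vector of amplitudes; gates are unitary matrices.
ket0 = np.array([1.0, 0.0])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate

after_one = H @ ket0          # equal-weight superposition of 0 and 1
after_two = H @ (H @ ket0)    # interference restores |0> exactly (H @ H = I)

# Classical surrogate: the best a stochastic bit can do is a 50/50
# transition matrix -- applying it twice stays random, no interference.
S = np.array([[0.5, 0.5], [0.5, 0.5]])
p0 = np.array([1.0, 0.0])
classical_twice = S @ (S @ p0)

print(np.abs(after_two) ** 2)   # essentially [1, 0]: outcome 0 with certainty
print(classical_twice)          # [0.5, 0.5]: still a coin flip
```

The amplitudes can cancel; the probabilities of the classical model cannot. That sign structure is exactly what no pile of classical bits reproduces.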

The fits are complicated and the artificial model may require many neurons and weights. But it is just wrong to call the largeness of these data "an example of complexity". A neuron is a large cell whose behavior is neither mathematically trivial nor trivially calculable. In that sense, it is "complex". But this "complexity" is completely trivial and expected. Everything that is this large is "complex" in this sense of the word. And when you find out that the behavior of a bound state of trillions of atoms is "complex", it is not progress.

Instead, in science, it is progress when you master the behavior of something and understand that it is simple when looked at from the right perspective, when described by a law that started with a clever and creative Ansatz that just turned out to be right.

In science, the eureka moment arrives when things didn't make much sense before, and they do make sense afterwards! Ask Archimedes or anyone else who has actually understood something.

But these people seem to brag because the thing that they study, a neuron, still looks complex to them! As long as a thing looks complex, it means that it hasn't been understood yet. The real understanding may arrive or it may not. Maybe there is nothing deep, no universal law to be learned from the research of the behavior of individual neurons. But it is surely the case that no such law that could be celebrated (or justify awards or grants) has been found. And when two people are looking at the neuron and its behavior looks simple to one of them and complex to the other one, it probably means that the first person is smarter and has understood the actual science better than the second person.

The whole logic of science and of "scientific success" has been turned upside down! Science has made an advance when things start to make sense, when the behavior may be explained, reproduced, or predicted from some laws that are rigid enough for scientists to be sufficiently certain about them, their importance, and their longevity. When things look messy and full of exceptions and breakdowns of a complex model, it means that the good science hasn't been done yet, or hasn't been done by these particular people. A messy program that fails to give the precise result is likely to be replaced by a totally different one when better researchers decide to do a similar "research" again.

Almost equivalent comments apply to numerous situations outside neuroscience. Thus people like Stephen Wolfram have been worshiping things like "Rule 30" for cellular automata. When some bits are connected with neighbors and asked to evolve according to deterministic rules, some rules may create patterns that evolve neither to "full white", nor to "full black", nor to "something periodic". For Rule 30, one gets complex patterns instead, with some quasi-fractal structures in them, and so on. That's great, but the three-body problem (3 objects in empty space that only act on the other two by the gravitational force) also leads to an unsolvable, chaotic behavior (even in classical physics). This "complex" situation is clearly no exception or some "precious gem" that has been found; it is the nearly universal situation. Instead, it is the regular and "integrable" or "solvable" problems that one needs to learn as a physicist (or a scientist) because, while they are rare and almost impossible to find precisely realized in Nature, they are extremely important in parameterizing the "precise behavior of complex objects". Using the things that may be perfectly understood, one gradually organizes an increasing fraction of the phenomena that were originally completely misunderstood (and that looked "complex")! When science makes progress, things that may be called "complex" are losing steam, not gaining steam!

In another, valid Quanta Magazine article, it is reported that the term "junk DNA" is evaporating because some other implications (or even roles) of these previously misunderstood, and therefore seemingly useless, parts of the DNA molecule are increasingly understood. Here, "junk DNA" carries a similar flavor as "complex systems", and indeed, the more we understand science, the less room there is for this foggy flavor.
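For completeness, Rule 30 is short enough to write down in full; the "complexity" being celebrated is the output of these few lines (a minimal sketch with periodic boundary conditions, printed as an ASCII triangle):

```python
# Rule 30 cellular automaton: cell i's next value is the bit of the
# number 30 indexed by the 3-bit neighborhood (left, self, right).
RULE = 30

def step(cells):
    n = len(cells)
    return [
        (RULE >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1)
                  | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Start from a single black cell and evolve a few rows.
width, rows = 31, 15
cells = [0] * width
cells[width // 2] = 1
for _ in range(rows):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```

The rule itself carries essentially no information; the irregular triangle it prints is the generic fate of a nonlinear system, not a precious discovery.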

Complex systems that have been sufficiently understood in a scientific way are results of some simple rules; or they are solutions that are in some quantitative way close to solutions of integrable systems. The very statement that "some system in Nature [e.g. a separate neuron] is complex" is basically 100% vacuous and 100% worthless (and it also ends up being wrong at the end – and it is the precise way in which this statement may be shown to be wrong that is the important and real scientific development). So yes, I think that Wolfram's way of thinking about the "fundamental laws of Nature" isn't wrong merely because of some particular details. His whole way of thinking about the world and science is completely wrong and upside down. He celebrates when things are not understood.

I would like to argue that the religious cult among many computer scientists who order others to believe that $$P\neq NP$$ must be true, even though there is no evidence either way, is also a symptom of this worshiping of the upside down science, the worshiping of complexity. Why do these people find it important to say that some fast enough algorithms can't exist and/or that $$P\neq NP$$, even though the other logical possibility, that fast enough algorithms may be found and/or $$P=NP$$, is still perfectly possible? Again, it is because they have chosen "complexity", i.e. the lack of understanding, to be a religious value by itself. They love to say that things must be impossible to understand clearly, analyze quickly etc. It is pleasant for their muddy thinking. It is great for their egos because they are not capable of doing the uncontroversially good computer science – which clearly means to successfully find and understand new, universal, and fast algorithms, not to promote (and demagogically "justify") the faith that they can't exist.
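To state what the open question actually is: for an NP-complete problem such as subset sum, verifying a proposed certificate takes linear time, while the obvious search scans exponentially many subsets; whether a fundamentally fast general search exists is exactly what nobody has settled. A minimal illustration (the instance below is an arbitrary made-up example):

```python
from itertools import combinations

# Subset sum: given numbers and a target, is there a subset summing to it?
def verify(nums, subset, target):
    # Polynomial-time check of a proposed certificate.
    return all(x in nums for x in subset) and sum(subset) == target

def brute_force(nums, target):
    # Exponential search: up to 2^n candidate subsets.
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                return list(combo)
    return None

nums = [3, 34, 4, 12, 5, 2]
cert = brute_force(nums, 9)        # finds e.g. [4, 5]
print(cert, verify(nums, cert, 9))
```

The asymmetry between `verify` and `brute_force` is the whole content of the question; declaring that the asymmetry must be fundamental is a belief, not a theorem.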

A similar sentiment, the worshiping of the lack of progress and of the surviving "complexity", affects many other scientific disciplines. In climate science, it is a big part of the utterly antiscientific belief system called "climate change" or "global warming". Computer models are often used as powerful arguments in favor of the ludicrous, quasi-religious statement that a global problem with the climate is around the corner. People who have no talent to do science – and whose scientific skills end with the ability to press "enter", which is needed to run a stupidly designed computer program – get some results of the computer simulations. The aspects of the result that actually matter don't depend on the complicated calculations (almost) at all. They have been put in. They had to be put in because these dishonest scumbags are being funded for being useful idiots who produce lies of the type "a CO2 doubling heats the atmosphere by 4 degrees Celsius" and whose role is to make these totally wrong statements look "scientific" because lots of CPU power was wasted. On top of that, the long-distance and long-term behavior of the simulation is separated from the local and immediate laws that were put in, basically due to Ken Wilson's separation of scales.
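The point that the headline number is an input, not an output, fits on the back of an envelope. The standard logarithmic fit for CO2 radiative forcing, with the commonly quoted coefficient of roughly 5.35 W/m², gives about 3.7 W/m² per doubling; the resulting warming then hinges entirely on the assumed feedback parameter λ, not on CPU time (the λ values below are illustrative assumptions, not endorsed numbers):

```python
import math

# Back-of-envelope climate sensitivity. The forcing of extra CO2 grows
# logarithmically with concentration; 5.35 W/m^2 is the standard fit.
ALPHA = 5.35

def forcing(c_ratio):
    return ALPHA * math.log(c_ratio)   # W/m^2

F2x = forcing(2.0)   # forcing of one CO2 doubling, ~3.7 W/m^2

# Warming = lambda * forcing. The choice of lambda (the feedback
# parameter) is what decides the headline number.
lam_no_feedback = 0.30   # K per W/m^2, rough Planck response alone
lam_high = 1.08          # K per W/m^2, what a "4 degrees" claim amounts to

print(F2x * lam_no_feedback)  # ~1.1 K per doubling
print(F2x * lam_high)         # ~4.0 K per doubling
```

Whatever one thinks of either λ, the arithmetic shows that the dramatic output was selected by the assumed feedback, a single number that no amount of simulation launders into a measurement.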

But the funny thing about climatology is that (like in all the previous disciplines of science) there are people who are actually talented, competent, educated, hard-working, and sometimes lucky, and they have mastered some body of scientific insights. These people may produce correct qualitative answers, or accurate enough quantitative answers, to scientific questions about the climate even without a computer. They can do it because they have actually understood some important laws that actually hold in the climate; they have mastered many methods and approximation schemes that allow one to explain or predict the observations of the climate! They simply became good theorists, which is much more than "enter pressers". So one such Richard-Lindzen-like person can do a better job with a pen and paper than 2000 lousy "enter pressers" who love to claim that there will always be some complexity, that the complexity surely supports their religion (although the implication is non-existent), and that wasting lots of CPU power and electricity is surely enough to neutralize the fact that they totally suck as thinkers, scientists (and also human beings).

Sorry, Gr@tins, but you can't compensate for being Gr@tins or for being mediocre scientists by having powerful computers donated to you by some other Gr@tins. It's because you just don't know how to use the computers properly to extract valid answers to scientific questions (let alone deep universal principles about Nature as a whole). You are like an ape that randomly got into a server room of a Silicon Valley company. It's great that this ape can press things but it's just not enough to do good science. Some other people are capable of doing much better science without any computers whatsoever.

So I think that in all these disciplines and many others, the "worshipers of complexity as a scientific advance" are people who just don't have the skills to be genuine, let alone good, scientists (they may have had them in the past [thanks, Jason, for catching a typo] but they have lost them). Their worshiping of the complexity is a lame excuse for their inability to find new laws (or even to understand laws found by others). They want to turn the situation "the behavior still seems complex and doesn't make much sense" into a permanent state of affairs, and in fact, into a reason to celebrate – because they feel threatened by any actual progress of science (which finds specific laws and brings unambiguous and often simple answers to many questions): every real advance only helps to highlight the fact that they have nothing to do with good science, let alone the cutting edge of the scientific disciplines that they pretend to study.