Sean Carroll has the remarkable ability to give a wrong answer to every single important question in thermodynamics and statistical physics. He has argued that the arrow of time must have a dynamical origin linked to cosmology - instead of originating in the statistical properties of purely local physical systems whose probabilities also have to respect the logical arrow of time.
Instead of repeating my crisp and totally indisputable proofs that this is not possible, I will refer you to an authority because for many people, it's unfortunately more important than rational thinking. In a 2005 paper, Robert M. Wald, the author of a famous book on general relativity, proved that "it is not plausible that these special initial conditions have a dynamical origin."
In the best case, such explanations end up being examples of circular reasoning.
But incredibly enough, in a new text, Sean Carroll de facto argues that the perpetual-motion machine of the second kind is possible:
An experiment reported in Nature was, we are told, "using information to extract energy from a heat bath." Holy crap. ;-)
What is this stuff all about?
People have always been dreaming about devices that could do mechanical work for free or almost for free. Such hypothetical engines that would not need any muscles consuming food or fossil fuels became known as perpetual-motion machines.
The simplest types of such machines would be various mechanical feedback loops: wheels and gears are connected to others, and they help to amplify their work, and so on. Or water is forced to run in loops. The people who are still searching for such machines are known as the crackpots of the first kind because they're looking for the perpetual-motion machines of the first kind.
Such machines are impossible simply because the total energy is conserved in the absence of cosmological effects. This conservation law, known as the first law of thermodynamics in the context of heat engines, contradicts the very dream because the perpetual-motion machines of the first kind always assume that the energy is produced out of nothing - that it is not conserved.
Fine. So can we at least get close to that dream?
The plan B for the dreamers about "work for free" is the perpetual-motion machine of the second kind. This hypothetical gadget could do work for free and the only price you would pay is that an object would be getting cooler: the heat would be stolen out of this object and used to do work. In fact, according to the crackpots of the IPCC type, such a machine would be even nicer than the perpetual-motion machine of the first kind because it would do work for free and it would fight against global warming at the same time. :-)
(We have to forget about the freezing Central European weather today - and the quickly melting snow.)
The perpetual-motion machine of the second kind is impossible because heat always goes from the warmer object to the cooler object and never in the opposite direction: that's the simplest manifestation of the second law of thermodynamics which says that the total entropy never goes down (by macroscopic amounts). So if you want to increase the temperature differences, you actually have to invest some energy or work of your own.
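A two-line arithmetic check makes the sign of the entropy change explicit. The temperatures and the amount of heat below are assumed illustrative values, not anything from a specific experiment:

```python
# Heat dQ flows from a hot body at T_hot to a cold body at T_cold
# (all values are assumed, illustrative numbers).
T_hot, T_cold, dQ = 400.0, 300.0, 1.0   # kelvins, kelvins, joules

dS_hot = -dQ / T_hot     # the hot body loses a little entropy
dS_cold = +dQ / T_cold   # the cold body gains more entropy than that
dS_total = dS_hot + dS_cold

print(dS_total)       # about +0.000833 J/K: positive, as the second law demands
print(dS_total > 0)   # True for any T_hot > T_cold
```

Because "1/T_cold > 1/T_hot" whenever "T_hot > T_cold", the total entropy change is positive for any amount of transferred heat; reversing the flow would make it negative, which is exactly what the second law forbids.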
While the quantity that is conserved - because of the time-translational symmetry of the laws of physics, as demonstrated by Emmy Noether - is the energy "E", one may also define some kind of "useful energy" or the "total work that an object is capable of doing" which is smaller than "E". This quantity is called "free energy" - and the adjective "free" is a technical term, unrelated to money or other issues that laymen may be interested in. ;-)
Chemists and physicists differ in their "favorite" free energy. But I am almost neutral about this controversy, so let me tell you about both types of free energy:
Physics: Helmholtz free energy

A = F = E - TS

Chemistry: Gibbs free energy

G = E + pV - TS

The difference is in the treatment of the variables p,V. Of course, physicists are a few IQ points smarter than the chemists so they don't add system-dependent and geometry-dependent terms such as pV; the quantities "E,T,S" are purely thermodynamical and don't require any geometric interpretation. But that's another subtlety. The important thing shared by both "free energies" is the term "-TS". If the entropy of an object gets higher (e.g. because the non-uniformities of the temperature decrease), its ability to do useful work decreases. This fact is hiding in the "-TS" term.
Fine. So the perpetual-motion machines of the second kind are impossible for a simple reason. The ability to do useful work - essentially "E - TS" - can never increase because "E" is conserved and "S" never decreases (and there is a negative coefficient in front of "S" because the temperature is positive). It's as simple as that.
Now, the increase of the entropy is a statistical proposition. The total entropy defined in a particular way may occasionally decrease by "dS" but the probability of such a process is proportional to "exp(-dS/k)" which becomes de facto zero for a macroscopic decrease of entropy (which is of order one in the J/K units) because "k" is nearly infinitesimal in these units.
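To see just how small "exp(-dS/k)" is for a macroscopic entropy decrease, one can plug in the numbers; the exponent is so enormous that the probability underflows to exactly zero even in double-precision arithmetic:

```python
import math

k = 1.380649e-23   # Boltzmann's constant in J/K
dS = 1.0           # a "macroscopic" entropy decrease, of order one in J/K units

exponent = dS / k            # about 7.2e22
prob = math.exp(-exponent)   # exp(-7.2e22) underflows to exactly 0.0

print(exponent)   # roughly 7.2e22
print(prob)       # 0.0
```

A double-precision float already underflows to zero for exponents below roughly -745, so a fluctuation with "dS/k" of order 10^23 is "de facto zero" in the strongest possible numerical sense.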
James Clerk Maxwell wanted to correctly show that the second law of thermodynamics - the increase of entropy - is just a statistical law that can be occasionally violated by small amounts. So he invented a scenario that Lord Kelvin named Maxwell's demon in his 1874 article in Nature. Note that a demon is an inhuman, spiritual entity that can be controlled by physical beings, someone or something like a fallen angel. Because the humans who want to abuse spirits are usually evil, the demons themselves are also evil. Daemons are examples of such demons because they work for the evil operating system Unix as background processes. ;-)
Who is Maxwell's demon? Well, he is the devil in the picture above. What is his job? His job is to open and close the door separating two rooms so that the warmer molecules will end up in the left room while the cooler molecules will end up in the right room.
That was how Maxwell designed it originally. You may simplify the picture. Instead, you may ask the demon to open and close the door in such a way that all the molecules end up on the left side. At any rate, the demon will create an asymmetry in the picture - either an excess of temperature or an excess of pressure in the left room. And this asymmetry may be used to do useful work.
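The sorting step itself is trivial to mimic on a computer - if we grant the demon its information for free. Here is a minimal Monte Carlo sketch (Gaussian speeds in arbitrary units are an assumed, idealized stand-in for a real Maxwell-Boltzmann gas):

```python
import random

random.seed(0)
# Speeds of 10,000 gas molecules: magnitudes of a Gaussian velocity
# component, in arbitrary units (an idealized toy gas).
speeds = [abs(random.gauss(0.0, 1.0)) for _ in range(10_000)]
median = sorted(speeds)[len(speeds) // 2]

# A costless, idealized demon: faster-than-median molecules are let into
# the left room, the slower ones into the right room.
left = [v for v in speeds if v > median]
right = [v for v in speeds if v <= median]

mean_left = sum(left) / len(left)
mean_right = sum(right) / len(right)
print(mean_left > mean_right)   # True: a temperature asymmetry has appeared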
But can this gadget actually exist (and operate) in the real world?
Maxwell himself pretty much knew that the answer was No. It had to be No because we have pretty much proved that the free energy couldn't systematically increase - by the one-line argument above. Even before the era of statistical physics, thermodynamics was widely believed and its laws were considered generally valid principles. So it was clear from the beginning that the total work needed to find the information, to open the door, and to close the door will actually destroy the profits of any company that employs Maxwell's demons, at least after a macroscopic period of time. ;-)
Leó Szilárd with his colleague in 1946
However, the original demon due to Maxwell was pretty complicated and one would have to study some difficult engineering to quantify how the energy is actually lost. Leó Szilárd, a great Jewish Hungarian physicist, simplified Maxwell's demon and invented something that became known as Szilárd's engine. He has also found some more specific facts about his simpler gadget.
Remarkably enough, you may read the English translation of his full 1929 article (from a German article in Zeitschrift für Physik) about his version of Maxwell's demon:
On the decrease of entropy in a thermodynamic system by the intervention of intelligent beings (full scanned PDF)

See also Charles Bennett's 1987 review of demons in Scientific American, published before the journal was taken over by crooks, morons, and ideologues.
What is Szilárd's engine? Well, it is a small box with two rooms - just like Maxwell's demon - but there is only one molecule inside the engine. The engine learns where the particle is located - left or right - and inserts a piston that can do useful work as the molecule gradually increases its living room from one half of the volume to the full volume again.
(By the way, at the beginning of his article, Carroll is totally wrong about one more claim. He says that Szilárd's engine is "using information to extract energy from a heat bath." He is proud of the "heat bath", too. But the very point of Szilárd's simplified setup is that the box is not a heat bath. A heat bath is defined as a "system whose heat capacity is so large that when it is in thermal contact with some other system of interest its temperature remains effectively constant." The condition is clearly not obeyed for one molecule.)
The useful energy you could get if you had no other expenses - related to the installation of the piston and to the gathering of the information - would be "kT.ln(2)". It's easy to see why.
The entropy is the microscopic information that has been permanently lost.

That's a more accurate version of Carroll's statement that the entropy is the information that we don't know. However, we have "learned" one bit of information about the position of the particle, so the measure of our ignorance "S" dropped by "k.ln(2)" ("k" is Boltzmann's constant), which is why the free energy "F=E-TS" increased by "kT.ln(2)". If you wonder how a strange numerical constant "ln(2)" may appear from computing work, note that "ln(2) = integral (from 1 to 2) dx/x".
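Both numbers are easy to verify. The snippet below checks the integral with a crude midpoint rule and evaluates "kT.ln(2)" at an assumed room temperature of 300 K:

```python
import math

# Check ln(2) = integral from 1 to 2 of dx/x, with a simple midpoint rule.
n = 100_000
h = 1.0 / n
integral = sum(h / (1.0 + (i + 0.5) * h) for i in range(n))

# Maximum work from one bit of positional information at room temperature
# (T = 300 K is an assumed, illustrative value).
k, T = 1.380649e-23, 300.0
work = k * T * math.log(2)

print(abs(integral - math.log(2)))   # tiny: the integral really equals ln(2)
print(work)                          # about 2.87e-21 joules
```

The "2.87e-21" joules per cycle is the isothermal-expansion work "integral of p dV = kT integral (from V/2 to V) dV/V = kT.ln(2)" for the one-molecule gas - and it also shows why you would need an astronomical number of flawless cycles to power anything macroscopic.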
But can you actually do something like that? Can you get the energy by monitoring the information about the molecules? If you could do it systematically, it wouldn't hurt that "kT.ln(2)" is tiny. You could repeat the process many times.
The answer is, of course, that you can never get a macroscopic amount of useful energy in this way. It's the whole point of Szilárd's paper. Szilárd actually does study the question that Carroll totally neglects - what is the entropy increase of the rest of the system that gathers the information and/or manipulates the piston. And the answer is that the total free energy never goes up. My definition of the entropy - the information that is permanently lost - makes it easier for you to understand that you can't ever get it back and reduce the entropy in this way. ;-)
The best scenario is that you don't lose any useful energy - and even this scenario is just a limit that can never be achieved in the real world although you may get as close to it as you wish. If you don't lose useful energy, it just means that the energy is being transferred from one part of the system (the small box with one molecule and a piston) to other parts of the system (the measuring gadgets), or in the opposite direction.
But the whole gadget including all the measurement and piston devices that it requires will never be able to do work. Be sure that you can never construct a useful version of Maxwell's demon. The statement that there is some transfer of "kT.ln(2)" of useful energy is totally trivial - it just reduces to the observation that one bit carries "ln(2)" of information - because the mathematically and physically natural base of logarithms is "e=2.718" rather than "2". This single bit of information may be interpreted as a change of entropy and the entropy appears in "E-TS". Everything is totally clear.
However, once again, the total free energy of the system never goes up.
The idea that it could go up is partly supported by Carroll's inaccurate definition of the entropy "S" as "the information that we don't have about the system". If it is formulated in this way, it probably leads you - and certainly Carroll - to believe that the total entropy may decrease if you learn some information. ;-)
But the total entropy may never decrease (macroscopically) whether or not you learn anything. To learn, your gadgets, senses, and brain have to work and they create extra disorder - usually much more than the amount of information. But even in the hypothetical limiting case when the total entropy stays constant (which is not possible in the real world, but we can get close to it), you're just moving free energy from one subsystem to another. There's nothing shocking about such transfers.
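The standard modern bookkeeping of that limiting case - the one Bennett's review discusses - can be written in three lines. The gain from the one-bit measurement is at most "kT.ln(2)", and Landauer's bound says that resetting the demon's one-bit memory for the next cycle dissipates at least the same "kT.ln(2)" (the 300 K temperature is again an assumed illustrative value):

```python
import math

k, T = 1.380649e-23, 300.0   # Boltzmann's constant; an assumed room temperature

# Ideal, reversible limit of one Szilard cycle: the one-bit measurement
# lets you extract at most k*T*ln(2) of work from the expansion...
gain = k * T * math.log(2)

# ...but Landauer's bound says that erasing the demon's one-bit memory,
# which is needed to start the next cycle, dissipates at least k*T*ln(2).
cost = k * T * math.log(2)

net = gain - cost
print(net <= 0.0)   # True: even the idealized cycle yields no net work
```

In the ideal limit the two terms cancel exactly, and in any real device the erasure and measurement hardware dissipates much more - which is just the free-energy transfer between subsystems described above, never a net profit.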
I remain puzzled why some people - e.g. Sean Carroll or the Bogdanov brothers whom I had the pleasure to meet two weeks ago - are so excited about the statements similar to the sentence "information creates (or is) energy". This is a silly, meaningless proposition. You can get energy from information by burning a book but the amount of energy you obtain is dictated by the amount and type of the paper, not by the information quality of the content of the book. ;-)
The "right" energy "E" is exactly conserved and the "free energy" that measures the ability to do work has the extra "-TS" term - because the ability to do work goes down if the entropy goes up - and of course, if the entropy "S" in this term is (mis)interpreted as some information, it also influences the free energy. Great but why so much buzz? The total "S" will never go down, anyway. ;-)
Just like there is "free (useful) energy", there's also "useful information". However, the latter is extremely hard to define (much harder than "F=E-TS"). While such a notion could be useful for your practical life (think about the useful information stored in the memory chips), it's surely not fundamental in physics. "Useful information" doesn't appear in any fundamental laws of physics - or statistical physics. The information only becomes useful by its applications, not by the fundamental laws themselves.
The entropy "S" that appears in the formulae of thermodynamics is "all information" and almost all of it is guaranteed to be "useless information" - which is, pretty much by definition of the entropy, "uninteresting" as information for the living organisms. If the amount of useless information is not vastly greater than the amount of useful information, thermodynamics actually breaks down because you're not in the thermodynamical limit! The translation of "useful information" (such as that one bit about the Szilárd particle, or any kind of information that people consider useful) to energy is complete bogus based on the deliberate misinterpretation of the word "information".
By the way, in Bennett's otherwise correct article mentioned above, the author makes a big deal about the question whether the entropy increases when we measure something or when we reset the memory; he favors the latter. However, this is just a detailed question about whether or not the "useful information" should be subtracted from the entropy. Because the "useful information" has to remain negligible relative to the total entropy, you won't see any macroscopic difference between the two conventions. The only thing you have to be careful about is that you increase your estimate of the entropy somewhere in the cycle and you must be consistent about your rule. But you should never try to make the entropy too accurate - as accurate as "k" or even more so. The entropy as a physical concept is only useful in the thermodynamic limit in which the terms of order "k" depend on the conventions.
But let me return to the main story here. The free energy never increases in similar gadgets. So the experimental apparatus described in Nature obviously doesn't perform useful work. At most, it transfers free energy from one place to another. It's pretty much guaranteed that the total free energy is actually decreasing - and quickly - during their experiments; they're just not interested in the whole picture. Moreover, I think that what they do is just a straightforward realization of a thought experiment that obviously does what it does.
However, Szilárd's engine has a less trivial part of its budget, namely the "expenses", and it is exactly there that Szilárd showed that the total entropy can never decrease and that such versions of Maxwell's demon simply cannot do useful work. By looking at the "gains" only, Sean Carroll has totally missed this key point. Chances are that the authors in Nature - their article is also available on the arXiv - have missed this point, too. They have probably studied the trivial part of Szilárd's engine only.
Zeeya Merali has surely misunderstood everything that matters here. But he or she is just a journalist. Is it really necessary that physicists who consider themselves knowledgeable about thermodynamics in 2010 misunderstand so many basic things that were understood in 1929 - and usually much earlier than that? It's sad.
And that's the memo.