Roman Buniy and Steve Hsu published a preprint about the ubiquity of quantum entanglement.
Maybe accidentally, maybe because of the preprint, a user named "confused" asked the following question at the Physics Stack Exchange:
The user makes the following points:
Quantum entanglement is the norm, is it not? All that exists in reality is the wave function of the whole universe, true? So how come we can blithely talk about the quantum state of subsystems if everything is entangled? How is it even possible to consider subsystems in isolation, anything less than the quantum state of the whole universe at once? Enlighten me.

I am afraid that pretty much everything that is being written about these issues, in the preprint as well as in the discussion thread at Physics SE, is confused, indeed.
First, it is true that the entangled states are "generic" according to any continuous uniform measure on the space of states (or density matrices). The states of the form\[
|\psi\rangle = |\psi_1\rangle \otimes |\psi_2\rangle
\] represent a measure-zero, infinitely "special" subset of all the pure states that describe a composite system, i.e. of all states in the tensor product of the two Hilbert spaces. After all, the dimension of the set of states which are tensor products is roughly \(d_1+d_2\) while the dimension of the tensor product itself is \(d_1\cdot d_2\), which is larger if \(d_1,d_2\gt 2\). However, the vanishing of the measure doesn't mean that unentangled states are unimportant.
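One may check this counting numerically. The following sketch (my own toy illustration using numpy; the dimensions \(d_1=3\), \(d_2=4\) and the random seed are arbitrary choices) computes the entanglement entropy via the Schmidt decomposition: a random state in the tensor product is entangled, while a manifestly tensor-product state has zero entropy:

```python
import numpy as np

rng = np.random.default_rng(0)
d1, d2 = 3, 4  # dimensions of the two factor Hilbert spaces (arbitrary choice)

def random_state(d):
    """A random pure state: normalized complex Gaussian vector."""
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    return v / np.linalg.norm(v)

def entanglement_entropy(psi, d1, d2):
    """Von Neumann entropy of the reduced density matrix, computed from
    the Schmidt decomposition (SVD of the state reshaped to a d1 x d2 matrix)."""
    s = np.linalg.svd(psi.reshape(d1, d2), compute_uv=False)
    p = s**2                 # squared Schmidt coefficients = eigenvalues of rho_A
    p = p[p > 1e-12]         # drop numerical zeros
    return float(-np.sum(p * np.log2(p)))

# A generic state in the tensor product is entangled (entropy > 0) ...
generic = random_state(d1 * d2)
print(entanglement_entropy(generic, d1, d2))   # strictly positive

# ... while a tensor product state has exactly zero entanglement entropy.
product = np.kron(random_state(d1), random_state(d2))
print(entanglement_entropy(product, d1, d2))   # numerically zero
```

The measure-zero character of the product states shows up here as the fact that a randomly drawn state has positive entropy with probability one.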
The key thing that all these folks seem to miss is that once we measure, i.e. learn, the value of any observable, we learn that the state is an eigenstate of the corresponding operator with the measured eigenvalue. And whenever we measure two commuting quantities \(X,Y\), we know that the state of the system is an eigenstate of both of them. If we measure a complete commuting set of observables, we know the pure state completely.
In the misleading "materialist" interpretation of the wave function, every measurement "collapses" the state of the system into an eigenstate of the observables that were measured.
For example, if we measure the momentum and polarization of one photon in Boston and the same observables of another photon in San Francisco, it may indeed be true that their state was entangled to start with. The entanglement meant nothing else than a predicted correlation between various quantities we could measure using these two photons. But once we measure the values, we actually learn what the right momenta and polarization axes are. It's irrelevant that they were uncertain, correlated, or uncorrelated before the measurement. After the measurement, they're certain and uncorrelated. The entanglement simply disappears during the measurement.
The entanglement is nothing else than a correlation in the predicted sets of probabilities for various future measurements; it only exists if there's an uncertainty about the character of the future measurements and/or their results. But once a particular measurement yields particular results, the entanglement – or any property of the wave function or density matrix before the measurement that is linked to the measurement – becomes an irrelevant trivium about the history. The actual current state of the composite system is well-known and unentangled.
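To make the point concrete, here is a tiny numpy illustration (my own toy example with a two-qubit Bell state, not anything taken from the preprint): before the measurement, the state predicts perfectly correlated outcomes; once an outcome is obtained and the state is updated, the predicted probabilities are certain and uncorrelated:

```python
import numpy as np

# Bell state (|00> + |11>)/sqrt(2) on two qubits;
# basis ordering is |00>, |01>, |10>, |11>.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)

def joint_probs(psi):
    """Probabilities of the four outcomes of measuring sigma_z on each qubit
    (the computational basis is the sigma_z eigenbasis)."""
    return np.abs(psi)**2

p = joint_probs(bell)
# Perfect correlation predicted: outcomes 00 and 11 each occur with
# probability 1/2; outcomes 01 and 10 never occur.
print(p)

# After the measurement yields, say, "both up", the updated state is |00>:
post = np.array([1.0, 0, 0, 0])
print(joint_probs(post))   # certain and uncorrelated
```

The "entanglement" was nothing but the off-diagonal correlation in the first set of probabilities; in the updated, post-measurement state there is nothing left to correlate.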
And when the measured quantities can be separated into two sets, \(A\) and \(B\), whose elements all commute with each other (within each set as well as between the two sets), such that \(A\) fully describes one subsystem (it is a maximal set of commuting observables) and \(B\) fully describes the other, then the resulting state is an eigenstate of all elements of \(A\) and \(B\) and it is inevitably a tensor product, i.e. an unentangled state: a tensor product of an eigenstate of the elements of \(A\) with an eigenstate of the elements of \(B\). Because we face this situation whenever we actually measure local objects or systems completely, we have to deal with unentangled states all the time.
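The collapse to a tensor product may be simulated explicitly. In this numpy sketch (again my own toy example, with a two-qubit singlet playing the role of the entangled state), the Schmidt rank drops from 2 to 1 after a projective measurement of \(\sigma_z\) on the first subsystem:

```python
import numpy as np

# Singlet-like entangled state of two qubits: (|01> - |10>)/sqrt(2).
psi = np.array([0, 1, -1, 0]) / np.sqrt(2)

def measure_qubit_A(psi, outcome):
    """Project onto the sigma_z eigenstate `outcome` (0 or 1) of the first
    qubit and renormalize -- the textbook post-measurement state."""
    P = np.zeros((2, 2))
    P[outcome, outcome] = 1.0
    proj = np.kron(P, np.eye(2)) @ psi
    return proj / np.linalg.norm(proj)

def schmidt_rank(psi):
    """Number of nonzero Schmidt coefficients; rank 1 means unentangled."""
    s = np.linalg.svd(psi.reshape(2, 2), compute_uv=False)
    return int(np.sum(s > 1e-12))

print(schmidt_rank(psi))        # 2: entangled before the measurement
post = measure_qubit_A(psi, 0)  # we measured spin "up" on subsystem A
print(schmidt_rank(post))       # 1: a tensor product afterwards
print(post)                     # the state |01>, an eigenstate of both sigma_z's
```

The post-measurement state is an eigenstate of \(\sigma_z\otimes 1\) and \(1\otimes\sigma_z\), a maximal commuting set for this toy system, and it is automatically a product state, just as argued above.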
Buniy and Hsu also seem to be confused about topics that have been covered hundreds of times on this blog. In particular, the right interpretation of the state is a subjective one. Consequently, all the properties of a state – e.g. its being entangled – are subjective as well. They depend on what the observer knows at a given moment. Once he knows the detailed state of the objects or observables, their previous entanglement becomes irrelevant.
One may also argue that the entanglement between observables \(X,Y\) whose measured values cannot be compared by any observer in the future (especially for causal reasons) is unphysical. In fact, black hole complementarity uses this inability to operationally decide whether such quantities \(X,Y\) are entangled or not: it postulates that their entanglement is mandatory because the observables aren't quite independent of each other. The degrees of freedom describing the black hole interior are complicated functions of the degrees of freedom that describe the exterior of the same black hole, the complementarity principle postulates. This assumption is pretty much guaranteed to lead to no demonstrably wrong conclusions – no contradictions – exactly because the measurements inside and outside the black hole (with some extra constraints on the locations and times) cannot be compared by any observer in the future. A similar comment applies to cosmic horizons.
When I read papers such as this one by Buniy and Hsu, I constantly see the wrong assumption written everywhere in between the lines – and sometimes inside the lines – that the wave function is an objective wave whose properties one may objectively discuss. Moreover, they really deny that the state vector should be updated when an observable is measured. But that's exactly what you should do. The state vector is a collection of complex numbers that describe the probabilistic knowledge about a physical system available to an observer, and when the observer measures an observable, the state instantly changes because the state is his knowledge and the knowledge changes!
Whenever some regions' causal separation is just temporary, i.e. whenever they're guaranteed to come back into contact in the future, it must be possible to talk about the observables that may be measured in both regions and about their correlation. But whenever that's not the case, discussions about trans-horizon entanglement etc. may easily become unphysical. Don't get me wrong: if you had a crisp mathematical description that forced you to make a particular conclusion about the trans-horizon correlations, it could make sense to talk about them. But if you don't have any crisp mathematical description of this type, there exists no physical justification why a physicist should be able to answer the question whether the observables behind each other's horizons continue to be correlated or entangled once the observers in these two regions make their measurements. The answer can't be obtained by a well-defined operational procedure. So you don't have to "admit" that they have to be correlated. Instead, you may say that the entanglement is gone as soon as the observers in these two regions make their first observations. After all, the unentangled Ansatz is natural for separated subsystems, and subsystems separated by a horizon are as separated as they can be.
You're allowed to assume that they're not correlated and you're allowed to assume that they are. Much like the "axiom of choice" or its negation in the axiomatic systems of set theory, neither of these two assumptions may lead to contradictions with facts you may actually prove by experiments (and extend by calculations and logical reasoning). So it's an unphysical question – a matter of subjective preference – what you think about the state of the observables behind your cosmic horizon. Only correlations that may be measured need to have "unique, calculable values" in a complete physical theory. Talking about such unobservable correlations independently of a physical theory is unphysical.