Tuesday, July 17, 2012

QBism and Why Quantum Physicists Hate It

Consider standard quantum mechanics, QM (for which the Copenhagen Interpretation applies), in which projection operators are in one-to-one correspondence with the closed subspaces of the Hilbert space H. If P is a projection, its range is closed, and any closed subspace is the range of a unique projection. If u is any unit vector, then ⟨P⟩ = ‖Pu‖² is the expected value of the corresponding observable in the state represented by u. Since the observable itself is {0,1}-valued (binary), we can interpret this expectation as the probability that a measurement of the observable will produce the "affirmative" answer 1. In particular, the affirmative answer will have probability 1 if and only if Pu = u; that is, u lies in the range of P.
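
To make this concrete, here is a quick numerical sketch (my own toy example, nothing from Mermin's article): a rank-1 projection on a two-dimensional Hilbert space, and the value of ⟨P⟩ = ‖Pu‖² for a few choices of the unit vector u.

```python
# Toy example (mine, not from the article): a rank-1 projection P = |e1><e1|
# on a 2-D Hilbert space, and the expectation <P> = ||P u||^2 for several
# unit vectors u.
import numpy as np

e1 = np.array([1.0, 0.0])
P = np.outer(e1, e1)          # projection onto span{e1}; its range is closed

def expectation(P, u):
    """Return ||P u||^2, the probability of the 'affirmative' answer 1."""
    return np.linalg.norm(P @ u) ** 2

u_in_range   = e1                                   # u lies in the range of P
u_orthogonal = np.array([0.0, 1.0])                 # u orthogonal to the range
u_superposed = np.array([1.0, 1.0]) / np.sqrt(2)    # equal superposition

print(expectation(P, u_in_range))    # 1.0 -> affirmative answer certain (Pu = u)
print(expectation(P, u_orthogonal))  # 0.0 -> affirmative answer impossible (Pu = 0)
print(expectation(P, u_superposed))  # 0.5 -> affirmative answer half the time
```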

Now in the QBist or Quantum Bayesian view, this is not so. Quantum state assignments are relative to the agent who makes them. So if I, as a quantum Bayesian, assign a state u that lies entirely outside the range of P (so that Pu = 0), then for me ⟨P⟩ = ‖Pu‖² = 0 and I assign probability 0 to the affirmative answer, whatever state some other agent may have assigned.
Thus as N. David Mermin recently observed (Physics Today, July, p. 8):

“QBism eliminates the notorious measurement problem ….an agent unproblematically changes her probability assignments discontinuously whenever new experiences lead her to change her beliefs. It is just the same for her quantum state assignments. The change in either case is not in the physical system the agent is considering. Rather, it is in the quantum state the agent chooses by which to encapsulate her expectations.”

In other words, a large component of subjectivity enters the picture. But this is initiated, as Mermin points out, by the very choice of Bayesian probabilities ab initio. To fix ideas, the typical quantum physicist has a confirmed “frequentist” concept of probability. On this view, a probability is determined by the frequency with which an event E appears in some ensemble K of identically prepared trials of the same system.

For example, consider a system of ten coins, each equally weighted and balanced, which for ten separate tosses yields:

H H T H T H H T T T

Then out of this ensemble of ten fair tosses, T appears as many times as H (five times each), so

P = E/K = 5/10 = 0.5

Thus, the objective observer will assign a probability of P = 50% to a single event embodying any such coin toss, and this reflects the observer’s expectation that, over many repetitions, the event will occur about half the time.
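
For concreteness, the same frequency count can be done in a few lines of Python (a sketch of my own, using the toss sequence listed above):

```python
# Frequentist estimate from the ten tosses above: P = (# of T) / (# of tosses).
tosses = list("HHTHTHHTTT")

p_tails = tosses.count("T") / len(tosses)
p_heads = tosses.count("H") / len(tosses)

print(p_tails, p_heads)   # 0.5 0.5
```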

On the other hand, for the Bayesian the probability is not inherent in such a system of events but is assigned by different agents, who may hold different beliefs based on their presuppositions. Say Agent X presupposes, for whatever reason, that a particular series of coin tosses is weighted toward tails; then his expectation might be P(T) = 0.6.
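
As a rough sketch only (my own illustration; the Beta prior and its parameters are assumptions of mine, not anything prescribed by QBism), here is one standard way such an agent's belief could be encoded and then updated on the tosses above:

```python
# Sketch (mine; the Beta(6, 4) prior is a made-up stand-in for Agent X's belief
# that P(T) ~ 0.6). A conjugate Beta-Binomial update on the ten tosses above.
alpha, beta = 6.0, 4.0          # prior pseudo-counts; prior mean = 6/(6+4) = 0.6
tosses = list("HHTHTHHTTT")

n_tails = tosses.count("T")     # 5
n_heads = tosses.count("H")     # 5

alpha_post = alpha + n_tails    # add observed counts to the pseudo-counts
beta_post = beta + n_heads

p_tails_posterior = alpha_post / (alpha_post + beta_post)
print(p_tails_posterior)        # 0.55 -- pulled from the 0.6 prior toward the observed 0.5
```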

As Mermin points out (ibid.):

“This personalist Bayesian view of probability is widely held, though not by many physicists”

Of course, this implies a definite opposition between physics and other fields which employ Bayesian statistics.

Now, in formal quantum mechanics, the assignment of states and probabilities is contingent on information, on knowledge. For example, knowledge may be obtained using an Aspect-type device (as depicted below), which acts to disperse individual atomic "magnets" (net-spin atoms) and send them, always in pairs, to detectors D1 and D2 simultaneously. The question is: what spin is detected by each detector at the instant of observation?



D1 (+½ ) <-------------[D]------------->(- ½ )D2



The knowledge or information arrived at consists of correlations or anti-correlations in the spin of an atom, say helium, captured at detectors D1 or D2.

If the same spin value (say +½) appears at both detectors simultaneously we have a correlation; if opposite spins appear, an anti-correlation.

Prior to the observation (actual detection), neither spin value can be known according to the Heisenberg Uncertainty Principle of Quantum Mechanics. That is, while the atomic magnets are in transit - from device to either detector - there is no definite information concerning which spin is going where. The reason has to do with what is called the superposition of states. To fix ideas, consider the whole atomic magnet in the device, before being ejected. If it’s a helium atom, then there’ll be one up spin and one down spin and we can write for simplicity:

U = Σ_i {(up)_i + (down)_i}
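
A toy simulation of the paired detections described above may help fix ideas (my own sketch; it simply hard-codes the anti-correlation rather than deriving it from the quantum formalism):

```python
# Toy sketch (mine): each emitted pair carries opposite spins, but which detector
# receives +1/2 is undetermined until detection; each trial assigns +1/2 to D1 or
# D2 with equal probability, so the paired detections are perfectly anti-correlated.
import random

random.seed(0)

def emit_pair():
    """Return the (D1, D2) spin outcomes for one pair, always anti-correlated."""
    s = random.choice([+0.5, -0.5])
    return s, -s

trials = [emit_pair() for _ in range(10)]
for d1, d2 in trials:
    print(f"D1: {d1:+.1f}   D2: {d2:+.1f}")

assert all(d1 + d2 == 0 for d1, d2 in trials)   # every pair sums to zero
```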

The obscurantist claim that the outcome "depends on perspective" is rubbish, since it is the observation that determines the outcome, and there is only one.

In the orthodox (and most conservative) interpretation of quantum theory, there can be no separation of observed (e.g. spin) state until an observation or measurement is made. Until that instant (of detection) the states are in a superposition, as described above. There’s nothing mysterious or strange about this as it follows entirely from the mathematics. More importantly, the fact of superposition imposes on all quantum phenomena an inescapable ‘black box’. In other words, no information other than statistical can be extracted before observation.

The late physicist Heinz Pagels, for example, referred (in his excellent book, The Cosmic Code) to quantum measurement theory as an ‘information theory’ and noted that the entire quantum world is embedded in what we observers can know about it. Obviously such knowledge is obtainable exclusively from observational or experimental results. Since only one apparatus is used, as I've shown, there are no "differing perspectives" - only one, at the instant of observation.

However, in QBism or quantum Bayesianism this gets chucked: keep the setup for the experiment as shown and change observers for each sequence of, say, N trials. QBism asserts that each switch introduces a different probable outcome, based on each observer’s beliefs about the system and how he makes state assignments.
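
As a hedged illustration only (my own toy model, using ordinary Bayesian updating rather than QBism's actual state-assignment machinery), here is how two observers watching the same N trials can end up with different probability assignments purely because their priors differ:

```python
# Toy model (mine, not the QBist formalism itself): two observers see the same N
# outcomes but start from different Beta priors, so their probability assignments
# for the next "+1/2 at D1" result differ even though the data are identical.
import random

random.seed(1)
N = 20
outcomes = [random.choice([0, 1]) for _ in range(N)]   # 1 = "+1/2 registered at D1"
k = sum(outcomes)

def assignment(alpha, beta, successes, trials):
    """Beta-Binomial posterior mean: the agent's updated probability for outcome 1."""
    return (alpha + successes) / (alpha + beta + trials)

# Observer 1: near-uniform prior.  Observer 2: prior strongly favoring +1/2 at D1.
print(assignment(1, 1, k, N))
print(assignment(8, 2, k, N))   # differs from Observer 1's assignment
```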

Worse, in QBism there’s a bifurcation between the world or universe in which an agent or observer lives and her experience of it. This disconcerting aspect arises, according to Mermin:

“From a failure to realize that like probabilities, like quantum states, like experience itself….the split belongs to the observer.”

Each has its own split. Worse, an uncontrolled complementarity of experience enters with respect to other observers. If “Judy” has experiences and observations that are macroscopic (i.e. related to the large-scale world of planets, stars etc.), “Roy” will experience microscopic reality (the world of atoms, electrons and Higgs bosons!). To quote Mermin:

“Each split is between an object (the world) and a subject (an agent’s irreducible awareness of his or her own experience).”

Mermin also makes the point, which I tend to agree with, that “ambiguities only arise if one fails to acknowledge that the splits reside not in the objective world but at the boundaries between that world and the experiences of the various agents who use quantum mechanics.”

Fair enough, but a couple of questions: 1) Does this mean that the ‘Many worlds’ interpretation is now kaput? And 2) Does QBism indicate an objective difference between the microtubules of agents, with said entities evidently tied to consciousness and hence “irreducible awareness”?

At least QBism does take care of one riddle, first posed by Einstein: Can a quantum wavefunction be collapsed by the observations of a mouse?

QBism answers ‘NO!’ since – according to Mermin – “the mouse lacks the mental facility to use quantum mechanics to update its state assignments on the basis of its subsequent experience.”

Hmmmmmm…But what if it’s a genetically engineered mouse, with the DNA of a human like Einstein spliced into its own?

8 comments:

Unknown said...

Hello,

I enjoyed reading your post.
While I am an engineer, I haven't had physics training above that mandated for a graduate electronics engineer.

I am wondering about a modified Schrödinger's cat thought experiment and how classical vs. QBism models would look at its result. It is my personal feeling that QBism provides a slightly better answer, but I want your opinion.

In this new Schrödinger's cat experiment we have the same setup, but two observers, A and B, both physicists (to avoid the mouse argument), but unable to speak with one another.
The first observer, A, falls asleep just before the 1 hour time mark when they wanted to open the box.
The second observer, B, opens the box momentarily while A is asleep, and notices the state of the cat. He quickly closes the box, at which point A is awake again.

Now let's consider the experiment.

Observer A thinks that the box was never opened and thus the wave function has not collapsed.

Observer B knows that the function has collapsed since he has observed the cat and knows what happened to it.
So, has the wave function collapsed or not? Two knowledgeable physicists will claim differently.
Apparently the collapse happens in the brain of the observer and is not an actual property of the physical system being examined.
Which one of the two quantum models provides a better explanation to this "paradox"?

Copernicus said...

Hello, and thanks for your intriguing thought experiment. I would tend to go with the Copenhagen Interpretation: the collapse of the observed dual wave states (the superposition of 'live cat' + 'dead cat') yields one of those states, and it must then be the same for both A and B, irrespective of whether A was sleeping just before B observed it.

In other words, it was the observer (B) who initiated wave packet collapse, yielding the determined single state - and so this observed, final state is the same for both observers.

My problem with the QBist approach is that it assigns or allows way too much power to human observers, or any observers. It almost borders on an anthropocentric perspective not remarkably different from the well-known nonsense embedded in the 'Anthropic Principle', i.e. that the cosmos' constants are uniquely adjusted for the emergence of humans.

John Cowan said...

Of course it is the same for both observers! The cat is a classical system (which is another way of saying its d value is intractably large), so there is a legitimate hidden variable: it was either alive or dead all along. The difference is that observer A can only calculate its state, since he is in ignorance of it, whereas B knows by measurement what the state is. When you ask each observer, therefore, what he thinks the odds of the cat being alive are, observer A replies "1:1", whereas observer B says "1:0", or certainty. No surprises here — we know what we know, and what we don't know, we can only estimate.

There's a similar flavor of paradox, but no actual paradox, in this story: A man who has never seen or heard of the ocean, but is familiar with lakes, reaches the beach for the first time. You ask him to estimate the probability that the water will rise to cover the beach he is standing on. He naturally replies "Zero". Now by Bayesian principles, this prior is in the numerator, so the computed probability never rises above zero, no matter how much evidence is collected, whereas the true probability is one. How can this be?

The answer is that a prior of zero doesn't mean ignorance: it means certainty that something can't happen. If you ask me to assign a prior to the sun rising in the west tomorrow, I will say zero, not because I have never heard of such a thing, but because I have extremely strong reasons not to believe it. If I saw the sun rising over the Palisades (I live in NYC), you could never convince me that I wasn't drunk, or hallucinating, or the victim of a cosmic light show, or that New Jersey had suffered a nuclear bombardment. (If the earth tipped over or reversed direction, I wouldn't survive the event to be an observer.)
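
In code, the zero-prior point amounts to just this (a toy Bayes'-rule calculation with made-up numbers):

```python
# Toy Bayes'-rule calculation (made-up numbers): posterior = likelihood*prior / evidence.
# A prior of exactly zero stays at zero no matter how strong the evidence is.
def posterior(prior, likelihood_if_true, likelihood_if_false):
    num = likelihood_if_true * prior
    den = num + likelihood_if_false * (1.0 - prior)
    return num / den if den else 0.0

print(posterior(0.5, 0.99, 0.01))  # strong evidence lifts a 0.5 prior to 0.99
print(posterior(0.0, 0.99, 0.01))  # the same evidence leaves a zero prior at 0.0
```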

Luboš Motl said...

"The obscurantist claim that the outcome "depends on perspective" is rubbish, since it is the observation that determines the outcome, and there is only one."

You are completely missing the point. The outcome and the observation is the same thing - and this thing depends on the perspective i.e. it is subjective. It is the very point of quantum mechanics (or, if I want to flatter those who don't understand QM, it is the whole point of QBism).

In Wigner's friend scenario, Wigner's friend may subjectively observe the cat inside the lab, so he will have definite outcomes, but Wigner himself won't know any answer until he looks. Then he will find out that his perceptions are - as predicted by the dynamical quantum theory and the entangled state that results from the evolution - correlated with what his friend announced as the result of the previous measurement. There is absolutely no contradiction here. The intersubjectivity or objectivity is "emergent" but QM guarantees that it works whenever it should. But the objective reality is simply not fundamentally real in quantum mechanics.

Copernicus said...

Luboš Motl said:

"You are completely missing the point. The outcome and the observation is the same thing - and this thing depends on the perspective i.e. it is subjective. It is the very point of quantum mechanics (or, if I want to flatter those who don't understand QM, it is the whole point of QBism)."

I dispute that I am "missing the whole point" at all. Obviously, if the outcome and observation were "the same thing" then N. David Mermin would not have a problem with it.

Thus, the subjectivity enters via the particular observation made (by the particular observers) but not in the formation of the probabilities, which is what is at issue.

Thus, what I need you to show is that the Bayesian probability computed conforms to the one-to-one correspondence with the closed subspaces of the Hilbert space H, and also that for the projection P so yielded the range is closed (hence any closed subspace is the range of a unique projection).

My objection is not with how Wigner and his pal respectively observe the cat, but rather with how each computes the probabilities for the cat being in state |x> or |y>. You understand?

I maintain the Bayesian probability confers an erroneous take and this is why most quantum physicists (like Mermin) object to it.

As Mermin points out (ibid.):

“This personalist Bayesian view of probability is widely held, though not by many physicists”

Now, can you please explain why this is so? Is it because most "do not understand QM" or perhaps because those who embrace QBism don't?

Again, my beef (and I believe Mermin's) is in computing quantum states based on Bayesian (as opposed to frequentist) probabilities. And please don't tell me they are "the same" - they are not.

Copernicus said...

Luboš Motl said:


"There is absolutely no contradiction here. The intersubjectivity or objectivity is "emergent" but QM guarantees that it works whenever it should"

True, QM "guarantees" such but only provided the relevant probabilities are computed from the frequentist paradigm, not the Bayesian.

"But the objective reality is simply not fundamentally real in quantum mechanics."


I believe that depends on which particular interpretation one uses to consider psi, the wave function. In the Stochastic Interpretation (SIQM) of Bohm and Hiley the wave psi is real and its evolution is deterministic. In the Copenhagen Interpretation (CIQM) the wave function is purely a statistical artifact (a la Max Born's original take).

But the choice of interpretation itself is purely a subjective matter; nothing at all in reality demands the CIQM be chosen over the SIQM, other than that the latter violates the temperaments of too many quantum physicists who don't wish to appear "philosophical."

Again, I welcome your input!

Copernicus said...

See also:

http://philsci-archive.pitt.edu/9803/1/comments_on_QBism_-_final.pdf

Excerpt:

"Wigner's friend (another case for which qBism claims to have a ready explanation) can be seen as a variant of the above. Bob performs an experiment in a lab that is isolated from Alice (perhaps because Alice and Bob are
spacelike separated at the time).

Alice can later perform a measurement on the content of Bob's lab (either repeating Bob's measurement or asking him for a report). If she does repeat Bob's measurement, the result she
observes coincides with the result of the earlier measurement as reported by Bob, again suggesting that she should take the report seriously as describing objective data that were not yet available to her.

Unlike the EPR case, this is not particularly puzzling. But in the Wigner's friend scenario, we are invited to consider also the case in which Alice performs instead an
interference experiment on the entire contents of Bob's lab, and thereby `quantum erases' Bob's result. In qBist terms, this could be understood merely as Alice performing some manipulation that leads her to change her own credences about the results of her asking Bob what he has seen. She
now expects from Bob not some or other report of a de nite result, but a defnite report of not having performed the experiment. But this description misses out on the fact that Alice's manipulation has in fact obliterated also Bob's piece of data and any memory that Bob had of it (unless, that is, we
assume that Bob did not really possess any such piece of data in the first place).

Thus, if we believe that data obtained by di erent agents are equally objective, thus understanding `pooling of data' literally, we have problems. There
are no such problems once the data have been pooled together, but we have two puzzling cases in situations where Bob's data are not yet available to Alice.

In the EPR case, qBism remains silent on why Alice and Bob's data
should be correlated, and in the Wigner's friend case, it remains silent on how Alice can erase Bob's data. The choice for qBists seems to be between: (a) providing us with a further story about data and/or agents themselves, rather than just strategies for how agents update their credences in the face of new data; and (b) some kind of solipsism or radical relativism, in which we care only about single individuals' credences, and not about whether and how they ought to mesh."

Comments? Response? On whole paper?

Copernicus said...

From a comment by a physicist on physicsforums:

"“The Bayesian view, for me, is just a play with words, trying to give a physically meaningful interpretation of probability for a single event. In practice, however, you cannot prove anything about a probabilistic statement with only looking at a single event. If I predict a probability of 10% chance of rain tomorrow, and then the fact whether it rains or doesn't rain on the next day doesn't tell anything about the validity of my probabilistic prediction. The only thing one can say is that for many days with the weather conditions of today on average it will rain in 10% of all cases on the next day; no more no less. Whether it will rain or not on one specific date cannot be predicted by giving a probability.

So for the practice of physics the Bayesian view of probabilities is simply pointless, because doesn't tell anything about the outcome real experiments. “

Your take here?