Thoughts on Quantum Mechanical Measurement

The quantum prescription for measurement, entanglement, and collapse appears to be an iron fortress: unassailable by any attack. Still, it is worthwhile to think about "strange" consequences of the theory to make sure one understands it properly. I think I present some novel thought experiments here, and some possible new phenomena in the theory of measurement and the fundamentals of quantum theory.


Consider a cube of volume V containing a single particle with exactly known momentum (so its position is spread uniformly over the volume). Put a position-measuring device on each end of the cube. From a point equidistant between the two devices, the signal to measure is sent, so both devices measure at exactly the same time (even if they are microscopically non-synchronous, as long as they are within L/c of each other, L being the cube's width, there is still a problem). Now, neither device can actually measure the position; all either can do is register whether or not the particle is located at it. Thus each has a probability 1/V (define p = 1/V) of finding the particle. The problem is that with naive probability p^2, both will find the particle. We can imagine that we are detecting strange hadrons, so they could not have leaked off of the measuring devices, and we can let V become very large so that leakage through the walls is negligible. Where did the second particle come from? Or are the laws of quantum mechanics wrong? We cannot tolerate measurement-collapse as the answer to this problem, because then different observers will predict different things: an observer close to device A sees device A measure first and then device B (since they are synchronous only in "true time"), so this observer predicts that A will have a slightly higher chance of seeing the particle: P(A) = p, P(B) = p - p^2, since when A sees it, the collapse occurs and B cannot see it. However, an observer near B predicts exactly the opposite. Thus we have observer-dependent physics without even invoking motion (a moving observer will see a different time-ordering of spacelike-separated events).
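The orthodox answer can be previewed numerically on a toy lattice. A minimal sketch (the cell count N and the single-cell detectors are illustrative assumptions): treating the two simultaneous detections as commuting, mutually orthogonal projectors on a discretized box, each detector fires with probability p, but the joint probability vanishes rather than being p^2:

```python
import numpy as np

# Illustrative discretization: the cube of volume V becomes N cells;
# a particle of sharp momentum has uniform amplitude 1/sqrt(N) per cell.
N = 100
psi = np.full(N, 1.0 / np.sqrt(N))

# Detectors A and B occupy disjoint cells (here: cell 0 and cell N-1),
# so their projectors are mutually orthogonal: P_A P_B = 0.
P_A = np.zeros((N, N)); P_A[0, 0] = 1.0
P_B = np.zeros((N, N)); P_B[N - 1, N - 1] = 1.0

p = 1.0 / N  # the "p = 1/V" of the text, with V -> N cells

# Each detector alone fires with probability p ...
prob_A = psi @ P_A @ psi
prob_B = psi @ P_B @ psi

# ... but the joint outcome is governed by the product of the
# commuting orthogonal projectors, which is identically zero:
prob_both = psi @ (P_A @ P_B) @ psi

print(prob_A, prob_B, prob_both)  # p, p, 0 -- never p^2
```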

But hold, hasty reader: quantum mechanics saves the day. Let us reconsider this more carefully. An observer x near device A sees device A measure the position before device B. Device A makes a pure measurement, so observer x predicts a p = 1/V probability of finding the particle, and this is what happens. Now, if the particle was seen at A, B cannot see it; this decreases the effective probability at B by p^2 (a p chance it is seen at A, times a p chance that B would have registered it had collapse not occurred). But this is not all! If A does not see the particle, then INVERSE-collapse happens: when B measures, we know the particle is not at A, so the probability of finding it at B is slightly increased, and the increase is exactly p^2 to leading order. So as far as B is concerned, it still has a p chance of seeing the particle, which is exactly what an observer at y (near B) would have predicted.
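This bookkeeping is easy to check numerically. A sketch (the value of p is arbitrary): with collapse on a hit and inverse-collapse (renormalization) on a miss, B's total probability comes out exactly p, and the upward shift p/(1-p) - p equals p^2 up to O(p^3):

```python
# Sequential-collapse bookkeeping for observer x, who takes A to measure
# first. p is the single-detector probability of the text.
p = 0.01

p_A_fires = p
p_B_given_A_fired = 0.0            # collapse: the particle was already found
p_B_given_A_silent = p / (1 - p)   # inverse-collapse: renormalized upward

p_B_total = (p_A_fires * p_B_given_A_fired
             + (1 - p_A_fires) * p_B_given_A_silent)

print(p_B_total)  # exactly p: the same answer observer y gets
print(p_B_given_A_silent - p)  # the increase: p^2 + O(p^3)
```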

Actually, it is easy to make a general proof that any two orthonormal measurements a & b at spacetime locations A & B are not dependent on the time-ordering of A & B. This is harder to show for non-orthonormal measurements.
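A sketch of the orthogonal case (the 4-dimensional space and the particular projectors are illustrative assumptions of mine): for mutually orthogonal projectors, sequential collapse gives firing probabilities that do not depend on which measurement happens first:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two orthogonal projectors in a toy 4-d Hilbert space:
# P_a projects onto basis vector 0, P_b onto vector 1, so P_a P_b = 0.
P_a = np.diag([1.0, 0.0, 0.0, 0.0])
P_b = np.diag([0.0, 1.0, 0.0, 0.0])

psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)

def joint_probs(first, second, psi):
    """Firing probabilities when 'first' is measured before 'second'."""
    p1 = np.vdot(psi, first @ psi).real
    # If 'first' fired, the collapsed state lies in its range, so the
    # orthogonal 'second' sees nothing; only the null branch contributes.
    miss = (np.eye(len(psi)) - first) @ psi
    miss = miss / np.linalg.norm(miss)
    p2 = (1 - p1) * np.vdot(miss, second @ miss).real
    return p1, p2

pa1, pb1 = joint_probs(P_a, P_b, psi)   # a measured first
pb2, pa2 = joint_probs(P_b, P_a, psi)   # b measured first

print(pa1, pa2)  # equal: the ordering drops out
print(pb1, pb2)
```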


Perhaps it is no surprise that a simple derivation of quantum measurement probabilities is impossible (taking an ideal theory to be simply Hx = Ex and then proving that probabilities follow from the unitary evolution of both system and observer). It is clear that measurement collapse is non-local, but it is also clear that the time-reversed action, spontaneous decollapse with "forgetting" of information, is quite catastrophic: the sudden appearance of finite probability in empty space seems quite intolerable. Similarly, many worlds only split, never combine, so these processes are very strongly time-direction sensitive. The many worlds interpretation in particular seems to imply the boundary condition that the big bang started as a pure state of the universe, and splitting of correlations proceeded from there.

Thus a solution of the quantum-measurement problem would also be a solution of the arrow-of-time problem.


I just read an interesting paper by Seth Lloyd which mentions in the conclusion a few notes about EPR and Bohm's interpretation. Lloyd is talking about cause-effect models and the time asymmetry of cause-effect. He notes that Bohm's interpretation of EPR has profound implications for any cause-effect theory: not only is it possible for there to be non-local causes which affect any given system, but it is *impossible* to know whether or not a system is "closed" in the sense that it is locally deterministic (the future evolution of the system is influenced only by the state of the system). This means that whenever we discuss amplitudes, conditional probabilities, etc., we are really leaving out the implicit dependence on "everything else", possibly including the entire universe!


Here's a nice gedanken-experiment for time-order-invariance of measurements.

Consider a source of particles, located at 0. It shoots a steady stream of particles, each particle going in the + direction paired with one going in the -, perfectly correlated (in spin or polarization). Now put an observer (me) at -L, and another (the demon) at x = L. So, if the demon and I both measure the spin, we will receive correlated particles from the source at exactly the same time. What if the demon moves to x > L? Then particles arrive at me which have not been measured by the demon, so I see the "pure" probabilities from the source: 50/50. What if the demon moves to x < L? Then the demon makes his measurement first, and the particle coming to me collapses before it reaches me, so the result I find is perfectly correlated with what the demon measured; I see "modified" probabilities. If the modified probabilities were not equal to the pure probabilities, then the demon could send information to me by moving back and forth across x = L while continuously measuring spins from the source at 0. I would then find two types of events arriving at me: a = particle arrives directly from the source, b = particle arrives after having been collapsed by the demon; he might send ababaaabb... However, these events are identical so far as I am concerned: since the demon could have gotten up or down, the "modified" probabilities are still 50/50. This "demonic telephone" is amusing because it demonstrates both that quantum mechanics is insensitive to the time-ordering of measurements (which is necessary at spacelike separations) and that you cannot use quantum correlations to send information. Quantum mechanics defeats both foes with one blow: you cannot tell the difference between a random local variable and a distributed nonlocal correlation.
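The no-signalling claim can be checked directly with density matrices. A sketch (basis conventions and the use of a correlated two-qubit pair are my own modeling choices): my reduced state is the same whether or not the demon has already measured, so the "ababaaabb..." signal is unreadable:

```python
import numpy as np

# Perfectly correlated pair from the source: (|00> + |11>)/sqrt(2),
# with my qubit as the first factor and the demon's as the second.
phi = np.zeros(4); phi[0] = phi[3] = 1 / np.sqrt(2)
rho_pair = np.outer(phi, phi)

def my_reduced_state(rho):
    """Trace out the demon's qubit, leaving my 2x2 density matrix."""
    return np.einsum('ikjk->ij', rho.reshape(2, 2, 2, 2))

# Demon at x > L: the particle reaches me unmeasured.
rho_unmeasured = my_reduced_state(rho_pair)

# Demon at x < L: he measures z first, collapsing the pair into a mixture.
P0 = np.diag([1.0, 0.0]); P1 = np.diag([0.0, 1.0])
rho_measured = sum(
    np.kron(np.eye(2), P) @ rho_pair @ np.kron(np.eye(2), P)
    for P in (P0, P1)
)
rho_after = my_reduced_state(rho_measured)

print(rho_unmeasured)  # I/2
print(rho_after)       # also I/2: locally indistinguishable
```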


Thinking about the "demonic telephone" of my last gedanken, it
occurs to me that there may be a more general form of relativity.

First, relativity seems to imply that entanglement can only be
created locally.  That is, correlations may exist between spacelike
separated points (EPR) - that's fine, because the observers at
either point cannot tell that their spins/polarizations are not
random unless they compare results - however, new correlations can
only be created locally (interactions are local; wavefunctions are
not).

However, under more positivistic considerations, we find a more
general possibility.  Consider the "demonic telephone".  I argued
that we cannot send information by the demon's motion which causes
particles to alternate between the pure-random output of the source,
and the "collapse" (correlated) particles from a post-demon
measurement.  However, because particles from the source and particles
post-demon appear identical to us, we can only tell if we got source
particles or non-random particles if we travel to the demon and
compare.  This means that something more general could have happened:
the correlation between my observations and the demon's need not have
occurred at the time of my measurement or his - they could have occurred
at any time during the travel from the source.  Also, particles emitted
from the source need not have been initially correlated - they could
have become spontaneously entangled while spacelike separated.  This is
indeed a non-local interaction, but no information would be sent, and
this occurrence would be indistinguishable from the case where the
entanglement had occurred at the time of emission (as is usually assumed).

Now, this non-local correlation cannot occur in an arbitrary way, since
then a super-luminal telephone would exist.  Under first consideration
it seems that any interaction which conserves the separate probabilities
is allowed.  That is, in a (+/-) qubit/spin/polarization basis, an
example is, for two systems A & B:

X = [ (+A) + (-A) ] [ (+B) + (-B) ] / 2

Y = [ (+A)(+B) + (-A)(-B) ] / sqrt(2)

since P(A=+) , etc. is the same for X and Y.  Thus it seems that the
states X and Y could spontaneously change into one another.  In fact,
under a hamiltonian in which they are degenerate, the state could
oscillate back and forth between these possibilities.  Neither observer
would be able to tell, until they communicated by some other means.
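A quick check that X and Y satisfy the stated condition (this verifies only the probabilities in the (+/-) basis itself; it says nothing about measurements in rotated bases):

```python
import numpy as np

# Basis order: (+A,+B), (+A,-B), (-A,+B), (-A,-B).
plus = np.array([1.0, 1.0]) / np.sqrt(2)

# X: the uncorrelated product state [ (+A) + (-A) ][ (+B) + (-B) ] / 2
X = np.kron(plus, plus)

# Y: the entangled state [ (+A)(+B) + (-A)(-B) ] / sqrt(2)
Y = np.zeros(4); Y[0] = Y[3] = 1 / np.sqrt(2)

def local_probs(psi):
    """P(A=+), P(B=+) in the +/- basis for a two-system state."""
    p = np.abs(psi) ** 2
    return p[0] + p[1], p[0] + p[2]

print(local_probs(X))  # (0.5, 0.5)
print(local_probs(Y))  # (0.5, 0.5): the separate probabilities agree
```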

This oscillation would seem to be invalidated when A & B are close
together, because the joint probabilities of X and Y are indeed not
equal, so it would be suppressed by the closeness.

---------------

In fact, we can create explicit Hamiltonians of the form :

	H = H(A) * H(B)

which have the solution:

	S = [ sin(ft) ( 11 + 00 ) + cos(ft) ( 10 + 01 ) ] / sqrt(2)

where f is some frequency, t is time, and 0/1 is a spin/qubit
basis in systems AB.
This solution S has all the good properties of quantum
mechanics:

	{S|S} = 1
	{S|H|S} = 0

and also preserves the local probability of all local
observables :

	{S| 1(A) |S} = {S| 0(A) |S} = 1/2
	{S| 1(B) |S} = {S| 0(B) |S} = 1/2

That is, each qubit is random as far as each observer is
concerned.  However, note that :

	{S| 1(A) 1(B) |S} = sin^2(ft) / 2

So that the correlations are constantly changing!
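These relations are easy to verify numerically. This sketch assumes the 1/sqrt(2) normalization that makes {S|S} = 1; with it, the joint probability comes out sin^2(ft)/2, and the qualitative point, time-varying correlations with constant local probabilities in this basis, is unchanged:

```python
import numpy as np

# S in the basis |00>, |01>, |10>, |11>, normalized so {S|S} = 1.
def S(ft):
    return np.array([np.sin(ft), np.cos(ft),
                     np.cos(ft), np.sin(ft)]) / np.sqrt(2)

def prob(psi, proj):
    return float(psi @ proj @ psi)

# Local projectors 1(A), 1(B) and the joint projector 1(A)1(B):
one_A = np.diag([0.0, 0.0, 1.0, 1.0])
one_B = np.diag([0.0, 1.0, 0.0, 1.0])
one_AB = one_A @ one_B

for ft in (0.0, 0.3, 1.0):
    s = S(ft)
    print(s @ s,              # {S|S} = 1 at every time
          prob(s, one_A),     # 1/2, independent of t
          prob(s, one_B),     # 1/2, independent of t
          prob(s, one_AB))    # sin^2(ft)/2: the correlations change
```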

H and S represent an interesting class of hamiltonians and
their solutions : no local observation in the 0/1 basis can tell
the difference between this hamiltonian and H = 0 !  However,
H does create a non-local change of entanglement which
is detectable when the measurement results from A & B are
brought together and compared!

This H is thus consistent with relativity IF AND ONLY IF
we take the viewpoint that only local measurements are
allowed.  This implies that positivism and the view that
the "wavefunction is real" are NOT compatible!

Thus, these Hamiltonians must have one of these three
effects:

	1. A new postulate must be made to exclude them.
	2. Some dynamical effect may prevent their operation
		in nature.
	3. Their effects are real, and we must re-interpret
		the foundations of QM and QFT.



OK:

Some ironed-out ideas.  QFT and QM theories of measurement
are indeed not the same, even though we have Hilbert spaces
and entanglement in both.

The key difference is the problem of time in QM.  Modern
measurement theory is centered on decoherence, or many
worlds, or coherent histories (these are all just names
for the same thing, if you use the correct viewpoint).
This phenomenon is explicitly time-dependent!  Furthermore,
there is a ruination of x-t conjugacy.  Even in a
non-relativistic theory we have the beautiful quantum
conjugacy

H = i d/dt
P = -i d/dx

But the whole idea of the Dirac Ket ruins this.  The Ket
conceptually does not depend on x.  That is, it is a vector
which describes the state; x dependence is only one
projection:

f(x) = {x|f}

However, |f> does contain explicit time dependence, in the
form of the Schrodinger equation.  It is only this which
allows measurement (aka decoherence).  While I'm at it, I'll
note that the idea of a "closed system" is quite poor, because
the closed system is defined as one in which the total energy
is constant, but this is a very time-philic convention,
because choosing a constant total energy means the time evolution
is just a phase, so that the system is "spread uniformly over
time" (uncertainty principle).  This means that we are "allowed"
to talk about the "time" of an event only because the word "time"
is meaningless for a closed system - every point in time is
identical.  To choose a closed system with x-t conjugacy is to
choose a totally uniform state = totally random (not good).

We need x-t conjugacy for relativity, so these thoughts bode
ill for a relativistic and/or QFT theory of measurement.
Schwinger points out adroitly that in order to have decoherence
and/or measurement, we must be able to localize our wavepacket
in time.  But by QM this means we have let the energy go wild,
so that in RQM or QFT we will A. probe the divergences, and
B. create loads of particles from the vacuum, which will make
any measurement result suspect.

We also note that |x> is a good basis for a wavefunction, but
it seems |t> is not; I'm not sure whether or not it is profound
that the sum over times of |t}{t| seems to not function as a
complete set.


Actually, these assertions are related to the even more interesting fact that entanglement can occur non-locally in space, but CANNOT occur non-locally in time. This in turn is related to the fact that we demand total probability conservation as a function of time, but not of space.

Furthermore, we demand causality as a function of time. This is certainly artificial and should be done away with. For example, let us consider the EPR experiment. Correlations exist between spins at A & B, but are not locally detectable. One way to model this is to allow spatially-separated, same-time correlations in the wave function. However, these correlations can only be tested when A & B come together again. Thus, rather than a spatial correlation between A & B, they could be thought of as both being correlated to a state at a different time. One way is to correlate to the past: the state when measured at either A or B is enough to establish the correlations.

Of course, correlations to the future make physics trivial: any state can be correlated to the result you are going to measure, so that the result of measurements is explicitly encoded in the system. While this seems to not be ruled out (since you are part of the system and your behaviour is determined by physics as well), it is too bizarre for us to consider at the moment.


David Deutsch's Stern-Gerlach

I'm curious if there has been any work on
David Deutsch's modified Stern Gerlach? (DDSG)
Has anyone tried to perform the DDSG?

For those not familiar, the DDSG is capable
of distinguishing Everett's Many Worlds and
the Copenhagen TomFoolery.  The basic idea is:

perform a SG; the beams are split to positions
a and b, correlated with the input spins.
A perfect observer measures the fact that
a beam is at "a" or at "b", but does not
measure which of those has occurred.
The beams are then recombined.

The result, using many worlds, is that you get
the same thing out as you put in - the observation
has changed nothing.
Using Copenhagen, the observation collapses the
interference, so that the output is an ensemble -
either up or down (50/50) but not a superposition.
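A two-qubit toy model of the DDSG (the CNOT as a stand-in for the Stern-Gerlach magnet, and dephasing as the model of collapse, are my own simplifications) makes the predicted difference concrete:

```python
import numpy as np

# Qubit 0 = spin, qubit 1 = beam position. Basis: |00>,|01>,|10>,|11>.
# CNOT (spin controls position) models the SG split; its inverse models
# the recombination.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], float)

spin_in = np.array([1.0, 1.0]) / np.sqrt(2)   # (up + down)/sqrt(2)
psi = np.kron(spin_in, [1.0, 0.0])            # beam starts at position a
psi = CNOT @ psi                              # SG: spin-position correlation
rho = np.outer(psi, psi)

def spin_state(r):
    """Trace out the position qubit, leaving the spin density matrix."""
    return np.einsum('ikjk->ij', r.reshape(2, 2, 2, 2))

# Many worlds: no collapse; recombination is just the inverse unitary.
spin_mw = spin_state(CNOT @ rho @ CNOT.T)

# Copenhagen: the which-position record collapses the beam coherence
# (a projective dephasing in the position basis) before recombination.
Pa = np.kron(np.eye(2), np.diag([1.0, 0.0]))
Pb = np.kron(np.eye(2), np.diag([0.0, 1.0]))
rho_cop = Pa @ rho @ Pa + Pb @ rho @ Pb
spin_cop = spin_state(CNOT @ rho_cop @ CNOT.T)

print(spin_mw)   # pure superposition again: off-diagonals 1/2
print(spin_cop)  # 50/50 up/down ensemble: off-diagonals 0
```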

In article <69b9ae$g0a$1@agate.berkeley.edu>, badams@adept77.demon.co.uk says...
>Unfortunately the perfect observer Deutsch is 
>talking about is, a Artifically Intellegent computer
>with has passed a turning test (this is so its passes
>the definition of a conscious observer)

This is what others have also said in their responses.  I'm not
convinced that we need an "observer" in this second step
which actually qualifies as a "conscious observer"; I make
no distinction between conscious and unconscious; for
example, a "test spin" in the system which would start down
and flip up only if a particle passed it would qualify
as an "observer".
It is well known in Quantum Mechanics that the word "observer"
can be replaced by "physical change from which it is possible
that a later observer can discern information".
It seems to me that rather than an AI, some sort of physical
"tripwire" would work just as well (it would not be as
convincing, but perhaps informative); any theory which requires
observers to be distinguished between conscious and unconscious
is highly dubious.


Let me make things precise by putting some demands on
the Copenhagen to be considered :

1. no future -> past communication
	(note that there is some apparent future -> past
	communication in QM, but this is an artifact from
	the fact that we are looking for a specific
	transition amplitude - ie a specific future)

2. collapse occurs or does not at a certain time ;
	collapse cannot be "deferred" conditional
	on events spread over time.

3. "consciousness" cannot be part of the criterion for
	collapse : a physical change which can be later
	"observed" must also cause collapse, 

The 3rd is not really a demand; it must be satisfied
(given #2) or paradoxes will arise (e.g. consider
a loop of wire around a hole in the double-slit experiment).

Criterion 2 cannot be proven, but I think it is a reasonable
requirement on physical law; if this kind of "active"
universe were allowed, I could simply postulate that the
universe will do whatever it wants such that we are tricked
into thinking such-and-such is true.  (Criterion 2 is actually
just a special case of criterion 1; it is known that well-
defined physics can be constructed without criterion 1
(Feynman's radiator-absorber theory), but all known viable
relativistic physics can be expressed with criterion 1
satisfied.)

Now, given these demands, we CAN create a Deutsch-Gerlach
experiment without a quantum-AI between the gratings; all
we need is some clever device which measures whether the beam
is at spot 1 or spot 2, then records that a measurement
was made, and then clears its "state" (with quantum accuracy,
so that the final condition is identical for occurrences 1 and 2).
This will cause the Copenhagen spins to decohere (become
up or down probabilistically, no longer a superposition).

Let me anticipate objections :

True enough, no "conscious" observer at the end could tell
whether the beam was at spot 1 or 2, since we have erased
that record (in fact this must be true for the many worlds
to maintain interference).  Thus you might say that we need not
have collapse, since demand 3 does not apply.  However, demands
1 and 2 save us.  When the device in the Gerlach makes the
measurement, the "laws of physics" do not yet "know" that
it will clear its memory later.  Thus, we appear to have
a measurement that can be later observed; thus collapse must
happen.  Some of you may prefer to think of a human being
nearby flipping a random coin - if he gets Heads, he looks
at the device before it has erased itself (ignore decoherence
via interactions with the atmosphere and each other) - if he
gets Tails, he only looks at the final beam (to distinguish
interference vs. ensemble).  Hence I am demanding that the
physics at a given time does not depend on the results of a
random coin toss at a later time.  (Note that when he gets
Heads, he is NOT looking at the beams, which are already past,
but looking at the device, so the device must have earlier
caused collapse in the beams.)

Deutsch's version, with the "conscious AI" would have freed
us of the need to externally demand criterion #2 (I say
criterion 2, because I noted that #3 follows directly given #2,
though 3 is more intuitive).



Charles Bloom / cb at my domain