von Neumann and Gleason vs. Bell

Returning to physics topics today, I want to talk about an important point of contention between von Neumann and Gleason on one hand, and Bell on the other. I had a series of posts about Bell in which I discussed his major achievement. However, I do not subscribe to his ontic point of view, and today I will attempt to explain why and perhaps persuade the reader with what I consider to be a solid argument.

Before Bell wrote his famous paper he had another one in which he criticized von Neumann, Jauch and Piron, and Gleason. The key point of the criticism was that additivity of orthogonal projection operators does not necessarily imply the additivity of expectation values:

$$\langle P_u + P_v \rangle = \langle P_{u}\rangle + \langle P_{v}\rangle$$

The actual technical requirements in von Neumann's and Gleason's cases were slightly different, but they can be reduced to the statement above; more importantly, this requirement is the nontrivial one in a particular proof of Gleason's theorem.


To Bell, additivity of expectation values was not a natural requirement, because he was able to construct hidden variable models violating it. This was the basis for his criticism of von Neumann and his theorem on the impossibility of hidden variables. But is this additivity requirement unnatural? What can happen when it is violated? I will show that violation of additivity of expectation values can allow instantaneous communication at a distance.

The experimental setting is simple and involves spin 1 particles. The example which I will present is given in the late Asher Peres' book Quantum Theory: Concepts and Methods, on page 191. (This book is one of my main sources of inspiration for how we should understand and interpret quantum mechanics.)

The mathematical identity we need is:

$$J_{z}^{2} = {(J_{x}^{2} - J_{y}^{2})}^2$$

and the experiment is as follows: a beam of spin 1 particles is sent through a beam splitter which sends to the left particles of eigenvalue zero for $$J_{z}^{2}$$ and to the right particles of eigenvalue one for $$J_{z}^{2}$$.
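Before going on, a quick sanity check (mine, not from Peres' book): the identity above can be verified directly with the standard spin 1 matrices written in the $$J_z$$ eigenbasis, where:

$$J_{x}^{2} - J_{y}^{2} = \left( \begin{array}{ccc} 0 & 0 & 1 \\ 0 & 0 & 0 \\ 1 & 0 & 0 \end{array}\right)$$

and squaring this matrix gives

$${(J_{x}^{2} - J_{y}^{2})}^{2} = \left( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{array}\right) = J_{z}^{2}$$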

Now a lab on the right decides to measure either whether $$J_z = \pm 1$$ or whether $$J_{x}^{2} - J_{y}^{2} = \pm 1$$.

For the laboratory on the right, let's call the projectors in the first case $$P_u$$ and $$P_v$$, and in the second case $$P_x$$ and $$P_y$$.

For the lab on the left, let's call the projectors in the first case $$P_{w1}$$ and in the second case $$P_{w2}$$.

Because of the mathematical identity $$P_u + P_v = P_x + P_y$$, the issue becomes: should the corresponding additivity of expectation values hold as well?

$$\langle P_{u}\rangle + \langle P_{v}\rangle = \langle P_{x}\rangle + \langle P_{y}\rangle$$

For the punch line we have the following identities:

$$\langle P_{w1}\rangle = 1 - \langle P_{u}\rangle - \langle P_{v}\rangle$$
and
$$\langle P_{w2}\rangle = 1 - \langle P_{x}\rangle - \langle P_{y}\rangle$$
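These two identities follow from completeness (my own unpacking): in each case the projectors for the left beam and for the two right-beam outcomes sum to the identity operator,

$$P_{w1} + P_u + P_v = I = P_{w2} + P_x + P_y$$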

and as such if the additivity requirement is violated we have:

$$\langle P_{w1}\rangle \neq \langle P_{w2}\rangle$$

Therefore, regardless of the actual spatial separation, the lab on the left can figure out which experiment the lab on the right decided to perform!!!

With this experimental setup, if additivity of expectation values is false, you can even violate causality!!!

Back to Bell: the fact that von Neumann and Gleason did not provide a justification for their requirements does not invalidate their arguments. The justification was found at a later time.

But what about the Bohmian interpretation of quantum mechanics? Although there are superluminal speeds in the theory, superluminal signaling is not possible in it. This is because the Bohmian interpretation respects the Born rule, which is a consequence of Gleason's theorem, and it respects the additivity of expectation values as well. The Bohmian interpretation suffers from other issues, however.

A US Presidential Election Analysis

Once in a while, important events deserve to be discussed and they dislodge physics topics. I wrote in the past about Donald Trump, and today I want to revisit the topic and present some analysis on what is currently going on in US election politics. By now the election outcome is all but certain: Trump will lose, and Clinton will win, but what is the basis for this prediction?

If you have never heard of it, there is an amazing site by Nate Silver: http://projects.fivethirtyeight.com/2016-election-forecast/

Nate Silver has huge, well-deserved prediction credibility, and he performs far more in-depth analysis of the elections than what you find on the usual media outlets like CNN.

In the image below you see the daily graph of the winning chances for Trump (the red line) and Clinton (the blue line).

In mid-July Trump got a post-Republican-convention boost and he was on the rise until Clinton had the Democratic convention. The sharp Trump decline after that convention was due to his attack on the Khan family, whose son died for America. When that scandal faded in mid-August, Trump's odds began improving, following Clinton's erosion of trust due to the email server scandal and also due to concerns about her health. Then came the first debate, in which Trump had a very good first half hour but was ill prepared for the long haul of the debate. That started a turn-off reaction among the independent voters, who only now got their first serious look at him.

Still, the slide was temporary, the fluctuations were comparable with those of the prior two weeks, and for two days he was climbing back in the polls. At this point the famous tape of him bragging about grabbing women by their genitals surfaced, and this started a chain reaction, mostly inside the Republican party. The tape reversed the trend, but what killed his election chances was his performance in the second debate. Trump made two strategic mistakes:

• he attacked Hillary (and Bill Clinton) instead of sincerely apologizing
• he dismissed the tape as locker room talk and claimed he did not do anything physical
Let's see why those were fatal mistakes for him. Going on the offensive when people expected genuine contrition made Trump appear like a rabid dog, and people were hugely disgusted by his behavior. The general consensus among the independents who watched the second debate was that they themselves felt dirty and in need of a shower. The second debate reduced Trump's chances to the low teens. If you look at the two prior cycles, June-August and August-October, you notice Trump's bounce-back rate and see that there is not enough time for him to close the gap before election day.

Now even if the election were postponed a few months, Trump would never recover, due to his second strategic mistake. For all his playboy behavior, it is impossible that he never did anything real like what he was bragging about on the tape. But claiming it was all "only talk", as opposed to Bill Clinton's actions, encouraged women to come forward to tell their stories. Once started, this cannot be stopped. Just ask Bill Cosby how it happened in his case: the same pattern will repeat here.

When the tape was released, Republicans running for reelection started deserting Trump out of fear that he would negatively affect their chances of reelection due to the backlash in the women's vote. But by now it is clear Trump's chances of election are virtually zero, and this has the potential to split the Republican party.

After the election loss, the finger-pointing will begin. Reince Priebus has no real vision or power and will most likely lose his job. The power vacuum will start a chaotic period for the Republican party, which will end either in a victory of the anti-Trump forces or in a party split. My bet is that the party will remain intact, since politicians tend to act as a pack: there is strength in numbers and it is hard to survive alone.

Local Causality in a Friedmann-Robertson-Walker Spacetime

A few days ago I learned about a controversy regarding Joy Christian's paper:
Local Causality in a Friedmann-Robertson-Walker Spacetime which got published in Annals of Physics and was recently withdrawn: http://retractionwatch.com/2016/09/30/physicist-threatens-legal-action-after-journal-mysteriously-removed-study/

The paper repeats the same mathematically incorrect arguments of Joy Christian against Bell's theorem and has nothing to do with Friedmann-Robertson-Walker spacetime. The FRW space was only used as a trick to get the wrong referees, ones who are not experts on Bell's theorem. In particular, the argument is the same as in this incorrect one-page preprint of Joy's.

The mistake happens in two steps:
• a unification of two algebras into the same equation
• a subtle transition from a variable to an index in a computation mixing apples with oranges
I will run the explanation in parallel between the one-pager and the withdrawn paper because it is easier to see the mistake in the one-pager.

Step 1: One-pager Eq. 3 is the same as FRW paper Eq. 49:

$$\beta_j \beta_k = -\delta_{jk} - \epsilon_{jkl} \beta_l$$
$$L(a, \lambda) L(b, \lambda) = - a\cdot b - L(a\times b, \lambda)$$

In the FRW paper $$L(a, \lambda) = \lambda I\cdot a$$, while in the one-pager $$\beta_j (\lambda) = \lambda \beta_j$$, where $$\lambda$$ is a choice of orientation. This may look like an innocuous unification, but in fact it describes two distinct algebras with distinct representations.

This means that Eqs. 3/49 describe two multiplication rules (let's call them A for apples and O for oranges). Unpacked, the multiplication rules are:

$$A_j A_k = -\delta_{jk} + \epsilon_{jkl} A_l$$
$$O_j O_k = -\delta_{jk} - \epsilon_{jkl} O_l$$

The matrix representations are:

$$A_1 = \left( \begin{array}{cc} i & 0 \\ 0 & -i \end{array}\right) = i\sigma_3$$
$$A_2 = \left( \begin{array}{cc} 0 & -1 \\ 1 & 0 \end{array}\right) = -i \sigma_2$$
$$A_3 = \left( \begin{array}{cc} 0 & -i \\ -i & 0 \end{array}\right)= -i \sigma_1$$

and $$O_i = - A_i = {A_i}^{\dagger}$$

Try multiplying the above matrices to convince yourself that they are indeed a valid representation of the multiplication rule.
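If you prefer to let a computer do the multiplications, here is a minimal JavaScript sketch (all helper names are mine) which checks the apples rule on the representation above. Complex numbers are stored as [re, im] pairs:

//complex multiplication and addition on [re, im] pairs
function cMul(x, y) { return [x[0]*y[0] - x[1]*y[1], x[0]*y[1] + x[1]*y[0]]; };
function cAdd(x, y) { return [x[0] + y[0], x[1] + y[1]]; };

//product of two 2x2 complex matrices
function mMul(M, N) {
var R = [[[0,0],[0,0]], [[0,0],[0,0]]];
for (var i = 0; i < 2; i++)
for (var j = 0; j < 2; j++)
for (var k = 0; k < 2; k++)
R[i][j] = cAdd(R[i][j], cMul(M[i][k], N[k][j]));
return R;
};

var A1 = [[[0,1],[0,0]], [[0,0],[0,-1]]]; //i sigma_3
var A2 = [[[0,0],[-1,0]], [[1,0],[0,0]]]; //-i sigma_2
var A3 = [[[0,0],[0,-1]], [[0,-1],[0,0]]]; //-i sigma_1

//A_1 A_2 should give A_3 (the +epsilon rule) and A_1 A_1 should give minus the identity
console.log(JSON.stringify(mMul(A1, A2)) == JSON.stringify(A3)); //prints true
console.log(JSON.stringify(mMul(A1, A1))); //prints -1 times the identity matrix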

There is even a ket and bra (or column and row vector) representation of the two distinct algebras, but I won't go into details since it requires a math detour which would take the focus away from Joy's mistake.

Step 2: summing apples with oranges (or column vectors with row vectors)

The summation is done in steps 5-7 and 67-75. The problem is that the sum from 1 to n contains two kinds of objects, apples and oranges, and should in fact be broken up into two sums. If it is to be combined into a single sum, then we need to convert apples and oranges into orientation-independent objects. Since $$L(a, \lambda) = \lambda I\cdot a$$ and $$\beta_j (\lambda) = \lambda \beta_j$$, with $$I \cdot a$$ and $$\beta_j$$ orientation-independent objects, when we convert the two kinds of objects to a single unified kind there is a missing factor of lambda.

Since $$O_j=\beta_j (\lambda^k) = \lambda^k \beta_j$$ with $$\lambda^k = +1$$, and $$A_j=\beta_j (\lambda^k) = \lambda^k \beta_j$$ with $$\lambda^k = -1$$, where $$\lambda^k$$ is the orientation of the k-th pair of particles, in the transition from 6 to 7 and from 72 to 73 the unified sum is missing a $$\lambda^k$$ factor.

Again, either break up the sum into apples and oranges (where the index k tells you which kind of object you are dealing with), or unify the sum and adjust it by converting it to orientation-free objects, which is done by multiplication by $$\lambda^k$$. If we separate the sums, they do not cancel each other out because there is a -1 conversion factor from apples to oranges, $$O = - A$$; and if we unify the sum as Joy does in Eq. 74, the sum is not of $$\lambda^k$$ but of $${(\lambda^k)}^2$$, which does not vanish.
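Written out explicitly (my own unpacking), the correctly unified sum is

$$\sum_{k=1}^{n} \lambda^k \beta_j (\lambda^k) = \left( \sum_{k=1}^{n} {(\lambda^k)}^2 \right) \beta_j = n\, \beta_j$$

which does not vanish, whereas Joy's computation treats it as $$\sum_{k} \lambda^k \beta_j$$, which does average to zero over random orientations.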

As it happens, Joy's research program is plagued by this -1 (or missing lambda) mistake in his attempts to vanquish a cross product term. But even if his proposal were mathematically valid, it would not represent a genuine challenge to Bell's theorem. Inspired by Joy's program, James Weatherall found a mathematically valid example very similar to Joy's proposal, but one which does not use quaternions/Clifford algebras.

The lesson of Weatherall is that correlations must be computed using actual experimental results, and the computation (like the one Joy does at steps 67-75) must not be made in a hypothetical space of "beables".

Now back to the paper withdrawal: the journal did not act properly; it should have notified Joy before taking action. However, Joy did not act in good faith either, disguising the title to sneak the paper past imperfect peer review, and his attempt at victimization in the comments section has no merit. In the end the paper is mathematically incorrect, has nothing to do with FRW spacetime, and (as shown by Weatherall) Joy's program is fatally flawed and cannot get off the ground even if there were no mathematical mistakes in it.

The whole is greater than the sum of its parts

The title of today's post is a quote from Aristotle, but I want to illustrate it in the quantum formalism. Here I will refer to a famous Hardy paper: Quantum Theory From Five Reasonable Axioms. In it one finds the following definitions:

• The number of degrees of freedom, K, is defined as the minimum number of probability measurements needed to determine the state, or, more roughly, as the number of real parameters required to specify the state.
• The dimension, N, is defined as the maximum number of states that can be reliably distinguished from one another in a single shot measurement.
Quantum mechanics obeys $$K=N^2$$ while classical physics obeys $$K=N$$. (For example, a qubit has N=2 and K=4, while a classical bit has N=2 and K=2.)

Now suppose nature is realistic and the electron spin does exist independent of measurement. From Stern-Gerlach experiments we know what happens when we pass a beam of electrons through two such devices rotated by an angle $$\alpha$$ relative to each other: if we pick only the spin up electrons from the first device, on the second device the electrons are still deflected up $$\cos^2 (\alpha /2)$$ percent of the time and are deflected down $$\sin^2 (\alpha /2)$$ percent of the time. This is an experimental fact.

Now suppose we have a source of electron pairs prepared in a singlet state. This means that the total spin of the system is zero. There is no reason to distinguish a particular direction in the universe, and with the assumption of the existence of spin independent of measurement, we can very naturally assume that our singlet state electron source produces an isotropic distribution of particles with opposite spins. Now we ask: in an EPR-B experiment, what kind of correlation would Alice and Bob get under the above assumptions?

We can go about finding the answer to this in three ways. First, we can cheat and look the answer up in a 1957 paper by Bohm and Aharonov, who first made the computation. This paper (and the answer) is cited by Bell in his famous "On the Einstein-Podolsky-Rosen paradox". But we can do better than that. We can play with the simulation software from last time. Here is what you need to do:

-replace the generating functions with:

function GenerateAliceOutputFromSharedRandomness(direction, sharedRandomness3DVector) {
//cosine of the angle between the measurement direction and the hidden spin vector
var cosAngle= Dot(direction, sharedRandomness3DVector);
//the probability of +1 is cos^2(angle/2) = (1+cos(angle))/2
var cosHalfAngleSquared = (1+cosAngle)/2;
if (Math.random() < cosHalfAngleSquared )
return +1;
else
return -1;
};

function GenerateBobOutputFromSharedRandomness(direction, sharedRandomness3DVector) {
//Bob's particle carries the opposite spin, so his outcomes are flipped
var cosAngle= Dot(direction, sharedRandomness3DVector);
var cosHalfAngleSquared = (1+cosAngle)/2;
if (Math.random() < cosHalfAngleSquared )
return -1;
else
return +1;
};

-replace the -cosine curve drawing with a -0.3333333 cosine curve:

boardCorrelations.create('functiongraph', [function(t){ return -0.3333333*Math.cos(t); }, -Math.PI*10, Math.PI*10],{strokeColor:  "#66ff66", strokeWidth:2,highlightStrokeColor: "#66ff66", highlightStrokeWidth:2});

-replace the fit test for the cosine curve with one for a -0.3333333 cosine curve:

var diffCosine = epsilon + 0.3333333*Math.cos(angle);

and the result of the program (for 1000 directions and 1000 experiments) is a cloud of points falling on the -0.3333333 cosine curve.

So how does the program work? The sharedRandomness3DVector is the direction along which the spins are randomly generated. The dot product computes the cosine of the angle between the measurement direction and the spin, and from it we can compute the cosine of the half angle. The square of the cosine of the half angle is used to determine the random outcome. The resulting curve is 1/3 of the experimental correlation curve. Notice that the outputs for Alice and Bob are generated completely independently (locality).

But the actual analytical computation is not that hard to do either. We proceed in two steps.

Step 1: Let $$\beta$$ be the angle between one spin $$x$$ and a measurement device direction $$a$$. We have: $$\cos (\beta) = a\cdot x$$ and:

$${(\cos \frac{\beta}{2})}^2 = \frac{1+\cos\beta}{2} = \frac{1+a\cdot x}{2}$$

Keeping the direction $$x$$ constant, the measurement outcomes for Alice and Bob measuring along the directions $$a$$ and $$b$$ respectively are:

++ $$\frac{1+a\cdot x}{2} \frac{1+b\cdot (-x)}{2}$$ percent of the time
-- $$\frac{1-a\cdot x}{2} \frac{1-b\cdot (-x)}{2}$$ percent of the time
+- $$\frac{1+a\cdot x}{2} \frac{1-b\cdot (-x)}{2}$$ percent of the time
-+ $$\frac{1-a\cdot x}{2} \frac{1+b\cdot (-x)}{2}$$ percent of the time

which yields the correlation: $$-(a\cdot x) (b \cdot x)$$
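To spell this step out (the notation is mine): since $$b\cdot (-x) = -b\cdot x$$ and the two outcomes at fixed $$x$$ are independent, the correlation factorizes as

$$\left(\frac{1+a\cdot x}{2} - \frac{1-a\cdot x}{2}\right) \left(\frac{1-b\cdot x}{2} - \frac{1+b\cdot x}{2}\right) = (a\cdot x)(-b\cdot x) = -(a\cdot x) (b\cdot x)$$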

Step 2: integrate $$-(a\cdot x) (b \cdot x)$$ over all directions $$x$$. To this aim, align $$a$$ with the z axis and put $$b$$ in the y-z plane:

$$a=(0,0,1)$$
$$b=(0, \sin\alpha , \cos\alpha)$$

then go to spherical coordinates integrating using:

$$\frac{1}{4\pi}\int_{0}^{2\pi} d\theta \int_{0}^{\pi} \sin\phi d\phi$$

$$a\cdot x = \cos\phi$$
$$b\cdot x = (0, \sin\alpha, \cos\alpha)\cdot(\sin\phi \cos\theta, \sin\phi\sin\theta, \cos\phi) = \sin\alpha \sin\phi \sin\theta + \cos\alpha \cos\phi$$

where $$\alpha$$ is the angle between $$a$$ and $$b$$.

Plugging all back in and doing the trivial integration yields: $$-\frac{\cos\alpha}{3}$$
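For completeness, here is the explicit computation (the $$\sin\theta$$ term drops out after the $$\theta$$ integration):

$$-\frac{1}{4\pi}\int_{0}^{2\pi} d\theta \int_{0}^{\pi} \cos\phi \left(\sin\alpha \sin\phi \sin\theta + \cos\alpha \cos\phi \right) \sin\phi\, d\phi = -\frac{\cos\alpha}{2}\int_{0}^{\pi} {\cos}^2\phi \sin\phi\, d\phi = -\frac{\cos\alpha}{3}$$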

So now for the moral of the story: the quantum mechanics prediction, and the experimentally observed correlation, is $$-\cos\alpha$$ and not $$-\frac{1}{3} \cos\alpha$$.

The incorrect 1/3 correlation factor comes from demanding (1) the experimentally proven behavior of two consecutive S-G device measurements, (2) the hypothesis that the electron spins exist before measurement, and (3) an isotropic distribution of spins originating from a total spin zero state.

(1) and (3) cannot be discarded because (1) is an experimental behavior, and (3) is a very natural demand of isotropy. It is (2) which is the faulty assumption.

If (2) were true then, circling back to Hardy's result, we would be under the classical physics condition $$K=N$$, which means that the whole is the sum of its parts.

Bell considered both the 1/3 result and the one from his inequality and decided to showcase his inequality for experimental reasons: "It is probably less easy, experimentally, to distinguish (10) from (3), than (11) from (3)." Both hidden variable models:

if (Dot(direction, sharedRandomness3DVector) < 0)
return +1;
else
return -1;

and

var cosAngle= Dot(direction, sharedRandomness3DVector);
var cosHalfAngleSquared = (1+cosAngle)/2;
if (Math.random() < cosHalfAngleSquared )
return -1;
else
return +1;

are at odds with quantum mechanics and experimental results. The difference between them is in the correlation behavior at 0 and 180 degrees. If we allow information transfer between Alice's generating function and Bob's generating function (nonlocality), then it is easy to generate whatever correlation curve we want under both scenarios (play with the computer model to see how it can be done).
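For example, here is a hypothetical nonlocal sketch (the signature is mine and is not part of the program: Bob's function is allowed to see Alice's setting and outcome). It reproduces the quantum $$-\cos\alpha$$ curve exactly:

function GenerateBobOutputNonlocal(bobDirection, aliceDirection, aliceOutcome) {
//nonlocal step: Bob uses Alice's measurement direction and outcome
var cosAngle = Dot(aliceDirection, bobDirection);
//anti-correlate with probability (1+cosAngle)/2, so the product of outcomes averages to -cosAngle
if (Math.random() < (1 + cosAngle)/2)
return -aliceOutcome;
else
return +aliceOutcome;
};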

So from the realism point of view, which hidden variable model is better? Should we insist on perfect anti-correlations at 0 degrees, or should we demand the two consecutive S-G results along with realism? It does not matter, since both are wrong. In the end, local realism is dead.

Explanation for Bell's theorem modeling program

Today I will explain in detail the code from last time and show how you can change it to experiment with Bell's theorem. The code below needs only a text editor to make modifications and requires only a web browser to run. In other words, it is trivial to play with, provided you understand the basics of HTML and JavaScript. For elementary introductions to those topics see here and here.

In a standard HTML page, we start in the body section with the entries responsible for plotting the graph in the end.

<body>
<script src="http://jsxgraph.uni-bayreuth.de/distrib/jsxgraphcore.js" type="text/javascript"></script>

Then we have the following HTML table

<table border="4" style="width: 50%;">
<tr><td style="width: 25%;">
<br />
Number of experiments: <input id="totAngMom" type="text" value="100" />
<br />
Number of directions: <input id="totTestDir" type="text" value="100" />
<br />

<input onclick="clearInput();" type="button" value="Clear Data" />

<input onclick="generateRandomData();" type="button" value="Generate Shared Random Data" />
<br />

<textarea cols="65" id="in_data" rows="7">
</textarea>
<br />

<input onclick="clearTestDir();" type="button" value="Clear data" />

<input onclick="generateTestDir();" type="button" value="Generate Random Alice Bob directions (x,y,z,x,y,z)" />
<textarea cols="65" id="in_test" rows="4">
</textarea>
<br />
<input onclick="clearOutput();" type="button" value="Clear Data" />

<input onclick="generateData();" type="button" value="Generate Data from shared randomness" />
<br />
Legend: Direction index|Data index|Measurement Alice|Measurement Bob
<textarea cols="65" id="out_measurements" rows="4">
</textarea>
<input onclick="clearBoard();" type="button" value="Clear Graph" />
<input onclick="plotData();" type="button" value="Plot Data" />

</td>
</tr>
<tr>
<td>
<div class="jxgbox" id="jxgboxCorrelations" style="height: 400px; width: 550px;">
</div>

</td></tr>
</table>

and we close the body:

</body>

The brain of the page is encapsulated by script tags:

<script type="text/javascript">
</script>

which can be placed anywhere inside the HTML page. Here are the functions which are declared inside the script tags:

//Dot is the scalar product of 2 3D vectors
function Dot(a, b)
{
return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
};

This simply computes the dot product of two vectors in ordinary 3D Euclidean space. As a JavaScript reminder, arrays start at index zero and go to N-1. Also, in JavaScript comments start with a double slash // and statements end in a semicolon ;

Next there is a little utility function which computes the magnitude of a vector:

//Norm computes the norm of a 3D vector
function GetNorm(vect)
{
return Math.sqrt(Dot(vect, vect));
};

This is followed by another utility function which normalizes a vector:

//Normalize generates a unit vector out of a vector
function Normalize(vect)
{
//declares the variable
var ret = new Array(3);
//computes the norm
var norm = GetNorm(vect);

//scales the vector
ret[0] = vect[0]/norm;
ret[1] = vect[1]/norm;
ret[2] = vect[2]/norm;
return ret;
};

To create a randomly oriented unit vector we use the function below, which first generates a random point in a cube of side 2, rejects the points outside the unit sphere (trying again, so the surviving directions are not biased toward the cube corners), and then normalizes the vector:

//RandomDirection create a 3D unit vector of random direction
function RandomDirection()
{
//declares the variable
var ret = new Array(3);

//fills a 3D cube with coordinates from -1 to 1 on each direction
ret[0] = 2*(Math.random()-0.5);
ret[1] = 2*(Math.random()-0.5);
ret[2] = 2*(Math.random()-0.5);

//excludes the points outside of a unit sphere (tries again)
if(GetNorm(ret) > 1)
return RandomDirection();
return Normalize(ret);
};

The rest of the code is this:

var generateData = function()
{
clearBoard();
clearOutput();
//gets the data
var angMom = new Array();
var t = document.getElementById('in_data').value;
var data = t.split('\n');
for (var i=0;i<data.length;i++)
{
var vect = data[i].split(',');
if(vect.length == 3)
angMom[i] = data[i].split(',');
}

var newTotAngMom = angMom.length;
clearBoard();
var varianceLinear = 0;
var varianceCosine = 0;
var totTestDirs = document.getElementById('totTestDir').value;

var abDirections = new Array();
var AliceDirections = new Array();
var BobDirections = new Array();
var t2 = document.getElementById('in_test').value;
var data2 = t2.split('\n');
for (var k = 0; k < data2.length; k++)
{
var vect2 = data2[k].split(',');
if (vect2.length == 6)
{
abDirections[k] = data2[k].split(',');
AliceDirections[k] = data2[k].split(',');
BobDirections[k] = data2[k].split(',');

AliceDirections[k][0] = abDirections[k][0];
AliceDirections[k][1] = abDirections[k][1];
AliceDirections[k][2] = abDirections[k][2];
BobDirections[k][0]   = abDirections[k][3];
BobDirections[k][1]   = abDirections[k][4];
BobDirections[k][2]   = abDirections[k][5];
}
}

var TempOutput = "";

//computes the output
for(var j=0; j<totTestDirs; j++)
{
var a = AliceDirections[j];
var b = BobDirections[j];
for(var i=0; i<newTotAngMom; i++)
{
TempOutput = TempOutput + (j+1);
TempOutput = TempOutput + ",";
TempOutput = TempOutput + (i+1);
TempOutput = TempOutput + ",";
TempOutput = TempOutput + (GenerateAliceOutputFromSharedRandomness(a, angMom[i]));
TempOutput = TempOutput + ",";
TempOutput = TempOutput + (GenerateBobOutputFromSharedRandomness(b, angMom[i]));
if(i != newTotAngMom-1 || j != totTestDirs-1)
TempOutput = TempOutput + " \n";
}
}

apendResults(TempOutput);
};

var plotData = function()
{
clearBoard();
boardCorrelations.suspendUpdate();
//gets the data
var angMom = new Array();
var t = document.getElementById('in_data').value;
var data = t.split('\n');
for (var i=0;i<data.length;i++)
{
var vect = data[i].split(',');
if(vect.length == 3)
angMom[i] = data[i].split(',');
}

var newTotAngMom = angMom.length;
var varianceLinear = 0;
var varianceCosine = 0;
var totTestDirs = document.getElementById('totTestDir').value;

//extract directions
var abDirections = new Array();
var AliceDirections = new Array();
var BobDirections = new Array();
var t2 = document.getElementById('in_test').value;
var data2 = t2.split('\n');
for (var k = 0; k < data2.length; k++)
{
var vect2 = data2[k].split(',');
if (vect2.length == 6)
{
abDirections[k] = data2[k].split(',');
AliceDirections[k] = data2[k].split(',');
BobDirections[k] = data2[k].split(',');

AliceDirections[k][0] = abDirections[k][0];
AliceDirections[k][1] = abDirections[k][1];
AliceDirections[k][2] = abDirections[k][2];
BobDirections[k][0]   = abDirections[k][3];
BobDirections[k][1]   = abDirections[k][4];
BobDirections[k][2]   = abDirections[k][5];
}
}

var tempLine = new Array();
var Data_Val = document.getElementById('out_measurements').value;
var data_rows = Data_Val.split('\n');

var directionIndex = 1;
var beginNewDirection = false;

var a = new Array(3);
a[0] = AliceDirections[0][0];
a[1] = AliceDirections[0][1];
a[2] = AliceDirections[0][2];
var b = new Array(3);
b[0] = BobDirections[0][0];
b[1] = BobDirections[0][1];
b[2] = BobDirections[0][2];
var sum = 0;

for (var ii=0;ii<data_rows.length;ii++)
{
//parse the input line
var vect = data_rows[ii].split(',');
if(vect.length == 4)
tempLine = data_rows[ii].split(',');

//see if a new direction index is starting
if (directionIndex != tempLine[0])
{
beginNewDirection = true;
}

if(!beginNewDirection)
{
var sharedRandomnessIndex = tempLine[1];
var sharedRandomness = angMom[sharedRandomnessIndex];
var aliceOutcome = tempLine[2];
var bobOutcome = tempLine[3];
sum = sum + aliceOutcome*bobOutcome;
}

if (beginNewDirection)
{
//finish computation
var epsilon = sum/newTotAngMom;
var angle = Math.acos(Dot(a, b));
boardCorrelations.createElement('point', [angle,epsilon],{size:0.1,withLabel:false});

var diffLinear = epsilon - (-1+2/Math.PI*angle);
varianceLinear = varianceLinear + diffLinear*diffLinear;
var diffCosine = epsilon + Math.cos(angle);
varianceCosine = varianceCosine + diffCosine*diffCosine;

//reset and start a new cycle
directionIndex = tempLine[0];
a[0] = AliceDirections[directionIndex-1][0];
a[1] = AliceDirections[directionIndex-1][1];
a[2] = AliceDirections[directionIndex-1][2];
b[0] = BobDirections[directionIndex-1][0];
b[1] = BobDirections[directionIndex-1][1];
b[2] = BobDirections[directionIndex-1][2];
sum = 0;
var sharedRandomnessIndex = tempLine[1];
var sharedRandomness = angMom[sharedRandomnessIndex];
var aliceOutcome = tempLine[2];
var bobOutcome = tempLine[3];
sum = sum + aliceOutcome*bobOutcome;
beginNewDirection = false;
}

}
//finish computation for last element of the loop above
var epsilon = sum/newTotAngMom;
var angle = Math.acos(Dot(a, b));
boardCorrelations.createElement('point', [angle,epsilon],{size:0.1,withLabel:false});
var diffLinear = epsilon - (-1+2/Math.PI*angle);
varianceLinear = varianceLinear + diffLinear*diffLinear;
var diffCosine = epsilon + Math.cos(angle);
varianceCosine = varianceCosine + diffCosine*diffCosine;
//display total fit
boardCorrelations.createElement('text',[2.0, -0.7, 'Linear Fitting: ' + varianceLinear],{});
boardCorrelations.createElement('text',[2.0, -0.8, 'Cosine Fitting: ' + varianceCosine],{});
boardCorrelations.createElement('text',[2.0, -0.9, 'Cosine/Linear: ' + varianceCosine/varianceLinear],{});
boardCorrelations.unsuspendUpdate();
};

var clearBoard = function()
{
JXG.JSXGraph.freeBoard(boardCorrelations);
boardCorrelations = JXG.JSXGraph.initBoard('jxgboxCorrelations', {boundingbox:[-0.20, 1.25, 3.4, -1.25], axis:true, showCopyright:false});
boardCorrelations.create('functiongraph', [function(t){ return -Math.cos(t); }, -Math.PI*10, Math.PI*10], {strokeColor: "#66ff66", strokeWidth:2, highlightStrokeColor: "#66ff66", highlightStrokeWidth:2});
boardCorrelations.create('functiongraph', [function(t){ return -1+2/Math.PI*t; }, 0, Math.PI], {strokeColor: "#6666ff", strokeWidth:2, highlightStrokeColor: "#6666ff", highlightStrokeWidth:2});
};

var clearInput = function()
{
document.getElementById('in_data').value = '';
};

var clearTestDir = function()
{
document.getElementById('in_test').value = '';
};

var clearOutput = function()
{
document.getElementById('out_measurements').value = '';
};

var generateTestDir = function()
{
clearBoard();
var totTestDir = document.getElementById('totTestDir').value;
var testDir = new Array(totTestDir);
var strData = "";
for(var i=0; i<totTestDir; i++)
{
//first is Alice, second is Bob
testDir[i] = RandomDirection();
strData = strData + testDir[i][0] + ", " + testDir[i][1] + ", " + testDir[i][2]+ ", " ;
testDir[i] = RandomDirection();
strData = strData + testDir[i][0] + ", " + testDir[i][1] + ", " + testDir[i][2] + '\n';
}

document.getElementById('in_test').value = strData;
};

var generateRandomData = function()
{
clearBoard();
var totAngMoms = document.getElementById('totAngMom').value;
var angMom = new Array(totAngMoms);
var strData = "";
for(var i=0; i<totAngMoms; i++)
{
angMom[i] = RandomDirection();
strData = strData + angMom[i][0] + ", " + angMom[i][1] + ", " + angMom[i][2] + '\n';
}

document.getElementById('in_data').value = strData;
};

var apendResults= function(newData)
{
var existingData = document.getElementById('out_measurements').value;
existingData = existingData + newData;
document.getElementById('out_measurements').value = existingData;
};

function GenerateAliceOutputFromSharedRandomness(direction, sharedRandomness3DVector) {
//replace this with your own function returning +1 or -1
if (Dot(direction, sharedRandomness3DVector) > 0)
return +1;
else
return -1;
};

function GenerateBobOutputFromSharedRandomness(direction, sharedRandomness3DVector) {
//replace this with your own function returning +1 or -1
if (Dot(direction, sharedRandomness3DVector) < 0)
return +1;
else
return -1;
};

var boardCorrelations = JXG.JSXGraph.initBoard('jxgboxCorrelations', {axis:true, boundingbox: [-0.25, 1.25, 3.4, -1.25], showCopyright:false});

clearBoard();
generateRandomData();
generateTestDir();
generateData();
plotData();


The key to the whole exercise is the following two functions:

function GenerateAliceOutputFromSharedRandomness(direction, sharedRandomness3DVector) {
//replace this with your own function returning +1 or -1
if (Dot(direction, sharedRandomness3DVector) > 0)
return +1;
else
return -1;
};

function GenerateBobOutputFromSharedRandomness(direction, sharedRandomness3DVector) {
//replace this with your own function returning +1 or -1
if (Dot(direction, sharedRandomness3DVector) < 0)
return +1;
else
return -1;
};

To experiment with various hidden variable models, all you have to do is replace the two functions above with your own hidden variable concoction which uses the shared variable "sharedRandomness3DVector".

There are certain models for which, if we return zero a certain number of times as a function of the angle between the direction and sharedRandomness3DVector, one can obtain the quantum mechanics correlation curve. (In the correlation computation, returning zero is equivalent to discarding the data, since the correlations are computed by this line in the code: sum = sum + aliceOutcome*bobOutcome;) This is the famous detection loophole (or (un)fair sampling) for Bell's theorem.
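As a minimal sketch of the mechanism only (this is not Pearle's model, and by itself it does not reproduce the quantum curve), here is what such a generating function could look like; returning 0 discards the event, so the surviving sample is biased by the angle:

function GenerateAliceOutputFromSharedRandomness(direction, sharedRandomness3DVector) {
var cosAngle = Dot(direction, sharedRandomness3DVector);
//discard more events when the hidden vector is nearly orthogonal to the setting
if (Math.random() > Math.abs(cosAngle))
return 0;
if (cosAngle > 0)
return +1;
else
return -1;
};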

If we talk about the detection loophole, the paper to read is an old one by Philip Pearle: http://journals.aps.org/prd/abstract/10.1103/PhysRevD.2.1418 There Pearle found an entire class of solutions able to generate the quantum correlations. The original paper is hard to double check (it took me more than a week and I was still not completely done), but Richard Gill did manage to extract a useful, workable detection loophole model out of it: https://arxiv.org/pdf/1505.04431.pdf

By manipulating the generating functions above one can easily test various ideas about hidden variable models. For example, an isotropic model of opposite spins generates $$-\frac{1}{3}\, a\cdot b$$ correlations. It is not that hard to double check the math in this case: a simple integral will do the trick. In particular, this shows that the spins do not exist independent of measurement.

More manipulations using the detection loophole are even able to generate super-quantum Popescu-Rohrlich box correlations, but I will let the reader experiment with this and discover how to do it for themselves. Happy computer modeling!

Playing with Bell's theorem

In this post I'll write just a little text, because editing is done straight in the HTML view, which is very tedious. Below I have a JavaScript program which illustrates Bell's theorem. If you want to play with this code, just right click on the page to view the source and extract it from there. If you do not know how to do that, then you are not going to understand it in a few sentences. Next time I'll describe the code and how to experiment with various hidden variable models.
This is about an EPR-B Alice-Bob experiment where each "measurement" generates a deterministic +1 or -1 outcome for a particular measurement direction using a shared piece of information: a random vector. Then the correlations are computed and plotted. No matter what deterministic model you try, near the origin the correlation you generate is a straight line, vs. a curve of zero slope in the case of quantum mechanics. For this particular program, given a measurement direction specified as a unit vector in Cartesian coordinates, I compute the scalar product with the shared random vector and return +1 if it is positive and -1 if it is negative. The experiment is repeated a number of times for various random measurement directions.
If you do not trust the randomly generated data, you can enter your own random Alice-Bob shared secret and your own measurement directions. Part of the credit for this program goes to Ovidiu Stoica.

A sinister mystification

Once in a while, events in society at large overshadow all other considerations. I will put on hold the series about Bell's theorem for this week because such an event occurred: Mother Teresa was proclaimed a saint. So what? What is the big deal?

Growing up in Romania, all I heard about her was that she was the symbol of selfless devotion to the poor, a truly remarkable person symbolizing all that is good in mankind. Coming to the US, the public perception was along similar lines, and her 1979 Nobel Peace Prize seemed well deserved. Her recent sainthood was only the realization of a natural public expectation.

However, things are not always what they appear, and in this case the truth is the complete opposite of the perception. The sainthood outcome is the result of public gullibility masterfully exploited by a morally bankrupt Catholic Church in collusion with dirty politicians, media, corrupt businessmen, a dynastic dictatorship, and, at the center of it all, a pure evil person advancing a religious fanatic agenda for the benefit of the Catholic Church and her own perverted pleasure: Mother Teresa.

The person who blew the whistle on Mother Teresa's con-artist mystification was a remarkable man: Christopher Hitchens, with his book The Missionary Position. I had never heard of Mr. Hitchens until a year ago, when I discovered by accident his antitheistic stance. Coming from a country which suffered under communism for decades, I was turned off by his hints of admiration for Marxist ideas. It took me some time to properly assess his integrity and the value of his arguments. In the end I found him a very sharp, clear thinker with a courageous attitude. I was surprised to discover he was a minor celebrity in left political circles in the US, who alienated part of that audience due to his hawkish attitude and support for the Iraq war, and who was also a personal friend of the late Justice Scalia, the most conservative member of the US Supreme Court.

Now I don't think I will change the minds of devout Catholics about Mother Teresa, so if you are such a person, either take the blue pill and stop reading the rest of this post, or take the red pill and keep reading to be shaken from your intellectual complacency and maybe stop buying the bridge the church keeps selling you.

To start I encourage you to watch the following videos:

It's too long to explain the whole mystification story but here is the gist:

Mother Teresa was not a friend of the poor but of poverty and suffering. She derived a perverted gratification from witnessing and encouraging suffering, because she thought this would bring her closer to her salvation. This is the mark of a psychopath, who derives meaning and pleasure from others' suffering. The places she established were not designed to alleviate suffering; they were decrepit places of abject poverty and suffering where people were simply brought to die. Young people were denied simple medical care which could have easily saved their lives, because their suffering was sanctioned by a fanatical religious agenda.

So maybe Mother Teresa applied the same principles to herself? Not at all. When she was ill, no expense was spared and she took advantage of the latest medical advances. What a cosmic hypocrite. But where did the money come from to establish her places of suffering? Among other sources, from the brutal Duvalier dictatorship in Haiti, responsible for the deaths of tens of thousands of people over decades, and from a corrupt businessman convicted of stealing the life savings of thousands of people. But perhaps the Catholic Church, upon learning about this, returned the blood and dirty money to the victims? After all, Mother Teresa acted with the full blessing of the church. Think again.

Hitchens had a simple but true position: religion poisons everything. It takes time to evaluate his claims; try to refute them if you can. It is much more convenient to ignore them, but I assume that if you have reached this paragraph you took the red pill. Many religious figures and "religious scholars" tried debating Hitchens, only to be shamefully debunked. None won the debates. "Hitchslap" is now an Urban Dictionary verb.

On Mother Teresa Hitchens put it like this: "It is a certainty that millions of people died because of her work. And millions more were made poorer, or stupider, more sick, more diseased, more fearful, and more ignorant."

Her sainthood is a scandal due to a sinister and cynical mystification perpetuated by many people over decades for their own benefit. The shame list includes the Catholic Church with all its popes who sanctioned Mother Teresa's fanatical religious agenda, politicians like Ronald Reagan, public figures like Princess Diana, the Nobel Peace Prize committee, and media like CNN, all of whom exploited public opinion for their own agendas, as well as corrupt businessmen and dictators who provided the money in exchange for whitewashing their public image.