Contents¶

Section 1. Text of the blogpost¶

Section 2. Python code supporting the blog post¶

The HTML for this notebook can be found at https://robhendrik.github.io/InformationCausalityAsPhysicalPrinciple

For the Jupyter notebook check: https://github.com/robhendrik/InformationCausalityAsPhysicalPrinciple

Information causality as a physical principle¶

Since 1935, physicists have debated non-locality in quantum mechanics [1]. Quantum mechanics predicts that events in different locations can be highly correlated but does not describe the mechanism that creates this correlation. For a long time, scientists argued that the theory had to be incomplete. Based on John Bell's theoretical work [2], Alain Aspect provided experimental proof of quantum non-locality in 1982 [3]. Around the same time, Boris Tsirelson published his analysis, indicating an upper limit to quantum non-locality [4]. The work of Tsirelson is deeply rooted in quantum mechanics and not easy to understand. In this post, we discuss the work of Marcin Pawlowski and co-workers [5], who provided a more intuitive derivation of the Tsirelson bound. Surprisingly, Pawlowski bases his arguments on information theory rather than on physics.

CHSH¶

The CHSH inequality is a version of Bell's inequality. The letters in CHSH stand for the scientists John Clauser, Michael Horne, Abner Shimony, and Richard Holt. In the CHSH inequality, we consider a correlation parameter S, which describes the correlation between events occurring at different locations. If the events are far enough apart and close enough in time, we know from relativity theory that no information can travel between them: the events are relativistically independent. The magnitude of S ranges from 0 to 4, and within this range we can identify different domains. John Bell proved that if the magnitude of S is 2 or lower, we can explain the correlation without assuming non-locality (in practice, we could use classical objects like coins, dice, or bananas for the experiments; see [6] for the banana reference). If the magnitude of S is higher than 2 but not above 2√2, we need to accept non-locality to explain the observed correlation. We can reach this range with quantum mechanics, using objects like photons or electrons for the experiments. Alain Aspect observed a value of almost 2√2 in his experiments on photons. 2√2 is the Tsirelson bound for S, the upper limit for quantum mechanics. If S exceeds the Tsirelson bound, the correlation cannot be a quantum mechanical effect; we must consider 'superquantum' correlations. Sandu Popescu and Daniel Rohrlich demonstrated that S could, in principle, reach a value of 4. This raises the question of why nature limits non-locality (as expressed in the value of S) in quantum mechanics, and whether there is an intuitive way to understand this limit.


Figure 1: Boris Tsirelson as a first-year student in 1967. https://en.wikipedia.org/wiki/Boris_Tsirelson

The CHSH inequality relates to a simple (thought) experiment. We have two observers, typically called Alice and Bob, at a large enough distance. Each receives an object (for instance, a particle, a photon, a box, a set of two coins, or a banana) on which they can perform one of two measurements, typically called the T or the S measurement. (Following convention, the letter S doubles as the name of a measurement and of the correlation parameter.) If they receive photons, they could measure in one polarization direction for T and another polarization direction for S. If Alice and Bob each receive an electron, they could measure spin in one direction for T and another direction for S. For the set of two coins, Alice and Bob could decide to check the orientation of one coin or the other, and for a banana they could check the taste after peeling it from the top or from the stem. So, in our thought experiment, we can consider any object, as long as Alice and Bob can each perform two distinct measurements on it.

Each measurement has two possible outcomes: Alice and Bob either detect the photon or electron or they do not; the selected coin shows heads or tails; the banana (peeled from top or bottom) tastes 'ordinary' or 'intense.' So, in our thought experiment, every measurement yields one of two distinct outcomes, which we label 1 and -1. For one observer, we can thus distinguish four situations: two possible measurements (T or S) times two possible outcomes. We have 4 x 4 = 16 different situations for Bob and Alice combined. Since we look at correlation, we are only interested in whether Alice and Bob obtain the same result or not. If they both observe outcome 1, or both observe -1, we say that the correlation for that experiment is 1. If they get different outcomes, the correlation is -1. With this, we can distinguish eight different situations: there are four possible measurement combinations for Alice and Bob (TT, TS, ST, or SS, where the first letter indicates Alice's measurement choice and the second letter Bob's choice), and for each combination Alice and Bob find either the same outcome (correlation 1) or a different outcome (correlation -1).

We can repeat the experiment several times and look at the average values for the different situations. For every single experiment, the value of TT is either 1 or -1, so the average must satisfy -1 ≤ <TT> ≤ 1 (here, the brackets <> indicate the average over a large number of repetitions), and likewise for the other combinations. The parameter S is defined as:

S = <SS> + <ST> + <TS> - <TT>

We can define different domains depending on the value of S.

  • For objects that behave according to the laws of classical physics, like coins, S is subject to Bell's inequalities. In this situation, we use the CHSH inequality (a type of Bell inequality); the short sketch after this list illustrates the bound.

-2 ≤ <SS> + <ST> + <TS> - <TT> ≤ 2

  • For quantum mechanical objects, like entangled photons or electrons, S is subject to Tsirelson's bound:

-2√2 ≤ <SS> + <ST> + <TS> - <TT> ≤ 2√2

  • In the superquantum domain, we use objects that show behavior not permitted in quantum mechanics. In this scenario, S can exceed Tsirelson's bound:

-4 ≤ <SS> + <ST> + <TS> - <TT> ≤ 4
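To make the classical bound concrete, here is a minimal sketch (our own illustration, not code from the references). In a local, classical world each object carries a predetermined answer for both measurements, so we can simply enumerate all 16 deterministic strategies and confirm that none of them pushes S outside the range [-2, 2]:

import itertools

def chsh_for_strategy(a_s, a_t, b_s, b_t):
    # CHSH value for one deterministic local strategy: each side holds a
    # fixed answer (+1 or -1) for each of its two measurement choices
    return a_s*b_s + a_s*b_t + a_t*b_s - a_t*b_t

# enumerate all 16 deterministic local strategies
values = [chsh_for_strategy(*strategy)
          for strategy in itertools.product([1, -1], repeat=4)]
print(max(values), min(values))  # prints: 2 -2

Random mixtures of such strategies (coins, dice, bananas) are weighted averages of these 16 cases, so they cannot exceed the bound either.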

Superquantum¶

In 1994, Sandu Popescu and Daniel Rohrlich proposed hypothetical objects (later called NS boxes) that can show correlations beyond the Tsirelson bound [7]. These boxes have the following properties:

  • NS boxes come in pairs; for each pair, one box is for Alice and one for Bob.
  • Bob and Alice can each measure T or S (together, they measure TT, TS, ST, or SS. The first letter represents Alice's choice, and the second is Bob's).
  • The answer to any single measurement is 0 or 1, each with 50% probability.
  • The boxes are one-time-use; after a measurement, the box is disabled.
  • A perfect box always gives Alice and Bob the same result, except when both measure T. In that case, they always get the opposite result.
  • The k-value determines the probability that a box gives the perfect result; this probability is P(k) = (1/2) * (1 + k/4).
  • For k = 4, the box is always perfect.
  • For k = 0, the box is random (50% correct answer, 50% wrong answer).

If one uses these NS boxes in a correlation experiment (for instance, Alain Aspect's experiment), the CHSH correlation equals the k-value. A manufacturer of NS boxes with a k-value of two or lower could build them with local (hidden) variables; in fact, these boxes would be perfectly feasible. A manufacturer of boxes with k-values up to 2√2 could use entangled particles. These boxes are possible in principle, although storing entangled photons in a box leads to practical limitations. NS boxes with a k-value above 2√2 are hypothetical: they are impossible in quantum mechanics, as the correlation exceeds the Tsirelson bound, but we can still consider them in a thought experiment.
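The claim that the CHSH correlation equals the k-value follows directly from the box properties listed above; the short sketch below (our own consistency check, not code from [7]) spells it out. Equal outcomes count as correlation +1 and different outcomes as -1, so each of the SS, ST, and TS settings contributes an average of 2P(k) - 1 = k/4, while TT (where a perfect box anti-correlates) contributes -k/4:

def expected_chsh(k):
    # P(k): probability that a box gives the 'perfect' answer, i.e. equal
    # outcomes for SS, ST and TS, and opposite outcomes for TT
    p = 0.5 * (1 + k / 4)
    same = 2 * p - 1         # <SS>, <ST> and <TS>
    opposite = -(2 * p - 1)  # <TT>
    return 3 * same - opposite  # S = <SS> + <ST> + <TS> - <TT>

print([expected_chsh(k) for k in (0, 2, 2 * 2 ** 0.5, 4)])
# prints [0.0, 2.0, 2.828..., 4.0]: the CHSH correlation equals k

The simulation in the code section below reproduces this numerically.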

The guessing game¶

We consider the following guessing game: Alice has a string of n bits, where n is a power of 2 (so the length of Alice's bit string can be, for instance, 2, 4, 8, or 16). Bob does not know this string. Alice is allowed to communicate one bit to Bob. After this communication, Bob must guess a specific bit in Alice's string (say, bit 1, bit 5, or bit 10). Alice does not know which bit Bob has to guess. Alice and Bob win if Bob guesses correctly.

Fundamentally, this game is about information and information sharing. Alice holds a certain amount of information (n bits, to be exact) and shares 1 bit with Bob. When Bob guesses the value of any of Alice's bits, the higher his success rate, the more information he evidently has about Alice's bit string. As a baseline: if Alice simply sends her first bit, Bob can guess bit 1 with certainty but any other bit only with probability 1/2.

In their 2009 article, Marcin Pawlowski and co-authors introduced the concept of information causality: "It states that information that Bob can gain about a previously completely unknown to him data set of Alice, by using all his local resources (which may be correlated with her resources) and a classical communication from her, is bounded by the information volume of the communication. In other words, if Alice communicates m bits to Bob, the total information access that Bob gains to her data is not greater than m".

In our game, Alice shares one bit of information with Bob. Based on information causality, Bob cannot gain more than one bit of information on Alice's data set. In information theory, the shared information between Alice and Bob is called 'mutual information,' so for our game, we know that mutual information (I) cannot grow by more than one bit.

The limit on mutual information in the previous paragraph stems from Bob only receiving one bit of information. We can also approach it from the other side. Given that Bob has a probability p of guessing any bit in Alice's sequence, can we determine how much information he minimally has available?

We introduce the concept of information entropy H. If Alice does not communicate anything, then from Bob's perspective the entropy of Alice's bit string is n; each of the 2^n possible bit strings is, from Bob's perspective, equally likely. Labeling the bit string as x, the entropy is H(x) = n. If Alice now shares the value of the first bit, then for Bob the entropy is reduced to n - 1 (he knows the first bit, so only n - 1 unknown bits remain). If the bit received by Bob is labeled y, we write the entropy of bit string x, under the condition that bit y is known, as H(x|y) = n - 1. We then define mutual information as the reduction in entropy resulting from Bob knowing the value of bit y:

I(x:y) = H(x) - H(x|y) = n - (n-1) = 1.

We see the very intuitive result that communicating one bit creates one bit of mutual information. Now consider the situation in which Alice communicates one bit to Bob but adds some uncertainty: the probability that she communicates the right bit is p, and so the probability that Bob will guess that bit correctly is also p. The remaining uncertainty about that single bit, given the bit y that Bob received, is the binary entropy

Hb(p) = -p log2(p) - (1-p) log2(1-p).

If p is either 0% or 100%, the binary entropy is 0, meaning there is no uncertainty left; if p is 50%, the binary entropy peaks at 1, meaning the uncertainty is maximal.

So, if Bob can guess one specific bit with probability p, the mutual information he has gained about that bit is 1 - Hb(p). If he can guess any of Alice's n bits with the same probability, he must hold that much information for each of the n bits, so I(x:y) = n(1 - Hb(p)) = n - n Hb(p). We derived above that Alice's communication of 1 bit cannot raise the mutual information by more than one bit, so I(x:y) ≤ 1. Combining these results gives n - n Hb(p) ≤ 1, or

Hb(p) ≥ 1 - 1/n.

We have derived a lower limit for the binary entropy from information-theoretic principles. This lower limit on entropy translates into an upper limit on Bob's success rate when guessing Alice's bit. If Bob is more successful than this upper limit for a given length of Alice's bit string, the experiment violates fundamental information-theoretic principles: we would have created information out of nothing!
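To see what this limit means in practice, the sketch below (our own helper; a variant of it appears in the code section) inverts the bound numerically: for a given n it finds, by bisection, the largest success rate p that still satisfies Hb(p) ≥ 1 - 1/n:

import numpy as np

def binary_entropy(p):
    if p <= 0 or p >= 1:
        return 0.0
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def max_success_rate(n, tol=1e-9):
    # Hb is decreasing on [1/2, 1], so bisect for the largest p
    # with Hb(p) >= 1 - 1/n
    target = 1 - 1 / n
    lo, hi = 0.5, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if binary_entropy(mid) >= target:
            lo = mid
        else:
            hi = mid
    return lo

for n in (2, 4, 8, 16):
    print(n, round(max_success_rate(n), 3))

For n = 2, for instance, Bob may guess correctly with a probability of at most about 0.89; any higher success rate would violate information causality.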

Popescu-Rohrlich boxes¶

Now, let's return to Sandu Popescu and Daniel Rohrlich's work. In a previous post, we showed that if Alice and Bob share a large enough supply of NS boxes, they can achieve a success rate of 100%. They would need 'perfect' NS boxes with a k-value of 4. The success rate will decrease if they use boxes with a lower k-value.


Figure 2 plots the success rate on the left and the binary entropy on the right. The Python code to generate these graphs is posted on GitHub. Both plots represent the same data, expressed once as a success rate and once as binary entropy. We see that with perfect boxes (k-value of 4), the success rate is 100% for any bit string length, i.e., the binary entropy is 0. As the k-value decreases, the probability of success decreases, and the longer the bit string, the faster the success rate drops with decreasing k-value. For low k-values (boxes with high randomness) and long bit strings, the success rate approaches 50%, which means Bob is making a wild guess: he has no actual information on the bit he needs to guess.
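The solid curves in Figure 2 follow the closed-form success probability used as the 'calculated' curve in the plotting code below. Bob's guess is assembled from log2(n) nested boxes, each giving the perfect answer with probability P(k) = (1/2)(1 + k/4), and his guess is correct exactly when an even number of those boxes failed. This yields

p(k, n) = (1/2) * (1 + (k/4)^log2(n)).

For k = 4 this equals 1 for every n; for k = 0 it equals 1/2; and the exponent log2(n) is what makes the success rate for longer bit strings fall off faster.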

Clearly, for any length of Alice's bit string, a success rate of 100% is above the limit posed by information theory. An interesting question is: at what k-value does the success rate drop to the maximum information theory allows? In other words, at what k-value is the success rate of Bob's guess such that Hb(p) = 1 - 1/n?


In Figure 3 (see GitHub for the code generating this plot), we add the limit from information theory as a horizontal dotted line to the binary entropy curves. The different colors indicate the various lengths of Alice's bit string (n). For any given n, the k-value where the two curves cross marks the 'quality' of the NS boxes for which Bob's success rate is precisely the maximum allowed by information theory. For higher k-values (and the resulting higher success rates), Bob would be using more 'information' than he can have obtained. Figure 3 shows that for four bits the curves cross at a k-value just above 3. For eight bits, the crossing point drops to a k-value below 3, and the maximum allowed k-value decreases further for longer bit strings.

Figure 4 plots the maximum allowed k-value as a function of the length of Alice's bit string. We see that for increasing n, the maximum allowed k-value asymptotically approaches a limit: a k-value of approximately 2.83, or 2√2 to be exact.
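A back-of-the-envelope argument (a sketch in the spirit of [5], using the success-probability formula given below Figure 2) shows why the limit is exactly 2√2. Write m = log2(n), so that p = 1/2 + ε with ε = (1/2)(k/4)^m. Near p = 1/2 the binary entropy behaves as Hb(1/2 + ε) ≈ 1 - (2/ln 2) ε². The information-causality condition Hb(p) ≥ 1 - 1/n = 1 - 2^(-m) then becomes (1/(2 ln 2)) (k/4)^(2m) ≤ 2^(-m), which simplifies to (k²/8)^m ≤ 2 ln 2. As m grows, the left-hand side stays bounded only if k² ≤ 8, so the threshold k-value converges to √8 = 2√2.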


From the guessing game with 'imperfect' NS boxes, we derive that the existence of boxes with a k-value higher than 2√2 would allow Bob and Alice to achieve a higher success rate than what fundamental information theoretical limits permit. It would enable Bob to know more about information at Alice's side than is possible given the amount of communication between them. This reasoning provides a fundamental argument to explain why nature would not permit NS boxes with a k-value higher than 2√2 to exist.

Now, recall that the NS boxes originate from the discussion on quantum non-locality. These boxes can be used in the EPR experiment to test Bell's inequalities. When we utilize boxes with a certain k-value, we find that the CHSH correlation equals k. So, according to quantum mechanics, NS boxes with a k-value above 2√2 are impossible. We already encountered this upper limit as the 'Tsirelson bound.'

Analyzing the guessing game with 'imperfect' NS boxes leads to the conclusion that the maximum allowed k-value is 2√2. This conclusion means a CHSH correlation above 2√2 is impossible, so we provide an alternative derivation of the Tsirelson bound. According to Marcin Pawlowski's analysis, the answer to why nature limits non-locality in quantum mechanics is that information causality is a fundamental physical principle. To quote Pawlowski and co-authors: "We suggest that Information Causality (…) might be one of the foundational properties of Nature."


[1] A. Einstein, B. Podolsky, and N. Rosen, "Can quantum-mechanical description of physical reality be considered complete?" Phys. Rev. 47, 777 (1935).

[2] J.S. Bell, "On the Einstein Podolsky Rosen paradox," Physics 1, 195 (1964).

[3] A. Aspect, J. Dalibard, and G. Roger, "Experimental Test of Bell's Inequalities Using Time-Varying Analyzers," Phys. Rev. Lett. 49, 1804 (1982).

[4] B. Tsirelson, "Quantum generalizations of Bell's inequality," Letters in Mathematical Physics 4, 93, (1980).

[5] M. Pawłowski, T. Paterek, D. Kaszlikowski, V. Scarani, A. Winter, and M. Zukowski, "Information causality as a physical principle," Nature 461, 1101 (2009).

[6] J. Bub, "Bananaworld: Quantum Mechanics for Primates," (2013).

[7] S. Popescu and D. Rohrlich, "Quantum Nonlocality as an Axiom," Foundations of Physics 24, 397 (1994).

Python code supporting the blog post¶

In [ ]:
# Generic imports
import sys  
import importlib
import math
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.lines
import matplotlib.colors
import random
In [ ]:
# See https://github.com/robhendrik/FockStateCircuit
import fock_state_circuit as fsc
In [ ]:
class NS_box:
    def __init__(self, K: float = 4):
        # first set all 4 outputs of the box to a random bit (0 or 1)
        self._AS = np.random.randint(2, size=1)[0]
        self._AT = np.random.randint(2, size=1)[0]
        self._BS = np.random.randint(2, size=1)[0]
        self._BT = np.random.randint(2, size=1)[0]

        # determine whether this is a good box or a bad box, with
        # probability determined by the K-value
        self.probability_of_guessing_right = (1/2)*(1 + K/4)
        if np.random.randint(0, 10000, size=1)[0] < self.probability_of_guessing_right * 10000:
            self._this_box_is_right = True
        else:
            self._this_box_is_right = False
   
    def Alice_S(self):
        if self._AS >= 0: # if Alice measures 'S'
            self._AT = -1 # disable measurement 'T'
            if self._BS >=0 and self._BT >= 0:
                if self._this_box_is_right: # depending on whether this box is 'good' or 'bad'
                    self._BS = self._AS # set the right outcome for Bob
                    self._BT = self._AS
                else:
                    self._BS = (self._AS+1)%2 # or set the wrong outcome for Bob
                    self._BT = (self._AS+1)%2
            return self._AS
        else:
            raise Exception('Alice can only perform one measurement on an NS box')
    def Alice_T(self):
        if self._AT >= 0:
            self._AS = -1
            if self._BS >=0 and self._BT >= 0:
                if self._this_box_is_right:
                    self._BS = self._AT
                    self._BT = (self._AT+1)%2
                else:
                    self._BS = (self._AT+1)%2
                    self._BT = self._AT
            return self._AT
        else:
            raise Exception('Alice can only perform one measurement on an NS box')
    def Bob_S(self):
        if self._BS >= 0:
            self._BT = -1
            if self._AS >=0 and self._AT >= 0:
                if self._this_box_is_right:
                    self._AS = self._BS
                    self._AT = self._BS
                else:
                    self._AS = (self._BS+1)%2
                    self._AT = (self._BS+1)%2
            return self._BS
        else:
            raise Exception('Bob can only perform one measurement on an NS box')
    def Bob_T(self):
        if self._BT >= 0:
            self._BS = -1
            if self._AS >=0 and self._AT >= 0:
                if self._this_box_is_right:
                    self._AS = self._BT
                    self._AT = (self._BT+1)%2
                else:
                    self._AS = (self._BT+1)%2
                    self._AT = self._BT
            return self._BT
        else:
            raise Exception('Bob can only perform one measurement on an NS box')
        
    def __str__(self):
        if self._AS < 0:
            first_letter = 'T'
            first_number = str(self._AT)
        elif self._AT < 0:
            first_letter = 'S'
            first_number = str(self._AS)
        else:
            first_letter = '?'
            first_number = '?'
        if self._BS < 0:
            second_letter = 'T'
            second_number = str(self._BT)
        elif self._BT < 0:
            second_letter = 'S'
            second_number = str(self._BS)
        else:
            second_letter = '?'
            second_number = '?'
        return first_letter + second_letter + '/' + first_number + second_number
In [ ]:
print('NS-Boxes can be measured by Alice and by Bob')
box = NS_box()
print("SS ",box.Alice_S(),box.Bob_S())
box = NS_box()
print("ST ",box.Alice_S(),box.Bob_T())
box = NS_box()
print("TS ",box.Alice_T(),box.Bob_S())
box = NS_box()
print("TT ",box.Alice_T(),box.Bob_T())

print('\nNS-Boxes cannot be measured twice at the same "side"')
box = NS_box()
print("Alice measures T-gate ",box.Alice_T())
try:
    print("Alice measures S-gate on same box",box.Alice_S())
except:
    print("Error occured when Alice tried to measure S-gate after measuring T-gate")

print("\nBefore measurement boxes have an 'unknown' state")
box = NS_box()
print("Box before mearement", box)
Alice = box.Alice_T()
print("Box after measurement by Alice", box)
Bob = box.Bob_T()
print("Box after measurement by Bob", box)
NS-Boxes can be measured by Alice and by Bob
SS  1 1
ST  0 0
TS  0 0
TT  1 0

NS-Boxes cannot be measured twice at the same "side"
Alice measures T-gate  1
Error occurred when Alice tried to measure S-gate after measuring T-gate

Before measurement boxes have an 'unknown' state
Box before measurement ??/??
Box after measurement by Alice T?/0?
Box after measurement by Bob TT/01
In [ ]:
class Box_For_Bob():
    def __init__(self, box: NS_box):
        self.box = box

    @property
    def T(self):
        return self.box.Bob_T()
    
    @property
    def S(self):
        return self.box.Bob_S()
    
class Box_For_Alice():
    def __init__(self, box: NS_box):
        self.box = box

    @property
    def T(self):
        return self.box.Alice_T()
    
    @property
    def S(self):
        return self.box.Alice_S()

def order_boxes_at_Amazon(quantity: int = 1000, K_value: float = 4):
    boxes = [NS_box(K_value) for _ in range(quantity)]
    set_of_boxes_for_Bob = [Box_For_Bob(box) for box in boxes]
    set_of_boxes_for_Alice = [Box_For_Alice(box) for box in boxes]
    return (set_of_boxes_for_Alice,set_of_boxes_for_Bob)

def perform_random_measurement(set_of_boxes):
    outcomes = []
    measurements = []
    while len(set_of_boxes)>0:
        box = set_of_boxes.pop()
        measurement = random.choice(['T', 'S'])
        measurements.append(measurement)
        if measurement == 'T':
            outcomes.append(box.T)
        else:
            outcomes.append(box.S)
    return (measurements, outcomes)

def assess_CHSH_correlation(bobs_result, alices_result):
    bobs_measurements, bobs_outcomes = bobs_result
    alices_measurements, alices_outcomes = alices_result
    number_of_datapoints = len(bobs_measurements)
    counter = 0
    while len(bobs_measurements) > 0:
        # a 'perfect' box gives equal outcomes unless both sides measure T
        if 'S' in (bobs_measurements.pop(), alices_measurements.pop()):
            counter += +1 if bobs_outcomes.pop() == alices_outcomes.pop() else -1
        else:
            counter += -1 if bobs_outcomes.pop() == alices_outcomes.pop() else +1
    # each of the four setting combinations occurs in roughly 1/4 of the
    # datapoints, so the sum of the four averages is 4 times the overall average
    return 4*counter/number_of_datapoints
In [ ]:
set_of_boxes_for_Alice, set_of_boxes_for_Bob = order_boxes_at_Amazon(quantity=4000, K_value = 4)
measurements = ['TT', 'TS', 'ST', 'SS']
# split the 4000 box pairs evenly over the four measurement combinations
samples_per_setting = len(set_of_boxes_for_Bob) // len(measurements)
outcome = []
for measurement in measurements:
    counter = 0
    for _ in range(samples_per_setting):
        Bobs_box = set_of_boxes_for_Bob.pop()
        Alices_box = set_of_boxes_for_Alice.pop()
        if measurement == 'TT':
            counter += 1 if (Bobs_box.T == Alices_box.T) else -1
        elif measurement == 'ST':   # Alice measures S, Bob measures T
            counter += 1 if (Bobs_box.T == Alices_box.S) else -1
        elif measurement == 'TS':   # Alice measures T, Bob measures S
            counter += 1 if (Bobs_box.S == Alices_box.T) else -1
        else:
            counter += 1 if (Bobs_box.S == Alices_box.S) else -1
    outcome.append(counter/samples_per_setting)

print("CHSH correlation is: ", -1*outcome[0] + outcome[1] + outcome[2] + outcome[3])
CHSH correlation is:  4.0
In [ ]:
for K in [2, 2.2, 2.4, 2.6, 2.8, 3.0, 3.5, 4.0]:
    set_of_boxes_for_Alice, set_of_boxes_for_Bob = order_boxes_at_Amazon(quantity=4000, K_value = K)
    bobs_result = perform_random_measurement(set_of_boxes_for_Bob)
    alices_result = perform_random_measurement(set_of_boxes_for_Alice)

    S = assess_CHSH_correlation(bobs_result, alices_result)

    print('With K set to', K, 'the CHSH correlation is: ', S)
With K set to 2 the CHSH correlation is:  1.93
With K set to 2.2 the CHSH correlation is:  2.23
With K set to 2.4 the CHSH correlation is:  2.382
With K set to 2.6 the CHSH correlation is:  2.592
With K set to 2.8 the CHSH correlation is:  2.772
With K set to 3.0 the CHSH correlation is:  3.062
With K set to 3.5 the CHSH correlation is:  3.484
With K set to 4.0 the CHSH correlation is:  4.0
In [ ]:
def length_of_bitstring(number):
    length = int(np.ceil(np.log2(number+1)))
    n = int(2**int(np.ceil(np.log2(length))))
    return n

def number_of_NS_boxes_needed(number):
    n = length_of_bitstring(number)
    stages = int(np.log2(n))
    counter = 0
    for k in range(stages):
        counter += 2**k 
    return counter

def generate_single_bit_by_Alice(number, 
                                number_of_bits_in_number,
                                set_of_boxes_for_Alice):
    # make a list of '1's and '0's for the number
    bitvalues = [
        (number >> position)%2 for position in range(number_of_bits_in_number-1,-1,-1)
        ]
    # determine the communication_bit
    list_of_Alice_results = [bitvalues]
    while len(bitvalues) > 1:  
        list_of_Alice_results.append([])
        for pair_index in range(len(bitvalues)//2):
            box = set_of_boxes_for_Alice[0]
            if bitvalues[pair_index*2] == bitvalues[pair_index*2 + 1]:
                result = 0 if box.S == bitvalues[pair_index*2] else 1
            else:
                result = 0 if box.T == bitvalues[pair_index*2] else 1
            list_of_Alice_results[-1].append(result)
            set_of_boxes_for_Alice = set_of_boxes_for_Alice[1:]
        bitvalues = list_of_Alice_results[-1]
    communication_bit = list_of_Alice_results[-1][0]
    return communication_bit

def generate_guess_by_Bob(bit_index, 
                        number_of_bits_in_number, 
                        communication_bit_Alice, 
                        set_of_boxes_for_Bob):
    result_per_stage = []
    bit_index_per_stage = bit_index
    boxes_in_previous_line = 0
    stages = int(np.log2(number_of_bits_in_number))
    for k in range(stages):
        pair, position = divmod(bit_index_per_stage,2)
        index = boxes_in_previous_line + pair
        boxes_in_previous_line += number_of_bits_in_number//(2**(k+1))
        box = set_of_boxes_for_Bob[index]
        if position == 0:
            result = box.S
        else:
            result = box.T
        result_per_stage.append(result)
        bit_index_per_stage = pair
    return str((communication_bit_Alice+sum(result_per_stage))%2)
In [ ]:
# quick check for the smallest string (length 2); we can cycle through all possible combinations
K = 4
max_number = 4
L = 2
format_string = '{0:0'+str(L)+'b}'

N = number_of_NS_boxes_needed(max_number)
for number in range(max_number):
    for bit_index in range(L):
        set_of_boxes_for_Alice, set_of_boxes_for_Bob = order_boxes_at_Amazon(quantity=N, 
                                                                        K_value = K)
        bit_string = format_string.format(number)
        expected = bit_string[bit_index]
        m = generate_single_bit_by_Alice(number=number , 
                                    number_of_bits_in_number = L, 
                                    set_of_boxes_for_Alice=set_of_boxes_for_Alice)
        guess = generate_guess_by_Bob(bit_index=bit_index, 
                                number_of_bits_in_number = L, 
                                communication_bit_Alice=m, 
                                set_of_boxes_for_Bob=set_of_boxes_for_Bob)
        print(number, format_string.format(number), bit_index, expected, guess)
0 00 0 0 0
0 00 1 0 0
1 01 0 0 0
1 01 1 1 1
2 10 0 1 1
2 10 1 0 0
3 11 0 1 1
3 11 1 1 1
In [ ]:
K = 4
max_number = 4
L = length_of_bitstring(max_number)
N = number_of_NS_boxes_needed(max_number)

success_counter = 0
for _ in range(1000):   
    # step 1, generate the NS boxes before the number is known
    set_of_boxes_for_Alice, set_of_boxes_for_Bob = order_boxes_at_Amazon(quantity=N, 
                                                                        K_value = K)
    # generate random number and give to Alice
    number = np.random.randint(1,max_number)
    m = generate_single_bit_by_Alice(number=number , 
                                    number_of_bits_in_number = L, 
                                    set_of_boxes_for_Alice=set_of_boxes_for_Alice)
    # randomly select which bit Bob has to guess
    bit_index = np.random.randint(L)
    guess = generate_guess_by_Bob(bit_index=bit_index, 
                                number_of_bits_in_number = L, 
                                communication_bit_Alice=m, 
                                set_of_boxes_for_Bob=set_of_boxes_for_Bob)
    # check if guess is correct
    if guess == format(number, '0'+ str(L) + 'b')[bit_index]:
        success_counter += 1
    
# print the success count for 1000 tries
print(success_counter)
1000
In [ ]:
def get_success_probabilities(K, max_number, trials):
    L = length_of_bitstring(max_number)
    N = number_of_NS_boxes_needed(max_number)

    success_counter = 0
    for _ in range(trials):   
        # step 1, generate the NS boxes before the number is known
        set_of_boxes_for_Alice, set_of_boxes_for_Bob = order_boxes_at_Amazon(quantity=N, 
                                                                            K_value = K)
        # generate random number and give to Alice
        number = np.random.randint(1,max_number)
        m = generate_single_bit_by_Alice(number=number , 
                                        number_of_bits_in_number = L, 
                                        set_of_boxes_for_Alice=set_of_boxes_for_Alice)
        # randomly select which bit Bob has to guess
        bit_index = np.random.randint(L)
        guess = generate_guess_by_Bob(bit_index=bit_index, 
                                    number_of_bits_in_number = L, 
                                    communication_bit_Alice=m, 
                                    set_of_boxes_for_Bob=set_of_boxes_for_Bob)
        # check if guess is correct
        if guess == format(number, '0'+ str(L) + 'b')[bit_index]:
            success_counter += 1
    return success_counter
In [ ]:
Ks = [4]
numbers = [8,128, 1024, 1024*1024]
trials = 1000
for K in Ks:
    for max_number in numbers:
        success = get_success_probabilities(K, max_number, trials)
        L = length_of_bitstring(max_number)
        print('For K =', K, ' and length of bit string = ', L, ' the success count out of', trials, 'trials is: ', success)
For K = 4  and length of bit string =  4  the success count out of 1000 trials is:  1000
For K = 4  and length of bit string =  8  the success count out of 1000 trials is:  1000
For K = 4  and length of bit string =  16  the success count out of 1000 trials is:  1000
For K = 4  and length of bit string =  32  the success count out of 1000 trials is:  1000
In [ ]:
Ks = [i/10 for i in range(20,42,2)]
n_bit = [2**n for n in range(2,6)]
trials = 10000
res_e = []
for n in n_bit:
    res_en = []
    for K in Ks:
        success_experiment = get_success_probabilities(K, 2**(n-1), trials)
        res_en.append(success_experiment)
    res_e.append(res_en)
In [ ]:
Ks = [i/10 for i in range(20,42,2)]
n_bit = [2**n for n in range(2,6)]
res_c = []
for n in n_bit:
    res_cn = []
    for K in Ks:
        success_calculated = (1/2)*(1+(K/4)**np.log2(n))
        res_cn.append(success_calculated)
    res_c.append(res_cn)
In [ ]:
fig, ax = plt.subplots()
for n in range(len(n_bit)):
    ax.plot(Ks, res_e[n], label = 'no of bits: ' + str(n_bit[n]), linestyle = 'none', marker = 'o')
    ax.plot(Ks, [y*trials for y in res_c[n]], label = str(n_bit[n]), linestyle = 'solid', marker = 'none')

ax.set(xlabel='K', ylabel='success',
       title='Success rate for different K- values and bit-string lengths')
ax.grid()
plt.legend()
plt.show()
(Output: plot of the success rate for different K-values and bit-string lengths)
In [ ]:
def binary_entropy(p):
    # guard the edges p = 0 and p = 1, where the entropy is exactly 0
    # (the naive formula would evaluate log2(0) and return NaN)
    if p <= 0 or p >= 1:
        return 0.0
    return (-p*np.log2(p) - (1-p)*np.log2(1-p) )

def H_limit(n_bit):
    return 1 - 1/(n_bit)

def P(n_bit,K):
    n_bit = np.longdouble(n_bit)
    n = np.log2(n_bit)
    return (1/2) * (1+(float(K)/4.0)**float(n))

def H(P):
    # binary entropy of a probability, with the same edge guard
    return binary_entropy(float(P))
def find_K(n_bit):
    n_bit = np.longdouble(n_bit)
    K_min = 1.0
    K_max = 4.0
    H_target = H_limit(n_bit)
    while ((K_max - K_min)/((K_max + K_min))) > 0.001:
        K_try = 1/2*(K_min + K_max)
        P_try = P(n_bit,K_try)
        H_try = H(P_try)
        if H_try > H_target:
            K_min = K_try
        elif H_try < H_target:
            K_max = K_try
        else:
            return K_try
    return K_try
In [ ]:
BinEn_c = [[binary_entropy(s) for s in res_cn] for res_cn in res_c]
BinEn_e = [[binary_entropy(s/trials) for s in res_en] for res_en in res_e]
In [ ]:
colors = ['blue', 'green', 'purple', 'red']
fig, ax = plt.subplots(1,2)
fig.set_figwidth(15)
fig.set_figheight(6)
fig.suptitle('Results of the guessing game for different quality level of the boxes (k-value) expressed in success rate (left) and binary entropy (right)')
for n in range(len(n_bit)):
    ax[0].plot(Ks, res_e[n], label = 'no of bits: ' + str(n_bit[n]), linestyle = 'none', marker = 'o', color = colors[n])
    ax[0].plot(Ks, [y*trials for y in res_c[n]], label = str(n_bit[n]), linestyle = 'solid', marker = 'none', color = colors[n])

ax[0].set(xlabel='k-value', ylabel='success rate (out of 10,000 repeats)',
       title='Success rate for different k-values and bit-string lengths')
ax[0].grid()
ax[0].legend()

for n in range(len(n_bit)):
    ax[1].plot(Ks, BinEn_c[n], label = 'no of bits: ' + str(n_bit[n]), linestyle = 'solid', marker = 'none', color = colors[n])
    ax[1].plot(Ks, BinEn_e[n], label = str(n_bit[n]), linestyle = 'none', marker = 'o', color = colors[n])
ax[1].set(xlabel='k-value', ylabel='Binary entropy',
       title='Binary Entropy for different k-values and bit-string lengths')
ax[1].grid()
plt.show()
(Output: Figure 2: success rate (left) and binary entropy (right) for different k-values and bit-string lengths)
In [ ]:
ns = [np.longdouble(2**n) for n in range(2,6)]
K_cross = [find_K(n) for n in ns]
H_cross = [H_limit(n) for n in ns]
n_bit = [2**n for n in range(2,6)]

fig, ax = plt.subplots()
colors = ['blue', 'green', 'purple', 'red']
for n in range(len(n_bit)):
    ax.plot(K_cross[n], H_cross[n],  linestyle = 'none', marker = 'o', color = colors[n])
    ax.plot(Ks, BinEn_c[n], label = 'no of bits: ' + str(n_bit[n]), linestyle = 'solid', marker = 'none', color = colors[n])
    ax.plot([2,4], [(1-1/n_bit[n]), (1-1/n_bit[n])], linestyle = (0, (3, 5, 1, 5)),color = colors[n] )
#ax.plot([2*np.sqrt(2),2*np.sqrt(2)], [1,0], linestyle = (0,(1,10)), color = 'black')
ax.set(xlabel='k-values', ylabel='Binary entropy',
       title='Binary Entropy for different k-values and bit-string lengths.')
ax.text(2,0.4,'The dot represents the k-value where \nthe guessing game result is exactly at the limit \nallowed by information theory')
plt.legend()
plt.show()
(Output: Figure 3: binary entropy curves with the information-theory limit shown as dotted lines)
In [ ]:
ns = [np.longdouble(2**(i**2)) for i in range(1,8)]
K_cross = [find_K(n) for n in ns]


fig, ax = plt.subplots()
ax.plot(ns,K_cross, label = 'K values for breaking IC', linestyle = 'none', marker = 'o')
ax.plot([1,np.max(ns)],[2*np.sqrt(2), 2*np.sqrt(2)], label = 'Tsirelson bound')
ax.set(xlabel='no of bits (n)', ylabel='k-value',
       title="k-values threshold for breaking information\n causality as a function of Alice\'s bit string length")
plt.xscale("log")
plt.legend()
plt.show()
(Output: Figure 4: threshold k-value for breaking information causality as a function of Alice's bit string length, log scale)
In [ ]:
ns = []
Ks = []

for i in range(1,6):
    n = 2**i
    L = 2**n
    K = find_K(L)
    ns.append(n)
    Ks.append(K)
fig, ax = plt.subplots()
ax.plot(ns,Ks, label = 'K values for breaking IC', linestyle = 'none', marker = 'o')
ax.plot([1,40],[2*np.sqrt(2), 2*np.sqrt(2)], label = 'Tsirelson bound')
ax.set(xlabel='n, no of bits = 2**n', ylabel='k-value',
       title="k-values threshold for breaking information\n causality as a function of Alice\'s bit string length")
plt.legend()
plt.show()
(Output: the same threshold k-values plotted against n, where the number of bits is 2**n)