Contents¶

Section 1. Text of the blogpost¶

Section 2. Python code supporting the blog post¶

The HTML for this notebook can be found at https://robhendrik.github.io/Non-Locality-Versus-No-Signalling

For the Jupyter notebook check: https://github.com/robhendrik/Non-Locality-Versus-No-Signalling

Non-locality versus no-signalling¶

The year 2024 marks the 60th birthday of John Bell's famous inequalities. Interestingly enough, the debate about what these inequalities mean and what conclusions we can draw is as lively as ever.

Bell's inequalities distinguish 'local' theories and 'non-local' theories. Here, we loosely define a 'local' theory as a theory where the likelihood of events occurring (i.e., the possibility to observe specific measurement results) purely depends on the direct environment. If you consider an 'Alice' and a 'Bob' at different sides of the world, then in a local theory, the actions of Alice cannot instantaneously influence what Bob observes.

Nowadays, no scientist disputes that Nature violates Bell's inequalities. There is enough (Nobel prize-winning) experimental evidence for this. The academic debate around Bell's inequalities centers around whether we can conclude from the violation of Bell's inequalities that Nature is 'non-local.' For some, this is a clear conclusion. For instance, Nicolas Gisin wrote in 2023 [1]:

Quantum non-locality is here to stay; the experimental evidence is clear on that point.

For others, like Sabine Hossenfelder, the situation is less clear when she refers to the experimental evidence [2]:

Contrary to what is often stated, these observations do not demonstrate that "spooky action at a distance" is real and Nature therefore non-local.

The debate on (non-)locality can get heated. Some participants identify 'factions' of localists, whom they accuse of lack of rigor [3]:

The terms 'non-locality' or 'quantum non-locality' are buzzwords in foundations of quantum mechanics and quantum information. Most scientists treat these terms as a handy expression equivalent to the clumsy "violation of Bell's inequalities". Unfortunately, some treat them seriously.

Some scientists (like Justo Pastor Lambare in his recent contribution [4]) go further and state that the analysis of the 'localists' is wrong:

The alleged non-local character of quantum mechanics is inextricably related to the formulation of the Bell theorem. That relation, however, is commonly incorrectly assessed.

Information Causality¶

In this post, we analyze the work that Marcel Pawłowski published in his 2021 paper [5], "Information Causality without Concatenation." This paper focuses on the distinction between what we could loosely call 'useless non-locality' and 'useful non-locality.' Useless non-locality can lead to correlations between events for Alice and Bob but does not enhance communication. On the other hand, useful non-locality can be used to improve communication between the two parties. If we consider a communication channel between Alice and Bob with a certain capacity, then with 'useless non-locality,' the amount of information shared is limited by the capacity of that channel. On the other hand, useful non-locality could enhance communication and allow Alice and Bob to exceed the channel's capacity.

Pawłowski phrases this as 'Information Causality' [6]. Information causality is about forbidding more information to be potentially available to the receiver than has been sent by the sender [7]: 

If for instance Alice has a certain amount of information, then the amount of information potentially available to Bob about Alice's input cannot exceed M bits. Notice that it does not matter how this information is encoded: when we refer to 'sending the M bit message', it should be understood as a single use of a channel with classical communication capacity M.

If we can demonstrate that we can violate Information Causality with quantum non-locality, then the statement that non-locality is merely a buzzword would not hold. If, on the other hand, we find that the non-locality observed in quantum mechanics cannot violate Information Causality, we have created a significantly sharper definition of the boundary posed by Nature.

Guessing game¶

Imagine that Alice and Bob play a 'guessing game' (we discussed this game in an earlier post). In this game, Alice has a string of n bits and communicates m bits to Bob. After receiving Alice's message, Bob's task is to guess the value of a single bit in Alice's bitstring at a given index. So, Alice receives a bitstring as an assignment, and Bob gets an index. Bob does not know the value of Alice's bits, and Alice does not know which index Bob has to guess. Once Alice has communicated her m bits, Bob makes his guess. Bob's success rate is directly related to the amount of knowledge he has about Alice's bitstring: if he knows the full bitstring, his success rate will be 100%; if he knows nothing about Alice's bits, his success rate will be 50%.
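To get a feel for the game, consider a hypothetical classical baseline strategy (not part of Pawłowski's construction; purely for illustration): Alice always sends her first bit and Bob echoes it as his guess. For n = 2 this is always right when Bob's index is 0 and right half the time when it is 1, so the success rate is about 75%:

```python
import random

random.seed(1)

def play_round():
    # Charlie's assignment: Alice's hidden 2-bit string and Bob's target index
    alice_bits = [random.randint(0, 1), random.randint(0, 1)]
    index = random.randint(0, 1)
    # Hypothetical baseline strategy: Alice always sends bit 0, Bob echoes it
    message = alice_bits[0]
    guess = message
    return guess == alice_bits[index]

rounds = 100_000
success_rate = sum(play_round() for _ in range(rounds)) / rounds
print(round(success_rate, 2))  # around 0.75
```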

Pawłowski considers this guessing game for the situation that Alice has a bit string of length 2 (n =2) and communicates one bit to Bob (m = 1). The communication channel that Alice and Bob share is a 'noisy' channel, i.e., when Alice sends a bit through this channel, there is a probability that the bit is flipped before it arrives at Bob's side.

For this game, Information Causality dictates that the amount of information available to Bob does not exceed the capacity of the communication channel. We have modeled this game in Python (see the Jupyter Notebook on GitHub for more details).

First, we can establish the information capacity of the channel from the probability that a bit is flipped:

$$ \Large \begin{array}{lcl} \text{Channel capacity for noisy channel} &:& 1 - H_{B}(p_{channel}) \\ \text{Probability of correct pass through classical channel}&:&p_{channel} \\ \text{Channel error rate (probability that a bit is flipped)}&:&1 - p_{channel} \\ \end{array} $$

Here we use the concept 'binary entropy' from information theory which depends on the probability that a bit is correct:

$$ \Large \begin{array}{lcl} H_{B}(p) &=& -p \log_{2}(p) - (1-p)\log_{2}(1-p) \end{array} $$

As the next step, we establish the amount of information Bob needs to achieve a certain success rate. If Alice's bitstring length is 2, then the mutual information is

$$ \Large \begin{array}{lcl} \text{Probability for Bob to guess correctly} &:& p_{total} \\ \text{Mutual information required by Bob to come to this success rate}&:&2(1-H_{B}(p_{total}))\\ \end{array} $$

Deriving this formula for mutual information is beyond the scope of this blog. Still, to get an intuitive feel, consider the case where Bob's success rate is 100%. The binary entropy for this probability is 0, so the mutual information is 2. Two bits of mutual information makes sense, as Bob can correctly guess the value of both bits on Alice's side. On the other hand, if Bob's success rate is 50%, the binary entropy is 1 and the mutual information is zero. This outcome also makes sense since, for guessing a bit, a success rate of 50% means Bob has no prior information.
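These two limiting cases are easy to check numerically. A minimal sketch using the formulas above (the function names are ours, chosen for readability):

```python
import numpy as np

def binary_entropy(p):
    # Shannon binary entropy; taken as 0 at the boundaries p = 0 and p = 1
    if p <= 0 or p >= 1:
        return 0.0
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def mutual_information(p_total, n=2):
    # I = n * (1 - H_B(p_total)) for a bitstring of length n
    return n * (1 - binary_entropy(p_total))

print(mutual_information(1.0))  # perfect guessing: 2.0 bits
print(mutual_information(0.5))  # random guessing: 0.0 bits
```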

Information Causality now states that the mutual information cannot exceed the capacity of the channel, so

$$ \Large \begin{array}{lcl} \text{Information causality} &:& 2(1-H_{B}(p_{total})) \leq 1 - H_{B}(p_{channel}) \\ \end{array} $$
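We can turn this inequality into the curve of Figure 1: for a given channel error rate, find the largest $p_{total}$ that still satisfies it. A minimal sketch (inverting the binary entropy by bisection; the function names are ours, not from the notebook's code below):

```python
import numpy as np

def binary_entropy(p):
    # Shannon binary entropy; taken as 0 at the boundaries p = 0 and p = 1
    if p <= 0 or p >= 1:
        return 0.0
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def max_success_rate(channel_error_rate):
    """Largest p_total with 2*(1 - H_B(p_total)) <= 1 - H_B(p_channel),
    found by bisection on the branch p in [0.5, 1] where H_B is decreasing."""
    target = 0.5 * (1 + binary_entropy(1 - channel_error_rate))
    lo, hi = 0.5, 1.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if binary_entropy(mid) > target:
            lo = mid  # entropy still too high: p_total must be larger
        else:
            hi = mid
    return (lo + hi) / 2

print(round(max_success_rate(0.0), 3))  # noiseless channel: ~0.890
print(round(max_success_rate(0.5), 3))  # pure-noise channel: 0.5
```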

In Figure 1, we plot Bob's maximum allowed success rate for a classical channel with a given error rate. We see that if the error rate is near zero, Bob's success rate is between 85% and 90%. When the error rate increases, Bob's success rate will decrease, and for a channel with a 50% error rate, his success rate should not be above 50% (effectively, for a channel with this noise level, Bob receives a random bit from Alice and the two are not able to communicate).

image-5.png

Figure 1. Maximum allowed success rate for Bob after receiving one bit from Alice.

Note that the success rate shown in Figure 1 is the maximum information theory allows.

Of course, Bob and Alice can agree on an algorithm with a lower success rate. The only conclusion we can draw is that we do not expect to see higher success rates unless additional communication is established.

Non-local resources¶

As the next step, we allow Alice and Bob to utilize non-local resources. Specifically, we will enable them to use entangled photons or 'superphotons.' Sandu Popescu and Daniel Rohrlich proposed these 'superphotons' in 1994 [8] (see also our earlier post). These Popescu-Rohrlich photons show a correlation beyond the Tsirelson bound. We can also say that they show non-locality beyond quantum mechanics. In our code (see GitHub), we implemented the Popescu-Rohrlich photons such that we can set their 'quantumness' by a parameter q. For a q-value of 1, they behave as regular entangled photons, and for increasing values for q, their behavior becomes more and more non-local. Ultimately, for very high q-values, the photons would exhibit the largest conceivable violation of Bell's inequalities.

We implemented Pawłowski's algorithm for the guessing game in Python, using the package FockStateCircuit to model the quantum behavior of photons. This package has a built-in feature that works with Popescu-Rohrlich photons.

image-6.png

Figure 2. Circuit for the guessing game as created in the Python package 'FockStateCircuit.' 

In Figure 2, you can see the circuit schematics. We have two optical channels for Bob and two for Alice (per photon, we need one channel to model horizontal and one to model vertical polarization, so we need two channels per photon). Then, we also have the classical channels for Alice and Bob, where they receive input from 'Charlie' and can store their measurement results. Charlie is the independent judge handing out the assignment. Charlie channel 0 contains the index of the bit Bob has to guess. Charlie's channel 1 contains the number representing Alice's bitstring, and we use Charlie's third channel to store Bob's final guess.

We can run this circuit for various q-values and noise levels in the communication channel. Figure 3 depicts the result. The green dots represent Bob's success rate for a q-value of 1 (so for regular entangled photons). We see that for this q-value, the success rate is always below the maximum allowed by Information Causality. So, although the entangled photons violate Bell's inequalities (and would be considered non-local), Alice and Bob cannot utilize this non-locality to enhance their communication.

image-7.png

Figure 3. Bob's success rate for a q-value of 1 (regular entangled photons) does not exceed what information theory permits. For q-values above one, Information Causality is violated. This is why non-locality in quantum mechanics is limited.

The blue dots in Figure 3 show what happens if we give Alice and Bob a pair of Popescu-Rohrlich photons with a q-value above 1. For q-values larger than 1, Bob's success rate can exceed what Information Causality permits. In other words, if quantum mechanics allowed even slightly stronger non-locality, i.e., if it violated Bell's inequalities just a bit more strongly, Information Causality would immediately be broken. The Tsirelson bound of quantum mechanics sits at precisely the value that prevents non-locality from being used to enhance communication. Quantum mechanics is non-local but does not permit faster-than-light communication.

The points marked with an 'x' in Figure 3 indicate the highest allowed q-value for which we do not break Information Causality. In Figure 4 we plot these q-values against the error rate in the communication channel. We see that when the error rate in the channel is low (i.e., the noise level is low), q-values beyond quantum mechanics would in principle be allowed. However, when the channel becomes more noisy, the maximum q-value drops to the Tsirelson bound. So we see that Nature places the upper limit for quantum non-locality exactly where this non-locality would switch from 'useless' to 'useful'.
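We can check the low-noise end of this picture numerically. A sketch using the formulas from Section 2 of this notebook (the helper names are ours; the success rate for ordinary entangled photons, $(2+\sqrt{2})/4$, follows from $K = 4(2p-1)$ at $K = 2\sqrt{2}$): at zero channel noise, Information Causality tolerates a success rate of about 0.890, above the roughly 0.854 that q = 1 photons deliver, so the allowed CHSH value there exceeds the Tsirelson bound.

```python
import numpy as np

def binary_entropy(p):
    if p <= 0 or p >= 1:
        return 0.0
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def entropy_inverse(H):
    # Invert H_B by bisection on [0.5, 1], where H_B is decreasing
    lo, hi = 0.5, 1.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if binary_entropy(mid) > H:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Noiseless channel: Information Causality allows H_B(p_total) = 1/2
p_ic = entropy_inverse(0.5)          # ~0.890
# Best success rate with ordinary entangled photons (q = 1)
p_tsirelson = (2 + np.sqrt(2)) / 4   # ~0.854
# CHSH value corresponding to a success rate p: K = 4 * (2p - 1)
k_ic = 4 * (2 * p_ic - 1)

print(p_ic > p_tsirelson)     # True: at low noise IC tolerates q > 1
print(k_ic > 2 * np.sqrt(2))  # True: the allowed K exceeds the Tsirelson bound
```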

image-8.png

Figure 4. When the noise level in the communication channel approaches 0.5, the highest allowed q-value is the Tsirelson bound.

Conclusion¶

Nature might be non-local, but Nature ensures that this non-locality cannot be used for communication. In fact, non-locality appears to be capped at exactly the level where it would start to enhance communication. Possibly this means that Information Causality is one of the governing principles of physics, and that we can derive the characteristics of quantum mechanics from this principle. This conclusion still leaves open the question of the non-local correlations observed in experiments where Bell's inequalities are violated. Whether these correlations truly indicate non-locality, or whether we can identify alternative mechanisms, remains an open debate.

Please leave your thoughts and comments on this post.


[1] N. Gisin, "Quantum non-locality: from denigration to the Nobel prize, via quantum cryptography," Europhysics News, vol. 54, no. 1, pp. 20–23, 2023. https://doi.org/10.48550/arXiv.2309.06962

[2] Hance, J.R. and Hossenfelder, S. "Bell's theorem allows local theories of quantum mechanics," Nat. Phys. 18, 1382 (2022). https://doi.org/10.1038/s41567-022-01831-5

[3] M. Żukowski and Č. Brukner, "Quantum non-locality - it ainʼt necessarily so…," J. Phys. A: Math. Theor. 47, 424009 (2014). http://dx.doi.org/10.1088/1751-8113/47/42/424009

[4] J. Lambare, "A Critical Analysis of the Quantum Nonlocality Problem: On the Polemic Assessment of What Bell Did," https://doi.org/10.20944/preprints202205.0015.v4

[5] N. Miklin and M. Pawłowski, "Information Causality without Concatenation," Phys. Rev. Lett. 126, 220403 (2021). https://doi.org/10.48550/arXiv.2101.12710

[6] M. Pawłowski, T. Paterek, D. Kaszlikowski, V. Scarani, A. Winter and M. Zukowski, "Information causality as a physical principle," Nature 461, 1101 (2009). https://doi.org/10.1038/nature08400

[7] M. Pawlowski and V. Scarani, "Information causality," https://doi.org/10.48550/arXiv.1112.

[8] S. Popescu, D. Rohrlich, "Quantum nonlocality as an axiom," Found Phys 24, 379 (1994). https://doi.org/10.1007/BF02058098

Screenshots from the original article¶

N. Miklin and M. Pawłowski, "Information Causality without Concatenation," Phys. Rev. Lett. 126, 220403 (2021). https://doi.org/10.48550/arXiv.2101.12710

Article summary

image.png

The basic principle of Information Causality

image-5.png

For the guessing game with a noisy channel we have a correct guess if the Popescu-Rohrlich photons give the correct result AND the channel does not flip the bit, OR when the PR photons give a wrong result AND the channel flips the bit. Two errors can cancel out and lead to a correct result!

image-2.png

image-3.png

This leads to an expression for mutual information versus channel capacity

image-4.png

Python code supporting the blog post¶

We discussed the algorithm for the guessing game in detail in https://github.com/robhendrik/Guessing-game-with-superphotons-A-Python-simulation. This post adds the aspect of a noisy channel. Please check the mentioned notebook for more insight into the guessing game algorithm.

Math used in the code¶

In the code we work with probabilities. Note that in the article the authors work with 'biases' $e$, related to probability by $ p = \frac{1+e}{2} $.

Definitions¶

$$ \Large \begin{array}{lcl} \text{Probability of correct pass through classical channel}&:&p_{channel} \\ \text{Probability of correct result from PR photons}&:&p_{photon} \\ \text{Integral probability of correct result }&:&p_{total} \\ \end{array} $$

Binary entropy¶

$$ \Large \begin{array}{lcl} H_{B}(p) &=& -p \log_{2}(p) - (1-p)\log_{2}(1-p) \end{array} $$

This is implemented in binary_entropy(p) and in binary_entropy_reverse(H, accuracy_in_digits: int = 8)

The upper limit for Bob's success rate¶

We can calculate the upper limit for the overall success rate ($p_{total}$) for a given channel noise level ($p_{channel}$): $$ \Large \begin{array}{lcl} 2(1-H_{B}(p_{total})) &=& 1 - H_{B}(p_{channel}) \\[10pt] H_{B}(p_{total}) &=& \frac{1}{2}(1+H_{B}(p_{channel})) \\[10pt] p_{total} &=& H_{B}^{-1}(\frac{1}{2}(1+H_{B}(p_{channel}))) \\[10pt] \end{array} $$

Deriving the success rate for photon detection from the overall success rate¶

The total success rate is the probability that neither the photon detection nor the communication errs, PLUS the probability that two errors cancel out:

$$ \Large \begin{array}{lcl} p_{total} &=& p_{photon}p_{channel} + (1-p_{photon})(1-p_{channel})\\ &=& p_{photon}(2p_{channel}-1) + (1-p_{channel})\\ p_{photon} &=& (p_{total} + p_{channel} - 1)/(2p_{channel}-1) \end{array} $$
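A quick numerical check of this inversion (a sketch with our own function names; note the inverse is singular at $p_{channel} = 0.5$, where the channel carries no information):

```python
import numpy as np

def total_success_rate(p_photon, p_channel):
    # correct if neither step errs, or if two errors cancel out
    return p_photon * p_channel + (1 - p_photon) * (1 - p_channel)

def photon_success_rate(p_total, p_channel):
    # inverse of the relation above; singular at p_channel = 0.5
    return (p_total + p_channel - 1) / (2 * p_channel - 1)

# round-trip check on a grid of probabilities (avoiding the singular point)
for p_photon in np.linspace(0.5, 1.0, 11):
    for p_channel in np.linspace(0.6, 1.0, 9):
        p_total = total_success_rate(p_photon, p_channel)
        assert np.isclose(photon_success_rate(p_total, p_channel), p_photon)
print("inversion verified")
```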

k-value and q-value¶

The success rate for photon detection is related to the k-value (CHSH correlation that can be achieved with these photons) $$ \Large \begin{array}{lcl} K &=& 4(2p_{photon}-1) \end{array} $$

From work presented in an earlier post (https://github.com/robhendrik/Guessing-game-with-superphotons-A-Python-simulation) we can establish a relation between the K-value and the q-value for which the no-signalling boxes and Popescu-Rohrlich photon pairs generate the same CHSH correlation. For these K- and q-values we have a similar level of 'superentanglement'.

$$ \Large \begin{array}{lcl} K &=& 4 \sqrt[q]{\frac{1}{2}\sqrt{2}} \\ q &=& \frac{\log(\frac{1}{2}\sqrt{2})}{\log(\frac{K}{4})} \end{array} $$

These functions are implemented in quantumness_to_K(quantumness) and K_to_quantumness(K)
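As a sanity check on these relations (mirroring the notebook's quantumness_to_K and K_to_quantumness): q = 1 should reproduce the Tsirelson value $K = 2\sqrt{2}$, and the two functions should be mutual inverses.

```python
import numpy as np

def quantumness_to_K(quantumness):
    # K = 4 * ((1/2) * sqrt(2)) ** (1/q)
    return 4 * np.power(0.5 * np.sqrt(2), 1 / quantumness)

def K_to_quantumness(K):
    # q = log((1/2) * sqrt(2)) / log(K / 4), the inverse of the relation above
    return np.log(0.5 * np.sqrt(2)) / np.log(K / 4)

print(np.isclose(quantumness_to_K(1.0), 2 * np.sqrt(2)))         # True
print(np.isclose(K_to_quantumness(quantumness_to_K(2.5)), 2.5))  # True
```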

Python code¶

In [ ]:
# Generic imports
import importlib
import math
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.lines
import matplotlib.colors
In [ ]:
# Install the package via: pip install FockStateCircuit
import fock_state_circuit as fsc

Functions¶

Helper functions¶

In [ ]:
def number_to_bit_list(number, bit_string_length):
    """ Generate a list of bits (as [1,0,0], least significant bit first, i.e. the reverse of string notation).
    This function can be used to generate Alice's bitstring from a random number provided to her by
    Charlie. """
    return [((number >> i) & 1) for i in range(bit_string_length)]

def bit_list_to_number(bit_list):
    """ Generate an integer from a list of bits. Least significant bit in list at index 0. This function is
    the reverse of number_to_bit_list(). """
    bit_string = "".join([str(n) for n in bit_list])[::-1]
    return int(bit_string,2)

def merge_two_lists_on_even_indices(list_1, list_2):
    """ Create a new list from two lists. New list has half size of the original lists. 
    If the two lists have equal value at even index new list is 0, else 1. The values
    at the odd indices are ignored.
    Example:
        list_1 = [1,0,1,0,1,0,1,0]
        list_2 = [1,0,0,0,0,0,1,0]
        outcome = [0,1,1,0]
    """
    return [int(value_1 != value_2) for index, (value_1, value_2) in enumerate(zip(list_1,list_2)) if index%2 ==0]
    
def length_of_bitstring(number):
    """ Function to calculate the bit string length for guessing a specific number. The guessing game algorithm
    works for bit strings that are a power of 2 (so n = 2,4,8,16,..). This function calculates the lowest value
    for n that can capture the input number."""
    length = int(np.ceil(np.log2(number+1)))
    n = int(2**int(np.ceil(np.log2(length))))
    return n

def binary_entropy(p):
    """ Calculate Shannon's binary entropy from a probability."""
    if p >= 1 or p <=0:
        return 0
    else:
        return -p*np.log2(p) - (1-p)*np.log2(1-p)
    
def binary_entropy_reverse(H, accuracy_in_digits: int = 8):
    """ Calculate the probability that belongs to a given value of the binary entropy. The probability will be between 0.5 and 1 (including boundaries)."""
    p_min = 0.5
    p_max = 1
    p_average = (p_max + p_min)/2
    while np.round(binary_entropy(p_average),accuracy_in_digits) != np.round(H,accuracy_in_digits):
        if np.round(binary_entropy(p_average),accuracy_in_digits+2) < H:
            p_max = p_average
            p_average = (p_max + p_min)/2
        else:
            p_min = p_average
            p_average = (p_max + p_min)/2
    return p_average

def quantumness_to_K(quantumness):
    """ Translate the q-value to a K-value. The q-value is the parameter for the Popescu-Rohrlich photons and the K-value is the CHSH correlation
    we can achieve with these photons. """

    return 4*np.power((1/2)*np.sqrt(2),1/quantumness)

def K_to_quantumness(K):
    """ Translate the K-value to a q-value. The q-value is the parameter for the Popescu-Rohrlich photons and the K-value is the CHSH correlation
    we can achieve with these photons. """

    return np.log((1/2)*np.sqrt(2))/np.log(K/4)

Classical Channel functions¶

In [ ]:
def determine_wave_plate_settings_for_alice(current_values, new_input_values, affected_channels):
        """ Current values will be all classical channels, new_input_values is not used. Affected_channels will be 
            a list with Alice's channels and Charlie's channel
            Example: If bit string length is 8, charlie's number for Alice is 11 and charlie's bit index for Bob is 2 then
            for this function input is 
                current_values = [0,0,0,0,0,0,0,0,b,b,b,b,b,b,b,b,2,11,0] (with b bob's channels)
                affected_channels = [0,1,2,3,4,5,6,7,17]
                return values = [pi,0,pi,pi/16,pi,0,pi,0,b,b,b,b,b,b,b,b,2,11,0]
        """
        # define the wave plate angles for the measurements
        S_alice = np.pi/8 # polarization double wave plate angle, so at 45 degrees
        T_alice = 0  # polarization double wave plate angle, so at 0 degrees
        S_bob = np.pi/16  # polarization double wave plate angle, so at 22.5 degrees
        T_bob = np.pi*(3/16) # polarization double wave plate angle, so at 67.5 degrees
        # for all combinations relative angle is 22.5 degrees, except TT which has relative angle of 67.5 degrees

        # determine the length of the bit string for this circuit
        length_of_bit_string = int((len(current_values)-3)/2)
        
        # Split the classical channel values so the processing is a bit more clear
        channels_for_alice = current_values[:length_of_bit_string]
        channels_for_bob = current_values[length_of_bit_string:-3]
        channels_for_charlie = current_values[-3:]
        
        # the second channel of Charlie contains the number representing Alice's bit string
        charlies_number = channels_for_charlie[1]
    
        # generate the bit string with least significant bit left
        alices_bit_list = number_to_bit_list(charlies_number, length_of_bit_string)
        
        # we now determine the settings for the wave plates for Alice
        # example: Charlie number is 11 and bit string length is 8. The bit string is 11010000
        # (remember LSB left). The first photon should be measured "S" since first
        # two bits are equal, the second photon should be measured "T"
        previous_bit_value = None
        for index, this_bit_value in enumerate(alices_bit_list):
            if index%2 == 1:
                if previous_bit_value is not None and this_bit_value == previous_bit_value:
                    measurement = S_alice
                else:
                    measurement = T_alice
                channels_for_alice[index] = measurement
            else:
                channels_for_alice[index] = np.pi # it will be a half wave plate so phase shift is 180 degrees, or pi radians                    
            previous_bit_value = this_bit_value

        return channels_for_alice + channels_for_bob + channels_for_charlie

def determine_wave_plate_settings_for_bob(current_values, new_input_values, affected_channels):
        """ current values will be all classical channels, new_input_values is not used. Affected_channels will be 
            a list with Bob's channels and Charlie's channel
            Example: If bit string length is 8, charlie's number for Alice is 11 and charlie's bit index for bob is 2 then
            for this function input is 
                current_values = [a,a,a,a,a,a,a,a,0,0,0,0,0,0,0,0,2,11,0] (with a alice's channels)
                affected_channels = [8,9,10,11,12,13,14,15,16]
                return values = [a,a,a,a,a,a,a,a,pi,0,pi,0,pi,0,pi,0,2,11,0]
        """
        # define the wave plate angles for the measurements
        S_alice = np.pi/8 # polarization double wave plate angle, so at 45 degrees
        T_alice = 0  # polarization double wave plate angle, so at 0 degrees
        S_bob = np.pi/16  # polarization double wave plate angle, so at 22.5 degrees
        T_bob = np.pi*(3/16) # polarization double wave plate angle, so at 67.5 degrees
        # for all combinations relative angle is 22.5 degrees, except TT which has relative angle of 67.5 degrees

        # determine the length of the bit string for this circuit
        length_of_bit_string = int((len(current_values)-3)/2)
        
        # Split the classical channel values so the processing is a bit more clear
        channels_for_alice = current_values[:length_of_bit_string]
        channels_for_bob = current_values[length_of_bit_string:-3]
        channels_for_charlie = current_values[-3:]


        # the first channel of Charlie contains the number representing the index for which Bob has to guess the value
        charlies_bit_index = channels_for_charlie[0]
        pair, position = divmod(charlies_bit_index,2)
        if position == 0:
            measurement = S_bob
        else:
            measurement = T_bob

        # we now determine the settings for the wave plates for Bob
        # if the index is even, measurement should be "S" and if the index is odd measurement should be "T"
        for index in range(len(channels_for_bob)):
            if index%2 == 0:
                channels_for_bob[index] = np.pi # it will be a half wave plate so phase shift is 180 degrees, or pi radians
            else:
                channels_for_bob[index] = measurement

        return channels_for_alice + channels_for_bob + channels_for_charlie
 
def prepare_for_next_stage_Alice(current_values, new_input_values, affected_channels):
    """ Alice assesses the measurement results at her side to generate a new integer, as basis for generating
        the new bit string for the next stage. If this is the last stage Alice generates the single bit she 
        communicated to Bob. The new integer is stored in Charlies second channel.
    """
    # determine the length of the bit string for this circuit
    length_of_bit_string = int((len(current_values)-3)/2)
    
    # Split the classical channel values so the processing is a bit more clear
    channels_for_alice = current_values[:length_of_bit_string]
    channels_for_bob = current_values[length_of_bit_string:-3]
    channels_for_charlie = current_values[-3:]

    # Alice's results are stored in the first even classical channels
    # the algorith is that if for each measurement the outcome is the same as the first bit in the pair we store 0, else we store 1
    # first regenerate the bit string from the number as input to this stage
    charlies_input_number = channels_for_charlie[1]
    input_bit_list = number_to_bit_list(charlies_input_number, length_of_bit_string)
    channels_for_charlie[1] = bit_list_to_number(merge_two_lists_on_even_indices(channels_for_alice,input_bit_list))

    return channels_for_alice + channels_for_bob + channels_for_charlie

def prepare_for_next_stage_Bob(current_values, new_input_values, affected_channels):
    """ Bob assess the measurement results at his side and stores the result in Charlies third channel.
    """
    # determine the length of the bit string for this circuit
    length_of_bit_string = int((len(current_values)-3)/2)
    
    # Split the classical channel values so the processing is a bit more clear
    channels_for_alice = current_values[:length_of_bit_string]
    channels_for_bob = current_values[length_of_bit_string:-3]
    channels_for_charlie = current_values[-3:]

    # the bit index for Bob has to be adjusted for bit_string that is half the size
    charlies_bit_index = channels_for_charlie[0]
    pair, position = divmod(charlies_bit_index,2)

    # read the measurement from Bob and add that to the number in measurement result
    bobs_measurement_result = channels_for_bob[pair*2]
    channels_for_charlie[2] += bobs_measurement_result

    if length_of_bit_string == 2: #this was the last stage, we have to generate Bob's guess
        bit_value_that_alice_will_communicate_to_bob = channels_for_charlie[1]
        bobs_results_so_far = channels_for_charlie[2]
        bobs_guess = (bit_value_that_alice_will_communicate_to_bob + bobs_results_so_far)%2
        channels_for_charlie[2] = bobs_guess

    return channels_for_alice + channels_for_bob + channels_for_charlie

def prepare_for_next_stage_Charlie(current_values, new_input_values, affected_channels):
    """ Charlie prepares for the next stage by generating a new bit index for Bob. """
    # determine the length of the bit string for this circuit
    length_of_bit_string = int((len(current_values)-3)/2)
    
    # Split the classical channel values so the processing is a bit more clear
    channels_for_alice = current_values[:length_of_bit_string]
    channels_for_bob = current_values[length_of_bit_string:-3]
    channels_for_charlie = current_values[-3:]

    # the bit index for Bob has to be adjusted for bit_string that is half the size
    charlies_bit_index = channels_for_charlie[0]
    pair, position = divmod(charlies_bit_index,2)

    # the bit index for Bob has to be adjusted for bit_string that is half the size
    channels_for_charlie[0] = pair

    return channels_for_alice + channels_for_bob + channels_for_charlie

def shift_charlies_channels_to_place_for_next_stage(current_values, new_input_values, affected_channels):
    """ Charlie's channels are shifted to be in the right place for the next iteration (except when 
    this is the last stage, then we do nothing). """
    # if this was last stage do nothing
    if len(current_values) == 2 + 2 + 3:
        return current_values
    else:
        old_length = len(current_values) - 3
        new_length = old_length // 2
        for index, value in enumerate(current_values[-3:]):
            current_values[new_length+index] = value

    return current_values

def implement_noise_in_classical_channel(current_values, new_input_values, affected_channels):
    """ Only for the last stage apply 'noise' to Alice's communication to Bob """
    
    channel_error_rate = new_input_values[0]
    random_number = np.random.randint(1000) # random number ranging from 0 to 999
    channel_flips_the_bit = random_number < 1000 * channel_error_rate

    if channel_flips_the_bit:
        current_values[-1] = (current_values[-1] + 1)%2 # flip the value of the communicated bit
  
    return current_values

Function to create one stage of the circuit.¶

This function will be called for every stage, each time with the bit string length reduced by a factor 2

In [ ]:
def generate_circuit_for_stage_in_guessing_game_with_noisy_channel(length_of_bitstring, quantumness,channel_error_rate,):

    # the actual length of the bitstring has to be a power of 2. Round up to the nearest power of 2
    actual_length = 2**int(np.ceil(np.log2(length_of_bitstring)))

    # we need one photon pair per 2 bits (so Alice has one photon per 2 bits at her side. Same for Bob)
    number_of_photon_pairs = length_of_bitstring//2

    # define the PR photon pairs
    popescu_rohrlich_correlations = [
        {'channels_Ah_Av_Bh_Bv':[n*2,n*2+1,number_of_photon_pairs*2 + n*2,number_of_photon_pairs*2 + n*2 + 1],
         'quantumness_indicator':quantumness} for n in range(number_of_photon_pairs)
         ]

    # make lists to easily identify the channel numbers Bob, Alice and Charlie
    optical_channels_for_alice = []
    for box in popescu_rohrlich_correlations:
        optical_channels_for_alice.append(box['channels_Ah_Av_Bh_Bv'][0])
        optical_channels_for_alice.append(box['channels_Ah_Av_Bh_Bv'][1])

    optical_channels_for_bob = []
    for box in popescu_rohrlich_correlations:
        optical_channels_for_bob.append(box['channels_Ah_Av_Bh_Bv'][2])
        optical_channels_for_bob.append(box['channels_Ah_Av_Bh_Bv'][3])

    # Alice and Bob each need two classical channels per photon pair: every photon passes one wave
    # plate at their side, and each wave plate is controlled by two classical channels
    classical_channels_for_alice = [index for index in range(2*len(popescu_rohrlich_correlations))]
    classical_channels_for_bob = [(1 + max(classical_channels_for_alice) + index) for index in range(2*len(popescu_rohrlich_correlations))]

    # we then add three classical channels for 'Charlie': one for Bob's bit index, one for the number
    # represented by Alice's original bit string, and one for the (communicated) result. Together Charlie's
    # channels contain the 'assignment' in: "Guess the n-th bit in the binary representation of number m"
    classical_channels_for_charlie = [  1+max(classical_channels_for_bob),
                                        2+max(classical_channels_for_bob),
                                        3+max(classical_channels_for_bob)]

    # determine number of channels in this stage of the circuit by adding up the channels needed by each player
    number_of_optical_channels = len(optical_channels_for_alice) + len(optical_channels_for_bob)
    number_of_classical_channels = len(classical_channels_for_alice) + len(classical_channels_for_bob) + len(classical_channels_for_charlie)

    circuit = fsc.FockStateCircuit(length_of_fock_state=2,
                                no_of_optical_channels=number_of_optical_channels,
                                no_of_classical_channels=number_of_classical_channels,
                                circuit_name ="")

    # first create the photon pairs
    circuit.popescu_rohrlich_correlation_gate(pr_correlation=popescu_rohrlich_correlations)

    # execute the classical operation at Alice's side to set the wave plates based on the bit string
    node_info = {'label' : "Alice"}
    circuit.classical_channel_function( function = determine_wave_plate_settings_for_alice,
                                        affected_channels=classical_channels_for_alice + [classical_channels_for_charlie[1]],
                                        node_info=node_info)
    
    # set the wave plates for Alice
    for index, box in enumerate(popescu_rohrlich_correlations):
        circuit.wave_plate_from_hamiltonian_classical_control(  optical_channel_horizontal= box['channels_Ah_Av_Bh_Bv'][0],
                                                                optical_channel_vertical= box['channels_Ah_Av_Bh_Bv'][1], 
                                                                classical_channel_for_orientation= classical_channels_for_alice[index*2 + 1],
                                                                classical_channel_for_phase_shift= classical_channels_for_alice[index*2])
    
    # execute the classical operation at Bob's side to set the wave plates based on the bit index
    node_info = {'label' : "Bob"}
    circuit.classical_channel_function(function = determine_wave_plate_settings_for_bob, 
                                       affected_channels=classical_channels_for_bob + [classical_channels_for_charlie[0]],
                                       node_info=node_info)

    # set the wave plates for Bob
    for index, box in enumerate(popescu_rohrlich_correlations):
        circuit.wave_plate_from_hamiltonian_classical_control(  optical_channel_horizontal= box['channels_Ah_Av_Bh_Bv'][2],
                                                            optical_channel_vertical= box['channels_Ah_Av_Bh_Bv'][3], 
                                                            classical_channel_for_orientation= classical_channels_for_bob[index*2 + 1],
                                                            classical_channel_for_phase_shift= classical_channels_for_bob[index*2])

    # total measurement
    circuit.measure_optical_to_classical(optical_channels_to_be_measured=optical_channels_for_alice,
                                         classical_channels_to_be_written=classical_channels_for_alice)
    circuit.measure_optical_to_classical(optical_channels_to_be_measured=optical_channels_for_bob,
                                        classical_channels_to_be_written=classical_channels_for_bob)
    
    
    # prepare everything for a next iteration (i.e., the next stage) or generate the final answer if this was the last stage
    node_info = {'label' : "Alice Prep next"}
    circuit.classical_channel_function(function = prepare_for_next_stage_Alice,
                                       affected_channels=classical_channels_for_alice + [classical_channels_for_charlie[1]],
                                       node_info=node_info)
    
    node_info = {'label' : "Bob Prep next"}
    circuit.classical_channel_function(function = prepare_for_next_stage_Bob,
                                       affected_channels=classical_channels_for_bob + [classical_channels_for_charlie[0]],
                                       node_info=node_info)
    
    # implement noise in the communication channel
    node_info = {'label' : "Implement noise"}
    circuit.classical_channel_function(function = implement_noise_in_classical_channel,
                                       affected_channels=[classical_channels_for_charlie[1]],
                                       new_input_values=[channel_error_rate],
                                       node_info=node_info)
    
    node_info = {'label' : "Charlie Prep answer"}
    circuit.classical_channel_function(function = prepare_for_next_stage_Charlie,
                                       affected_channels=classical_channels_for_charlie,
                                       node_info=node_info)
    

    return circuit

Maximum success rate from Information Theory¶

We can express the upper limit posed by Information Causality in three equivalent ways: as a maximum overall success rate for Bob, as a maximum k-value for the Popescu-Rohrlich photon pair (which is equal to the CHSH correlation value we can achieve), or as a maximum q-value ('quantumness') of the photon pair.

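The cells below rely on `binary_entropy` and `binary_entropy_reverse`, which are defined earlier in the notebook; a minimal stand-in with the same intended semantics (our sketch, not the notebook's own implementation) could look like:

```python
import numpy as np

def binary_entropy(p: float) -> float:
    # Shannon entropy H(p) of a binary variable, in bits.
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p*np.log2(p) - (1-p)*np.log2(1-p)

def binary_entropy_reverse(h: float, tol: float = 1e-12) -> float:
    # Invert H on the branch p in [0.5, 1], where H decreases
    # monotonically from 1 to 0, by simple bisection.
    lo, hi = 0.5, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if binary_entropy(mid) > h:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

For a 2-bit game over a classical channel with error rate e, Information Causality requires 2(1 - H(p)) ≤ 1 - H(e), i.e. H(p) ≥ (1 + H(e))/2; this is exactly the `threshold_entropy` computed in the next cell (note that H(1 - e) = H(e)).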
In [ ]:
maximum_probabilities_from_IC = []
channel_error_rates_full_curve = [1-(n+100)/200 for n in range(1,100)]
for channel_error in channel_error_rates_full_curve:
    channel_probability = 1- channel_error
    threshold_entropy = (1+binary_entropy(channel_probability))/2
    threshold_probability = binary_entropy_reverse(threshold_entropy)
    maximum_probabilities_from_IC.append(threshold_probability)

fig, ax = plt.subplots()

ax.plot(channel_error_rates_full_curve, maximum_probabilities_from_IC, label = 'maximum success rate', linestyle = 'solid', marker = 'none', color='blue')


ax.set(xlabel='channel error rate', ylabel='success rate',
       title='Maximum success rate for n=2 and m=1')
ax.grid()
plt.legend()
plt.show()
[Figure: Maximum success rate for n=2 and m=1]
In [ ]:
K_values = []
channel_error_rates_full_curve = [1-(n+100)/200 for n in range(1,100)]
for channel_error in channel_error_rates_full_curve:
    channel_probability = 1- channel_error
    threshold_entropy = (1+binary_entropy(channel_probability))/2
    threshold_probability = binary_entropy_reverse(threshold_entropy)
    photon_probability = (threshold_probability+channel_probability-1)/(2*channel_probability -1)
    K_value = 4*(2*photon_probability-1)
    K_values.append(K_value)

fig, ax = plt.subplots()

ax.plot(channel_error_rates_full_curve, K_values, label = 'maximum k-value allowed from IC', linestyle = 'solid', marker = 'none', color='blue')
ax.plot([0,0.5], [2*np.sqrt(2),2*np.sqrt(2)], label = 'Tsirelson bound',linestyle = 'solid', marker = 'none', color='black')


ax.set(xlabel='channel error rate', ylabel='k-values',
       title='Highest k-value allowed from Information Causality')
ax.grid()
plt.legend()
plt.show()
[Figure: Highest k-value allowed from Information Causality]
In [ ]:
q_values = []
channel_error_rates_full_curve = [1-(n+100)/200 for n in range(1,100)]
for channel_error in channel_error_rates_full_curve:
    channel_probability = 1- channel_error
    threshold_entropy = (1+binary_entropy(channel_probability))/2
    threshold_probability = binary_entropy_reverse(threshold_entropy)
    photon_probability = (threshold_probability+channel_probability-1)/(2*channel_probability -1)
    K_value = 4*(2*photon_probability-1)
    q_values.append(K_to_quantumness(K_value))

fig, ax = plt.subplots()

ax.plot(channel_error_rates_full_curve, q_values, label = 'maximum quantumness-value allowed from IC', linestyle = 'solid', marker = 'none', color='blue')
ax.plot([0,0.5], [1,1], label = 'Tsirelson bound',linestyle = 'solid', marker = 'none', color='black')


ax.set(xlabel='channel error rate', ylabel='q-values',
       title='Highest q-value allowed from Information Causality')
ax.grid()
plt.legend()
plt.show()
[Figure: Highest q-value allowed from Information Causality]

Function to calculate success rate with noisy channel¶

In [ ]:
def get_success_rate_noisy_channel(quantumness, channel_error_rate):
     """ This function will return the success rate for the guessing game with a noisy channel. The function utilize one pair of
     Popescu-Rohrlich photons. """

     # For the noisy channel analysis we always use a bit string of length 2. We only need one pair of Popescu-Rohrlich photons in this case
     bit_string_length = 2
     success_rate = 0
 
     # We use a trick here: rather than applying the noise inside the circuit itself we apply it at
     # the end. Passing -1 as the error rate disables the in-circuit noise.
     circuit =   generate_circuit_for_stage_in_guessing_game_with_noisy_channel(length_of_bitstring=bit_string_length,
                                                                                quantumness=quantumness,
                                                                                channel_error_rate=-1)
     
     resulting_collection = fsc.CollectionOfStates(fock_state_circuit=circuit, input_collection_as_a_dict=dict([]))

     # We run through all possible values of Alice's number and Bob's bit index.
     for bobs_index in [0,1]:
          for alices_number in [0,1,2,3]: # the number from which Alice derives her bit string
               for stored_so_far in [0,1]:
                    alices_bit_list = number_to_bit_list(alices_number,bit_string_length)

                    initial_collection = fsc.CollectionOfStates(fock_state_circuit=circuit, input_collection_as_a_dict=dict([]))
                    state = fsc.State(initial_collection)
                    state.classical_channel_values[-3] = bobs_index
                    state.classical_channel_values[-2] = alices_number
                    state.classical_channel_values[-1] = stored_so_far

                    expected = (alices_bit_list[bobs_index]+stored_so_far)%2
                    state.initial_state = "Expected: " + str(expected) 
                    # We want the cumulative probabilities to add up to 1. There are 16 states spread
                    # over two initial-state labels, so every state has probability 1/16 = 6.25%
                    state.cumulative_probability = 0.0625
                    initial_collection.add_state(state)

                    result = circuit.evaluate_circuit(initial_collection)
                    for state_out in result:
                         resulting_collection.add_state(state_out)

     histo = resulting_collection.plot(classical_channels=[-1], histo_output_instead_of_plot=True)


     for label, outcomes in histo.items():
          for outcome in outcomes:
               if label[-1] == outcome['output_state']:
                    success_rate += outcome['probability']

     # apply the 'faked' channel noise directly to the success rate that we return
     return (1-channel_error_rate)*(success_rate) + channel_error_rate * (1-success_rate)
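The final line composes two binary symmetric channels: Bob's photon-based guess succeeds with some probability s, and the noisy classical channel succeeds with probability c = 1 - channel_error_rate. A small sketch of this composition and its inverse (helper names are ours):

```python
def composed_success(s: float, c: float) -> float:
    # Two cascaded binary symmetric channels: the overall guess is correct
    # when both steps succeed or both steps fail.
    return s*c + (1-s)*(1-c)

def required_photon_success(t: float, c: float) -> float:
    # Invert the composition: the photon success rate needed to reach an
    # overall target t over a channel with success probability c. This is the
    # relation photon_probability = (t + c - 1)/(2c - 1) used in the cells above.
    return (t + c - 1) / (2*c - 1)
```

For instance, reaching an overall success rate of 0.75 over a channel with success probability 0.9 requires a photon success rate of (0.75 + 0.9 - 1)/(2·0.9 - 1) = 0.8125.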

Create the graphs¶

In [ ]:
# define the channel error rates (ce_rates) used for the graphs
ce_rates = [1-(n+10)/20 for n in range(1,9)] + [0.0001]

# first create data for the higher q-values
plot_data_y = []
plot_data_x = []
probabilities_data = []
for quantumness in [2,4,8,16]:
     for channel_error_rate in ce_rates :
          p = get_success_rate_noisy_channel(quantumness, channel_error_rate)
          binary_entropy_from_success_rate = -p*np.log2(p) - (1-p)*np.log2(1-p)
          utilized_channel_capacity = 2*(1-binary_entropy_from_success_rate)
          plot_data_x.append(channel_error_rate)
          plot_data_y.append(utilized_channel_capacity)
          probabilities_data.append(p)

# create data for q = 1 (normal entangled photons)
plot_data_y_1 = []
plot_data_x_1 = []
probabilities_data_1 = []
for quantumness in [1]:
     for channel_error_rate in ce_rates :
          p = get_success_rate_noisy_channel(quantumness, channel_error_rate)
          binary_entropy_from_success_rate = -p*np.log2(p) - (1-p)*np.log2(1-p)
          utilized_channel_capacity = 2*(1-binary_entropy_from_success_rate)
          plot_data_x_1.append(channel_error_rate)
          plot_data_y_1.append(utilized_channel_capacity)
          probabilities_data_1.append(p)

# calculate the success rate at the maximum q-value allowed by Information Causality
plot_data_y_crossover = []
plot_data_x_crossover = []
probabilities_crossover = []
for channel_error_rate in ce_rates:
     channel_probability = 1-channel_error_rate
     threshold_entropy = (1+binary_entropy(channel_probability))/2
     threshold_probability = binary_entropy_reverse(threshold_entropy)
     photon_probability = (threshold_probability+channel_probability-1)/(2*channel_probability -1)
     K_value = 4*(2*photon_probability-1)
     quantumness = K_to_quantumness(K_value)
     p = get_success_rate_noisy_channel(quantumness, channel_error_rate)
     binary_entropy_from_success_rate = -p*np.log2(p) - (1-p)*np.log2(1-p)
     utilized_channel_capacity = 2*(1-binary_entropy_from_success_rate)
     plot_data_x_crossover.append(channel_error_rate)
     plot_data_y_crossover.append(utilized_channel_capacity)
     probabilities_crossover.append(p)


# We can relate the calculated probability to effective channel capacity utilized
maximum_from_noisy_channel = [2*(1-binary_entropy(p)) for p in maximum_probabilities_from_IC]
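The relation in the line above can be expressed as a small standalone helper (the function name is ours, and a local binary-entropy expression is inlined): for the n = 2 game, a success probability p corresponds to an effectively used channel capacity of 2(1 - H(p)) bits.

```python
import numpy as np

def utilized_channel_capacity(p: float) -> float:
    # For the n = 2 guessing game a success probability p on each bit
    # corresponds to 2*(1 - H(p)) bits of effectively used channel capacity.
    h = -p*np.log2(p) - (1-p)*np.log2(1-p)
    return 2*(1 - h)
```

A random guess (p = 0.5) uses no capacity at all, while a near-perfect guess (p close to 1) uses almost the full 2 bits.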
In [ ]:
# draw the curve
fig, ax = plt.subplots()

ax.plot(channel_error_rates_full_curve, maximum_from_noisy_channel, label = 'maximum from channel', linestyle = 'solid', marker = 'none', color='black')
ax.plot(plot_data_x, plot_data_y, label = 'simulated for q = 2,4,8,16', linestyle = 'none', marker = 'o', color='blue')
ax.plot(plot_data_x_1, plot_data_y_1, label = 'simulated for q = 1', linestyle = 'none', marker = 'o', color='green')
ax.plot(plot_data_x_crossover, plot_data_y_crossover, label = 'maximum allowed q values', linestyle = 'none', marker = 'x', color='black')


ax.set(xlabel='channel error rate', ylabel='channel information capacity',
       title='Effective channel capacity utilized to come to observed success rate')
ax.grid()
plt.legend()
plt.show()



total_probabilities = []
channel_error_rates = [1-(n+100)/200 for n in range(1,100)]
for channel_error in channel_error_rates:
    channel_probability = 1- channel_error
    threshold_entropy = (1+binary_entropy(channel_probability))/2
    threshold_probability = binary_entropy_reverse(threshold_entropy)
    total_probabilities.append(threshold_probability)

fig, ax = plt.subplots()

ax.plot(channel_error_rates_full_curve, maximum_probabilities_from_IC, label = 'maximum from channel', linestyle = 'solid', marker = 'none', color='black')
ax.plot(plot_data_x, probabilities_data, label = 'simulated for q = 2,4,8,16', linestyle = 'none', marker = 'o', color='blue')
ax.plot(plot_data_x_1, probabilities_data_1, label = 'simulated for q = 1', linestyle = 'none', marker = 'o', color='green')
ax.plot(plot_data_x_crossover, probabilities_crossover, label = 'maximum allowed q values', linestyle = 'none', marker = 'x', color='black')


ax.set(xlabel='channel error rate', ylabel='success rate',
       title="Success rate for Bob's guess")
ax.grid()
plt.legend()
plt.show()
[Figure: Effective channel capacity utilized to come to observed success rate]
[Figure: Success rate for Bob's guess]

Zoom in on the area near channel error rate = 0.5¶

In [ ]:
ce_rates = [0.5-n/500 for n in range(1,20)]

plot_data_y = []
plot_data_x = []
probabilities_data = []
for quantumness in [2,4,8,16]:
     for channel_error_rate in ce_rates :
          p = get_success_rate_noisy_channel(quantumness, channel_error_rate)
          binary_entropy_from_success_rate = -p*np.log2(p) - (1-p)*np.log2(1-p)
          utilized_channel_capacity = 2*(1-binary_entropy_from_success_rate)
          plot_data_x.append(channel_error_rate)
          plot_data_y.append(utilized_channel_capacity)
          probabilities_data.append(p)

plot_data_y_1 = []
plot_data_x_1 = []
probabilities_data_1 = []
for quantumness in [1]:
     for channel_error_rate in ce_rates :
          p = get_success_rate_noisy_channel(quantumness, channel_error_rate)
          binary_entropy_from_success_rate = -p*np.log2(p) - (1-p)*np.log2(1-p)
          utilized_channel_capacity = 2*(1-binary_entropy_from_success_rate)
          plot_data_x_1.append(channel_error_rate)
          plot_data_y_1.append(utilized_channel_capacity)
          probabilities_data_1.append(p)


plot_data_y_crossover = []
plot_data_x_crossover = []
probabilities_crossover = []
for channel_error_rate in ce_rates:
     channel_probability = 1-channel_error_rate
     threshold_entropy = (1+binary_entropy(channel_probability))/2
     threshold_probability = binary_entropy_reverse(threshold_entropy)
     photon_probability = (threshold_probability+channel_probability-1)/(2*channel_probability -1)
     K_value = 4*(2*photon_probability-1)
     quantumness = K_to_quantumness(K_value)
     p = get_success_rate_noisy_channel(quantumness, channel_error_rate)
     binary_entropy_from_success_rate = -p*np.log2(p) - (1-p)*np.log2(1-p)
     utilized_channel_capacity = 2*(1-binary_entropy_from_success_rate)
     plot_data_x_crossover.append(channel_error_rate)
     plot_data_y_crossover.append(utilized_channel_capacity)
     probabilities_crossover.append(p)




channel_entropy = [binary_entropy(p) for p in ce_rates]

maximum_from_noisy_channel = [1-H for H in channel_entropy]

fig, ax = plt.subplots()

ax.plot(ce_rates, maximum_from_noisy_channel, label = 'maximum from channel', linestyle = 'solid', marker = 'none', color='black')
ax.plot(plot_data_x, plot_data_y, label = 'simulated for q = 2,4,8,16', linestyle = 'none', marker = 'o', color='blue')
ax.plot(plot_data_x_1, plot_data_y_1, label = 'simulated for q = 1', linestyle = 'none', marker = 'o', color='green')
ax.plot(plot_data_x_crossover, plot_data_y_crossover, label = 'maximum allowed q values', linestyle = 'none', marker = 'x', color='black')


ax.set(xlabel='channel error rate', ylabel='channel information capacity',
       title='Effective channel capacity utilized to come to observed success rate')
ax.grid()
plt.legend()
plt.show()


total_probabilities = []
channel_error_rates = ce_rates
for channel_error in channel_error_rates:
    channel_probability = 1- channel_error
    threshold_entropy = (1+binary_entropy(channel_probability))/2
    threshold_probability = binary_entropy_reverse(threshold_entropy)
    total_probabilities.append(threshold_probability)

fig, ax = plt.subplots()

ax.plot(channel_error_rates, total_probabilities, label = 'maximum from channel', linestyle = 'solid', marker = 'none', color='black')
ax.plot(plot_data_x, probabilities_data, label = 'simulated for q = 2,4,8,16', linestyle = 'none', marker = 'o', color='blue')
ax.plot(plot_data_x_1, probabilities_data_1, label = 'simulated for q = 1', linestyle = 'none', marker = 'o', color='green')
ax.plot(plot_data_x_crossover, probabilities_crossover, label = 'maximum allowed q values', linestyle = 'none', marker = 'x', color='black')


ax.set(xlabel='channel error rate', ylabel='success rate',
       title="Success rate for Bob's guess")
ax.grid()
plt.legend()
plt.show()
[Figure: Effective channel capacity utilized, zoomed in near channel error rate = 0.5]
[Figure: Success rate for Bob's guess, zoomed in near channel error rate = 0.5]

Draw the circuit schematics¶

In [ ]:
bit_string_length = 2
circuit_draw_settings_dict = {'channel_labels_optical' : ['Alice optical ' + str(n) for n in range(bit_string_length)] +  ['Bob optical ' + str(n) for n in range(bit_string_length)],
                              'channel_labels_classical' :  ['Alice class. ' + str(n) for n in range(bit_string_length)] +  ['Bob class. ' + str(n) for n in range(bit_string_length)] + ['Charlie ' + str(n) for n in range(3)],
                              'number_of_nodes_on_a_line': 11, 
                              'spacing_between_lines_in_relation_to_spacing_between_nodes' : 1,
                              'compound_circuit_title': "Optical circuit for guessing game with 'noisy' channel"
                         }
# quantumness and channel_error_rate should not influence the drawing, so placeholder values are used
circuit =   generate_circuit_for_stage_in_guessing_game_with_noisy_channel(length_of_bitstring=bit_string_length,
                                                                            quantumness=1,
                                                                            channel_error_rate=-1)
circuit.draw(settings_for_drawing_circuit=circuit_draw_settings_dict)
[Figure: Optical circuit for guessing game with 'noisy' channel]
In [ ]:
fsc.about()
FockStateCircuit: Quantum Optics with Fock States for Python
Copyright (c) 2023 and later.
Rob Hendriks

FockStateCircuit:            1.0.5
CollectionOfStates:          1.0.1
State:                       1.0.1
ColumnOfStates:              1.0.0
InterferenceGroup:           1.0.0
CollectionOfStateColumns:    1.0.0
OpticalNodes:                1.0.0
BridgeNodes:                 1.0.0
CustomNodes:                 1.0.0
ControlledNodes:             1.0.0
MeasurementNodes:            1.0.0
ClassicalNodes:              1.0.0
SpectralNodes:               1.0.0
QuantumOperatorNodes:        1.0.0
temporal_functions:          1.0.0
Numpy Version:               1.26.4
Matplotlib version:          3.8.4