QSeaBattle: Pyramid Trainable Assisted Imitation Learning (Per-level Bootstrap Tutorial)¶
This notebook mirrors the linear trainable-assisted bootstrap workflow, but for the pyramid architecture.
Key differences vs the linear tutorial:
- The pyramid reduces the active vector length by half at each level.
- For a flattened field of length N = field_size**2, there are K = log2(N) levels with input sizes L ∈ {N, N/2, N/4, ..., 2}.
- In this tutorial we follow Option 2A: train each level separately and then assemble Model A and Model B from the per-level layers.
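As a quick sanity check, the level sizes can be derived as follows. This is only an illustrative sketch; the notebook itself uses the `pyramid_levels` helper from the package.

```python
def pyramid_levels_sketch(n: int) -> list[int]:
    """Illustrative re-derivation of the per-level input sizes;
    the repo's pyramid_levels helper is the authoritative version."""
    assert n > 1 and (n & (n - 1)) == 0, "flattened field length must be a power of two"
    levels = []
    while n >= 2:
        levels.append(n)  # active vector length at this level
        n //= 2           # the pyramid halves the vector at each level
    return levels

print(pyramid_levels_sketch(16))  # [16, 8, 4, 2] -> K = log2(16) = 4 levels
```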
We will:
- Set the working directory to repo root and import QSeaBattle modules.
- Define the game layout and training hyperparameters.
- Generate per-level imitation datasets (teacher targets).
- Train per-level layers for A and B.
- Assemble pyramid Model A / Model B and transfer learned weights.
- Verify against the deterministic (teacher) strategy and run a small evaluation tournament.
- Save model weights.
Notes:
- The pyramid models use the PRAssistedLayer internally. For deterministic checks we set sr_mode="expected"; for gameplay / evaluation we set sr_mode="sample".
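To illustrate the distinction between the two modes, here is a conceptual sketch only (not the actual PRAssistedLayer API; `sr_outcome_sketch` is a hypothetical helper):

```python
import numpy as np

def sr_outcome_sketch(probs, sr_mode="expected", rng=None):
    """Conceptual illustration: 'expected' propagates probabilities
    deterministically, which keeps teacher checks reproducible;
    'sample' draws Bernoulli outcomes, which gameplay needs."""
    probs = np.asarray(probs, dtype=float)
    if sr_mode == "expected":
        return probs  # deterministic expected value
    rng = rng or np.random.default_rng()
    return (rng.random(probs.shape) < probs).astype(float)  # stochastic 0/1

print(sr_outcome_sketch([0.0, 1.0, 0.7], "expected"))  # deterministic
print(sr_outcome_sketch([0.0, 1.0, 0.7], "sample"))    # random 0/1 vector
```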
Current scope vs future DIAL end-to-end training¶
This tutorial implements imitation bootstrap for the pyramid architecture:
- We generate synthetic teacher targets for each layer type (measurement A/B, combine A/B).
- We train each level independently (Option 2A) using supervised losses.
- We assemble the full pyramid Model A / Model B by transferring weights into per-level layer lists.
Limitation (intentional for now)¶
This notebook does not train communication end-to-end. In particular:
- The message (comm) is treated as a supervised target (via teacher rules) rather than learned jointly.
- The PRAssistedLayer is used in "expected" mode for deterministic checks and in "sample" mode for gameplay, but we do not backpropagate through stochastic sampling.
What will change for DIAL/DRU end-to-end (future work)¶
To move to true end-to-end DIAL/DRU for the pyramid model, we will likely:
- Make Model A output a continuous message (logits) during training and discretize only for evaluation/gameplay.
- Let Model B consume that continuous message directly during training.
- Introduce a differentiable treatment of discrete sampling:
- straight-through estimators (DIAL/DRU style), or
- relaxed Bernoulli / Gumbel-sigmoid for comm and possibly for SR outcomes.
- Add end-to-end training utilities that optimize a game-level loss (e.g., BCE on correct shoot decision) across the full K-level pyramid computation graph.
The supervised imitation stage here is still useful in that setting as pretraining before end-to-end fine-tuning.
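As a flavor of what such a differentiable treatment could look like, below is a minimal straight-through Bernoulli sketch in TensorFlow. It illustrates the general DIAL/DRU-style idea only; `st_bernoulli` is a hypothetical helper, not part of the project.

```python
import tensorflow as tf

@tf.custom_gradient
def st_bernoulli(probs):
    # Forward pass: hard 0/1 sample. Backward pass: identity gradient,
    # i.e. we pretend the sample were the probability (straight-through).
    sample = tf.cast(tf.random.uniform(tf.shape(probs)) < probs, tf.float32)
    def grad(dy):
        return dy
    return sample, grad

probs = tf.constant([0.2, 0.9])
weights = tf.constant([1.0, 2.0])
with tf.GradientTape() as tape:
    tape.watch(probs)
    comm = st_bernoulli(probs)           # hard bits on the forward pass
    loss = tf.reduce_sum(comm * weights)
g = tape.gradient(loss, probs)           # gradient flows despite sampling
print(g.numpy())                         # [1. 2.]
```

The same pattern would let a game-level loss reach back through the communication bit during end-to-end training, while evaluation still sees discrete messages.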
from __future__ import annotations
import os
import sys
from pathlib import Path
def change_to_repo_root(marker: str = "src") -> None:
"""Change CWD to the repository root (parent of `src`)."""
here = Path.cwd()
for parent in [here] + list(here.parents):
if (parent / marker).is_dir():
os.chdir(parent)
break
change_to_repo_root("src")
sys.path.append("./src")
print("CWD:", Path.cwd())
CWD: c:\Users\nly99857\OneDrive - Philips\SW Projects\QSeaBattle
Imports¶
import numpy as np
import tensorflow as tf
from Q_Sea_Battle.game_layout import GameLayout
from Q_Sea_Battle.game_env import GameEnv
from Q_Sea_Battle.tournament import Tournament
from Q_Sea_Battle.trainable_assisted_players import TrainableAssistedPlayers
from Q_Sea_Battle.pyr_trainable_assisted_model_a import PyrTrainableAssistedModelA
from Q_Sea_Battle.pyr_trainable_assisted_model_b import PyrTrainableAssistedModelB
from Q_Sea_Battle.pyr_trainable_assisted_imitation_utilities import (
pyramid_levels,
generate_measurement_dataset_a,
generate_combine_dataset_a,
generate_measurement_dataset_b,
generate_combine_dataset_b,
to_tf_dataset,
train_layer,
transfer_pyr_model_a_layer_weights,
transfer_pyr_model_b_layer_weights,
)
print("TensorFlow:", tf.__version__)
tf.get_logger().setLevel("ERROR")
TensorFlow: 2.20.0
Game layout and correlation setting¶
FIELD_SIZE = 4 # 4x4 -> N=16 (requires N a power of 2)
COMMS_SIZE = 1 # Pyramid requires 1 comm bit
# PR-Assisted correlation parameter (passed into the pyramid models' SR layers)
P_HIGH = 1.0
# Dataset / training sizes
DATASET_SIZE = 50_000
BATCH_SIZE = 256
EPOCHS_MEAS_A = 10
EPOCHS_MEAS_B = 10
EPOCHS_COMB_A = 45
EPOCHS_COMB_B = 45
SEED = 123
tf.random.set_seed(SEED)
np.random.seed(SEED)
GAMES_IN_EVAL_TOURNAMENT = 2000
# Folders
data_dir = Path("notebooks/data")
models_dir = Path("notebooks/models")
data_dir.mkdir(parents=True, exist_ok=True)
models_dir.mkdir(parents=True, exist_ok=True)
layout = GameLayout(field_size=FIELD_SIZE, comms_size=COMMS_SIZE)
N = FIELD_SIZE * FIELD_SIZE
levels = pyramid_levels(N)
print("Flattened N:", N)
print("Pyramid levels (input sizes):", levels)
# Optional: show or hide per-epoch progress bars from Keras .fit()
FIT_VERBOSE = 1 # 0 = silent, 1 = progress bar, 2 = one line per epoch
Flattened N: 16 Pyramid levels (input sizes): [16, 8, 4, 2]
Generating imitation targets from the classical assisted strategy (teacher rules)¶
These are the correct teacher rules (verified for field sizes 8 and 16).
Measurement A: Consider the (even, odd) index pairs in the input field. If the pair is equal, the measurement is 0; if the pair differs, it is 1. $$ m_i(A)=x_{2i}⊕x_{2i+1} $$
Combine A: Compare the SR outcome with the even-indexed element of each pair: if they are equal, output 0; if they are unequal, output 1. $$ f'_i=x_{2i}⊕s_i $$
Measure B: Measurement B outputs 1 if and only if the gun pair equals (0,1); otherwise it outputs 0. $$ m_i(B)=¬g_{2i}∧g_{2i+1} $$
Combine B to new gun: If the pair in the original gun is (0,1) or (1,0), set the output to 1; otherwise set it to 0. In other words, the next gun state is $g'_i=g_{2i}⊕g_{2i+1}$. If the current gun is one-hot, this operation preserves one-hotness.
Shared resources on B's side: Based on A's measurement (the input to the shared resources), the output of A's shared resources, and the setting P_high, B produces an output for his shared resources, which is not necessarily equal to the output of A's shared resources.
New comms: If the shared-resources outcome at the index where the new gun is 1 equals 1, flip the original comms; otherwise leave the comms unchanged. Let $g'$ be the new gun (one-hot) and $s$ the shared-resources vector produced by B's measurement. The updated communication bit is $$ c' = c \oplus \left(\sum_i g'_i s_i\right) \bmod 2 $$
Notes:
- The gun vector is assumed to be one-hot at all levels; the pyramid update rule preserves this invariant.
- The shared-resources outcome does not affect the gun state; it is used only to update the communication bit.
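The rules above can be sketched in NumPy as follows. This is an illustrative re-implementation of the teacher targets with a stand-in SR outcome vector; the notebook's dataset generators are the authoritative versions.

```python
import numpy as np

rng = np.random.default_rng(0)
L = 8
field = rng.integers(0, 2, L)                 # A's flattened field at this level
gun = np.eye(L, dtype=int)[3]                 # one-hot gun (hot index 3)
comm = np.array([1])
sr = rng.integers(0, 2, L // 2)               # stand-in SR outcomes on B's side

meas_a = field[0::2] ^ field[1::2]            # m_i(A) = x_{2i} XOR x_{2i+1}
next_field = field[0::2] ^ sr                 # f'_i = x_{2i} XOR s_i
meas_b = (1 - gun[0::2]) & gun[1::2]          # 1 iff gun pair == (0, 1)
next_gun = gun[0::2] ^ gun[1::2]              # g'_i = g_{2i} XOR g_{2i+1}
assert next_gun.sum() == 1                    # one-hotness is preserved
next_comm = comm ^ (int(next_gun @ sr) % 2)   # c' = c XOR SR bit at the hot index
```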
# For each level size L we generate independent synthetic inputs of length L,
# and compute supervised teacher targets according to the v2 pyramid rules.
datasets = {
"meas_a": {},
"comb_a": {},
"meas_b": {},
"comb_b": {},
}
for i, L in enumerate(levels):
datasets["meas_a"][L] = generate_measurement_dataset_a(L, num_samples=DATASET_SIZE, seed=SEED + 10 + i)
datasets["comb_a"][L] = generate_combine_dataset_a(L, num_samples=DATASET_SIZE, seed=SEED + 20 + i)
datasets["meas_b"][L] = generate_measurement_dataset_b(L, num_samples=DATASET_SIZE, seed=SEED + 30 + i)
datasets["comb_b"][L] = generate_combine_dataset_b(L, num_samples=DATASET_SIZE, seed=SEED + 40 + i)
print("Generated datasets for levels:", list(datasets["meas_a"].keys()))
Generated datasets for levels: [16, 8, 4, 2]
# ============================================================
# Pyramid per-layer dataset sanity check & manual inspection
# ============================================================
import numpy as np
import random
np.set_printoptions(precision=3, suppress=True, linewidth=120)
print("=== Pyramid Dataset Diagnostic ===")
def pick_one(ds_dict):
"""Pick one random index from a dataset dict of numpy arrays."""
n = len(next(iter(ds_dict.values())))
i = random.randrange(n)
return {k: v[i] for k, v in ds_dict.items()}, i
def frac_binary(x):
x = np.asarray(x)
return np.mean((x == 0) | (x == 1))
def summarize(name, arr):
arr = np.asarray(arr)
print(f"{name:>18}: shape={arr.shape}, min={arr.min():.3f}, max={arr.max():.3f}, "
f"mean={arr.mean():.3f}, frac_binary={frac_binary(arr):.3f}")
for L in levels:
print("\n" + "="*70)
print(f"LEVEL L = {L}")
print("="*70)
# --------------------------------------------------------
# Measurement A
# --------------------------------------------------------
ds = datasets["meas_a"][L]
sample, idx = pick_one(ds)
field = sample["field"]
target = sample["meas_target"]
print("\n[Measurement A]")
print(f"Sample index: {idx}")
summarize("field", field)
summarize("meas_target", target)
print(f'Example: \n\tField: {field}\n\tTarget: {target}')
print("Rule check:")
print("- field length =", len(field))
print("- meas_target length =", len(target), "(expected L//2)")
print("- Each target bit should indicate inequality of a field pair")
# --------------------------------------------------------
# Combine A
# --------------------------------------------------------
ds = datasets["comb_a"][L]
sample, idx = pick_one(ds)
field = sample["field"]
sr = sample["sr_outcome"]
target = sample["next_field_target"]
print("\n[Combine A]")
print(f"Sample index: {idx}")
summarize("field", field)
summarize("sr_outcome", sr)
summarize("next_field", target)
print(f'Example: \n\tField: {field}\n\tSR Outcome: {sr}\n\tNext Field: {target}')
print("Rule check:")
print("- sr_outcome length =", len(sr), "(expected L//2)")
print("- next_field length =", len(target), "(expected L//2)")
print("- next_field combines field + SR outcome")
# --------------------------------------------------------
# Measurement B
# --------------------------------------------------------
ds = datasets["meas_b"][L]
sample, idx = pick_one(ds)
gun = sample["gun"]
target = sample["meas_target"]
print("\n[Measurement B]")
print(f"Sample index: {idx}")
summarize("gun", gun)
summarize("meas_target", target)
print(f'Example: \n\tGun: {gun}\n\tTarget: {target}')
print("Rule check:")
print("- gun length =", len(gun))
print("- meas_target length =", len(target), "(expected L//2)")
# --------------------------------------------------------
# Combine B
# --------------------------------------------------------
ds = datasets["comb_b"][L]
sample, idx = pick_one(ds)
gun = sample["gun"]
sr = sample["sr_outcome"]
comm = sample["comm"]
tgt_g = sample["next_gun_target"]
tgt_c = sample["next_comm_target"]
print("\n[Combine B]")
print(f"Sample index: {idx}")
summarize("gun", gun)
summarize("sr_outcome", sr)
summarize("comm", comm)
summarize("next_gun", tgt_g)
summarize("next_comm", tgt_c)
print(f'Example: \n\tGun: {gun}\n\tSR Outcome: {sr}\n\tComm: {comm}\n\tNext Gun: {tgt_g}\n\tNext Comm: {tgt_c}')
print("Rule check:")
print("- next_gun length =", len(tgt_g), "(expected L//2)")
print("- next_comm length =", len(tgt_c), "(expected COMMS_SIZE)")
print("- next_comm aggregates information from SR + gun")
print("\n=== Diagnostic complete ===")
=== Pyramid Dataset Diagnostic ===
======================================================================
LEVEL L = 16
======================================================================
[Measurement A]
Sample index: 48285
field: shape=(16,), min=0.000, max=1.000, mean=0.375, frac_binary=1.000
meas_target: shape=(8,), min=0.000, max=1.000, mean=0.250, frac_binary=1.000
Example:
Field: [0. 1. 0. 0. 1. 1. 1. 1. 0. 0. 0. 0. 0. 1. 0. 0.]
Target: [1. 0. 0. 0. 0. 0. 1. 0.]
Rule check:
- field length = 16
- meas_target length = 8 (expected L//2)
- Each target bit should indicate inequality of a field pair
[Combine A]
Sample index: 37123
field: shape=(16,), min=0.000, max=1.000, mean=0.312, frac_binary=1.000
sr_outcome: shape=(8,), min=0.000, max=1.000, mean=0.875, frac_binary=1.000
next_field: shape=(8,), min=0.000, max=1.000, mean=0.750, frac_binary=1.000
Example:
Field: [0. 1. 0. 0. 1. 1. 1. 0. 1. 0. 0. 0. 0. 0. 0. 0.]
SR Outcome: [1. 1. 0. 1. 1. 1. 1. 1.]
Next Field: [1. 1. 1. 0. 0. 1. 1. 1.]
Rule check:
- sr_outcome length = 8 (expected L//2)
- next_field length = 8 (expected L//2)
- next_field combines field + SR outcome
[Measurement B]
Sample index: 47180
gun: shape=(16,), min=0.000, max=1.000, mean=0.062, frac_binary=1.000
meas_target: shape=(8,), min=0.000, max=0.000, mean=0.000, frac_binary=1.000
Example:
Gun: [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0.]
Target: [0. 0. 0. 0. 0. 0. 0. 0.]
Rule check:
- gun length = 16
- meas_target length = 8 (expected L//2)
[Combine B]
Sample index: 15929
gun: shape=(16,), min=0.000, max=1.000, mean=0.062, frac_binary=1.000
sr_outcome: shape=(8,), min=0.000, max=1.000, mean=0.375, frac_binary=1.000
comm: shape=(1,), min=1.000, max=1.000, mean=1.000, frac_binary=1.000
next_gun: shape=(8,), min=0.000, max=1.000, mean=0.125, frac_binary=1.000
next_comm: shape=(1,), min=1.000, max=1.000, mean=1.000, frac_binary=1.000
Example:
Gun: [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0.]
SR Outcome: [1. 0. 0. 0. 0. 1. 0. 1.]
Comm: [1.]
Next Gun: [0. 0. 0. 0. 0. 0. 1. 0.]
Next Comm: [1.]
Rule check:
- next_gun length = 8 (expected L//2)
- next_comm length = 1 (expected COMMS_SIZE)
- next_comm aggregates information from SR + gun
======================================================================
LEVEL L = 8
======================================================================
[Measurement A]
Sample index: 22933
field: shape=(8,), min=0.000, max=1.000, mean=0.875, frac_binary=1.000
meas_target: shape=(4,), min=0.000, max=1.000, mean=0.250, frac_binary=1.000
Example:
Field: [0. 1. 1. 1. 1. 1. 1. 1.]
Target: [1. 0. 0. 0.]
Rule check:
- field length = 8
- meas_target length = 4 (expected L//2)
- Each target bit should indicate inequality of a field pair
[Combine A]
Sample index: 40087
field: shape=(8,), min=0.000, max=1.000, mean=0.250, frac_binary=1.000
sr_outcome: shape=(4,), min=0.000, max=1.000, mean=0.250, frac_binary=1.000
next_field: shape=(4,), min=0.000, max=1.000, mean=0.500, frac_binary=1.000
Example:
Field: [0. 1. 0. 0. 0. 0. 1. 0.]
SR Outcome: [0. 0. 1. 0.]
Next Field: [0. 0. 1. 1.]
Rule check:
- sr_outcome length = 4 (expected L//2)
- next_field length = 4 (expected L//2)
- next_field combines field + SR outcome
[Measurement B]
Sample index: 37951
gun: shape=(8,), min=0.000, max=1.000, mean=0.125, frac_binary=1.000
meas_target: shape=(4,), min=0.000, max=0.000, mean=0.000, frac_binary=1.000
Example:
Gun: [0. 0. 0. 0. 1. 0. 0. 0.]
Target: [0. 0. 0. 0.]
Rule check:
- gun length = 8
- meas_target length = 4 (expected L//2)
[Combine B]
Sample index: 15598
gun: shape=(8,), min=0.000, max=1.000, mean=0.125, frac_binary=1.000
sr_outcome: shape=(4,), min=0.000, max=1.000, mean=0.500, frac_binary=1.000
comm: shape=(1,), min=1.000, max=1.000, mean=1.000, frac_binary=1.000
next_gun: shape=(4,), min=0.000, max=1.000, mean=0.250, frac_binary=1.000
next_comm: shape=(1,), min=0.000, max=0.000, mean=0.000, frac_binary=1.000
Example:
Gun: [0. 1. 0. 0. 0. 0. 0. 0.]
SR Outcome: [1. 1. 0. 0.]
Comm: [1.]
Next Gun: [1. 0. 0. 0.]
Next Comm: [0.]
Rule check:
- next_gun length = 4 (expected L//2)
- next_comm length = 1 (expected COMMS_SIZE)
- next_comm aggregates information from SR + gun
======================================================================
LEVEL L = 4
======================================================================
[Measurement A]
Sample index: 7555
field: shape=(4,), min=1.000, max=1.000, mean=1.000, frac_binary=1.000
meas_target: shape=(2,), min=0.000, max=0.000, mean=0.000, frac_binary=1.000
Example:
Field: [1. 1. 1. 1.]
Target: [0. 0.]
Rule check:
- field length = 4
- meas_target length = 2 (expected L//2)
- Each target bit should indicate inequality of a field pair
[Combine A]
Sample index: 2951
field: shape=(4,), min=0.000, max=1.000, mean=0.500, frac_binary=1.000
sr_outcome: shape=(2,), min=0.000, max=1.000, mean=0.500, frac_binary=1.000
next_field: shape=(2,), min=0.000, max=0.000, mean=0.000, frac_binary=1.000
Example:
Field: [0. 1. 1. 0.]
SR Outcome: [0. 1.]
Next Field: [0. 0.]
Rule check:
- sr_outcome length = 2 (expected L//2)
- next_field length = 2 (expected L//2)
- next_field combines field + SR outcome
[Measurement B]
Sample index: 27027
gun: shape=(4,), min=0.000, max=1.000, mean=0.250, frac_binary=1.000
meas_target: shape=(2,), min=0.000, max=1.000, mean=0.500, frac_binary=1.000
Example:
Gun: [0. 0. 0. 1.]
Target: [0. 1.]
Rule check:
- gun length = 4
- meas_target length = 2 (expected L//2)
[Combine B]
Sample index: 14444
gun: shape=(4,), min=0.000, max=1.000, mean=0.250, frac_binary=1.000
sr_outcome: shape=(2,), min=1.000, max=1.000, mean=1.000, frac_binary=1.000
comm: shape=(1,), min=0.000, max=0.000, mean=0.000, frac_binary=1.000
next_gun: shape=(2,), min=0.000, max=1.000, mean=0.500, frac_binary=1.000
next_comm: shape=(1,), min=1.000, max=1.000, mean=1.000, frac_binary=1.000
Example:
Gun: [0. 0. 1. 0.]
SR Outcome: [1. 1.]
Comm: [0.]
Next Gun: [0. 1.]
Next Comm: [1.]
Rule check:
- next_gun length = 2 (expected L//2)
- next_comm length = 1 (expected COMMS_SIZE)
- next_comm aggregates information from SR + gun
======================================================================
LEVEL L = 2
======================================================================
[Measurement A]
Sample index: 2759
field: shape=(2,), min=0.000, max=1.000, mean=0.500, frac_binary=1.000
meas_target: shape=(1,), min=1.000, max=1.000, mean=1.000, frac_binary=1.000
Example:
Field: [0. 1.]
Target: [1.]
Rule check:
- field length = 2
- meas_target length = 1 (expected L//2)
- Each target bit should indicate inequality of a field pair
[Combine A]
Sample index: 37492
field: shape=(2,), min=0.000, max=0.000, mean=0.000, frac_binary=1.000
sr_outcome: shape=(1,), min=1.000, max=1.000, mean=1.000, frac_binary=1.000
next_field: shape=(1,), min=1.000, max=1.000, mean=1.000, frac_binary=1.000
Example:
Field: [0. 0.]
SR Outcome: [1.]
Next Field: [1.]
Rule check:
- sr_outcome length = 1 (expected L//2)
- next_field length = 1 (expected L//2)
- next_field combines field + SR outcome
[Measurement B]
Sample index: 28708
gun: shape=(2,), min=0.000, max=1.000, mean=0.500, frac_binary=1.000
meas_target: shape=(1,), min=0.000, max=0.000, mean=0.000, frac_binary=1.000
Example:
Gun: [1. 0.]
Target: [0.]
Rule check:
- gun length = 2
- meas_target length = 1 (expected L//2)
[Combine B]
Sample index: 39597
gun: shape=(2,), min=0.000, max=1.000, mean=0.500, frac_binary=1.000
sr_outcome: shape=(1,), min=1.000, max=1.000, mean=1.000, frac_binary=1.000
comm: shape=(1,), min=1.000, max=1.000, mean=1.000, frac_binary=1.000
next_gun: shape=(1,), min=1.000, max=1.000, mean=1.000, frac_binary=1.000
next_comm: shape=(1,), min=0.000, max=0.000, mean=0.000, frac_binary=1.000
Example:
Gun: [1. 0.]
SR Outcome: [1.]
Comm: [1.]
Next Gun: [1.]
Next Comm: [0.]
Rule check:
- next_gun length = 1 (expected L//2)
- next_comm length = 1 (expected COMMS_SIZE)
- next_comm aggregates information from SR + gun
=== Diagnostic complete ===
Training individual layers by supervised imitation (per level)¶
Training notes¶
- We report binary_accuracy during supervised imitation to quickly detect learnability issues.
- If accuracy is stuck near 0.5 with loss near 0.693, the model is underpowered for XOR-like rules. Increase hidden_units or add another hidden layer.
- FIT_VERBOSE controls Keras .fit() output (0 = silent, 1 = progress bar, 2 = one line per epoch).

For this notebook, train_layer(...) is expected to accept a verbose argument. If your local pyr_trainable_assisted_imitation_utilities.py does not yet include it, add verbose to that helper (same as in the linear tutorial).
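For reference, the plumbing could look like this minimal sketch, assuming the helper wraps the layer in a Keras Model and forwards verbose to .fit(). `train_layer_sketch` is a hypothetical stand-in; the real train_layer may differ.

```python
import numpy as np
import tensorflow as tf

def train_layer_sketch(layer, dataset, loss, epochs=1, metrics=None, verbose=0):
    """Hypothetical stand-in for train_layer: infer the input signature
    from the dataset, wrap the layer in a Model, and forward `verbose`
    through to Keras .fit()."""
    x_spec, _ = dataset.element_spec
    inp = tf.keras.Input(shape=x_spec.shape[1:], dtype=x_spec.dtype)
    model = tf.keras.Model(inp, layer(inp))
    model.compile(optimizer="adam", loss=loss, metrics=metrics or [])
    return model.fit(dataset, epochs=epochs, verbose=verbose)

# Toy usage: learn XOR-of-pairs with a small MLP standing in for a level layer.
x = np.random.default_rng(0).integers(0, 2, size=(512, 4)).astype("float32")
y = (x[:, 0::2] != x[:, 1::2]).astype("float32")
ds = tf.data.Dataset.from_tensor_slices((x, y)).batch(64)
mlp = tf.keras.Sequential([tf.keras.layers.Dense(16, activation="relu"),
                           tf.keras.layers.Dense(2, activation="sigmoid")])
history = train_layer_sketch(mlp, ds, tf.keras.losses.BinaryCrossentropy(), epochs=1)
```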
# We'll define small per-level trainable layers.
# IMPORTANT:
# - We do NOT create ad-hoc trainable layer classes in this tutorial anymore.
# - Instead, we use the trainable Pyramid primitives from the Q_Sea_Battle package.
#
# Design intent is unchanged:
# - Measurement layers output probabilities in [0, 1] (sigmoid head) for SR compatibility.
# - Combine layers are trained with binary cross-entropy; their heads here also
#   end in a sigmoid, so from_logits=False is the appropriate setting.
from Q_Sea_Battle.pyr_measurement_layer_a import PyrMeasurementLayerA
from Q_Sea_Battle.pyr_measurement_layer_b import PyrMeasurementLayerB
from Q_Sea_Battle.pyr_combine_layer_a import PyrCombineLayerA
from Q_Sea_Battle.pyr_combine_layer_b import PyrCombineLayerB
bce_probs = tf.keras.losses.BinaryCrossentropy(from_logits=False)
bce_logits = tf.keras.losses.BinaryCrossentropy(from_logits=True)
meas_layers_a_trained = []
comb_layers_a_trained = []
meas_layers_b_trained = []
comb_layers_b_trained = []
for L in levels:
print(f"Training per-level layers for L={L}...")
# --- Measurement A ---
ds = datasets["meas_a"][L]
tfds = to_tf_dataset(ds, x_keys=["field"], y_key="meas_target",
batch_size=BATCH_SIZE, shuffle=True, seed=SEED)
layer = PyrMeasurementLayerA(hidden_units=64)
_ = train_layer(layer, tfds, loss=bce_probs, epochs=EPOCHS_MEAS_A,
metrics=[tf.keras.metrics.BinaryAccuracy(threshold=0.5)], verbose=FIT_VERBOSE)
meas_layers_a_trained.append(layer)
print(f" \tTrained Measurement A layer for L={L}.")
# --- Combine A ---
ds = datasets["comb_a"][L]
tfds = to_tf_dataset(ds, x_keys=["field", "sr_outcome"], y_key="next_field_target",
batch_size=BATCH_SIZE, shuffle=True, seed=SEED+1)
layer = PyrCombineLayerA(hidden_units=64)
bce = tf.keras.losses.BinaryCrossentropy(from_logits=False)
_ = train_layer(
layer, tfds,
loss=bce,
epochs=EPOCHS_COMB_A,
metrics=[tf.keras.metrics.BinaryAccuracy(threshold=0.5)],
verbose=FIT_VERBOSE
)
comb_layers_a_trained.append(layer)
print(f" \tTrained Combine A layer for L={L}.")
# --- Measurement B ---
ds = datasets["meas_b"][L]
tfds = to_tf_dataset(ds, x_keys=["gun"], y_key="meas_target",
batch_size=BATCH_SIZE, shuffle=True, seed=SEED+2)
layer = PyrMeasurementLayerB(hidden_units=64)
_ = train_layer(layer, tfds, loss=bce_probs, epochs=EPOCHS_MEAS_B, metrics=[tf.keras.metrics.BinaryAccuracy(threshold=0.5)], verbose=FIT_VERBOSE)
meas_layers_b_trained.append(layer)
print(f" \tTrained Measurement B layer for L={L}.")
# --- Combine B (multi-output) ---
ds = datasets["comb_b"][L]
gun = ds["gun"]
sr_out = ds["sr_outcome"]
comm = ds["comm"]
y_g = ds["next_gun_target"]
y_c = ds["next_comm_target"]
tfds = tf.data.Dataset.from_tensor_slices(((gun, sr_out, comm), (y_g, y_c)))
tfds = tfds.shuffle(buffer_size=min(len(gun), 10_000),
seed=SEED+3, reshuffle_each_iteration=True).batch(BATCH_SIZE).prefetch(tf.data.AUTOTUNE)
layer = PyrCombineLayerB(hidden_units=64)
inp_g = tf.keras.Input(shape=(L,), dtype=tf.float32)
inp_sr = tf.keras.Input(shape=(L//2,), dtype=tf.float32)
inp_c = tf.keras.Input(shape=(COMMS_SIZE,), dtype=tf.float32)
out_g, out_c = layer(inp_g, inp_sr, inp_c)
out_g = tf.keras.layers.Activation("linear", name="next_gun")(out_g)
out_c = tf.keras.layers.Activation("linear", name="next_comm")(out_c)
model = tf.keras.Model([inp_g, inp_sr, inp_c], [out_g, out_c])
model.compile(
optimizer="adam",
loss=[bce_probs, bce_probs],  # heads end in a sigmoid, so from_logits=False
metrics=[["binary_accuracy"], ["binary_accuracy"]],
)
model.fit(tfds, epochs=EPOCHS_COMB_B, verbose=FIT_VERBOSE)
comb_layers_b_trained.append(layer)
print(f" \tTrained Combine B layer for L={L}.")
print("Trained per-level layers for A and B.")
Training per-level layers for L=16... Epoch 1/10 196/196 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - binary_accuracy: 0.6088 - loss: 0.6710 Epoch 2/10 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 0.8418 - loss: 0.5695 Epoch 3/10 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 0.9613 - loss: 0.4058 Epoch 4/10 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 0.9965 - loss: 0.2565 Epoch 5/10 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 0.1558 Epoch 6/10 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 0.0954 Epoch 7/10 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 0.0602 Epoch 8/10 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 0.0402 Epoch 9/10 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 0.0282 Epoch 10/10 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 0.0205 Trained Measurement A layer for L=16. 
Epoch 1/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 1s 1ms/step - binary_accuracy: 0.5715 - loss: 0.6805 Epoch 2/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 0.7754 - loss: 0.5979 Epoch 3/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 0.9517 - loss: 0.4295 Epoch 4/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - binary_accuracy: 0.9955 - loss: 0.2673 Epoch 5/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 0.9999 - loss: 0.1550 Epoch 6/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 0.0908 Epoch 7/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 0.0565 Epoch 8/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 0.0374 Epoch 9/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 0.0262 Epoch 10/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - binary_accuracy: 1.0000 - loss: 0.0191 Epoch 11/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 0.0144 Epoch 12/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - binary_accuracy: 1.0000 - loss: 0.0111 Epoch 13/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - binary_accuracy: 1.0000 - loss: 0.0088 Epoch 14/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 0.0070 Epoch 15/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 0.0057 Epoch 16/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - binary_accuracy: 1.0000 - loss: 0.0047 Epoch 17/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 0.0039 Epoch 18/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 0.0033 Epoch 19/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - binary_accuracy: 1.0000 - loss: 0.0028 Epoch 20/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - binary_accuracy: 1.0000 - loss: 0.0024 Epoch 21/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 0.0020 Epoch 22/45 196/196 
━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 0.0017 Epoch 23/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 0.0015 Epoch 24/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 0.0013 Epoch 25/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 0.0011 Epoch 26/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 9.7583e-04 Epoch 27/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - binary_accuracy: 1.0000 - loss: 8.5220e-04 Epoch 28/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 7.4592e-04 Epoch 29/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - binary_accuracy: 1.0000 - loss: 6.5428e-04 Epoch 30/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - binary_accuracy: 1.0000 - loss: 5.7496e-04 Epoch 31/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 5.0615e-04 Epoch 32/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - binary_accuracy: 1.0000 - loss: 4.4622e-04 Epoch 33/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 3.9395e-04 Epoch 34/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 3.4823e-04 Epoch 35/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 3.0820e-04 Epoch 36/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 2.7304e-04 Epoch 37/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 2.4212e-04 Epoch 38/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 2.1489e-04 Epoch 39/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 1.9090e-04 Epoch 40/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - binary_accuracy: 1.0000 - loss: 1.6970e-04 Epoch 41/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 1.5097e-04 Epoch 42/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 
1.0000 - loss: 1.3439e-04 Epoch 43/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 1.1971e-04 Epoch 44/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - binary_accuracy: 1.0000 - loss: 1.0668e-04 Epoch 45/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - binary_accuracy: 1.0000 - loss: 9.5133e-05 Trained Combine A layer for L=16. Epoch 1/10 196/196 ━━━━━━━━━━━━━━━━━━━━ 2s 2ms/step - binary_accuracy: 0.9009 - loss: 0.3700 Epoch 2/10 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - binary_accuracy: 0.9624 - loss: 0.0934 Epoch 3/10 196/196 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - binary_accuracy: 1.0000 - loss: 0.0284 Epoch 4/10 196/196 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - binary_accuracy: 1.0000 - loss: 0.0106 Epoch 5/10 196/196 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - binary_accuracy: 1.0000 - loss: 0.0053 Epoch 6/10 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - binary_accuracy: 1.0000 - loss: 0.0031 Epoch 7/10 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 0.0020 Epoch 8/10 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - binary_accuracy: 1.0000 - loss: 0.0014 Epoch 9/10 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 9.9355e-04 Epoch 10/10 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 7.4817e-04 Trained Measurement B layer for L=16. Epoch 1/45
c:\Users\nly99857\OneDrive - Philips\SW Projects\QSeaBattle\venvs\env_QSeaBattle\Lib\site-packages\keras\src\backend\tensorflow\nn.py:789: UserWarning: "`binary_crossentropy` received `from_logits=True`, but the `output` argument was produced by a Sigmoid activation and thus does not represent logits. Was this intended? output, from_logits = _get_logits(
196/196 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - loss: 1.0645 - next_comm_binary_accuracy: 0.5310 - next_comm_loss: 0.6920 - next_gun_binary_accuracy: 0.8621 - next_gun_loss: 0.3720
...
Epoch 45/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - loss: 0.0201 - next_comm_binary_accuracy: 1.0000 - next_comm_loss: 0.0201 - next_gun_binary_accuracy: 1.0000 - next_gun_loss: 1.5864e-05
Trained Combine B layer for L=16.
Training per-level layers for L=8...
Epoch 1/10 196/196 ━━━━━━━━━━━━━━━━━━━━ 1s 1ms/step - binary_accuracy: 0.7419 - loss: 0.6428
...
Epoch 10/10 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 0.0070
Trained Measurement A layer for L=8.
Epoch 1/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - binary_accuracy: 0.7059 - loss: 0.6495
...
Epoch 45/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 3.5467e-05
Trained Combine A layer for L=8.
Epoch 1/10 196/196 ━━━━━━━━━━━━━━━━━━━━ 1s 1ms/step - binary_accuracy: 0.8635 - loss: 0.3799
...
Epoch 10/10 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 7.2328e-04
Trained Measurement B layer for L=8.
Epoch 1/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - loss: 1.0805 - next_comm_binary_accuracy: 0.5967 - next_comm_loss: 0.6732 - next_gun_binary_accuracy: 0.8333 - next_gun_loss: 0.4065
...
Epoch 45/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - loss: 3.2967e-04 - next_comm_binary_accuracy: 1.0000 - next_comm_loss: 3.2187e-04 - next_gun_binary_accuracy: 1.0000 - next_gun_loss: 7.7446e-06
Trained Combine B layer for L=8.
Training per-level layers for L=4...
Epoch 1/10 196/196 ━━━━━━━━━━━━━━━━━━━━ 1s 1ms/step - binary_accuracy: 0.9213 - loss: 0.6004
...
Epoch 10/10 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 0.0038
Trained Measurement A layer for L=4.
Epoch 1/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - binary_accuracy: 0.8292 - loss: 0.6125
...
Epoch 45/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 1.9676e-05
Trained Combine A layer for L=4.
Epoch 1/10 196/196 ━━━━━━━━━━━━━━━━━━━━ 1s 1ms/step - binary_accuracy: 0.9579 - loss: 0.3557
...
Epoch 10/10 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 6.1404e-04
Trained Measurement B layer for L=4.
Epoch 1/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - loss: 1.0298 - next_comm_binary_accuracy: 0.7322 - next_comm_loss: 0.6441 - next_gun_binary_accuracy: 0.9515 - next_gun_loss: 0.3845
...
Epoch 30/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - loss: 3.2617e-04 - next_comm_binary_accuracy: 1.0000 - next_comm_loss: 3.0232e-04 - next_gun_binary_accuracy: 1.0000 - next_gun_loss: 2.3696e-05
Epoch 31/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - loss: 2.8741e-04 - next_comm_binary_accuracy: 1.0000 - next_comm_loss: 2.6627e-04 - next_gun_binary_accuracy: 1.0000 - next_gun_loss: 
2.1008e-05 Epoch 32/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - loss: 2.5357e-04 - next_comm_binary_accuracy: 1.0000 - next_comm_loss: 2.3478e-04 - next_gun_binary_accuracy: 1.0000 - next_gun_loss: 1.8660e-05 Epoch 33/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - loss: 2.2403e-04 - next_comm_binary_accuracy: 1.0000 - next_comm_loss: 2.0740e-04 - next_gun_binary_accuracy: 1.0000 - next_gun_loss: 1.6572e-05 Epoch 34/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - loss: 1.9817e-04 - next_comm_binary_accuracy: 1.0000 - next_comm_loss: 1.8336e-04 - next_gun_binary_accuracy: 1.0000 - next_gun_loss: 1.4753e-05 Epoch 35/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - loss: 1.7547e-04 - next_comm_binary_accuracy: 1.0000 - next_comm_loss: 1.6231e-04 - next_gun_binary_accuracy: 1.0000 - next_gun_loss: 1.3138e-05 Epoch 36/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - loss: 1.5554e-04 - next_comm_binary_accuracy: 1.0000 - next_comm_loss: 1.4380e-04 - next_gun_binary_accuracy: 1.0000 - next_gun_loss: 1.1717e-05 Epoch 37/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - loss: 1.3800e-04 - next_comm_binary_accuracy: 1.0000 - next_comm_loss: 1.2755e-04 - next_gun_binary_accuracy: 1.0000 - next_gun_loss: 1.0449e-05 Epoch 38/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - loss: 1.2256e-04 - next_comm_binary_accuracy: 1.0000 - next_comm_loss: 1.1320e-04 - next_gun_binary_accuracy: 1.0000 - next_gun_loss: 9.3314e-06 Epoch 39/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - loss: 1.0894e-04 - next_comm_binary_accuracy: 1.0000 - next_comm_loss: 1.0056e-04 - next_gun_binary_accuracy: 1.0000 - next_gun_loss: 8.3352e-06 Epoch 40/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - loss: 9.6906e-05 - next_comm_binary_accuracy: 1.0000 - next_comm_loss: 8.9495e-05 - next_gun_binary_accuracy: 1.0000 - next_gun_loss: 7.4442e-06 Epoch 41/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - loss: 8.6267e-05 - next_comm_binary_accuracy: 1.0000 - next_comm_loss: 7.9611e-05 - next_gun_binary_accuracy: 1.0000 - next_gun_loss: 
6.6554e-06 Epoch 42/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - loss: 7.6855e-05 - next_comm_binary_accuracy: 1.0000 - next_comm_loss: 7.0900e-05 - next_gun_binary_accuracy: 1.0000 - next_gun_loss: 5.9495e-06 Epoch 43/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - loss: 6.8504e-05 - next_comm_binary_accuracy: 1.0000 - next_comm_loss: 6.3180e-05 - next_gun_binary_accuracy: 1.0000 - next_gun_loss: 5.3213e-06 Epoch 44/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - loss: 6.1096e-05 - next_comm_binary_accuracy: 1.0000 - next_comm_loss: 5.6316e-05 - next_gun_binary_accuracy: 1.0000 - next_gun_loss: 4.7619e-06 Epoch 45/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - loss: 5.4512e-05 - next_comm_binary_accuracy: 1.0000 - next_comm_loss: 5.0223e-05 - next_gun_binary_accuracy: 1.0000 - next_gun_loss: 4.2631e-06 Trained Combine B layer for L=4. Training per-level layers for L=2... Epoch 1/10 196/196 ━━━━━━━━━━━━━━━━━━━━ 1s 1ms/step - binary_accuracy: 0.9653 - loss: 0.5493 Epoch 2/10 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 0.2791 Epoch 3/10 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 0.1181 Epoch 4/10 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 0.0506 Epoch 5/10 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 0.0259 Epoch 6/10 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 0.0154 Epoch 7/10 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 0.0101 Epoch 8/10 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 0.0070 Epoch 9/10 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 0.0051 Epoch 10/10 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 0.0039 Trained Measurement A layer for L=2. 
Epoch 1/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 1s 1ms/step - binary_accuracy: 0.9489 - loss: 0.5536 Epoch 2/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 0.2403 Epoch 3/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 0.0822 Epoch 4/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 0.0348 Epoch 5/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 0.0181 Epoch 6/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 0.0109 Epoch 7/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - binary_accuracy: 1.0000 - loss: 0.0072 Epoch 8/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 0.0051 Epoch 9/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 0.0037 Epoch 10/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 0.0028 Epoch 11/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 0.0022 Epoch 12/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 0.0017 Epoch 13/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - binary_accuracy: 1.0000 - loss: 0.0014 Epoch 14/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 0.0011 Epoch 15/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 9.3498e-04 Epoch 16/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 7.7856e-04 Epoch 17/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 6.5351e-04 Epoch 18/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 5.5235e-04 Epoch 19/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 4.6964e-04 Epoch 20/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 4.0140e-04 Epoch 21/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 3.4465e-04 Epoch 
22/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 2.9712e-04 Epoch 23/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 2.5706e-04 Epoch 24/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 2.2311e-04 Epoch 25/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - binary_accuracy: 1.0000 - loss: 1.9419e-04 Epoch 26/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 1.6945e-04 Epoch 27/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 1.4821e-04 Epoch 28/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 1.2991e-04 Epoch 29/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 1.1408e-04 Epoch 30/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 1.0036e-04 Epoch 31/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 8.8425e-05 Epoch 32/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 7.8018e-05 Epoch 33/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 6.8922e-05 Epoch 34/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 6.0966e-05 Epoch 35/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 5.3988e-05 Epoch 36/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 4.7854e-05 Epoch 37/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 4.2454e-05 Epoch 38/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 3.7692e-05 Epoch 39/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 3.3486e-05 Epoch 40/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 2.9771e-05 Epoch 41/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 2.6485e-05 Epoch 42/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 
1ms/step - binary_accuracy: 1.0000 - loss: 2.3575e-05 Epoch 43/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 2.0996e-05 Epoch 44/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 1.8709e-05 Epoch 45/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 1.6678e-05 Trained Combine A layer for L=2. Epoch 1/10 196/196 ━━━━━━━━━━━━━━━━━━━━ 1s 1ms/step - binary_accuracy: 0.9977 - loss: 0.3233 Epoch 2/10 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 0.0448 Epoch 3/10 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - binary_accuracy: 1.0000 - loss: 0.0128 Epoch 4/10 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - binary_accuracy: 1.0000 - loss: 0.0059 Epoch 5/10 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 0.0034 Epoch 6/10 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 0.0022 Epoch 7/10 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 0.0015 Epoch 8/10 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 0.0011 Epoch 9/10 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - binary_accuracy: 1.0000 - loss: 8.0928e-04 Epoch 10/10 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - binary_accuracy: 1.0000 - loss: 6.2634e-04 Trained Measurement B layer for L=2. 
Epoch 1/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 1s 1ms/step - loss: 0.9440 - next_comm_binary_accuracy: 0.7912 - next_comm_loss: 0.6418 - next_gun_binary_accuracy: 0.9157 - next_gun_loss: 0.3011 Epoch 2/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - loss: 0.4628 - next_comm_binary_accuracy: 1.0000 - next_comm_loss: 0.4316 - next_gun_binary_accuracy: 1.0000 - next_gun_loss: 0.0307 Epoch 3/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - loss: 0.2023 - next_comm_binary_accuracy: 1.0000 - next_comm_loss: 0.1938 - next_gun_binary_accuracy: 1.0000 - next_gun_loss: 0.0082 Epoch 4/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - loss: 0.0809 - next_comm_binary_accuracy: 1.0000 - next_comm_loss: 0.0772 - next_gun_binary_accuracy: 1.0000 - next_gun_loss: 0.0036 Epoch 5/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - loss: 0.0403 - next_comm_binary_accuracy: 1.0000 - next_comm_loss: 0.0382 - next_gun_binary_accuracy: 1.0000 - next_gun_loss: 0.0020 Epoch 6/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - loss: 0.0235 - next_comm_binary_accuracy: 1.0000 - next_comm_loss: 0.0222 - next_gun_binary_accuracy: 1.0000 - next_gun_loss: 0.0013 Epoch 7/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - loss: 0.0151 - next_comm_binary_accuracy: 1.0000 - next_comm_loss: 0.0143 - next_gun_binary_accuracy: 1.0000 - next_gun_loss: 8.6287e-04 Epoch 8/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - loss: 0.0104 - next_comm_binary_accuracy: 1.0000 - next_comm_loss: 0.0098 - next_gun_binary_accuracy: 1.0000 - next_gun_loss: 6.2152e-04 Epoch 9/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - loss: 0.0076 - next_comm_binary_accuracy: 1.0000 - next_comm_loss: 0.0071 - next_gun_binary_accuracy: 1.0000 - next_gun_loss: 4.6568e-04 Epoch 10/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - loss: 0.0057 - next_comm_binary_accuracy: 1.0000 - next_comm_loss: 0.0053 - next_gun_binary_accuracy: 1.0000 - next_gun_loss: 3.5934e-04 Epoch 11/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - loss: 0.0044 - next_comm_binary_accuracy: 1.0000 - 
next_comm_loss: 0.0041 - next_gun_binary_accuracy: 1.0000 - next_gun_loss: 2.8329e-04 Epoch 12/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - loss: 0.0035 - next_comm_binary_accuracy: 1.0000 - next_comm_loss: 0.0032 - next_gun_binary_accuracy: 1.0000 - next_gun_loss: 2.2768e-04 Epoch 13/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - loss: 0.0028 - next_comm_binary_accuracy: 1.0000 - next_comm_loss: 0.0026 - next_gun_binary_accuracy: 1.0000 - next_gun_loss: 1.8542e-04 Epoch 14/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - loss: 0.0023 - next_comm_binary_accuracy: 1.0000 - next_comm_loss: 0.0021 - next_gun_binary_accuracy: 1.0000 - next_gun_loss: 1.5318e-04 Epoch 15/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - loss: 0.0019 - next_comm_binary_accuracy: 1.0000 - next_comm_loss: 0.0017 - next_gun_binary_accuracy: 1.0000 - next_gun_loss: 1.2762e-04 Epoch 16/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - loss: 0.0015 - next_comm_binary_accuracy: 1.0000 - next_comm_loss: 0.0014 - next_gun_binary_accuracy: 1.0000 - next_gun_loss: 1.0731e-04 Epoch 17/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - loss: 0.0013 - next_comm_binary_accuracy: 1.0000 - next_comm_loss: 0.0012 - next_gun_binary_accuracy: 1.0000 - next_gun_loss: 9.0882e-05 Epoch 18/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - loss: 0.0011 - next_comm_binary_accuracy: 1.0000 - next_comm_loss: 0.0010 - next_gun_binary_accuracy: 1.0000 - next_gun_loss: 7.7448e-05 Epoch 19/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - loss: 9.3238e-04 - next_comm_binary_accuracy: 1.0000 - next_comm_loss: 8.6572e-04 - next_gun_binary_accuracy: 1.0000 - next_gun_loss: 6.6413e-05 Epoch 20/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - loss: 7.9724e-04 - next_comm_binary_accuracy: 1.0000 - next_comm_loss: 7.3994e-04 - next_gun_binary_accuracy: 1.0000 - next_gun_loss: 5.7149e-05 Epoch 21/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - loss: 6.8492e-04 - next_comm_binary_accuracy: 1.0000 - next_comm_loss: 6.3532e-04 - next_gun_binary_accuracy: 
1.0000 - next_gun_loss: 4.9405e-05 Epoch 22/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - loss: 5.9088e-04 - next_comm_binary_accuracy: 1.0000 - next_comm_loss: 5.4792e-04 - next_gun_binary_accuracy: 1.0000 - next_gun_loss: 4.2864e-05 Epoch 23/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - loss: 5.1162e-04 - next_comm_binary_accuracy: 1.0000 - next_comm_loss: 4.7424e-04 - next_gun_binary_accuracy: 1.0000 - next_gun_loss: 3.7281e-05 Epoch 24/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - loss: 4.4445e-04 - next_comm_binary_accuracy: 1.0000 - next_comm_loss: 4.1174e-04 - next_gun_binary_accuracy: 1.0000 - next_gun_loss: 3.2588e-05 Epoch 25/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - loss: 3.8722e-04 - next_comm_binary_accuracy: 1.0000 - next_comm_loss: 3.5862e-04 - next_gun_binary_accuracy: 1.0000 - next_gun_loss: 2.8514e-05 Epoch 26/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - loss: 3.3823e-04 - next_comm_binary_accuracy: 1.0000 - next_comm_loss: 3.1313e-04 - next_gun_binary_accuracy: 1.0000 - next_gun_loss: 2.5012e-05 Epoch 27/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - loss: 2.9612e-04 - next_comm_binary_accuracy: 1.0000 - next_comm_loss: 2.7403e-04 - next_gun_binary_accuracy: 1.0000 - next_gun_loss: 2.1997e-05 Epoch 28/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - loss: 2.5980e-04 - next_comm_binary_accuracy: 1.0000 - next_comm_loss: 2.4040e-04 - next_gun_binary_accuracy: 1.0000 - next_gun_loss: 1.9367e-05 Epoch 29/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - loss: 2.2734e-04 - next_comm_binary_accuracy: 1.0000 - next_comm_loss: 2.1030e-04 - next_gun_binary_accuracy: 1.0000 - next_gun_loss: 1.6978e-05 Epoch 30/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - loss: 1.9977e-04 - next_comm_binary_accuracy: 1.0000 - next_comm_loss: 1.8480e-04 - next_gun_binary_accuracy: 1.0000 - next_gun_loss: 1.4951e-05 Epoch 31/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - loss: 1.7621e-04 - next_comm_binary_accuracy: 1.0000 - next_comm_loss: 1.6297e-04 - next_gun_binary_accuracy: 
1.0000 - next_gun_loss: 1.3222e-05 Epoch 32/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - loss: 1.5564e-04 - next_comm_binary_accuracy: 1.0000 - next_comm_loss: 1.4390e-04 - next_gun_binary_accuracy: 1.0000 - next_gun_loss: 1.1724e-05 Epoch 33/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - loss: 1.3765e-04 - next_comm_binary_accuracy: 1.0000 - next_comm_loss: 1.2723e-04 - next_gun_binary_accuracy: 1.0000 - next_gun_loss: 1.0389e-05 Epoch 34/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - loss: 1.2187e-04 - next_comm_binary_accuracy: 1.0000 - next_comm_loss: 1.1261e-04 - next_gun_binary_accuracy: 1.0000 - next_gun_loss: 9.2384e-06 Epoch 35/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - loss: 1.0801e-04 - next_comm_binary_accuracy: 1.0000 - next_comm_loss: 9.9791e-05 - next_gun_binary_accuracy: 1.0000 - next_gun_loss: 8.1885e-06 Epoch 36/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - loss: 9.5815e-05 - next_comm_binary_accuracy: 1.0000 - next_comm_loss: 8.8499e-05 - next_gun_binary_accuracy: 1.0000 - next_gun_loss: 7.2878e-06 Epoch 37/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - loss: 8.5072e-05 - next_comm_binary_accuracy: 1.0000 - next_comm_loss: 7.8569e-05 - next_gun_binary_accuracy: 1.0000 - next_gun_loss: 6.4814e-06 Epoch 38/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - loss: 7.5593e-05 - next_comm_binary_accuracy: 1.0000 - next_comm_loss: 6.9809e-05 - next_gun_binary_accuracy: 1.0000 - next_gun_loss: 5.7773e-06 Epoch 39/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - loss: 6.7216e-05 - next_comm_binary_accuracy: 1.0000 - next_comm_loss: 6.2044e-05 - next_gun_binary_accuracy: 1.0000 - next_gun_loss: 5.1504e-06 Epoch 40/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - loss: 5.9810e-05 - next_comm_binary_accuracy: 1.0000 - next_comm_loss: 5.5195e-05 - next_gun_binary_accuracy: 1.0000 - next_gun_loss: 4.5904e-06 Epoch 41/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - loss: 5.3254e-05 - next_comm_binary_accuracy: 1.0000 - next_comm_loss: 4.9160e-05 - next_gun_binary_accuracy: 
1.0000 - next_gun_loss: 4.0891e-06 Epoch 42/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - loss: 4.7442e-05 - next_comm_binary_accuracy: 1.0000 - next_comm_loss: 4.3786e-05 - next_gun_binary_accuracy: 1.0000 - next_gun_loss: 3.6503e-06 Epoch 43/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - loss: 4.2286e-05 - next_comm_binary_accuracy: 1.0000 - next_comm_loss: 3.9021e-05 - next_gun_binary_accuracy: 1.0000 - next_gun_loss: 3.2587e-06 Epoch 44/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - loss: 3.7710e-05 - next_comm_binary_accuracy: 1.0000 - next_comm_loss: 3.4796e-05 - next_gun_binary_accuracy: 1.0000 - next_gun_loss: 2.9083e-06 Epoch 45/45 196/196 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - loss: 3.3645e-05 - next_comm_binary_accuracy: 1.0000 - next_comm_loss: 3.1044e-05 - next_gun_binary_accuracy: 1.0000 - next_gun_loss: 2.5997e-06 Trained Combine B layer for L=2. Trained per-level layers for A and B.
Assembling Model A and Model B (per-level layers + weight transfer)¶
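Before assembling, recall how the per-level input lengths are derived. A small standalone sketch (assuming, as in this tutorial, that the flattened field length `N = field_size**2` is a power of two; `pyramid_levels` is an illustrative helper, not a QSeaBattle API):

```python
import math

def pyramid_levels(field_size: int) -> list:
    """Per-level input lengths for a pyramid over a flattened field.

    For N = field_size**2 there are K = log2(N) levels with
    lengths N, N/2, N/4, ..., 2 (N must be a power of two).
    """
    n = field_size ** 2
    k = int(math.log2(n))
    assert 2 ** k == n, "flattened field length must be a power of two"
    return [n >> i for i in range(k)]  # [N, N/2, ..., 2]

# Example: a 4x4 field flattens to N=16, giving K=4 levels.
print(pyramid_levels(4))  # [16, 8, 4, 2]
```

Each measurement layer then halves its input, which is why the deepest active vector has length 2 before the final combine.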
# To mirror the linear tutorial workflow, we instantiate *fresh* pyramid models
# with fresh layer objects of the same architecture, then transfer weights.
# Create new (destination) layer lists
meas_layers_a_dst = [PyrMeasurementLayerA(hidden_units=64) for _ in levels]
comb_layers_a_dst = [PyrCombineLayerA(hidden_units=64) for _ in levels]
meas_layers_b_dst = [PyrMeasurementLayerB(hidden_units=64) for _ in levels]
comb_layers_b_dst = [PyrCombineLayerB(hidden_units=64) for _ in levels]
# Build destination layers (so they have weights allocated)
for L, lyr in zip(levels, meas_layers_a_dst):
_ = lyr(tf.zeros((1, L), dtype=tf.float32))
for L, lyr in zip(levels, comb_layers_a_dst):
_ = lyr(tf.zeros((1, L), dtype=tf.float32), tf.zeros((1, L//2), dtype=tf.float32))
for L, lyr in zip(levels, meas_layers_b_dst):
_ = lyr(tf.zeros((1, L), dtype=tf.float32))
for L, lyr in zip(levels, comb_layers_b_dst):
_ = lyr(tf.zeros((1, L), dtype=tf.float32), tf.zeros((1, L//2), dtype=tf.float32), tf.zeros((1, 1), dtype=tf.float32))
# Build trained layers too (in case they haven't been built yet)
for L, lyr in zip(levels, meas_layers_a_trained):
_ = lyr(tf.zeros((1, L), dtype=tf.float32))
for L, lyr in zip(levels, comb_layers_a_trained):
_ = lyr(tf.zeros((1, L), dtype=tf.float32), tf.zeros((1, L//2), dtype=tf.float32))
for L, lyr in zip(levels, meas_layers_b_trained):
_ = lyr(tf.zeros((1, L), dtype=tf.float32))
for L, lyr in zip(levels, comb_layers_b_trained):
_ = lyr(tf.zeros((1, L), dtype=tf.float32), tf.zeros((1, L//2), dtype=tf.float32), tf.zeros((1, 1), dtype=tf.float32))
# Create pyramid models (use sr_mode expected for deterministic sanity checks)
model_a = PyrTrainableAssistedModelA(layout, p_high=P_HIGH, sr_mode="expected", measure_layers=meas_layers_a_dst, combine_layers=comb_layers_a_dst)
model_b = PyrTrainableAssistedModelB(layout, p_high=P_HIGH, sr_mode="expected", measure_layers=meas_layers_b_dst, combine_layers=comb_layers_b_dst)
# Transfer weights from trained layers into the models
transfer_pyr_model_a_layer_weights(model_a, meas_layers_a_trained, comb_layers_a_trained)
transfer_pyr_model_b_layer_weights(model_b, meas_layers_b_trained, comb_layers_b_trained)
print("Assembled pyramid Model A and Model B with transferred per-level weights.")
Assembled pyramid Model A and Model B with transferred per-level weights.
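The `transfer_pyr_model_a_layer_weights` / `transfer_pyr_model_b_layer_weights` helpers come from the QSeaBattle codebase; their exact implementation is not shown here. As an illustration, here is a minimal sketch of the per-level copy pattern they are assumed to follow, using stub objects that mimic the standard Keras `get_weights`/`set_weights` contract:

```python
import numpy as np

class StubLayer:
    """Stand-in for a built Keras layer: exposes get_weights/set_weights."""
    def __init__(self, weights):
        self._w = [np.asarray(w, dtype=np.float32) for w in weights]

    def get_weights(self):
        return [w.copy() for w in self._w]

    def set_weights(self, weights):
        assert len(weights) == len(self._w), "weight-list length mismatch"
        self._w = [np.asarray(w, dtype=np.float32) for w in weights]

def transfer_per_level(src_layers, dst_layers):
    """Copy weights level by level; both lists must be built and aligned."""
    assert len(src_layers) == len(dst_layers), "level count mismatch"
    for src, dst in zip(src_layers, dst_layers):
        dst.set_weights(src.get_weights())

# Two-level toy example: destination starts at zero, receives source weights.
src = [StubLayer([np.ones((2, 2))]), StubLayer([np.full((2,), 3.0)])]
dst = [StubLayer([np.zeros((2, 2))]), StubLayer([np.zeros(2)])]
transfer_per_level(src, dst)
print(dst[1].get_weights()[0])  # [3. 3.]
```

This is why both the trained and destination layers must be built (forward-passed once) before the transfer: `set_weights` requires the weight arrays to already exist with matching shapes.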
# --- Weight-transfer sanity check ---
import numpy as np
def _assert_layer_weights_equal(src_layer, dst_layer, name: str):
src_w = src_layer.get_weights()
dst_w = dst_layer.get_weights()
assert len(src_w) == len(dst_w), f"{name}: number of weight arrays differs ({len(src_w)} vs {len(dst_w)})"
for i, (a, b) in enumerate(zip(src_w, dst_w)):
assert a.shape == b.shape, f"{name}[{i}]: shape mismatch {a.shape} vs {b.shape}"
np.testing.assert_allclose(a, b, atol=0.0, rtol=0.0, err_msg=f"{name}[{i}]: weights differ")
def assert_pyr_transfer_ok(model_a, model_b,
meas_a_trained, comb_a_trained,
meas_b_trained, comb_b_trained):
def _get_list(obj, candidates):
for c in candidates:
if hasattr(obj, c):
return getattr(obj, c)
raise AttributeError(f"Could not find any of {candidates} on {type(obj).__name__}")
a_meas = _get_list(model_a, ["measure_layers", "meas_layers", "measurement_layers"])
a_comb = _get_list(model_a, ["combine_layers", "comb_layers"])
b_meas = _get_list(model_b, ["measure_layers", "meas_layers", "measurement_layers"])
b_comb = _get_list(model_b, ["combine_layers", "comb_layers"])
# Length checks
assert len(a_meas) == len(meas_a_trained), f"Model A meas layers length {len(a_meas)} != trained {len(meas_a_trained)}"
assert len(a_comb) == len(comb_a_trained), f"Model A comb layers length {len(a_comb)} != trained {len(comb_a_trained)}"
assert len(b_meas) == len(meas_b_trained), f"Model B meas layers length {len(b_meas)} != trained {len(meas_b_trained)}"
assert len(b_comb) == len(comb_b_trained), f"Model B comb layers length {len(b_comb)} != trained {len(comb_b_trained)}"
    # Determine base length L0 (use the first configured level).
    L0 = int(levels[0])
    # Run one forward pass through each model so all weights exist.
    dummy_field = np.zeros((1, L0), dtype=np.float32)
    _, dummy_meas, dummy_out = model_a.compute_with_internal(dummy_field)
    dummy_gun = np.zeros((1, L0), dtype=np.float32)
    dummy_comm = np.zeros((1, 1), dtype=np.float32)
    _ = model_b([dummy_gun, dummy_comm, dummy_meas, dummy_out])
# Per-level comparisons
for i, (src, dst) in enumerate(zip(meas_a_trained, a_meas)):
_assert_layer_weights_equal(src, dst, f"A.measure[{i}]")
for i, (src, dst) in enumerate(zip(comb_a_trained, a_comb)):
_assert_layer_weights_equal(src, dst, f"A.combine[{i}]")
for i, (src, dst) in enumerate(zip(meas_b_trained, b_meas)):
_assert_layer_weights_equal(src, dst, f"B.measure[{i}]")
for i, (src, dst) in enumerate(zip(comb_b_trained, b_comb)):
_assert_layer_weights_equal(src, dst, f"B.combine[{i}]")
print("✅ Weight transfer check passed: all per-level layer weights match exactly.")
assert_pyr_transfer_ok(
model_a, model_b,
meas_layers_a_trained, comb_layers_a_trained,
meas_layers_b_trained, comb_layers_b_trained
)
✅ Weight transfer check passed: all per-level layer weights match exactly.
Verification: neural vs deterministic pyramid primitives (quick contract check)¶
# We compare the *structure* and run a single forward pass to ensure no shape/contract errors.
# For a deterministic comparison, we use sr_mode="expected" for both.
# Deterministic baseline models (Step-1 primitives at each level)
baseline_a = PyrTrainableAssistedModelA(layout, p_high=P_HIGH, sr_mode="expected")
baseline_b = PyrTrainableAssistedModelB(layout, p_high=P_HIGH, sr_mode="expected")
# Random test batch
B = 64
field = tf.constant(np.random.randint(0, 2, size=(B, N)).astype(np.float32))
gun = tf.constant(np.random.randint(0, 2, size=(B, N)).astype(np.float32))
comm0 = tf.zeros((B, 1), dtype=tf.float32)
# A forward
logits_a, meas_a, out_a = model_a.compute_with_internal(field)
logits_a0, meas_a0, out_a0 = baseline_a.compute_with_internal(field)
print("A: logits shape:", logits_a.shape, "K:", len(meas_a), "last length:", meas_a[-1].shape[-1])
# B forward (consumes A's lists)
shoot_logit = model_b([gun, comm0, meas_a, out_a])
shoot_logit0 = baseline_b([gun, comm0, meas_a0, out_a0])
print("B: shoot_logit shape:", shoot_logit.shape)
A: logits shape: (64, 1) K: 4 last length: 1 B: shoot_logit shape: (64, 1)
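The check above is structural (shapes and list lengths). Since both models run with `sr_mode="expected"`, their forward passes are deterministic, so the neural and baseline outputs could also be compared numerically. A standalone numpy sketch of such a tolerance check (the placeholder arrays stand in for the two models' logits; `assert_close` is an illustrative helper, not part of QSeaBattle):

```python
import numpy as np

def assert_close(neural_out, baseline_out, atol=1e-3, name="output"):
    """Elementwise agreement check between two deterministic forward passes."""
    neural_out = np.asarray(neural_out, dtype=np.float64)
    baseline_out = np.asarray(baseline_out, dtype=np.float64)
    assert neural_out.shape == baseline_out.shape, f"{name}: shape mismatch"
    max_diff = float(np.max(np.abs(neural_out - baseline_out)))
    assert max_diff <= atol, f"{name}: max |diff| = {max_diff:.3e} > {atol}"
    return max_diff

# Placeholder arrays standing in for logits from model_a / baseline_a.
a = np.array([[0.01], [0.99]])
b = np.array([[0.0101], [0.9899]])
print(assert_close(a, b, atol=1e-2))
```

Note that since the trained layers only imitate the teacher to finite precision, such a check needs a nonzero tolerance, unlike the exact weight-transfer check above.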
# ============================================================
# DIAGNOSTIC: dump get_config() for layers inside model_a/model_b
# ============================================================
import json
import tensorflow as tf
def _pretty(obj):
try:
return json.dumps(obj, indent=2, sort_keys=True, default=str)
except Exception:
return str(obj)
def _safe_get_config(layer):
if not hasattr(layer, "get_config"):
return None, "NO_get_config"
try:
return layer.get_config(), None
except Exception as e:
return None, f"get_config_ERROR: {type(e).__name__}: {e}"
def _iter_sublayers(root):
"""Depth-first traversal over Keras sublayers, without duplicates."""
seen = set()
stack = [root]
while stack:
x = stack.pop()
if id(x) in seen:
continue
seen.add(id(x))
yield x
if hasattr(x, "layers"):
for y in reversed(list(getattr(x, "layers"))):
stack.append(y)
if hasattr(x, "submodules"):
for y in reversed(list(getattr(x, "submodules"))):
stack.append(y)
def dump_model(name, model):
print("\n" + "=" * 80)
print(f"{name}: {type(model).__name__}")
print("=" * 80)
# Try common layer-list attributes first (pyr models store per-level primitives here)
for attr in ["measure_layers", "meas_layers", "measurement_layers", "combine_layers", "comb_layers"]:
if hasattr(model, attr):
lst = getattr(model, attr)
print(f"\n[{name}.{attr}] len={len(lst)}")
for i, lyr in enumerate(lst):
cfg, err = _safe_get_config(lyr)
header = f" - {attr}[{i}] {lyr.__class__.__name__} name={lyr.name}"
if err:
print(header + f" --> {err}")
else:
print(header)
print(_pretty(cfg))
# Also scan all sublayers for PRAssistedLayer and anything else interesting
print(f"\n[{name}] scan sublayers for PRAssistedLayer (and print mode/length/p_high)")
found = 0
for lyr in _iter_sublayers(model):
if lyr.__class__.__name__ == "PRAssistedLayer":
found += 1
cfg, err = _safe_get_config(lyr)
if err:
print(f" - PRAssistedLayer name={lyr.name} --> {err}")
else:
                # Important keys per PRAssistedLayer.get_config()
keys = {k: cfg.get(k) for k in ["mode", "length", "p_high", "resource_index", "seed"]}
print(f" - PRAssistedLayer name={lyr.name}: {keys}")
if found == 0:
print(" (none found)")
Bootstrap tournament evaluation (gameplay uses sr_mode='sample')¶
# For gameplay evaluation, use sr_mode="sample"
model_a_eval = PyrTrainableAssistedModelA(layout, p_high=P_HIGH, sr_mode="sample",
measure_layers=meas_layers_a_dst, combine_layers=comb_layers_a_dst)
model_b_eval = PyrTrainableAssistedModelB(layout, p_high=P_HIGH, sr_mode="sample",
measure_layers=meas_layers_b_dst, combine_layers=comb_layers_b_dst)
# Transfer weights from trained layers into the models
transfer_pyr_model_a_layer_weights(model_a_eval, meas_layers_a_trained, comb_layers_a_trained)
transfer_pyr_model_b_layer_weights(model_b_eval, meas_layers_b_trained, comb_layers_b_trained)
# (The dst layer lists already carry the transferred weights; the explicit transfer above keeps this cell self-contained.)
players = TrainableAssistedPlayers(layout, model_a=model_a_eval, model_b=model_b_eval)
dump_model('A', players.model_a)
dump_model('B', players.model_b)
pa, pb = players.players()
print("PlayerA.explore =", pa.explore)
print("PlayerB.explore =", pb.explore)
layout_eval = GameLayout(
field_size=FIELD_SIZE,
comms_size=COMMS_SIZE,
enemy_probability=0.5,
channel_noise=0.0,
number_of_games_in_tournament=GAMES_IN_EVAL_TOURNAMENT,
)
env = GameEnv(layout_eval)
t = Tournament(env, players, layout_eval)
log = t.tournament()
mean_reward, std_err = log.outcome()
print(f"Pyramid bootstrap tournament over {layout_eval.number_of_games_in_tournament} games: {mean_reward:.4f} ± {std_err:.4f}")
================================================================================
A: PyrTrainableAssistedModelA
================================================================================
[A.measure_layers] len=4
- measure_layers[0] PyrMeasurementLayerA name=pyr_measurement_layer_a_4
{
"dtype": {
"class_name": "DTypePolicy",
"config": {
"name": "float32"
},
"module": "keras",
"registered_name": null
},
"hidden_units": 64,
"name": "pyr_measurement_layer_a_4",
"trainable": true
}
 - measure_layers[1] PyrMeasurementLayerA name=pyr_measurement_layer_a_5
 - measure_layers[2] PyrMeasurementLayerA name=pyr_measurement_layer_a_6
 - measure_layers[3] PyrMeasurementLayerA name=pyr_measurement_layer_a_7
   [configs identical to measure_layers[0] except "name"]
[A.combine_layers] len=4
- combine_layers[0] PyrCombineLayerA name=pyr_combine_layer_a_4
{
"dtype": {
"class_name": "DTypePolicy",
"config": {
"name": "float32"
},
"module": "keras",
"registered_name": null
},
"hidden_units": 64,
"name": "pyr_combine_layer_a_4",
"trainable": true
}
 - combine_layers[1] PyrCombineLayerA name=pyr_combine_layer_a_5
 - combine_layers[2] PyrCombineLayerA name=pyr_combine_layer_a_6
 - combine_layers[3] PyrCombineLayerA name=pyr_combine_layer_a_7
   [configs identical to combine_layers[0] except "name"]
[A] scan sublayers for PRAssistedLayer (and print mode/length/p_high)
- PRAssistedLayer name=pr_assisted_layer_16: {'mode': 'sample', 'length': 8, 'p_high': 1.0, 'resource_index': 0, 'seed': None}
- PRAssistedLayer name=pr_assisted_layer_17: {'mode': 'sample', 'length': 4, 'p_high': 1.0, 'resource_index': 1, 'seed': None}
- PRAssistedLayer name=pr_assisted_layer_18: {'mode': 'sample', 'length': 2, 'p_high': 1.0, 'resource_index': 2, 'seed': None}
- PRAssistedLayer name=pr_assisted_layer_19: {'mode': 'sample', 'length': 1, 'p_high': 1.0, 'resource_index': 3, 'seed': None}
================================================================================
B: PyrTrainableAssistedModelB
================================================================================
[B.measure_layers] len=4
- measure_layers[0] PyrMeasurementLayerB name=pyr_measurement_layer_b_4
{
"dtype": {
"class_name": "DTypePolicy",
"config": {
"name": "float32"
},
"module": "keras",
"registered_name": null
},
"hidden_units": 64,
"name": "pyr_measurement_layer_b_4",
"trainable": true
}
 - measure_layers[1] PyrMeasurementLayerB name=pyr_measurement_layer_b_5
 - measure_layers[2] PyrMeasurementLayerB name=pyr_measurement_layer_b_6
 - measure_layers[3] PyrMeasurementLayerB name=pyr_measurement_layer_b_7
   [configs identical to measure_layers[0] except "name"]
[B.combine_layers] len=4
- combine_layers[0] PyrCombineLayerB name=pyr_combine_layer_b_4
{
"dtype": {
"class_name": "DTypePolicy",
"config": {
"name": "float32"
},
"module": "keras",
"registered_name": null
},
"hidden_units": 64,
"name": "pyr_combine_layer_b_4",
"trainable": true
}
- combine_layers[1] PyrCombineLayerB name=pyr_combine_layer_b_5
{
"dtype": {
"class_name": "DTypePolicy",
"config": {
"name": "float32"
},
"module": "keras",
"registered_name": null
},
"hidden_units": 64,
"name": "pyr_combine_layer_b_5",
"trainable": true
}
- combine_layers[2] PyrCombineLayerB name=pyr_combine_layer_b_6
{
"dtype": {
"class_name": "DTypePolicy",
"config": {
"name": "float32"
},
"module": "keras",
"registered_name": null
},
"hidden_units": 64,
"name": "pyr_combine_layer_b_6",
"trainable": true
}
- combine_layers[3] PyrCombineLayerB name=pyr_combine_layer_b_7
{
"dtype": {
"class_name": "DTypePolicy",
"config": {
"name": "float32"
},
"module": "keras",
"registered_name": null
},
"hidden_units": 64,
"name": "pyr_combine_layer_b_7",
"trainable": true
}
[B] scan sublayers for PRAssistedLayer (and print mode/length/p_high)
- PRAssistedLayer name=pr_assisted_layer_20: {'mode': 'sample', 'length': 8, 'p_high': 1.0, 'resource_index': 0, 'seed': None}
- PRAssistedLayer name=pr_assisted_layer_21: {'mode': 'sample', 'length': 4, 'p_high': 1.0, 'resource_index': 1, 'seed': None}
- PRAssistedLayer name=pr_assisted_layer_22: {'mode': 'sample', 'length': 2, 'p_high': 1.0, 'resource_index': 2, 'seed': None}
- PRAssistedLayer name=pr_assisted_layer_23: {'mode': 'sample', 'length': 1, 'p_high': 1.0, 'resource_index': 3, 'seed': None}
PlayerA.explore = False
PlayerB.explore = False
Pyramid bootstrap tournament over 2000: 0.9840 ± 0.0028
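The ± above is consistent with the binomial standard error of the empirical win rate over n games. A quick sanity check (assuming that is the statistic the tournament reports, which is not confirmed by the repo):

```python
import math

def win_rate_se(p: float, n: int) -> float:
    """Binomial standard error of an empirical win rate p over n independent games."""
    return math.sqrt(p * (1.0 - p) / n)

print(round(win_rate_se(0.984, 2000), 4))  # -> 0.0028, matching the report above
```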
dump_model('A', players.model_a)
dump_model('B', players.model_b)
# # Run it
# for nm in ["model_a", "model_b", "baseline_a", "baseline_b"]:
# if nm in globals():
# dump_model(nm, globals()[nm])
# else:
# print(f"{nm}: not found in globals()")
================================================================================
A: PyrTrainableAssistedModelA
================================================================================
[A.measure_layers] len=4
- measure_layers[0] PyrMeasurementLayerA name=pyr_measurement_layer_a_4
{
"dtype": {
"class_name": "DTypePolicy",
"config": {
"name": "float32"
},
"module": "keras",
"registered_name": null
},
"hidden_units": 64,
"name": "pyr_measurement_layer_a_4",
"trainable": true
}
- measure_layers[1] PyrMeasurementLayerA name=pyr_measurement_layer_a_5
{
"dtype": {
"class_name": "DTypePolicy",
"config": {
"name": "float32"
},
"module": "keras",
"registered_name": null
},
"hidden_units": 64,
"name": "pyr_measurement_layer_a_5",
"trainable": true
}
- measure_layers[2] PyrMeasurementLayerA name=pyr_measurement_layer_a_6
{
"dtype": {
"class_name": "DTypePolicy",
"config": {
"name": "float32"
},
"module": "keras",
"registered_name": null
},
"hidden_units": 64,
"name": "pyr_measurement_layer_a_6",
"trainable": true
}
- measure_layers[3] PyrMeasurementLayerA name=pyr_measurement_layer_a_7
{
"dtype": {
"class_name": "DTypePolicy",
"config": {
"name": "float32"
},
"module": "keras",
"registered_name": null
},
"hidden_units": 64,
"name": "pyr_measurement_layer_a_7",
"trainable": true
}
[A.combine_layers] len=4
- combine_layers[0] PyrCombineLayerA name=pyr_combine_layer_a_4
{
"dtype": {
"class_name": "DTypePolicy",
"config": {
"name": "float32"
},
"module": "keras",
"registered_name": null
},
"hidden_units": 64,
"name": "pyr_combine_layer_a_4",
"trainable": true
}
- combine_layers[1] PyrCombineLayerA name=pyr_combine_layer_a_5
{
"dtype": {
"class_name": "DTypePolicy",
"config": {
"name": "float32"
},
"module": "keras",
"registered_name": null
},
"hidden_units": 64,
"name": "pyr_combine_layer_a_5",
"trainable": true
}
- combine_layers[2] PyrCombineLayerA name=pyr_combine_layer_a_6
{
"dtype": {
"class_name": "DTypePolicy",
"config": {
"name": "float32"
},
"module": "keras",
"registered_name": null
},
"hidden_units": 64,
"name": "pyr_combine_layer_a_6",
"trainable": true
}
- combine_layers[3] PyrCombineLayerA name=pyr_combine_layer_a_7
{
"dtype": {
"class_name": "DTypePolicy",
"config": {
"name": "float32"
},
"module": "keras",
"registered_name": null
},
"hidden_units": 64,
"name": "pyr_combine_layer_a_7",
"trainable": true
}
[A] scan sublayers for PRAssistedLayer (and print mode/length/p_high)
- PRAssistedLayer name=pr_assisted_layer_16: {'mode': 'sample', 'length': 8, 'p_high': 1.0, 'resource_index': 0, 'seed': None}
- PRAssistedLayer name=pr_assisted_layer_17: {'mode': 'sample', 'length': 4, 'p_high': 1.0, 'resource_index': 1, 'seed': None}
- PRAssistedLayer name=pr_assisted_layer_18: {'mode': 'sample', 'length': 2, 'p_high': 1.0, 'resource_index': 2, 'seed': None}
- PRAssistedLayer name=pr_assisted_layer_19: {'mode': 'sample', 'length': 1, 'p_high': 1.0, 'resource_index': 3, 'seed': None}
================================================================================
B: PyrTrainableAssistedModelB
================================================================================
[B.measure_layers] len=4
- measure_layers[0] PyrMeasurementLayerB name=pyr_measurement_layer_b_4
{
"dtype": {
"class_name": "DTypePolicy",
"config": {
"name": "float32"
},
"module": "keras",
"registered_name": null
},
"hidden_units": 64,
"name": "pyr_measurement_layer_b_4",
"trainable": true
}
- measure_layers[1] PyrMeasurementLayerB name=pyr_measurement_layer_b_5
{
"dtype": {
"class_name": "DTypePolicy",
"config": {
"name": "float32"
},
"module": "keras",
"registered_name": null
},
"hidden_units": 64,
"name": "pyr_measurement_layer_b_5",
"trainable": true
}
- measure_layers[2] PyrMeasurementLayerB name=pyr_measurement_layer_b_6
{
"dtype": {
"class_name": "DTypePolicy",
"config": {
"name": "float32"
},
"module": "keras",
"registered_name": null
},
"hidden_units": 64,
"name": "pyr_measurement_layer_b_6",
"trainable": true
}
- measure_layers[3] PyrMeasurementLayerB name=pyr_measurement_layer_b_7
{
"dtype": {
"class_name": "DTypePolicy",
"config": {
"name": "float32"
},
"module": "keras",
"registered_name": null
},
"hidden_units": 64,
"name": "pyr_measurement_layer_b_7",
"trainable": true
}
[B.combine_layers] len=4
- combine_layers[0] PyrCombineLayerB name=pyr_combine_layer_b_4
{
"dtype": {
"class_name": "DTypePolicy",
"config": {
"name": "float32"
},
"module": "keras",
"registered_name": null
},
"hidden_units": 64,
"name": "pyr_combine_layer_b_4",
"trainable": true
}
- combine_layers[1] PyrCombineLayerB name=pyr_combine_layer_b_5
{
"dtype": {
"class_name": "DTypePolicy",
"config": {
"name": "float32"
},
"module": "keras",
"registered_name": null
},
"hidden_units": 64,
"name": "pyr_combine_layer_b_5",
"trainable": true
}
- combine_layers[2] PyrCombineLayerB name=pyr_combine_layer_b_6
{
"dtype": {
"class_name": "DTypePolicy",
"config": {
"name": "float32"
},
"module": "keras",
"registered_name": null
},
"hidden_units": 64,
"name": "pyr_combine_layer_b_6",
"trainable": true
}
- combine_layers[3] PyrCombineLayerB name=pyr_combine_layer_b_7
{
"dtype": {
"class_name": "DTypePolicy",
"config": {
"name": "float32"
},
"module": "keras",
"registered_name": null
},
"hidden_units": 64,
"name": "pyr_combine_layer_b_7",
"trainable": true
}
[B] scan sublayers for PRAssistedLayer (and print mode/length/p_high)
- PRAssistedLayer name=pr_assisted_layer_20: {'mode': 'sample', 'length': 8, 'p_high': 1.0, 'resource_index': 0, 'seed': None}
- PRAssistedLayer name=pr_assisted_layer_21: {'mode': 'sample', 'length': 4, 'p_high': 1.0, 'resource_index': 1, 'seed': None}
- PRAssistedLayer name=pr_assisted_layer_22: {'mode': 'sample', 'length': 2, 'p_high': 1.0, 'resource_index': 2, 'seed': None}
- PRAssistedLayer name=pr_assisted_layer_23: {'mode': 'sample', 'length': 1, 'p_high': 1.0, 'resource_index': 3, 'seed': None}
# ============================================================
# DIAGNOSTIC v2: handle per-level lists (meas_a/out_a)
# ============================================================
import numpy as np
import tensorflow as tf
np.set_printoptions(precision=3, suppress=True, linewidth=140)
def _as_list(x):
# Normalize: tensor -> [tensor], list/tuple -> list, dict -> values list
if isinstance(x, tf.Tensor):
return [x]
if isinstance(x, (list, tuple)):
return list(x)
if isinstance(x, dict):
return list(x.values())
return [x]
def _to_np(x):
if isinstance(x, tf.Tensor):
return x.numpy()
return np.asarray(x)
def summarize_arr(name, x):
x = _to_np(x).astype(np.float32)
frac_in_01 = np.mean((x >= 0.0) & (x <= 1.0))
frac_binary = np.mean((x == 0.0) | (x == 1.0))
print(
f"{name:>28}: shape={x.shape}, min={x.min(): .3f}, max={x.max(): .3f}, "
f"mean={x.mean(): .3f}, frac_in_[0,1]={frac_in_01: .3f}, frac_binary={frac_binary: .3f}"
)
def summarize_maybe_list(name, x):
xs = _as_list(x)
if len(xs) == 1 and isinstance(xs[0], tf.Tensor):
summarize_arr(name, xs[0])
return
print(f"{name:>28}: (list/tuple) len={len(xs)}")
for i, t in enumerate(xs):
if isinstance(t, tf.Tensor):
summarize_arr(f"{name}[{i}]", t)
else:
print(f"{name}[{i}]: type={type(t)}")
# --- Run a forward pass on model_a and baseline_a with same input ---
B = 64
# infer the flattened field length N from globals (set earlier in the notebook); default to 16
N = None
for cand in ["N", "n2"]:
    if cand in globals():
        try:
            N = int(globals()[cand])
            break
        except (TypeError, ValueError):
            pass
if N is None:
    N = 16
field = tf.constant(np.random.randint(0, 2, size=(B, N)).astype(np.float32))
print("=== Model A forward (structure + ranges) ===")
logits_a, meas_a, out_a = model_a.compute_with_internal(field)
summarize_maybe_list("A.logits_a", logits_a)
summarize_maybe_list("A.meas_a", meas_a)
summarize_maybe_list("A.out_a", out_a)
# Also check sigmoid() versions (critical for SR compatibility)
meas_a_list = _as_list(meas_a)
out_a_list = _as_list(out_a)
print("\n=== Sigmoid versions (what SR *should* see in expected mode) ===")
for i, t in enumerate(meas_a_list):
if isinstance(t, tf.Tensor):
summarize_arr(f"sigmoid(A.meas_a)[{i}]", tf.sigmoid(t))
for i, t in enumerate(out_a_list):
if isinstance(t, tf.Tensor):
summarize_arr(f"sigmoid(A.out_a)[{i}]", tf.sigmoid(t))
print("\n=== Baseline A forward (structure + ranges) ===")
logits_a0, meas_a0, out_a0 = baseline_a.compute_with_internal(field)
summarize_maybe_list("A0.logits_a", logits_a0)
summarize_maybe_list("A0.meas_a", meas_a0)
summarize_maybe_list("A0.out_a", out_a0)
print("\n=== Interpretation ===")
print("- If baseline A meas/out are in [0,1] but model A meas/out are not,")
print(" then the assembled model is feeding logits where SR expects probabilities.")
print("- If sigmoid(model A meas/out) matches baseline ranges well,")
print(" then the fix is at the SR/tournament boundary: apply sigmoid before PRAssistedLayer inputs.")
=== Model A forward (structure + ranges) ===
A.logits_a: shape=(64, 1), min=-5.397, max=-3.106, mean=-3.709, frac_in_[0,1]= 0.000, frac_binary= 0.000
A.meas_a: (list/tuple) len=4
A.meas_a[0]: shape=(64, 8), min= 0.002, max= 0.996, mean= 0.493, frac_in_[0,1]= 1.000, frac_binary= 0.000
A.meas_a[1]: shape=(64, 4), min= 0.026, max= 0.846, mean= 0.269, frac_in_[0,1]= 1.000, frac_binary= 0.000
A.meas_a[2]: shape=(64, 2), min= 0.063, max= 0.799, mean= 0.427, frac_in_[0,1]= 1.000, frac_binary= 0.000
A.meas_a[3]: shape=(64, 1), min= 0.036, max= 0.172, mean= 0.104, frac_in_[0,1]= 1.000, frac_binary= 0.000
A.out_a: (list/tuple) len=4
A.out_a[0]: shape=(64, 8), min= 0.500, max= 0.500, mean= 0.500, frac_in_[0,1]= 1.000, frac_binary= 0.000
A.out_a[1]: shape=(64, 4), min= 0.500, max= 0.500, mean= 0.500, frac_in_[0,1]= 1.000, frac_binary= 0.000
A.out_a[2]: shape=(64, 2), min= 0.500, max= 0.500, mean= 0.500, frac_in_[0,1]= 1.000, frac_binary= 0.000
A.out_a[3]: shape=(64, 1), min= 0.500, max= 0.500, mean= 0.500, frac_in_[0,1]= 1.000, frac_binary= 0.000
=== Sigmoid versions (what SR *should* see in expected mode) ===
sigmoid(A.meas_a)[0]: shape=(64, 8), min= 0.500, max= 0.730, mean= 0.614, frac_in_[0,1]= 1.000, frac_binary= 0.000
sigmoid(A.meas_a)[1]: shape=(64, 4), min= 0.507, max= 0.700, mean= 0.566, frac_in_[0,1]= 1.000, frac_binary= 0.000
sigmoid(A.meas_a)[2]: shape=(64, 2), min= 0.516, max= 0.690, mean= 0.604, frac_in_[0,1]= 1.000, frac_binary= 0.000
sigmoid(A.meas_a)[3]: shape=(64, 1), min= 0.509, max= 0.543, mean= 0.526, frac_in_[0,1]= 1.000, frac_binary= 0.000
sigmoid(A.out_a)[0]: shape=(64, 8), min= 0.622, max= 0.622, mean= 0.622, frac_in_[0,1]= 1.000, frac_binary= 0.000
sigmoid(A.out_a)[1]: shape=(64, 4), min= 0.622, max= 0.622, mean= 0.622, frac_in_[0,1]= 1.000, frac_binary= 0.000
sigmoid(A.out_a)[2]: shape=(64, 2), min= 0.622, max= 0.622, mean= 0.622, frac_in_[0,1]= 1.000, frac_binary= 0.000
sigmoid(A.out_a)[3]: shape=(64, 1), min= 0.622, max= 0.622, mean= 0.622, frac_in_[0,1]= 1.000, frac_binary= 0.000
=== Baseline A forward (structure + ranges) ===
A0.logits_a: shape=(64, 1), min= 0.539, max= 0.541, mean= 0.540, frac_in_[0,1]= 1.000, frac_binary= 0.000
A0.meas_a: (list/tuple) len=4
A0.meas_a[0]: shape=(64, 8), min= 0.238, max= 0.756, mean= 0.521, frac_in_[0,1]= 1.000, frac_binary= 0.000
A0.meas_a[1]: shape=(64, 4), min= 0.441, max= 0.553, mean= 0.510, frac_in_[0,1]= 1.000, frac_binary= 0.000
A0.meas_a[2]: shape=(64, 2), min= 0.428, max= 0.519, mean= 0.474, frac_in_[0,1]= 1.000, frac_binary= 0.000
A0.meas_a[3]: shape=(64, 1), min= 0.506, max= 0.506, mean= 0.506, frac_in_[0,1]= 1.000, frac_binary= 0.000
A0.out_a: (list/tuple) len=4
A0.out_a[0]: shape=(64, 8), min= 0.500, max= 0.500, mean= 0.500, frac_in_[0,1]= 1.000, frac_binary= 0.000
A0.out_a[1]: shape=(64, 4), min= 0.500, max= 0.500, mean= 0.500, frac_in_[0,1]= 1.000, frac_binary= 0.000
A0.out_a[2]: shape=(64, 2), min= 0.500, max= 0.500, mean= 0.500, frac_in_[0,1]= 1.000, frac_binary= 0.000
A0.out_a[3]: shape=(64, 1), min= 0.500, max= 0.500, mean= 0.500, frac_in_[0,1]= 1.000, frac_binary= 0.000
=== Interpretation ===
- If baseline A meas/out are in [0,1] but model A meas/out are not,
then the assembled model is feeding logits where SR expects probabilities.
- If sigmoid(model A meas/out) matches baseline ranges well,
then the fix is at the SR/tournament boundary: apply sigmoid before PRAssistedLayer inputs.
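If this diagnostic ever reports meas/out values outside [0, 1] for the assembled model, one minimal fix is an adapter at the SR boundary that squashes values before they reach a `PRAssistedLayer`. A sketch with a hypothetical `to_sr_probs` helper (not the repo's actual API):

```python
import numpy as np

def to_sr_probs(raw, already_probs: bool):
    """Normalize model outputs to probabilities before an SR (PRAssistedLayer) input.

    Hypothetical adapter: if the model emits raw logits (unbounded), apply a
    sigmoid; if it already emits probabilities, just clip for numerical safety.
    """
    raw = np.asarray(raw, dtype=np.float64)
    if already_probs:
        return np.clip(raw, 0.0, 1.0)
    return 1.0 / (1.0 + np.exp(-raw))

probs = to_sr_probs([-3.7, 0.0, 2.0], already_probs=False)
print(bool(probs.min() >= 0.0 and probs.max() <= 1.0))  # -> True
```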
# ============================================================
# GAMEPLAY CONTRACT DIAGNOSTIC: are we returning BITS?
# ============================================================
import numpy as np
import tensorflow as tf
def _is_binary(arr):
a = np.asarray(arr).astype(np.float32).ravel()
return np.all((a == 0.0) | (a == 1.0))
def _summary(name, arr):
a = np.asarray(arr).astype(np.float32).ravel()
print(f"{name:>18}: shape={np.asarray(arr).shape}, min={a.min():.3f}, max={a.max():.3f}, mean={a.mean():.3f}, binary={_is_binary(arr)}")
# 1) Inspect sr_mode on the models you intend to use in gameplay
for nm in ["model_a_play", "model_b_play", "model_a", "model_b"]:
if nm in globals():
m = globals()[nm]
sm = getattr(m, "sr_mode", None)
print(f"{nm:>12}.sr_mode =", sm)
# 2) Instantiate the actual player objects that Tournament/Game will use
# (Assumes you have a Players factory, like in your tutorial).
player_a, player_b = players.players() # or however you construct them
# 3) Run one manual "game-like" decision
# Fake field/gun as bits
N = 16
field = np.random.randint(0, 2, size=(N,), dtype=np.int32)
gun = np.random.randint(0, 2, size=(N,), dtype=np.int32)
comm = player_a.decide(field, supp=None)
shoot = player_b.decide(gun, comm, supp=None)
print("\n=== Raw outputs returned to Game ===")
_summary("comm", comm)
_summary("shoot", shoot)
print("\nInterpretation:")
print("- If comm/shoot are not binary, the Players wrappers are not discretizing model logits/probs to bits.")
print("- In that case the tournament will not reflect the intended protocol even if training succeeded.")
model_a.sr_mode = None
model_b.sr_mode = None
=== Raw outputs returned to Game ===
comm: shape=(1,), min=1.000, max=1.000, mean=1.000, binary=True
shoot: shape=(), min=0.000, max=0.000, mean=0.000, binary=True
Interpretation:
- If comm/shoot are not binary, the Players wrappers are not discretizing model logits/probs to bits.
- In that case the tournament will not reflect the intended protocol even if training succeeded.
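The contract the diagnostic checks is simply that the Players wrappers hand hard bits back to Game. A hedged sketch of what such a discretization step could look like (hypothetical `decide_bits` helper; the repo's Players class may already do this internally):

```python
import numpy as np

def decide_bits(model_output, threshold: float = 0.5):
    """Return the hard {0, 1} bits a Player wrapper should hand back to Game.

    Assumes the model returned probabilities in [0, 1]; for raw logits,
    threshold at 0.0 instead (pass threshold=0.0).
    """
    arr = np.asarray(model_output, dtype=np.float32)
    return (arr >= threshold).astype(np.int32)

print(decide_bits([0.97, 0.12]))  # -> [1 0]
```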
# ============================================================
# POST-TOURNAMENT DIAGNOSTIC: is comm the culprit?
# ============================================================
import numpy as np
import tensorflow as tf
np.set_printoptions(precision=3, suppress=True, linewidth=140)
def stats(name, x):
x = tf.convert_to_tensor(x, dtype=tf.float32)
x_np = x.numpy()
frac_in_01 = np.mean((x_np >= 0.0) & (x_np <= 1.0))
frac_binary = np.mean((x_np == 0.0) | (x_np == 1.0))
print(
f"{name:>20}: shape={tuple(x.shape)}, min={x_np.min(): .3f}, max={x_np.max(): .3f}, "
f"mean={x_np.mean(): .3f}, frac_in_[0,1]={frac_in_01: .3f}, frac_binary={frac_binary: .3f}"
)
def as_list(x):
return x if isinstance(x, (list, tuple)) else [x]
# --- Make a random batch ---
B = 256
N = int(N) if "N" in globals() else 16 # adjust if needed
field = tf.constant(np.random.randint(0, 2, size=(B, N)).astype(np.float32))
gun = tf.constant(np.random.randint(0, 2, size=(B, N)).astype(np.float32))
# --- Get A outputs (trained vs baseline) ---
logits_a, meas_a, out_a = model_a.compute_with_internal(field)
logits_a0, meas_a0, out_a0 = baseline_a.compute_with_internal(field)
meas_a = as_list(meas_a)
out_a = as_list(out_a)
meas_a0 = as_list(meas_a0)
out_a0 = as_list(out_a0)
print("=== A: comm representations ===")
comm_logits_tr = tf.convert_to_tensor(logits_a, dtype=tf.float32) # shape (B,1)
comm_prob_tr = tf.sigmoid(comm_logits_tr)
comm_bit_tr = tf.cast(comm_prob_tr >= 0.5, tf.float32)
comm_logits_bl = tf.convert_to_tensor(logits_a0, dtype=tf.float32)
comm_prob_bl = tf.sigmoid(comm_logits_bl)
comm_bit_bl = tf.cast(comm_prob_bl >= 0.5, tf.float32)
stats("A.comm_logits(tr)", comm_logits_tr)
stats("A.comm_prob(tr)", comm_prob_tr)
stats("A.comm_bit(tr)", comm_bit_tr)
stats("A.comm_logits(bl)", comm_logits_bl)
stats("A.comm_prob(bl)", comm_prob_bl)
stats("A.comm_bit(bl)", comm_bit_bl)
print("\n(comm_bit mean is fraction of 1s)")
# --- Helper to run model_b robustly (prev inputs are lists) ---
def run_b(comm_tensor, prev_meas_list, prev_out_list, name):
y = model_b([gun, comm_tensor, prev_meas_list, prev_out_list])
stats(f"B.shoot_logit {name}", y)
stats(f"sigmoid(shoot) {name}", tf.sigmoid(y))
return y
print("\n=== B: sensitivity to comm source and representation ===")
# 1) Use baseline A comm bit (what teacher likely did)
y1 = run_b(comm_bit_bl, meas_a, out_a, "comm=baseline_bit (from A0)")
# 2) Use trained A comm bit
y2 = run_b(comm_bit_tr, meas_a, out_a, "comm=trained_bit (from A)")
# 3) Use trained A comm probability (soft)
y3 = run_b(comm_prob_tr, meas_a, out_a, "comm=trained_prob (sigmoid(logits))")
# 4) Use trained A comm logits directly (WRONG if model expects bit/prob)
y4 = run_b(comm_logits_tr, meas_a, out_a, "comm=trained_logits (raw)")
print("\n=== Quick comparisons ===")
# If these differ drastically, comm wiring is the issue.
def corr(a, b, label):
a = tf.reshape(tf.cast(a, tf.float32), [-1])
b = tf.reshape(tf.cast(b, tf.float32), [-1])
a -= tf.reduce_mean(a); b -= tf.reduce_mean(b)
denom = tf.sqrt(tf.reduce_sum(a*a) * tf.reduce_sum(b*b)) + 1e-9
print(f"{label:>28}: corr={float(tf.reduce_sum(a*b)/denom): .3f}")
corr(y1, y2, "B output: baseline_bit vs trained_bit")
corr(y2, y3, "B output: trained_bit vs trained_prob")
corr(y2, y4, "B output: trained_bit vs trained_logits")
print("\n=== Interpretation ===")
print("- If comm=baseline_bit gives 'non-random' looking outputs but comm=trained_bit collapses,")
print(" then A's comm policy diverged from the teacher (even though per-layer training looked OK).")
print("- If comm=trained_logits behaves wildly compared to comm=trained_prob/bit, then tournament is feeding logits as comm.")
print("- If trained_prob and trained_bit behave similarly, prefer passing probabilities in expected-mode pipelines.")
=== A: comm representations ===
A.comm_logits(tr): shape=(256, 1), min=-5.429, max=-3.075, mean=-3.829, frac_in_[0,1]= 0.000, frac_binary= 0.000
A.comm_prob(tr): shape=(256, 1), min= 0.004, max= 0.044, mean= 0.024, frac_in_[0,1]= 1.000, frac_binary= 0.000
A.comm_bit(tr): shape=(256, 1), min= 0.000, max= 0.000, mean= 0.000, frac_in_[0,1]= 1.000, frac_binary= 1.000
A.comm_logits(bl): shape=(256, 1), min= 0.539, max= 0.541, mean= 0.540, frac_in_[0,1]= 1.000, frac_binary= 0.000
A.comm_prob(bl): shape=(256, 1), min= 0.632, max= 0.632, mean= 0.632, frac_in_[0,1]= 1.000, frac_binary= 0.000
A.comm_bit(bl): shape=(256, 1), min= 1.000, max= 1.000, mean= 1.000, frac_in_[0,1]= 1.000, frac_binary= 1.000
(comm_bit mean is fraction of 1s)
=== B: sensitivity to comm source and representation ===
B.shoot_logit comm=baseline_bit (from A0): shape=(256, 1), min=-9.861, max= 0.946, mean=-4.508, frac_in_[0,1]= 0.176, frac_binary= 0.000
sigmoid(shoot) comm=baseline_bit (from A0): shape=(256, 1), min= 0.000, max= 0.720, mean= 0.202, frac_in_[0,1]= 1.000, frac_binary= 0.000
B.shoot_logit comm=trained_bit (from A): shape=(256, 1), min=-9.838, max= 1.516, mean=-4.108, frac_in_[0,1]= 0.266, frac_binary= 0.000
sigmoid(shoot) comm=trained_bit (from A): shape=(256, 1), min= 0.000, max= 0.820, mean= 0.257, frac_in_[0,1]= 1.000, frac_binary= 0.000
B.shoot_logit comm=trained_prob (sigmoid(logits)): shape=(256, 1), min=-9.817, max= 1.486, mean=-4.163, frac_in_[0,1]= 0.254, frac_binary= 0.000
sigmoid(shoot) comm=trained_prob (sigmoid(logits)): shape=(256, 1), min= 0.000, max= 0.816, mean= 0.251, frac_in_[0,1]= 1.000, frac_binary= 0.000
B.shoot_logit comm=trained_logits (raw): shape=(256, 1), min=-9.838, max= 1.516, mean=-4.108, frac_in_[0,1]= 0.266, frac_binary= 0.000
sigmoid(shoot) comm=trained_logits (raw): shape=(256, 1), min= 0.000, max= 0.820, mean= 0.257, frac_in_[0,1]= 1.000, frac_binary= 0.000
=== Quick comparisons ===
B output: baseline_bit vs trained_bit: corr= 0.446
B output: trained_bit vs trained_prob: corr= 0.991
B output: trained_bit vs trained_logits: corr= 1.000
=== Interpretation ===
- If comm=baseline_bit gives 'non-random' looking outputs but comm=trained_bit collapses,
then A's comm policy diverged from the teacher (even though per-layer training looked OK).
- If comm=trained_logits behaves wildly compared to comm=trained_prob/bit, then tournament is feeding logits as comm.
- If trained_prob and trained_bit behave similarly, prefer passing probabilities in expected-mode pipelines.
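The comm collapse above follows directly from the logit values: since sigmoid(x) >= 0.5 exactly when x >= 0, thresholding the probability at 0.5 is equivalent to taking the logit's sign. Trained A's logits sit near -3.8 (bit always 0) while baseline A's sit near 0.54 (bit always 1), which is why the baseline_bit vs trained_bit correlation drops to 0.446. A minimal illustration:

```python
import math

def comm_bit(logit: float) -> int:
    """Discretize a comm logit: sigmoid then threshold at 0.5.

    sigmoid(x) >= 0.5 iff x >= 0, so this reduces to the logit's sign.
    """
    prob = 1.0 / (1.0 + math.exp(-logit))
    return int(prob >= 0.5)

print(comm_bit(-3.83), comm_bit(0.54))  # -> 0 1  (trained A vs baseline A above)
```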
field = tf.zeros((1,16), tf.float32)
logit, meas, out = model_a_eval.compute_with_internal(field)
print(len(meas), [m.shape for m in meas])
print(len(out), [o.shape for o in out])
gun = tf.zeros((1,16), tf.float32)
comm = tf.zeros((1,1), tf.float32)
shoot = model_b_eval([gun, comm, meas, out])
print("shoot shape:", shoot.shape)
for i, sr in enumerate(model_a_eval.sr_layers):
print("A sr", i, getattr(sr, "mode", None), getattr(sr, "p_high", None))
for i, sr in enumerate(model_b_eval.sr_layers):
print("B sr", i, getattr(sr, "mode", None), getattr(sr, "p_high", None))
4 [TensorShape([1, 8]), TensorShape([1, 4]), TensorShape([1, 2]), TensorShape([1, 1])]
4 [TensorShape([1, 8]), TensorShape([1, 4]), TensorShape([1, 2]), TensorShape([1, 1])]
shoot shape: (1, 1)
A sr 0 sample 1.0
A sr 1 sample 1.0
A sr 2 sample 1.0
A sr 3 sample 1.0
B sr 0 sample 1.0
B sr 1 sample 1.0
B sr 2 sample 1.0
B sr 3 sample 1.0
# --- Single-game SR contract diagnostic (paste as one cell) ---
import numpy as np
# 1) Run exactly one game via Tournament, so we capture what Player A stored as "prev_*"
layout_one = GameLayout(field_size=layout_eval.field_size,
comms_size=layout_eval.comms_size,
number_of_games_in_tournament=1)
env_one = GameEnv(layout_one)
t_one = Tournament(env_one, players, layout_one)
log_one = t_one.tournament()
row0 = log_one.log.iloc[0]
prev_meas = row0["prev_measurements"]
prev_out = row0["prev_outcomes"]
print("=== Logged prev from Player A ===")
print("type(prev_meas):", type(prev_meas))
print("type(prev_out): ", type(prev_out))
assert prev_meas is not None and prev_out is not None, "No prev_* logged; is players.has_prev True, and does PlayerA.get_prev() return data?"
assert isinstance(prev_meas, (list, tuple)) and isinstance(prev_out, (list, tuple)), "prev_* must be lists/tuples"
assert len(prev_meas) == len(prev_out), "prev_meas/out length mismatch"
K = len(prev_meas)
print("K (depth) =", K)
print("meas shapes:", [np.array(x).shape for x in prev_meas])
print("out shapes:", [np.array(x).shape for x in prev_out])
# 2) Basic per-level shape alignment sanity:
# At level ℓ with current length L_ℓ, measurement/outcome should have length L_ℓ/2.
# We infer L_0 from game field size; then halve each level.
L0 = layout_one.field_size**2
L = L0
for ell in range(K):
m = np.array(prev_meas[ell])
o = np.array(prev_out[ell])
expected = L // 2
assert m.shape[-1] == expected, f"Level {ell}: meas last-dim {m.shape[-1]} != {expected}"
assert o.shape[-1] == expected, f"Level {ell}: out last-dim {o.shape[-1]} != {expected}"
L = expected
print("✅ Per-level dimensionality matches halving contract.")
# 3) Non-triviality / SR usage quick check:
# In EXPECTED mode, outcomes may be probabilities (often ~p_high / 1-p_high) not exactly {0,1}.
# In SAMPLE mode, outcomes should be near-binary most of the time.
def summary(arr):
arr = np.array(arr).astype(np.float32).ravel()
return dict(min=float(arr.min()), max=float(arr.max()), mean=float(arr.mean()), frac_mid=float(((arr > 1e-3) & (arr < 1-1e-3)).mean()))
print("\n=== Outcome value summaries per level ===")
for ell, o in enumerate(prev_out):
s = summary(o)
print(f"ℓ={ell}: min={s['min']:.4f}, max={s['max']:.4f}, mean={s['mean']:.4f}, frac_in_(0,1)={s['frac_mid']:.3f}")
print("\nInterpretation tip:")
print("- If sr_mode='expected': expect many values strictly between 0 and 1 (frac_in_(0,1) high).")
print("- If sr_mode='sample': expect values close to 0/1 (frac_in_(0,1) near 0).")
# 4) (Optional) Directly exercise Model B with the *logged* prev lists to ensure it accepts them.
# We re-create gun/comm from the logged game to match batch dims.
# Convert logged values to float32 arrays first (they may be plain lists), then add a batch dim if needed.
gun = np.asarray(row0["gun"], dtype=np.float32)
comm = np.asarray(row0["comm"], dtype=np.float32)
gun = gun[None, :] if gun.ndim == 1 else gun
comm = comm[None, :] if comm.ndim == 1 else comm
shoot_logit = model_b([gun, comm, prev_meas, prev_out])
print("\nModel B call with logged prev succeeded. shoot_logit shape:", shoot_logit.shape)
=== Logged prev from Player A ===
type(prev_meas): <class 'list'>
type(prev_out):  <class 'list'>
K (depth) = 4
meas shapes: [(1, 8), (1, 4), (1, 2), (1, 1)]
out shapes: [(1, 8), (1, 4), (1, 2), (1, 1)]
✅ Per-level dimensionality matches halving contract.

=== Outcome value summaries per level ===
ℓ=0: min=0.0000, max=1.0000, mean=0.2500, frac_in_(0,1)=0.000
ℓ=1: min=0.0000, max=1.0000, mean=0.5000, frac_in_(0,1)=0.000
ℓ=2: min=0.0000, max=1.0000, mean=0.5000, frac_in_(0,1)=0.000
ℓ=3: min=1.0000, max=1.0000, mean=1.0000, frac_in_(0,1)=0.000

Interpretation tip:
- If sr_mode='expected': expect many values strictly between 0 and 1 (frac_in_(0,1) high).
- If sr_mode='sample': expect values close to 0/1 (frac_in_(0,1) near 0).

Model B call with logged prev succeeded. shoot_logit shape: (1, 1)
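The halving contract verified above can be stated in one small helper (a sketch; the repo computes these sizes internally):

```python
import math

def pyramid_level_lengths(n: int) -> list:
    """Expected last-dim of prev_meas[l] / prev_out[l] for a flattened field of length n.

    Level l consumes a vector of length n / 2**l and emits length n / 2**(l + 1),
    giving K = log2(n) levels in total.
    """
    assert n > 0 and n & (n - 1) == 0, "flattened field length must be a power of two"
    k = int(math.log2(n))
    return [n >> (level + 1) for level in range(k)]

print(pyramid_level_lengths(16))  # field_size=4 -> [8, 4, 2, 1], matching the log above
```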
Save weights¶
# Save weights (one file per model). Include field/comms and p_high in the filename.
model_a_path = models_dir / f"pyr_model_a_bootstrap_f{FIELD_SIZE}_m{COMMS_SIZE}_p{P_HIGH:.2f}.weights.h5"
model_b_path = models_dir / f"pyr_model_b_bootstrap_f{FIELD_SIZE}_m{COMMS_SIZE}_p{P_HIGH:.2f}.weights.h5"
# Keras requires models to be built before saving weights.
# Build by calling once with dummy inputs.
dummy_field = tf.zeros((1, N), dtype=tf.float32)
dummy_gun = tf.zeros((1, N), dtype=tf.float32)
dummy_comm = tf.zeros((1, 1), dtype=tf.float32)
_ = model_a(dummy_field)
_, prev_meas, prev_out = model_a.compute_with_internal(dummy_field)
_ = model_b([dummy_gun, dummy_comm, prev_meas, prev_out])
# Save the deterministic (expected) versions' weights (same layer weights).
model_a.save_weights(model_a_path)
model_b.save_weights(model_b_path)
print("Saved weights:")
print(" -", model_a_path)
print(" -", model_b_path)
Saved weights:
 - notebooks\models\pyr_model_a_bootstrap_f4_m1_p1.00.weights.h5
 - notebooks\models\pyr_model_b_bootstrap_f4_m1_p1.00.weights.h5