Author

Oren Bochman

Published

Friday, December 20, 2024

It seems that we might want to look at emergent communication by considering three games:

1. a Lewis signaling game to model coordination on a basic communication system,
2. a Shannon game to model the transmission of information between agents that learn a shared communication protocol, potentially using error detection and correction, and
3. a Chomsky game to model the development of a shared grammar for complex signals.

Shannon Game

Shannon games are about the emergence of randomized communication protocols. A randomized communication protocol is a probability distribution over the set of possible deterministic communication protocols.

We can model any deterministic communication protocol as a pair of decision trees, one for the sender and one for the receiver. The sender’s decision tree maps each possible signal to a message, and the receiver’s decision tree maps each received message back to a signal.
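As a minimal sketch of this formalism (the colour signals and the two code books below are made up for illustration, and both parties are assumed to share the random choice of protocol):

import numpy as np

rng = np.random.default_rng(0)

# A deterministic protocol is a pair of lookup tables (degenerate decision trees):
# the sender maps signals to messages, the receiver maps messages back to signals.
encoder_a = {"red": "00", "green": "01", "blue": "10"}
decoder_a = {"00": "red", "01": "green", "10": "blue"}

encoder_b = {"red": "11", "green": "00", "blue": "01"}   # a different code book
decoder_b = {"11": "red", "00": "green", "01": "blue"}

# A randomized protocol is a probability distribution over deterministic protocols.
protocols = [(encoder_a, decoder_a), (encoder_b, decoder_b)]
weights = [0.7, 0.3]

def transmit(signal):
    # Assume shared randomness: sender and receiver sample the same protocol.
    enc, dec = protocols[rng.choice(len(protocols), p=weights)]
    return dec[enc[signal]]

assert transmit("green") == "green"   # with a noiseless channel, decoding is exact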

In such a protocol, the sender’s encoding function maps the signal to a probability distribution over the messages that the sender can send. The sender samples a message from this distribution and sends it to the receiver. The receiver then uses a decoding function to map the received message back to the original signal. The goal of the game is for the sender and receiver to coordinate on a communication protocol that maximizes their payoff, which is typically based on the accuracy of transmission. Because the protocol uses randomness to encode and decode messages, that randomness can also be used to introduce redundancy into the message, which helps with error detection and correction.
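To make the redundancy point concrete, here is a minimal sketch of a repetition code over a binary symmetric channel; the triple repetition and the majority-vote decoder are illustrative choices rather than anything prescribed by the game:

import numpy as np

def encode(message, repeats=3):
    # Add redundancy by repeating every bit `repeats` times.
    return np.repeat(message, repeats)

def channel(codeword, error_rate=0.1, rng=None):
    # Binary symmetric channel: each bit flips independently with probability error_rate.
    rng = rng or np.random.default_rng()
    flips = rng.random(len(codeword)) < error_rate
    return codeword ^ flips

def decode(received, repeats=3):
    # Majority vote over each block of repeated bits corrects isolated flips.
    blocks = received.reshape(-1, repeats)
    return (blocks.sum(axis=1) > repeats // 2).astype(int)

rng = np.random.default_rng(0)
message = rng.integers(0, 2, 10)
recovered = decode(channel(encode(message), rng=rng))
print("bit errors after decoding:", int(np.sum(message != recovered)))

The reinforcement-learning example below sidesteps explicit coding and instead lets two agents learn, by trial and error, which pairs of strategies tend to be rewarded.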

import numpy as np

class CommunicationAgent:
    def __init__(self, num_strategies):
        self.num_strategies = num_strategies
        # Q-values indexed by (own strategy, partner's strategy).
        self.q_table = np.zeros((num_strategies, num_strategies))
        self.learning_rate = 0.1
        self.discount_factor = 0.9
        self.epsilon = 0.1
    
    def choose_strategy(self):
        # Epsilon-greedy: explore a random strategy, otherwise pick the
        # strategy whose row has the highest total Q-value.
        if np.random.rand() < self.epsilon:
            return np.random.randint(self.num_strategies)
        else:
            return np.argmax(self.q_table.sum(axis=1))
    
    def update_q_values(self, sender_strategy, receiver_strategy, reward):
        # Standard Q-update, treating the partner's strategy as the next state.
        max_future_q = np.max(self.q_table[receiver_strategy])
        current_q = self.q_table[sender_strategy, receiver_strategy]
        new_q = current_q + self.learning_rate * (reward + self.discount_factor * max_future_q - current_q)
        self.q_table[sender_strategy, receiver_strategy] = new_q

# Simulation parameters
num_strategies = 5
num_iterations = 1000

# Initialize agents
alice = CommunicationAgent(num_strategies)
bob = CommunicationAgent(num_strategies)

for _ in range(num_iterations):
    sender_strategy = alice.choose_strategy()
    receiver_strategy = bob.choose_strategy()
    
    # Simulate message transmission and reception with noise
    # This is a placeholder for actual encoding/decoding logic
    success = np.random.rand() < 0.8  # Assume 80% chance of success
    
    reward = 1 if success else -1
    alice.update_q_values(sender_strategy, receiver_strategy, reward)
    bob.update_q_values(receiver_strategy, sender_strategy, reward)

print("Alice's Q-Table:\n", alice.q_table)
print("Bob's Q-Table:\n", bob.q_table)
Alice's Q-Table:
 [[ 1.1870612   0.          0.          0.1         0.        ]
 [ 1.56945522  1.54181517  0.97338873  0.82764688  1.00508109]
 [ 0.63863811 -0.02636762  0.          0.19736455  0.1607126 ]
 [ 0.56019999  0.          0.          0.          0.        ]
 [ 0.80594973  0.25923685  0.14809569  0.          0.        ]]
Bob's Q-Table:
 [[ 1.50847687  1.20172521  0.6687741   0.64135768  0.55690049]
 [ 0.          0.7344698  -0.0829      0.          0.15625959]
 [ 0.          0.94187947  0.          0.          0.15028825]
 [ 0.15690016  0.91314851  0.22940322  0.          0.        ]
 [ 0.          0.89534698  0.14117867  0.          0.        ]]

This example illustrates a basic game-theoretic approach where the sender and receiver iteratively learn better strategies for encoding and decoding messages over a noisy channel. The reinforcement learning framework allows both parties to adapt and improve their protocols, enhancing the reliability of communication over time. This model can be extended and refined to include more sophisticated encoding/decoding techniques and more complex noise models.
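As one example of a more complex noise model, the fixed 80% success placeholder could be replaced by a channel whose error rate varies over time. The two-state (Gilbert–Elliott-style) channel below is a hedged sketch; the transition and error probabilities are arbitrary:

import numpy as np

class TwoStateChannel:
    """Gilbert-Elliott-style channel: a 'good' and a 'bad' state with different
    bit-flip probabilities, switching between them as a two-state Markov chain."""
    def __init__(self, p_good_to_bad=0.05, p_bad_to_good=0.3,
                 error_good=0.01, error_bad=0.3, seed=None):
        self.rng = np.random.default_rng(seed)
        self.p_gb, self.p_bg = p_good_to_bad, p_bad_to_good
        self.errors = {"good": error_good, "bad": error_bad}
        self.state = "good"

    def transmit(self, bits):
        # Possibly switch state, then flip each bit with the current state's error rate.
        if self.state == "good" and self.rng.random() < self.p_gb:
            self.state = "bad"
        elif self.state == "bad" and self.rng.random() < self.p_bg:
            self.state = "good"
        flips = self.rng.random(len(bits)) < self.errors[self.state]
        return bits ^ flips

channel = TwoStateChannel(seed=0)
word = np.random.default_rng(0).integers(0, 2, 8)
print(channel.transmit(word))

In the training loop above, success could then be defined as exact recovery of the transmitted word, so the reward reflects the channel rather than a coin flip. The next example takes a different tack and wraps the sender and receiver in a Mesa agent-based model.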

from mesa import Agent, Model
from mesa.time import RandomActivation
from mesa.datacollection import DataCollector
import numpy as np

def hamming_distance(a, b):
    return np.sum(a != b) / len(a)

class Sender(Agent):
    def __init__(self, unique_id, model):
        super().__init__(unique_id, model)
        self.protocol = self.random_protocol()
    
    def random_protocol(self):
        # Define a random protocol for encoding
        return lambda msg: msg  # Identity for simplicity
    
    def step(self):
        # Draw a fresh message, record it as the ground truth, encode it, and send it.
        message = np.random.randint(0, 2, self.model.message_length)
        self.model.original_message = message
        encoded_message = self.protocol(message)
        self.model.sent_message = encoded_message

class Receiver(Agent):
    def __init__(self, unique_id, model):
        super().__init__(unique_id, model)
        self.protocol = self.random_protocol()
    
    def random_protocol(self):
        # Define a random protocol for decoding
        return lambda msg: msg  # Identity for simplicity
    
    def step(self):
        if self.model.sent_message is None:
            return  # the sender may not have acted yet on the first step
        # Corrupt the sent message with independent bit flips, then decode it.
        noisy_message = self.model.sent_message ^ np.random.binomial(1, self.model.error_rate, self.model.message_length)
        recovered_message = self.protocol(noisy_message)
        self.model.recovered_message = recovered_message
        self.evaluate_performance()
    
    def evaluate_performance(self):
        original_message = self.model.original_message
        recovered_message = self.model.recovered_message
        distance = hamming_distance(original_message, recovered_message)
        self.model.payoff += self.model.recovery_payoff(distance)
        self.model.payoff += self.model.length_payoff(len(recovered_message))
        self.model.payoff += self.model.early_recovery_payoff(self.model.current_step)
    
class NoisyChannelModel(Model):
    def __init__(self, message_length=10, error_rate=0.1, max_steps=100):
        super().__init__()
        self.message_length = message_length
        self.error_rate = error_rate
        self.current_step = 0
        self.max_steps = max_steps
        self.payoff = 0
        
        self.schedule = RandomActivation(self)
        
        sender = Sender(1, self)
        receiver = Receiver(2, self)
        self.schedule.add(sender)
        self.schedule.add(receiver)
        
        self.original_message = np.random.randint(0, 2, self.message_length)
        self.sent_message = None
        self.recovered_message = None
        
        self.datacollector = DataCollector(
            model_reporters={"Payoff": "payoff"}
        )
    
    def recovery_payoff(self, distance):
        return 1 - distance
    
    def length_payoff(self, length):
        return 1 / length
    
    def early_recovery_payoff(self, step):
        return (self.max_steps - step) / self.max_steps
    
    def step(self):
        self.current_step += 1
        self.schedule.step()
        self.datacollector.collect(self)
        if self.current_step >= self.max_steps:
            self.running = False

# Example of running the model
model = NoisyChannelModel()
while model.running:
    model.step()

# Retrieve results
results = model.datacollector.get_model_vars_dataframe()
print(results)
    Payoff
0     1.39
1     2.97
2     4.54
3     6.10
4     7.55
..     ...
95  105.44
96  106.07
97  106.89
98  107.40
99  107.90

[100 rows x 1 columns]

This variant uses a noisy-channel model to simulate the transmission of messages between a sender and a receiver. The agents have protocols for encoding and decoding messages, and the model scores performance based on the accuracy of message recovery, the message length, and how early recovery happens. It demonstrates how to model and analyze the performance of a communication system in the presence of noise and other constraints.
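One simple way to use this model for analysis is to sweep the channel’s error rate and compare the cumulative payoff of each run; the particular rates below are arbitrary:

error_rates = [0.0, 0.1, 0.2, 0.4]
final_payoffs = {}

for rate in error_rates:
    model = NoisyChannelModel(message_length=10, error_rate=rate, max_steps=100)
    while model.running:
        model.step()
    df = model.datacollector.get_model_vars_dataframe()
    final_payoffs[rate] = df["Payoff"].iloc[-1]

print(final_payoffs)  # higher error rates should yield lower cumulative payoff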

What we don’t have is a way to pick different protocols or to improve them over time.

I would break this down into a few steps:

1. Identify the environmental factors that would encourage the agents to evolve diverse and efficient transmission protocols:
   a. noisy channels
   b. limited bandwidth
   c. limited computational resources
   d. time constraints
   e. risks of predation
2. Allow agents to randomly generate candidate protocols and evaluate their performance.

def random_protocol():
    # Define a random protocol for encoding/decoding
    return lambda msg: np.random.randint(0, 2, len(msg))

which would be used as follows:

class Sender(Agent):
    def __init__(self, unique_id, model):
        super().__init__(unique_id, model)
        self.protocol = random_protocol()

    def step(self):
        message = np.random.randint(0, 2, self.model.message_length)
        encoded_message = self.protocol(message)
        self.model.sent_message = encoded_message

Improving the protocols over time could be done by introducing reinforcement learning (or simple evolutionary selection), allowing the agents to adapt their encoding/decoding strategies based on feedback from the environment. This would let the agents optimize their protocols for more reliable communication over noisy channels.
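As a hedged sketch of the selection idea (all of the specific numbers and names here are illustrative): generate a population of random code books, score each one by how often nearest-codeword decoding survives the noisy channel, and keep the best.

import numpy as np
from itertools import product

rng = np.random.default_rng(0)
K, N, ERROR_RATE = 3, 6, 0.1          # 3-bit signals, 6-bit codewords, channel noise
signals = np.array(list(product([0, 1], repeat=K)))

def random_codebook():
    # A candidate protocol: a random assignment of N-bit codewords to the 2^K signals.
    return rng.integers(0, 2, size=(len(signals), N))

def evaluate(codebook, trials=200):
    # Score: fraction of signals recovered by nearest-codeword decoding after noise.
    correct = 0
    for _ in range(trials):
        i = rng.integers(len(signals))
        sent = codebook[i]
        received = sent ^ (rng.random(N) < ERROR_RATE)
        decoded = np.argmin(np.abs(codebook - received).sum(axis=1))
        correct += (decoded == i)
    return correct / trials

# Generate candidates and keep the best one -- a crude stand-in for learning.
candidates = [random_codebook() for _ in range(50)]
scores = [evaluate(c) for c in candidates]
best = candidates[int(np.argmax(scores))]
print("best candidate accuracy:", max(scores))

From there, feedback could drive the search rather than a one-shot tournament, for example by mutating the best code book each generation, or by letting a Q-learning agent choose among candidate protocols as in the first example.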

Citation

BibTeX citation:
@online{bochman2024,
  author = {Bochman, Oren},
  title = {Emergent Complex Communications Protocols},
  date = {2024-12-20},
  url = {https://orenbochman.github.io/posts/2024/2024-05-01-signals/shanon-game.html},
  langid = {en}
}
For attribution, please cite this work as:
Bochman, Oren. 2024. “Emergent Complex Communications Protocols.” December 20, 2024. https://orenbochman.github.io/posts/2024/2024-05-01-signals/shanon-game.html.