A multiagent system comprises multiple decision-making entities, each holding different pieces of information and able to signal messages to the others. This project studies questions of trust in multiagent systems using a combination of game theory and learning theory. We focus on two main directions:
1. Statistical learning and generalization from strategic action. Here a receiver aims to classify a single strategic sender’s true label from data supplied by the sender, knowing that this data carries a strategic skew (a sketch of this setting appears after the list).
2. Cooperation under mistrust. In multiagent systems, post-facto privacy leakage occurs when a player can infer another player’s private information by observing the actions that player chooses. We will develop strategies that preserve post-facto privacy, quantify the loss in performance due to such privacy constraints, and design third-party protocols that help preserve privacy (a sketch of the leakage-versus-performance trade-off also appears after the list). We also aim to introduce the concept of zero-knowledge signalling in multiagent systems.
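As a minimal illustration of direction 1, the sketch below sets up a toy strategic-classification problem: a one-dimensional feature, a threshold receiver, and a sender who can inflate its reported feature by at most a budget. The class means, budget, and threshold search are illustrative assumptions, not part of the project’s methods.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: true label y in {0, 1}; a sender with label y draws a
# raw feature x ~ N(mu_y, 1). Every sender prefers a positive classification
# and may inflate x by at most BUDGET; it does so whenever that flips the
# receiver's decision in its favour.
MU = {0: 0.0, 1: 2.0}
BUDGET = 1.0
N = 10_000

y = rng.integers(0, 2, size=N)
x = rng.normal([MU[label] for label in y], 1.0)

def reported_feature(x, threshold):
    """Sender's best response: inflate x up to BUDGET iff that crosses the threshold."""
    gaming = (x < threshold) & (x + BUDGET >= threshold)
    return np.where(gaming, threshold, x)

def receiver_error(threshold):
    """Receiver's misclassification rate against best-responding senders."""
    x_rep = reported_feature(x, threshold)
    y_hat = (x_rep >= threshold).astype(int)
    return np.mean(y_hat != y)

# Naive threshold (tuned for honest senders) vs. threshold chosen against
# strategic senders: the receiver accounts for the skew in the reported data.
thresholds = np.linspace(-2.0, 5.0, 200)
naive = 1.0  # midpoint of the class means, optimal if senders reported honestly
robust = thresholds[np.argmin([receiver_error(t) for t in thresholds])]
print(f"error at naive threshold:  {receiver_error(naive):.3f}")
print(f"error at robust threshold: {receiver_error(robust):.3f} (t = {robust:.2f})")
```

The gap between the two error rates is one way to see why generalization guarantees must account for the sender’s best response rather than the honest data distribution.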
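For direction 2, the following sketch illustrates post-facto privacy leakage in a hypothetical two-type, two-action game: an observer applies Bayes’ rule to the player’s chosen action, and a type-revealing strategy is compared with a mixed, privacy-preserving one. The payoff table, prior, and mixing probabilities are assumptions chosen only to make the trade-off visible.

```python
# Hypothetical game: a player with private type in {"low", "high"} picks an
# action in {"a0", "a1"}. The payoff table is illustrative only.
PAYOFF = {("low", "a0"): 1.0, ("low", "a1"): 0.2,
          ("high", "a0"): 0.3, ("high", "a1"): 1.0}
PRIOR = {"low": 0.5, "high": 0.5}

def posterior(strategy, action):
    """Observer's posterior over the type after seeing one action (Bayes' rule)."""
    joint = {t: PRIOR[t] * strategy[t][action] for t in PRIOR}
    total = sum(joint.values())
    return {t: joint[t] / total for t in joint}

def expected_payoff(strategy):
    """Player's expected payoff under the given behavioural strategy."""
    return sum(PRIOR[t] * strategy[t][a] * PAYOFF[(t, a)]
               for t in PRIOR for a in ("a0", "a1"))

# Type-revealing strategy: each type deterministically plays its best action,
# so the observed action fully identifies the type (maximal post-facto leakage).
revealing = {"low": {"a0": 1.0, "a1": 0.0}, "high": {"a0": 0.0, "a1": 1.0}}
# Privacy-preserving strategy: both types mix, trading payoff for ambiguity.
private = {"low": {"a0": 0.7, "a1": 0.3}, "high": {"a0": 0.3, "a1": 0.7}}

for name, s in [("revealing", revealing), ("privacy-preserving", private)]:
    post = posterior(s, "a1")
    print(f"{name:20s} payoff = {expected_payoff(s):.2f}, "
          f"P(type = high | a1) = {post['high']:.2f}")
```

Here the revealing strategy earns the higher payoff but lets the observer pin down the type exactly, while the mixed strategy keeps the posterior closer to the prior at a quantifiable payoff cost; measuring and optimizing this trade-off is the kind of question the project addresses.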