Game Theoretic Studies of Trust for Multiagent Systems

Overview

A multiagent system comprises multiple decision-making entities, each holding different pieces of information and able to signal messages to the others. This project studies questions of trust in multiagent systems using a combination of game theory and learning theory. We focus on two main directions (illustrated by the sketches below):

1. Statistical learning and generalization from strategic action. A receiver aims to classify a single strategic sender's true label from data provided by the sender, knowing that such data carries a strategic skew.

2. Cooperation under mistrust. In multiagent systems, post-facto privacy leakage occurs when a player can infer another player's private information by observing the actions that player chooses. We will develop strategies that preserve post-facto privacy, quantify the loss in performance due to such privacy constraints, and develop third-party protocols that help preserve privacy.

We also aim to introduce the concept of zero-knowledge signalling in multiagent systems.
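As a minimal illustration of the first direction (a sketch only, with invented names and parameters rather than the project's actual model), a receiver that publishes a classification threshold can anticipate that a strategic sender will inflate its reported feature whenever the gain from acceptance exceeds the manipulation cost, and can shift its threshold accordingly:

# Illustrative sketch of classification from strategically reported data.
# All names, distributions, and parameters here are assumptions for exposition.
import numpy as np

rng = np.random.default_rng(0)

def sender_report(true_feature, threshold, cost_per_unit=1.0, gain=2.0):
    """Sender best response: inflate the feature just enough to cross the
    receiver's threshold, but only if the gain from acceptance exceeds the
    manipulation cost."""
    shift_needed = max(0.0, threshold - true_feature)
    if shift_needed * cost_per_unit <= gain:
        return true_feature + shift_needed   # manipulate and get accepted
    return true_feature                      # manipulation too costly

def receiver_error(threshold, n=10_000):
    """Empirical misclassification rate when every sender plays the best
    response above. True positives have higher features on average."""
    labels = rng.integers(0, 2, size=n)
    features = rng.normal(loc=labels.astype(float), scale=0.5)
    reports = np.array([sender_report(x, threshold) for x in features])
    predictions = (reports >= threshold).astype(int)
    return np.mean(predictions != labels)

# Naive threshold (ignores the strategic skew) vs. a skew-aware threshold
# shifted by gain/cost, the largest profitable manipulation.
for t in (0.5, 0.5 + 2.0):
    print(f"threshold={t:.1f}  error={receiver_error(t):.3f}")

For the second direction, a toy Bayesian sketch (again with illustrative names, not the project's formulation) shows how post-facto privacy leakage can be quantified: an observer who knows a player's strategy updates a posterior over that player's private type after seeing the chosen action, so a pooling strategy leaks nothing while a fully revealing strategy leaks everything:

# Illustrative sketch of post-facto privacy leakage via Bayesian updating.
# Types, actions, and strategies below are hypothetical examples.
from fractions import Fraction

def posterior(prior, strategy, observed_action):
    """prior: {type: prob}; strategy: {type: {action: prob}}.
    Returns P(type | observed_action) by Bayes' rule."""
    joint = {t: prior[t] * strategy[t].get(observed_action, Fraction(0))
             for t in prior}
    total = sum(joint.values())
    return {t: joint[t] / total for t in joint} if total else dict(prior)

prior = {"low": Fraction(1, 2), "high": Fraction(1, 2)}

# Revealing strategy: the action is determined by the private type.
revealing = {"low": {"a": Fraction(1)}, "high": {"b": Fraction(1)}}
# Pooling strategy: the same mixed action regardless of the type.
pooling = {t: {"a": Fraction(1, 2), "b": Fraction(1, 2)} for t in prior}

print(posterior(prior, revealing, "a"))  # all mass on "low": full leakage
print(posterior(prior, pooling, "a"))    # posterior equals the prior: no leakage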
Active from 2023
Funding: Trust Lab Grant 2023

People

Ankur A. Kulkarni

Outcome