•  About
    • About the Lab
    • Director’s Note
    • Our Vision
    • Founding Donor
    • Advisory Board
    • Principal Investigators
  • People
    • Associated Faculty
    • Executive Committee
    • Students
    • Program Directorate
  • TrustNet
  •  Projects
  •  Resources
  •  News
    • Newsletter
    • Quick Updates
  •  Events
    • Talks
    • Trust Summit
  •  Engage

Adversarial Machine Learning under Perturbation of a Subset of Training Instances

Overview

Robustness to adversarial perturbations is crucial for machine learning. In practice, an attacker may target only specific instances; for example, in a facial recognition system an adversary might aim at particular faces. In this project, we tackle this setting, considering different perturbation methods as well as different methods for choosing which instances to attack. Initial results in image classification show significant promise: our approach predicts the labels of perturbed examples more accurately than baseline methods.
Active from 2023
Funding: Trust Lab Grant 2023
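The setting above — perturbing only a chosen subset of instances — can be illustrated with a minimal sketch. The example below applies an FGSM-style perturbation (x' = x + ε·sign(∇ₓ loss)) to selected rows of a dataset under a logistic-regression classifier; all function and variable names here are illustrative assumptions, not the project's actual method.

```python
import numpy as np

def fgsm_perturb_subset(X, y, w, target_idx, eps=0.1):
    """Perturb only the instances listed in target_idx (illustrative sketch).

    For a linear classifier with logistic loss and labels in {0, 1}, the
    gradient of the loss w.r.t. the input x is (sigmoid(w.x) - y) * w.
    Each targeted instance is shifted by eps in the sign of that gradient;
    all other instances are left untouched.
    """
    X_adv = X.copy()
    for i in target_idx:
        p = 1.0 / (1.0 + np.exp(-X[i] @ w))   # predicted probability
        grad = (p - y[i]) * w                 # input gradient of the loss
        X_adv[i] = X[i] + eps * np.sign(grad) # FGSM step on this instance
    return X_adv
```

Restricting the loop to `target_idx` mirrors the subset-attack model: untargeted rows are returned unchanged, and each targeted row moves by at most ε per coordinate.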

People

Abir De

Swaprava Nath

Outcome
  • trustlabcse.iitb.ac.in
  • +91-22-2159-6725
  • Department of Computer Science and Engineering
    Indian Institute of Technology Bombay
    Powai, Mumbai 400076