Inria research structures database
Cooperative AI: fairness, privacy, incentives
FAIRPLAY
Status:
Decision signed
Head:
Patrick Loiseau
Keywords from "A - Research themes in digital sciences - 2023":
A4.8. Privacy-protection technologies
A8.11. Game theory
A9.2. Machine learning
A9.9. Distributed, multi-agent AI
Keywords from "B - Other sciences and application domains - 2023":
B9.9. Ethics
B9.10. Confidentiality, privacy
Domain:
Applied mathematics, computation and simulation
Theme:
Optimization, machine learning and statistical methods
Period:
01/03/2022 -> 28/02/2026
Evaluation dates:
Affiliated institution(s):
IP-PARIS, CRITEO
Partner laboratory(ies):
CREST (UMR 9194)
CRI:
Centre Inria de Saclay
Location:
Centre de recherche Inria de Saclay
Inria structure code:
111101-0
RNSR number:
202224251U
Inria structure no.:
SR0917SR
Most of the current machine learning literature focuses on the case of a single agent (an algorithm) trying to complete some learning task based on gathered data that follows an exogenous distribution, independent of the algorithm. A key assumption is that this data has sufficient "regularity" for classical techniques to work. This classical paradigm of "a single agent learning on nice data", however, is no longer adequate for many practical and crucial tasks that involve users (who own the gathered data) and/or other (learning) agents simultaneously trying to optimize their own objectives, possibly in a competitive or conflicting way. This is the case, for instance, in most learning tasks related to Internet applications (matching, content recommendation/ranking, ad auctions, etc.). Moreover, as such learning tasks rely on users' personal data and as their outcomes affect users in return, it is no longer sufficient to optimize prediction performance metrics alone: it becomes crucial to also consider societal and ethical aspects such as fairness and privacy.
The overarching objective of FairPlay is to create algorithms that learn for and with users, together with techniques to analyze them: that is, to create procedures able to perform classical learning tasks (prediction, decision, explanation) when the data is generated or provided by strategic agents, possibly in the presence of other competing learning agents, while respecting the fairness and privacy of the users involved. To that end, we naturally rely on multi-agent models in which the agents may either generate or provide data, or learn in a way that interacts with other agents; and we place special emphasis on societal and ethical aspects, in particular fairness and privacy.
The FairPlay team is positioned at the intersection of machine learning and game theory and pursues three main research threads.