
R. Esposito - Fair-networks: imposing constraints on the internal representation of a neural network in order to make it fair

Authors: Mattia Cerrato, Roberto Esposito

The recent surge of interest in Deep Learning (motivated by its exceptional performance on long-standing problems) has made Neural Networks a very appealing tool for many actors in our society. Companies that traditionally used more explainable tools are nowadays considering Neural Networks to attack the problems they face every day. One problem with this shift of interest is that Neural Networks are very opaque objects, and more often than not the reasons underlying the results they provide are unclear to the final decision maker. When the decisions to be made affect the quality of life of millions of people, it is imperative to guarantee the fairness of the decision-making process: i.e., that decisions are made without discriminating against people on the basis of sensitive attributes such as gender, sexual orientation, religion, etc.

Fair-Networks are an attempt to mitigate the problem of using Neural Networks in such contexts: by explicitly modelling a preference for not using the sensitive attributes, we build networks that construct representations of the data from which any information about the sensitive attributes (either direct or mediated) is eliminated. By building classifiers on top of these representations, one can guarantee that better, fairer decisions will be made.
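The abstract does not spell out the training mechanism, but one common way to realise this kind of constraint is adversarial representation learning: an encoder produces the representation, a task head makes the main prediction, and an adversarial head tries to recover the sensitive attribute, with its gradients reversed so that the encoder learns to hide that information. The following is a minimal PyTorch sketch under that assumption; all names (FairRepresentationNet, GradientReversal, training_step) and hyperparameters are illustrative and are not taken from the Fair-Networks work itself.

    import torch
    import torch.nn as nn

    class GradientReversal(torch.autograd.Function):
        """Identity on the forward pass; negates (and scales) gradients on the backward pass."""
        @staticmethod
        def forward(ctx, x, lambd):
            ctx.lambd = lambd
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad_output):
            return -ctx.lambd * grad_output, None

    class FairRepresentationNet(nn.Module):
        """Encoder -> task head, plus an adversarial head that tries to recover the
        sensitive attribute from the shared representation through gradient reversal."""
        def __init__(self, n_features, n_hidden=32, n_classes=2, n_sensitive=2, lambd=1.0):
            super().__init__()
            self.lambd = lambd
            self.encoder = nn.Sequential(nn.Linear(n_features, n_hidden), nn.ReLU())
            self.task_head = nn.Linear(n_hidden, n_classes)
            self.sensitive_head = nn.Linear(n_hidden, n_sensitive)

        def forward(self, x):
            z = self.encoder(x)                                  # shared representation
            y_logits = self.task_head(z)                         # main prediction
            s_logits = self.sensitive_head(
                GradientReversal.apply(z, self.lambd))           # adversary sees reversed gradients
            return y_logits, s_logits

    def training_step(model, optimizer, x, y, s):
        """One optimisation step: fit the task while pushing the encoder to hide s."""
        criterion = nn.CrossEntropyLoss()
        y_logits, s_logits = model(x)
        loss = criterion(y_logits, y) + criterion(s_logits, s)
        optimizer.zero_grad()
        loss.backward()      # gradient reversal makes the encoder work against the adversary
        optimizer.step()
        return loss.item()

In such a setup, the lambd coefficient would control how strongly the encoder is pushed to remove sensitive information, trading some predictive accuracy for a fairer representation; a classifier trained only on the resulting representation then has little direct or mediated access to the sensitive attributes.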

