Authors: Mattia Cerrato, Roberto Esposito
The recent surge of interest in Deep Learning (motivated by its exceptional performance on long-standing problems) has made Neural Networks a very appealing tool for many actors in our society. Companies that traditionally used more explainable tools are nowadays considering Neural Networks to attack the problems they face every day. One problem with this shift of interest is that Neural Networks are very opaque objects, and more often than not the reasons underlying the results they provide are unclear to the final decision maker. When the decisions to be made impact the quality of life of millions of people, it is imperative to guarantee the fairness of the decision-making process: i.e., that the decision is made without discriminating against people on a number of sensitive attributes such as gender, sexual orientation, religion, etc.
Fair-Networks are an attempt to mitigate the problems of using Neural Networks in such contexts: by explicitly modelling a preference for not using the sensitive attributes, we build networks that construct representations of the data from which any information about sensitive attributes (either direct or mediated) is eliminated. By building classifiers on top of those representations, one can guarantee that better, fairer decisions will be made.
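As a minimal sketch of the underlying idea (removing sensitive information from a representation), the snippet below residualizes the features on a binary sensitive attribute via least squares, so the resulting representation is linearly uncorrelated with it. This is only an illustrative toy, not the Fair-Networks architecture itself, which learns such representations with a neural network; all variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 5
s = rng.integers(0, 2, size=n).astype(float)    # binary sensitive attribute
X = rng.normal(size=(n, d)) + 0.8 * s[:, None]  # features that "leak" s

# Remove the linear component of X that is predictable from s:
# regress every feature on [1, s] and keep only the residuals.
S = np.column_stack([np.ones(n), s])
beta, *_ = np.linalg.lstsq(S, X, rcond=None)
Z = X - S @ beta                                # "fair" representation

# Each column of Z is now (numerically) uncorrelated with s.
corr = np.array([np.corrcoef(Z[:, j], s)[0, 1] for j in range(d)])
print(np.allclose(corr, 0.0, atol=1e-8))        # → True
```

A neural encoder can eliminate *nonlinear* and mediated dependencies as well, which is precisely what a purely linear residualization like this cannot do.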