CLOSED PhD student: Fairness in Machine Learning
We are hiring PhD student(s) for a project on fairness-aware machine learning, in close collaboration with one of the largest insurance companies in the world.
Requirements for the PhD student:
- A Master's degree in Computer Science or a related discipline;
- Good knowledge of AI, machine learning, and data mining techniques;
- An excellent academic record.
Below you can find a short description of the research project. The project will commence in early 2020 (the starting date is flexible, depending on the availability of the PhD candidates). Funding for PhD studies for up to 4 years is foreseen. Interested candidates should contact toon.calders@uantwerpen.be. In your application, please include the following elements:
- A motivation letter explaining your interest in the topic of fairness in machine learning;
- Your CV;
- The (partial, if ongoing) grade transcript of your Master's studies;
- Contact details of two references;
- Optionally, a copy of the report/paper for which you were the main author and which, in your opinion, best characterizes your research abilities. This may be unrelated to the topic of the research position.
Context:
Artificial intelligence is increasingly responsible for decisions that have a major impact on our lives. Predictions made using data mining and algorithms, however, can affect population subgroups differently. Academic researchers and journalists have shown that decisions taken by predictive algorithms sometimes lead to biased outcomes, reproducing inequalities already present in society. Is it possible to make the data mining process fairness-aware? Are algorithms biased because people are? Or is bias inherent to how machine learning works at the most fundamental level?
Short project description:
In contemporary society we are continuously being profiled: banks divide people into groups according to credit risk, insurance companies profile clients for accident risk, telephone companies profile users based on their calling behavior, and web corporations profile users according to their interests and preferences inferred from web activity and visitation patterns. Such profiles may be inherently discriminatory. The European Union has some of the strongest anti-discrimination legislation, covering discrimination based on race, ethnicity, religion, nationality, gender, sexuality, disability, marital status, genetic features, language, and age. The recent General Data Protection Regulation (GDPR; Regulation (EU) 2016/679) explicitly addresses profiling (Art. 22 GDPR, "Automated individual decision-making, including profiling"): data subjects have the right not to be subject to decisions based solely on automated processing, and suitable measures must be in place to safeguard the data subject's rights, freedoms, and legitimate interests.
Nowadays, many decision-making procedures are, at least partially, automated and often based on data mining. In general, however, these techniques do not take anti-discrimination legislation into account and may unintentionally produce models that are unfair and hence do not safeguard the data subject's rights and freedoms, as required under the GDPR. A further complication is that detecting whether a model is unfair is often highly non-trivial, especially when sensitive attributes such as ethnicity are unavailable for large parts of the target population. In the future, companies may be held accountable for deploying unfair decision procedures, even if they do so unintentionally. A recent Nature editorial put it as follows: "Largely absent from the widespread use of such algorithms [for profiling in advertising, credit, insurance] are the rules and safeguards that govern almost every other aspect of life in a democracy: adequate oversight, checks and balances, appeals, due process, and the right to have past offences removed from records after a statutory time." (Nature Editorial, 2016)
Because of this, fairness-aware machine learning, also known as discrimination-aware data mining, has recently become a hot topic in the research community. It addresses the issue by developing techniques that allow for profiling while respecting the fundamental rights of the data subjects.
The goal of this project is twofold: (1) to develop measures of fairness with respect to gender and racial discrimination that are useful in the operational context of the involved insurance company, and (2) to measure the fairness of existing decision procedures and of models generated by state-of-the-art machine learning methods.
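
To give a concrete flavour of what such fairness measures can look like, the short Python sketch below computes two commonly used group-fairness statistics, the statistical (demographic) parity difference and the disparate impact ratio, for the binary decisions of a hypothetical model. This is illustrative only and not part of the project description; all data and variable names are made up for the example.

import numpy as np

def statistical_parity_difference(y_pred, group):
    """Difference in positive-decision rates: P(y=1 | group=0) - P(y=1 | group=1).

    y_pred : array of 0/1 model decisions
    group  : array of 0/1 sensitive-attribute values (hypothetical encoding:
             1 = protected group, 0 = unprotected group)
    """
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return y_pred[group == 0].mean() - y_pred[group == 1].mean()

def disparate_impact_ratio(y_pred, group):
    """Ratio of positive-decision rates: P(y=1 | group=1) / P(y=1 | group=0)."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return y_pred[group == 1].mean() / y_pred[group == 0].mean()

# Hypothetical example: decisions of a model on eight applicants.
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups    = [0, 0, 0, 0, 1, 1, 1, 1]
print(statistical_parity_difference(decisions, groups))  # 0.75 - 0.25 = 0.5
print(disparate_impact_ratio(decisions, groups))         # 0.25 / 0.75 = 0.33...

A ratio close to 1 (or a difference close to 0) means that both groups receive positive decisions at similar rates. Which measure, which threshold, and which sensitive attributes are appropriate in practice are exactly the kinds of questions the project would investigate in the insurer's operational context.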