Promoting transparency in the realm of fair machine learning.
Abstract
Machine learning is increasingly used for high-stakes decisions in areas such as healthcare, finance and justice. It is therefore vital that its effect on society is aligned with our ethical objectives. Many examples of biased and discriminatory machine learning systems exist; to avoid such scenarios, we need to understand why AI systems make the decisions they do and to ensure they are fair. Within this postdoctoral project, I have two main objectives. First, I want to develop methods to detect bias in machine learning models trained on behavioral data, a type of data that is largely unexplored in the fairness domain. Second, I aim to provide transparency into the workings of bias mitigation methods. These are techniques deployed to eliminate biases in machine learning models, but they show a high level of arbitrariness, which introduces an additional layer of unfairness. Lastly, I will validate my findings on use cases in various application domains, demonstrating the widespread relevance of these methods.
Researcher(s)
- Promoter: Martens David
- Fellow: Goethals Sofie
Research team(s)
Project type(s)
- Research Project
Explaining prediction models to address data science ethics in business and society.
Abstract
Artificial Intelligence (AI) is having an increasingly large impact on society and is already used in several high-stakes decision domains such as finance, justice and healthcare. This makes it highly important to ensure that the decisions of an AI system are aligned with ethical objectives. In our research, we will focus on two ethical aspects and link them with Explainable AI, the field of AI concerned with how well decisions can be understood by humans. The two aspects we will focus on are: 1) fairness: ensuring that the model does not discriminate against any sensitive group (for example, women or a particular ethnic group), and 2) privacy: ensuring that the personal data of data subjects are kept safe and that subjects cannot be identified against their will. The main contribution of our research will be to develop new methodologies that address these ethical issues when using Explainable AI. In the last phase, we will validate our findings and methodology through use cases in HR analytics (predicting suitable job candidates) and credit scoring (predicting default).
Researcher(s)
- Promoter: Martens David
- Fellow: Goethals Sofie
Research team(s)
Project type(s)
- Research Project
Explaining prediction models to address data science ethics in business and society
Abstract
Artificial Intelligence (AI) is having an increasingly large impact on society and is already used in several high-stakes decision domains such as finance, justice and healthcare. This makes it highly important to ensure that the decisions of the AI system are aligned with ethical objectives. In my research, I will focus on the ethical areas of transparency, fairness and privacy. Transparency relates to how well the AI model and its predictions can be understood by individuals. Fairness of an AI model deals with not discriminating against any sensitive group (for example, women or a particular ethnic group), while privacy requires respect for personal data. I will link these ethical aspects with the field of Explainable AI, Counterfactual Explanations in particular, and with several validation domains in business, such as tax fraud, HR analytics and credit scoring. The main contribution of my research will be to develop new methodologies that address and validate these ethical issues when using Explainable AI.
Researcher(s)
- Promoter: Martens David
- Fellow: Goethals Sofie
Research team(s)
Project type(s)
- Research Project