Jef De Slegte
Jef de Slegte is currently a data scientist and researcher at the Data Analytics Laboratory of the Vrije Universiteit Brussel. Before moving into academia, he spent 10 years in the private sector as a telecommunications and high-tech expert. His interdisciplinary academic work lies at the intersection of machine learning and political science. On the methodological side, he studies how causal inference can be integrated with machine learning to build trustworthy models, with the aim of adding explainability and fairness layers and applying these methods to empirical research in political science. He examines how causal machine learning can be used to identify and quantify causal effects in electoral studies.
Jef de Slegte joins the Paris IAS in September-October 2025 as part of the PostGenAI@Paris program supported by Sorbonne Université. The Paris IAS welcomes international researchers working on artificial intelligence, its consequences for societies, and its perspectives for the future.
Research topics
Artificial intelligence; machine learning; political science.
Quantifying intersectional disparities in outcome using causal machine learning
Over the past decade, political scientists have increasingly adopted machine learning methods. With data sources multiplying (debate transcripts, social media, surveys, geospatial and event records), these methods are appealing because they expand the volume of information that can be analyzed. Their algorithmic logic, however, while enabling new and still under-utilized forms of inference, is driven by associational relations rather than causal reasoning, which potentially limits their relevance for the social and political sciences, where causal reasoning is a crucial component of theorization. Algorithms can nevertheless be complemented with additional strategies that enable causal inference, notably by making complex interactions less opaque and thereby more theoretically informative for the social sciences.
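One widely used strategy of this kind is double (debiased) machine learning, in which flexible learners first model the relationships between confounders and both the treatment and the outcome, and a causal effect is then estimated from the residuals. The sketch below is a minimal illustration of that general idea on simulated data; it is not drawn from the project itself, and all variable names, data and model choices are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

# Simulated observational data: X are confounders, T a binary "treatment"
# (e.g. exposure to a campaign contact), Y an outcome (e.g. turnout propensity).
rng = np.random.default_rng(0)
n = 5_000
X = rng.normal(size=(n, 5))
T = (X[:, 0] + rng.normal(size=n) > 0).astype(float)
Y = 0.5 * T + X[:, 0] + 0.3 * X[:, 1] + rng.normal(size=n)

# Stage 1: flexible ML models predict the outcome and the treatment from the
# confounders, with cross-fitting to limit overfitting bias.
y_hat = cross_val_predict(RandomForestRegressor(n_estimators=200), X, Y, cv=5)
t_hat = cross_val_predict(RandomForestRegressor(n_estimators=200), X, T, cv=5)

# Stage 2: regress outcome residuals on treatment residuals; the slope is a
# debiased estimate of the average treatment effect (true value here: 0.5).
y_res, t_res = Y - y_hat, T - t_hat
ate = np.sum(t_res * y_res) / np.sum(t_res ** 2)
print(f"Estimated average treatment effect: {ate:.3f}")
```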
The project explores how the integration of such strategies can be strengthened to increase the relevance of machine learning methods in political science, specifically in the domain of trustworthy AI. State-of-the-art causal machine learning methods are assessed with the aim of detecting and quantifying unfair outcomes, notably those influenced by a protected attribute (e.g. race, gender, age), whether found in observational data where disparities in outcome are present or in data that include decisions, deemed unfair, made by an institution or automated system. The focus is on translating intersectionality to these methods and examining how multiple protected attributes jointly shape disparities in outcome, which is crucial from the perspective of fairness and inclusivity in AI. Lastly, the project considers developing an approach to fair prediction: exploring ways to recalibrate unfair outcomes on the basis of the data at hand, so as to obtain outcomes as they would be if fair decision making were in place within the institution or automated system.
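As a purely illustrative sketch of quantifying intersectional disparities, and not the project's actual framework, the snippet below contrasts raw turnout gaps across intersections of two hypothetical protected attributes with covariate-adjusted gaps obtained by simple standardization (g-computation); the data and all names are invented.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

# Toy observational data with two hypothetical protected attributes.
rng = np.random.default_rng(1)
n = 10_000
df = pd.DataFrame({
    "gender": rng.integers(0, 2, n),
    "age_group": rng.integers(0, 3, n),
    "education": rng.normal(size=n),
    "income": rng.normal(size=n),
})
logit = (-0.5 + 0.4 * df["gender"] - 0.3 * (df["age_group"] == 0)
         + 0.5 * df["education"] + 0.3 * df["income"])
df["turnout"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Raw intersectional disparities: observed turnout rate per gender x age_group cell.
raw = df.groupby(["gender", "age_group"])["turnout"].mean()

# Adjusted disparities via standardization (g-computation): fit an outcome model,
# then predict turnout for the whole sample as if every unit belonged to a given
# intersectional group, holding the remaining covariates fixed.
features = ["gender", "age_group", "education", "income"]
model = GradientBoostingClassifier().fit(df[features], df["turnout"])

adjusted = {}
for g in (0, 1):
    for a in (0, 1, 2):
        counterfactual = df[features].copy()
        counterfactual["gender"] = g
        counterfactual["age_group"] = a
        adjusted[(g, a)] = model.predict_proba(counterfactual)[:, 1].mean()

print("Raw rates:\n", raw, "\nAdjusted rates:\n", pd.Series(adjusted), sep="")
```

Comparing the raw and adjusted cell means indicates how much of an observed intersectional gap persists after accounting for the other covariates in this toy setup.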
Key publications
Jef de Slegte, Filip Van Droogenbroeck, Bram Spruyt, Andres Algaba. "Quantifying Voter Turnout Disparities using a Novel Causal Machine Learning Framework", 2025 (forthcoming).
Jef de Slegte, Filip Van Droogenbroeck, Bram Spruyt, Sam Verboven, Vincent Ginis. "The Use of Machine Learning Methods in Political Science: An In-Depth Literature Review", Political Studies Review, 2024.
DOI: https://doi.org/10.1177/14789299241265084
Jef de Slegte, Jacqueline Höllig, Aniek F. Markus, Prachi Bagave. "Evaluating Counterfactual Approaches for Real-World Plausibility and Feasibility", Explainable Artificial Intelligence, 2023.