A Cross-Case Analysis of the Representation of Victims-Protagonists in Truth Commission Narratives Using LLMs with Human-in-the-Loop Evaluation

05 Sep 2025, 17:40 - 18:20
[ ONLINE ]

New session of the "Paris IAS Ideas" online talk series, with the participation of Tine Destrooper (Ghent University) and Jef De Slegte (Vrije Universiteit Brussel, Belgium), fellows for two months in the PostGenAI@Paris programme.

The "Paris IAS Ideas" online talk series features short, stimulating presentations by fellows of the Paris Institute for Advanced Study at the beginning of their writing residencies.

The PostGenAI@Paris programme is coordinated by Sorbonne Université. The Paris IAS welcomes international researchers and supports their research on artificial intelligence, its impact on our societies, and the prospects it offers for the future.

Online only. Free registration.
Register via the form at the bottom of the page to receive the connection link.

Presentation

Tine Destrooper is studying how the power of AI can be leveraged within the framework of critical transitional justice (TJ) scholarship. Drawing on critical TJ studies, the project examines the conditions that must be met for machine learning methods to be integrated into empirical research on transitional justice, in particular so that these methods remain consistent with TJ's normative orientation towards more just and inclusive societies. Based on a test case using existing databases, the project explores the challenges and opportunities both within and beyond research on transitional justice and human rights.

Jef De Slegte is investigating how to quantify intersectional disparities in outcomes using causal machine learning. The project explores how such strategies can be better integrated into political science, particularly in the field of trustworthy AI. State-of-the-art causal machine learning methods are evaluated for their ability to detect and quantify unfair outcomes, particularly those influenced by a protected attribute (e.g., race, gender, or age), in observational data where disparities in outcomes are observed, or in data containing decisions by an institution or automated system deemed unfair. The focus is on transposing intersectionality to these methods: examining how multiple protected attributes jointly shape disparities in outcomes, which is crucial from the perspective of fairness and inclusivity in AI. Finally, the project aims to develop an approach for making equitable predictions, which involves exploring ways to recalibrate unfair outcomes based on the available data so as to obtain the results that fair decision-making within the institution or automated system would have produced.
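The talk itself concerns causal methods, but the descriptive core of the idea, measuring outcome disparities at the intersection of several protected attributes rather than for each attribute separately, can be illustrated in a few lines. The sketch below is purely illustrative and non-causal; all data, attribute values, and function names are hypothetical, not taken from the project:

```python
from collections import defaultdict

# Hypothetical records: (gender, race, favorable_outcome), 1 = favorable
records = [
    ("F", "A", 1), ("F", "A", 0), ("F", "B", 0), ("F", "B", 0),
    ("M", "A", 1), ("M", "A", 1), ("M", "B", 1), ("M", "B", 0),
]

def intersectional_rates(records):
    """Favorable-outcome rate for each intersection of protected attributes."""
    totals = defaultdict(lambda: [0, 0])  # group -> [favorable count, group size]
    for gender, race, y in records:
        group = (gender, race)
        totals[group][0] += y
        totals[group][1] += 1
    return {group: fav / n for group, (fav, n) in totals.items()}

rates = intersectional_rates(records)
overall = sum(y for *_, y in records) / len(records)

# Disparity: gap between each intersectional group's rate and the overall rate.
# Groups disadvantaged on both attributes can show gaps that single-attribute
# analysis (gender alone, or race alone) would average away.
disparity = {group: rate - overall for group, rate in rates.items()}
```

A causal analysis, as described above, would go further and ask how much of each gap is attributable to the protected attributes themselves rather than to confounding covariates.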

Past event

Leveraging the power of AI within the framework of critical TJ scholarship
01 September 2025 - 31 October 2025

Quantifying intersectional disparities in outcome using causal machine learning
01 September 2025 - 31 October 2025

Speakers: Jef De Slegte, Tine Destrooper

Lecture series