
Patrick Haggard

Professor
University College London
Responsibility in the Age of Intelligent Systems
01 September 2020 - 31 December 2020
Neuroscience

"Jean d’Alembert"  Paris-Saclay – IEA de Paris Research Chair

Patrick Haggard studies the cognitive and neural mechanisms of human voluntary action. He completed his PhD at Cambridge University and a postdoctoral fellowship at Oxford University. He joined the Psychology Department of University College London in 1995, and now leads a research group at UCL's Institute of Cognitive Neuroscience. His research has been funded by several national and international agencies, including the European Research Council. He recently published an article in the Annual Review of Psychology, "The Neurocognitive Bases of Human Volition" (2019).

Research Interests

Human action and agency, responsibility, bodily sensation, self-awareness.

Responsibility in the Age of Intelligent Systems

All human societies have some concept of responsibility for action, which typically functions as the basis of social and moral order. Responsibility implies that people are aware of their actions and choose to perform them; it is therefore a psychological concept, as well as a sociological and legal one. Human ingenuity is enabling ever greater automation of decision-making, so that events that matter to people are increasingly decided by AI systems that lack awareness. This situation creates a pressing challenge for our mental life, and for the well-being of society.

My project focuses specifically on the role of explainability and fixability in how we compute responsibility, both for human agents and for machines. Studies with humans show that the brain mechanisms for learning and adjusting our actions contribute to our sense of agency, and thus make the attribution of responsibility possible. This project investigates how explainability and fixability appear in current AI systems, such as deep learning neural networks. What is the relation between the explainability of errors, fixability, and responsibility? For example, when an autonomous vehicle runs someone over and kills them, we want to identify who is responsible; but what role should the explainability and fixability of the system play in this process? We are able to cohabit with other people largely because we can explain their behaviour, and they can explain ours. This project investigates how basic human cognitive abilities will guide our future cohabitation with intelligent machines.

Lecture by Patrick Haggard, 2020-2021 Paris IAS Research Fellow, as part of the Sorbonne Université SCAI programme:
"Responsibility for intelligent machines: a cognitive approach"
25 Nov 2020, 18:00 - 19:30