
Patrick Haggard

Professor
University College London
Responsibility in the Age of Intelligent Systems
01 January 2020 - 31 July 2020
Neuroscience

"Jean d’Alembert"  Paris-Saclay – IEA de Paris Research Chair

Patrick Haggard studies the cognitive and neural mechanisms of human voluntary action. He completed his PhD at Cambridge University and a postdoctoral fellowship at Oxford University. He joined the Psychology Department of University College London in 1995 and now leads a research group at UCL's Institute of Cognitive Neuroscience. His research has been funded by several national and international agencies, including the European Research Council. He recently published an article in the Annual Review of Psychology, "The Neurocognitive Bases of Human Volition" (2019).

Research Interests

Human action and agency, Responsibility, Bodily sensation, Self-awareness

Responsibility in the Age of Intelligent Systems

All human societies have some concept of responsibility for action, which typically functions as the basis of social and moral order. Responsibility implies that people are aware of their actions and choose to perform them; it is therefore a psychological concept as well as a sociological and legal one. Human creativity is allowing increasing automation of decisions, so that events that matter to people are decided by AI systems that lack awareness. This situation creates a pressing challenge for our mental life and for the well-being of society. My project focuses specifically on the importance of explainability and fixability in how we compute responsibility, both for human agents and for machines. Studies with humans show that the brain mechanisms for learning and adjusting our actions contribute to our sense of agency, and thus allow attribution of responsibility. This project investigates how explainability and fixability appear in current AI systems, such as deep learning neural networks. What is the relation between explainability of errors, fixability, and responsibility? For example, when an autonomous vehicle runs someone over and kills them, we want to identify who is responsible, but what role should the explainability and fixability of the system play in this process? We are able to cohabit with other people largely because we can explain their behaviour and they can explain ours. This project investigates how basic human cognitive abilities will guide our future cohabitation with intelligent machines.


Lecture by Patrick Haggard, 2019-2020 Paris IAS Fellow, as part of the "Sciences in Context" program, organized by CRI and the Paris IAS
25 Feb 2020, 18:30 - 20:00, Paris:
Sciences in Context: Agency and responsibility in humans and intelligent machines
