Lecture by Michael Jonik (2018-2019 Paris IAS fellow) within the framework of the lecture series "Sciences in Context", organized by CRI and Paris IAS
Sciences in Context is a new public lecture series that aims to bring concepts and perspectives from the frontiers of the humanities to the CRI community.
Each lecture will take place on the last Tuesday of the month, featuring fellows from CRI and from the Institut d'Études Avancées (IEA) de Paris.
Lecture topics will be discussed in an open session of the Practical Philosophy Club on the Friday before each lecture, to facilitate an active and participatory discourse with the invited speaker.
As we enter an unprecedented moment of automation, in which forms of computational thinking and artificial intelligence increasingly affect how people live, work, communicate, travel, and understand themselves, Kant’s enlightenment motto “Sapere aude!” – “Dare to think for yourself!” – seems to have transformed into a new imperative for thinking: “Let the algorithms think for us!” Yet, just as Kant’s dare entailed the risk of breaking free from self-incurred forms of heteronomy, the wager for thought today is to understand how modes of human critical thinking are multifariously imbricated in, and shaped by, systems of nonhuman thinking. Perhaps the question “what is called thinking?” should no longer be posed from the perspective of an autonomous human subject, but rather through an analysis of co-emergent, complex, and adaptive assemblages of biological and technical cognizers: what could be called “cognitive ecologies.” And, as if this were not complicated enough, these cognitive ecologies are themselves part of a constellation of ever more extensive methods of psychological and social control: in Luciana Parisi’s terms, “an apparatus of governance operating not only on bodies, but through the datafication of biological, physical and cultural specificities.”
Rather than dissolving or displacing the question of human thinking altogether, however, I will argue that these conditions make understanding thinking ever more urgent, if also more complex. On the one hand, forms of technical cognition, big data, and computational processing both augment and extend contemporary modes of research and knowledge production in the sciences and humanities, and open new horizons for analytical and synthetic thinking. At the same time, however, I want to explore how the use of algorithms and machine learning – as in automated or synthetic biology, bioinformatics, or neural data analysis, to take just a few examples – also encourages a “biotechnopositivism,” which draws uncritically on modes of data-based statistical analysis and prediction while treating risk as something not only to be anticipated or mitigated, but indeed structured and streamed. How could thinking, at the same time that it is prosthetic, deeply relational, and reliant on forms of technical cognition and mediation, also avoid instrumentalization or data-fetishism? What would a risky and critical thinking be beyond the purview of profit-based innovation, digital control, or the positivism of data: a thinking of the unpredictable, the indeterminate, the incomputable? It is especially important now for scholars in the humanities to enter into candid and mutually informing dialogues with peers in the social sciences, informatics and engineering, psychology, and the life sciences in order to confront these questions, and to create a new critical thinking for new critical times.