A conversation with Kader Attia, artist and curator of the Berlin Biennale 2022, and Matteo Pasquinelli, professor of Media Philosophy at HfG Karlsruhe, as part of the course The Political Archeology of Data, moderated by Arif Kornweitz, PhD student at HfG Karlsruhe.

Date: 26 January 2022, 10am. Online.
To register and receive the link write to:
Yannick Fritz yfritz@hfg-karlsruhe.de

Almost a decade after the rise of deep neural networks and machine learning (ca. 2012), it is time to take stock of their cultural impact and to discuss what kind of knowledge model this new form of AI and data economy has come to embody and reinforce. Aside from its actual technical achievements, contemporary AI has already had an ideological impact by imposing the idea that a machine can perfectly imitate human skills such as “memory”, “intelligence”, “learning” and “perception.”

This process of anthropomorphisation and naturalisation of technology is, however, also occurring at a different level: notions such as “norm”, “error” and “anomaly”, for instance, are increasingly given mathematical definitions and uncritically translated into social ones. Meanwhile, large repositories of knowledge and cultural heritage, from public libraries and museum collections to private pictures and social media posts, are being turned into training datasets for AI models in a novel process of knowledge extractivism. At a deeper and more subtle level, the use of machine learning implicitly imposes a new reductionist view of science as much as of society and culture, replacing the old episteme of causal explanations with an episteme of correlations.

This is a new form of colonisation of the mind, education, public institutions, social relations, collective memory and also nature, one that, this time, imposes not a rationalist and mechanistic view of the world as in the modern age, but a statistical and algorithmic one for the purpose of economic profit (variously termed surveillance capitalism, digital capitalism, platform capitalism, etc.).

It is known that machine learning can be a beneficial instrument in some applications, and that its “intelligence” emerges from the imitation of the intelligence of our collective behaviours — we know this very well — but we should also not overlook those fields of knowledge production that cannot or do not want to adapt to this model of epistemology. What is the role of art, education and community practices, of alternative ways of thinking, in the face of the growing hegemony of algorithmic thinking? Or, to put it another way: what happens to learning, to all the practices and institutions of education, in the age of machine learning?