Arif Kornweitz contributes a paper to the symposium From Hype to Reality: Artificial Intelligence in the Study of Art and Culture.

20 – 21 April 2023
Digital Society Initiative
University of Zurich
and online: registration


The two-day symposium brings together scholars and artists with experience of working at the intersection of disciplines such as digital humanities, digital art history, cultural and media studies, digital visual studies, deep learning and computer vision. Arif Kornweitz contributes a talk on the following.

AI and the Concept of the Work of Art

Two approaches to the ‘work’ appear to collide in the confluence of AI and the arts. If artworks today are decomposed into features and generated from that same vector space, what kind of concept of the work is at play in AI? And how does it compare to conceptions of the work in the arts?

Today, artefacts (e.g. a piece of music) are classified by machine learning models trained on a certain canon, for example a set drawn from a specific genre or category (e.g. classical music) or from an oeuvre (i.e. the work of one artist). Artists may then use those models as part of their process, for example to generate compositions or even audio files. But which parts of a work, exactly, are read and abstracted into a machine learning model?

The process of abstraction may include classifying an artefact (e.g. a sound file) along with additional data about its context and its style. Following Peli Grietzer’s ‘theory of vibe’, style can be understood as ‘an abstractum that cannot be separated from its concreta’. The abstraction of the work in feature space appears to presuppose that the work is a stable, concrete thing. But a work is rarely stable; it is open to re-interpretation in different contexts by different audiences.

The current regime of machine learning deploys a superficial conception of the work, that is, one geared towards the surface. It is a taxonomical regime in the technical sense, as it is designed to classify and generate artefacts based on specific features in a multidimensional space bound by its number of inputs. To be processed by a machine learning model, the work is treated as a flat artefact. It is processed as it is, not as it might be.

To understand what kind of notion of the work is being mobilised when artworks are classified and generated by AI, this talk draws on philosopher Lydia Goehr’s writings on the history of the (musical) work, a concept that emerged at the end of the 18th century. It was deployed to evaluate performances of music according to the composer’s intent and historical accuracy. The work concept has subsequently been challenged over the last century and today can be said to include much more than the artefact at hand, rendering the work fluid and contingent on context.

Does AI then deploy a similarly archaic concept of the work by flattening it? Yes, but it does not render the work itself unambiguous. As philosopher Sybille Krämer points out, flattening is to be understood as a cultural technique that allows artefacts to be read and interpreted. While flattening momentarily fixes an abstractum, its dialectical relations to its concreta are what the task of interpretation grapples with.

Goehr, L. (1992). The Imaginary Museum of Musical Works: An Essay in the Philosophy of Music. Clarendon Press.

Grietzer, P. (2017). A Theory of Vibe. Glass Bead.

Krämer, S. (2023). Should we really ‘hermeneutise’ the Digital Humanities? A plea for the epistemic productivity of a ‘cultural technique of flattening’ in the Humanities. Journal of Cultural Analytics, 7.