Over the last few years we have been witnessing a shift in the conception of artificial intelligence, in particular with the explosion of machine learning technologies. These largely hidden systems determine how data is gathered, analyzed, and presented or used for decision-making. Neither the data nor the way it is handled is neutral; both are full of ambiguity and presumptions, which implies that machine learning algorithms are constantly fed with biases that mirror our everyday culture. What we teach these algorithms ultimately reflects back on us, and it is therefore no surprise when artificial neural networks start to classify and discriminate on the basis of race, class, and gender. (Widely reported cases — recommendation systems that were less likely to show women well-paid job offers, an algorithm that labelled pictures of people of color as gorillas, or a delivery service that automatically excluded neighborhoods in big US cities inhabited mainly by African Americans and Hispanics — show how algorithmic classification can restructure the life chances of individuals and groups in society.) However, classification is an essential component of artificial intelligence, insofar as the whole point of machine learning is to distinguish 'valuable' information within a given set of data. By imposing identity on input data in order to filter it, that is, to differentiate signal from noise, machine learning algorithms become a highly political issue. The crucial question in relation to machine learning therefore is: how can we systematically classify without being discriminatory?
- Reflections dealing with theoretical (re-)conceptualisations of what artificial intelligence is and should be. What history do the terms artificiality, intelligence, learning, teaching and training have and what are their hidden assumptions? How can human intelligence and machine intelligence be understood and how is intelligence operationalised within AI? Is machine intelligence merely an enhanced form of pattern recognition? Why do ’human’ prejudices re-emerge in machine learning algorithms, allegedly devised to be blind to them?
- Implications focusing on the making of artificial intelligence. What kinds of data analysis and algorithmic classification are being developed, and what are their parameters? How do these decisions get made, and by whom? How can we hold algorithms accountable? How can we integrate diversity, novelty, and serendipity into the machines? How can we filter information out of data without reinserting racist, sexist, and classist beliefs? How is data defined in the context of specific geographies? Who becomes classified as a threat according to algorithmic calculations, and why?
- Imaginaries revealing the ideas shaping artificial intelligence. How do pop-cultural phenomena reflect the current reconfiguration of human-machine relations? What can they tell us about the techno-capitalist unconscious? In what ways can artistic practices address the current situation? What can we learn from historical examples (e.g. in computer art, gaming, music)? What would a different aesthetic of artificial intelligence look like? How can we make the largely hidden processes of algorithmic filtering visible? How can we think of machine learning algorithms beyond accuracy, efficiency, and homophily?
If you would like to submit an article or another contribution, in particular an artistic one (music, sound, video, etc.), to the issue, please get in touch with the editorial collective (contact details below) as soon as possible. We would be grateful if you would submit a provisional title and short abstract (max. 250 words) by 15 May 2018. We may have questions or suggestions that we raise at this point. Otherwise, final versions of articles and other contributions should be submitted by 31 August 2018. They will undergo review in accordance with the peer review process (see About spheres). Any requested revisions will need to be completed in time for the issue to be published in Winter 2018.
Inga Luchs: email@example.com