What does it mean to hijack the latent space of a computation? Not just any computation, but a cognitive act about which we can say: a decision was made, an inference took place here. First there is the admission of a latent space itself, an interiority – the assertion that an algorithm could develop, in the language of Kant, an ‘inner sense’. Then there are the implications – that computation could maintain its own myriad languages of thought, as Fodor once proposed; that its acts are not performances for us, and may not be performances at all; that they may ultimately be unintelligible to human reason.
By latent space we denote the tensorial data passed between the inner layers of an Artificial Neural Network (ANN). Tensors are high-dimensional representations of input data, structures that are reshaped as they flow through the net. Reshaping here refers to dimensional plasticity: moving fluidly between representations of an input space, learning higher-level embeddings of the data – in short, engaging in acts of multi-level abstraction.
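As a minimal sketch of this plasticity (assuming PyTorch; the architecture, layer widths, and dummy input below are illustrative choices, not anything specified in the text), one can watch a single input being re-represented as it passes through a small network:

```python
# A minimal sketch of 'latent space': the intermediate tensors produced
# between the layers of a small network. Illustrative only.
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Flatten(),         # reshape a 28x28 input into a 784-vector
    nn.Linear(784, 128),  # first embedding: 784 -> 128 dimensions
    nn.ReLU(),
    nn.Linear(128, 32),   # higher-level embedding: 128 -> 32 dimensions
    nn.ReLU(),
    nn.Linear(32, 10),    # output logits over 10 classes
)

h = torch.randn(1, 28, 28)  # a dummy input "image"
for layer in net:
    h = layer(h)
    print(type(layer).__name__, tuple(h.shape))
# Each printed shape is a snapshot of the latent space: the same datum
# re-represented at successively higher levels of abstraction.
```

In practice one would register forward hooks on the layers of interest rather than iterate over them by hand, but the explicit loop makes the reshaping visible.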
The crisis of explication that characterises contemporary AI can be viewed either as a semantic limitation proper to the ANN model – its inability to attain the requisite level of concept formation – or as a more fundamental limitation on the order of linguistic correspondence, a problem arising from the act of mapping human concepts to this latent space. A non-correspondence of vocabularies is merely the observation that no necessary bijection exists between language sets. The Inuit may create n words for snow, just as ‘umami’ may only be translatable through analogy, but this fact alone should not prompt a descent into relativism. An altogether stronger claim is at stake here, namely the incommensurability of cognitive acts mediated by diverse languages, a principle first proposed by Feyerabend in historical form, and subsequently critiqued by Putnam.
A computational theory of mind can quickly converge on its own claim of incommensurability, with implications for the epistemic status of inferences made by AI. In this account, reason is modeled as a set of linguistic statements, a ‘canon’ of every sensical alethic statement in a language. The assumption here is that reason can be modeled as a formal grammar R, given by the tuple:

R = (N, Σ, P, S)

where N is a finite set of non-terminal symbols, Σ a terminal vocabulary, P a set of production rules, and S a start symbol.
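To make the construction concrete, a toy sketch follows (the vocabulary and production rules are invented for illustration and are not the author's; any realistic R would be vastly larger). Such a grammar can be written down directly as a tuple, and the ‘canon’ is then the language it generates:

```python
# Toy illustration: reason as a formal grammar R = (N, Sigma, P, S).
# The 'canon' of sensical statements is the language L(R) generated by R.
N = {"S", "Pred"}                                     # non-terminal symbols
Sigma = {"snow", "umami", "is", "white", "savoury"}   # terminal vocabulary
P = {                                                 # production rules
    "S": [["snow", "is", "Pred"], ["umami", "is", "Pred"]],
    "Pred": [["white"], ["savoury"]],
}
start = "S"                                           # start symbol

def generate(symbols):
    """Expand the leftmost non-terminal recursively, yielding every
    fully terminal string derivable from `symbols`."""
    for i, sym in enumerate(symbols):
        if sym in N:
            for rhs in P[sym]:
                yield from generate(symbols[:i] + rhs + symbols[i + 1:])
            return
    yield " ".join(symbols)  # no non-terminals left: a sentence of L(R)

canon = sorted(set(generate([start])))
print(canon)
# ['snow is savoury', 'snow is white', 'umami is savoury', 'umami is white']
```

Note that the toy canon contains ‘snow is savoury’: the grammar delimits the sensical (statements capable of being true or false), not the true.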