A symposium at ECAP12, 24–28 August 2026, Madrid
Recent work in the philosophy of mind and artificial intelligence (AI) asks whether the internal states of artificial neural networks (ANNs), especially large language models (LLMs), can have representational content and, if so, what facts make this possible. This symposium brings together four approaches to answering this question. The first three talks engage with teleosemantics, on which proper functions ground representational content. Hundertmark and Turner argue that training involves forms of differential retention that can impart proper functions to ANN components. Mallory uses circuit-level decomposition to address the level-of-ascription problem, asking where within complex networks proper functions should be located. Williams challenges influential etiological-function strategies in AI metasemantics, proposing instead function-like roles that are partly determined by the intentions and practices of designers, deployers, and users. Finally, Heine presents an additional constraint on understanding, arguing that LLM-style training alone cannot produce intentional, world-directed representation without perceptual grounding.
Talks
AI Frogs, Proper Functions, and Internal Representations
James Turner — Umeå University (Sweden)
Fabian Hundertmark — University of Valencia (Spain)
Doing without etiological functions in AI metasemantics
Iwan Williams — University of Copenhagen (Denmark)
Decomposing Artificial Neural Networks
Fintan D. Mallory — Durham University (UK)
Understanding is in the head (and grounded in perception)
Jessica Heine — Auburn University (USA)
Organizers
Fabian Hundertmark (University of Valencia, Spain)
James Turner (Umeå University, Sweden)
