On this page you can download the discussion paper that was submitted for publication in the Journal for Media Linguistics. The blogstract summarises the submission in a comprehensible manner. You can comment on the discussion paper and the blogstract below this post. Please use your real name for this purpose. For detailed comments on the discussion paper, please refer to the line numbering of the PDF.
Blogstract of
Rethinking Reference and Authorship: On the Philosophical Status of LLM-Generated Verbal Products
by Jan Georg Schneider
In this article, the status of LLM-generated verbal products is discussed on a fundamental level. While we have traditionally been socialized to assume automatically that there is an intelligent author 'behind' verbal products that can be read as intelligent, we can no longer simply presuppose this close connection in the age of LLMs. For this reason, I call LLM-generated products 'intelligible textures'. As products, these intelligible textures can hardly, if at all, be distinguished from authorized, human-created texts, but the learning and usage processes differ fundamentally, especially with respect to acts of reference. What consequences does this have for our understanding and general conception of the written word, as well as for our notion of authorship and the related questions of responsibility for verbal products?
This fundamental question is discussed in the present article using the example of LLM-generated essay evaluations. In March 2025, I ran a test with ChatGPT-4o to see whether it could be useful for grading an essay. My test strategy was to deliberately alter a standard essay for the worse and then have the system evaluate it. Although many other text types could be used here, essay evaluation is particularly suitable because it requires a high degree of judgment as well as reference to other texts, namely those under evaluation, and to their truthfulness.
Theoretically, the central problem of reference is discussed using Austin's speech act theory as well as the concepts of exemplification and denotation developed by Goodman and Elgin, which I consider highly applicable in this research context. Following Goodman and Elgin, exemplifying and denoting are the two basic modes of referential acts. When exemplifying, people use something as an example of a 'label' and emphasize certain of its properties as relevant: a fabric swatch, for instance, can serve as a sample of a type of fabric, with individual properties such as the colour and the softness foregrounded as relevant, but not the price or the date of manufacture. In this way, the sample itself becomes a symbol or sign. A sample is always a sample for someone in a concrete situation, and every individual process of symbol interpretation takes place within the framework of a customary practice. Denoting and exemplifying are mutually dependent, because only through exemplifying in concrete situations do communicative practices arise in which, in turn, denoting takes place, that is, referring with a symbol to something in the world. Since this cultural anchoring is completely absent in machine learning, computers, from a philosophical point of view, do not perform any acts of reference. What does this mean for the status of LLM-based verbal products?