[OPR] Hetjens: Appeals to Generative Authority and the AI Reference Paradox

On this page you can download the discussion paper that was submitted for publication in the Journal for Media Linguistics. The blogstract summarises the submission in a comprehensible manner. You can comment on the discussion paper and the blogstract below this post. Please use your real name for this purpose. For detailed comments on the discussion paper please refer to the line numbering of the PDF.

Discussion Paper (PDF)

Blogstract of

Appeals to Generative Authority and the AI Reference Paradox. A Pragmatic Analysis of Reference to LLMs in Online Debates

by Dominik Hetjens

How LLMs are Reshaping Online Debate

Digital discourse has entered a new era. On platforms like Reddit, the traditional exchange between users is being disrupted by a silent third party: the Large Language Model (LLM). In an explorative study of the German-language subreddit r/de, I investigated how references to ChatGPT in comments are altering the nature of online debates. The results suggest that users have developed nuanced ways to reference the LLM and frame their interaction with it, ascribing differing degrees of authority to the AI.

Different Types of AI References

By analysing their function, I identified four distinct types of AI reference:

  • The Scribe Reference: The AI acts as a creator of primary material, providing a text for use outside of the discussion.
  • The Informant Reference: The AI is framed as a neutral data source, often introduced with the casual phrase “Ich habe mal ChatGPT gefragt” (“I just asked ChatGPT”).
  • The Corroborator Reference: The user strategically employs the AI to validate a pre-existing position, using specific markers like the conditional “Wenn” (“if/when”) to imply the results are reproducible.
  • The Arbiter Reference: In its most extreme form, the AI is weaponised as a final authority to abruptly terminate debates, proclaiming a “higher truth” that leaves no room for further human deliberation.

The AI-Reference Paradox

One of the most striking findings is the emergence of what could be called the “AI-reference paradox.” Users frequently exhibit overt metadiscursive scepticism by using hedges like “aus Spaß” (“for fun”) to save face, while simultaneously placing deep-seated trust in the AI’s factual claims. This creates a “buffer of deniability”: by framing the interaction as a casual experiment, users can leverage the AI’s authority while insulating themselves from criticism should the output prove incorrect.

Interactional Profiles

The study also found that where a comment appears in Reddit’s “tree” structure correlates with its function. Informant references (Type 2) typically appear early in a thread (most often on Level 1), serving as a foundation for the discussion. In contrast, Arbiter references (Type 4) tend to appear deeper in the nested comment structure, surfacing only after a human-to-human debate has reached a stalemate. At this point, the AI is brought in not to inform, but to “win.”

AI–Human–Human–AI Interaction

We are witnessing a shift in digital debating culture. Traditional markers of expertise and epistemic authority are being supplemented, and in some cases supplanted, by the ability to strategically “interrogate” generative systems. This creates an AI–human–human–AI interaction where every interlocutor must account for the separate, private conversations their peers might be having with an algorithm, while at the same time framing their own interactions with it in the best possible way.

As LLMs become further embedded in our digital lives, the power to win an argument may no longer rest on what you know, but on how effectively you can recruit an AI Arbiter to your side.

Leave a Comment

Please use your real name for comments.