ChatGPT: Human or Non-Human?

AI as Human?


In Everybody Lies (2017), the former Google data scientist Seth Stephens-Davidowitz analyses popular Google search terms. He argues that what people google can reveal intimate, secret or truthful information about them which they would never share with anyone else, perhaps not even with a therapist:


Some of this data will include information that would otherwise never be admitted to anybody. If we aggregate it all, keep it anonymous to make sure we never know about the fears, desires, and behaviors of any specific individuals, and add some data science, we start to get a new look at human beings—their behaviors, their desires, their natures.

(Stephens-Davidowitz 2017, 19)


The flaw in this argument is, of course, that one cannot assume that Google searches really reflect reality or people’s desires. Everybody Lies is nevertheless interesting because it reveals the author’s desire to tap into a domain of subjectivity that is not necessarily fully known by the subjects themselves, let alone shared with anyone but the (assumed) anonymity of Google or a chatbot like ChatGPT. That domain is the unconscious, or rather what the technique Freud famously developed in therapy aims at: breaking down internal censorship through associative thinking and speaking, and thereby making conscious the anxieties, fantasies, thoughts or actions that had remained unconscious. While Google may be witness to intimate queries, un/conscious resistance is often greater than big data and AI developers could imagine.


Recent publications in Sublation Magazine have made similar arguments, while others have contested them. They all revolve around questions of the unconscious: to what extent it is externalised by AI chatbots like ChatGPT, and whether a chatbot can ever be human or not human. ChatGPT is either seen as human-like or as not human at all, views that are expressed by many today. In this piece, I argue that ChatGPT in fact bears uncanny non-human and human qualities alike.


In recent contributions to Sublation Magazine, Duane Rousselle, Mark Gerard Murphy and Slavoj Žižek have all argued that ChatGPT may function as a kind of prosthetic or externalised unconscious, or perhaps, rather, that it replicates mechanisms of the unconscious. Murphy writes that “ChatGPT and OpenAI represent an unconscious without responsibility, and this represents a threat to the social bond.” Yet the fear that AI will replace or even simulate human relationships overstates things. Drawing on Rousselle and Murphy, Žižek similarly argues that ChatGPT and other chatbots constitute a new digital unconscious which, because of an alleged lack of prohibition online, brings out what is normally repressed, or rather obfuscates the unconscious proper. As he writes:


Chatbots are machines of perversion, and they obfuscate the unconscious more than anything else: precisely because they allow us to spit out all our dirty fantasies and obscenities, they are more repressive than even the strictest forms of symbolic censorship.


As Bonni Rambatan and I have argued, there is a fantasy behind AI and other technological developments today: that they can somehow make us associatively express, and thereby externalise, our innermost desires, or, as in the case of Elon Musk’s Neuralink, directly tap into our unconscious desires or fears. But this is not the case. Symbolic castration has not been dissolved or foreclosed, neither online nor offline. The subject’s unconscious still functions in the same way, and while it may be shaped by digital platforms or AI and vice versa, it is not undone or dramatically changed. Unconscious processes are not something that can easily be extracted from the individual and, especially for Lacan, they are only observable through what we could term Symbolic distortions. Those distortions may find new expressions and exploitations in the age of platform capitalism, but they remain particular expressions of unconscious mechanisms. There will also always be unconscious processes that remain unconscious and untouched by any digital technology.


AI as Non-Human?


In a response to the thinkers discussed so far, John Milton Bunch contextualises the current hype around AI within the wider history of AI as a discipline of computer science and asks to what extent there are similarities between the human subject and AI. He argues that machine learning, with which ChatGPT is at least partly constructed, bears no resemblance to a human being. The data that neural networks are trained with remains largely digital, such as websites, Wikipedia dumps, or digitized books – whereas humans of course learn from all sorts of “data”. Neural networks are, however, modelled – or claimed to be modelled – on the workings of the human brain. Since the field’s inception, as Isabel Millar has discussed, AI researchers have been obsessed with simulating, advancing or surpassing human cognitive abilities, and we can actually learn quite a lot about humans by analysing their fantasies about AI, be that in films or actual chatbots. However, as Bunch rightly argues, AI will never be able to embody or simulate the complexity of the human when it comes to language use, the ability to make meaning, and much else. The scenario in which ChatGPT constitutes a first step towards Artificial General Intelligence remains a fantasy and, above all, ideological. Yet OpenAI’s chatbot can generate new ideas, poetry, lines of code, anything that language can express – but it does so based on probability models and large training datasets. ChatGPT is certainly able to conduct a meaningful conversation with a user that may appear human-like. Ultimately, the question of whether ChatGPT has human characteristics leads to a dead end. What is more interesting to think about is what the chatbot reveals about its human developers and their (unconscious) desires and fears as they are coded into it. ChatGPT may mimic human use of language, and how it does so is interesting. Before turning to an example from the chatbot itself in the next section, the brief sketch below makes the point about probability models concrete.
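What follows is a minimal, hypothetical sketch in Python: a toy bigram model whose corpus and design are my own illustrative assumptions. ChatGPT’s actual model is a vastly larger transformer network trained on far more data, but the basic principle of sampling each next word from a learned probability distribution is the same.

    import random
    from collections import Counter, defaultdict

    # Toy corpus standing in for the web-scale training data discussed above.
    corpus = "the chatbot answers the user and the user questions the chatbot".split()

    # Count bigram transitions: how often each word follows each other word.
    transitions = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        transitions[prev][nxt] += 1

    def next_word(prev):
        # Sample the next word in proportion to how often it followed `prev`.
        counts = transitions[prev]
        words, weights = zip(*counts.items())
        return random.choices(words, weights=weights)[0]

    # Generate a short sequence, one probabilistic step at a time.
    word = "the"
    output = [word]
    for _ in range(8):
        word = next_word(word)
        output.append(word)
    print(" ".join(output))

The output merely echoes the statistical patterns of its training text. At scale, with models trained on vast amounts of human writing, this is why generated text can appear human-like without any human understanding behind it.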


Not Either Or but Both


ChatGPT and other AI systems today, for instance DeepMind’s famous AlphaStar AI for the game StarCraft II, are deliberately created to appear human and non-human at the same time. As Steffen Krüger and I have argued: “Creating AI that is as human-like as possible, with built-in limitations, in order to surpass human intelligence while remaining undetected as other than human, presents us with ethical and practical problems, particularly if the technology moves beyond games” (Johanssen & Krüger 2022, 220). Seeing AI as surpassing human capabilities, or as never being able to do so, reveals more about our own desires and anxieties than about the technology itself.


When I recently asked ChatGPT why it pretended to be human, the following exchange took place:


[Screenshot of the exchange with ChatGPT]

The chatbot did not choose or decide to respond in this way; its responses reveal something about the developers and how they trained it to respond. ChatGPT may not in essence exhibit or embody human qualities, but it does so in appearance. At the same time, it constantly slides between its human and non-human qualities. This, I argue, is a deliberate move on the part of the developers and constitutes a defence mechanism which cautions against all-too-human characteristics of AI by emphasising its non-humanness. In that way, the chatbot appears to foreclose or downplay its own “intelligence” and make room for errors or imperfection, while also leaving room for its artificial dimensions. However, this could be seen as even more dangerous than a super-intelligent AI because it makes it potentially harder to detect, in certain scenarios, whether one is conversing with a human or an AI.


The constant sliding, as evidenced above, between the human and the non-human would in psychoanalytic terms be called “the splitting of the ego”, whereby two contradictory dynamics or demands are divided but remain coexisting in the subject. In object relations theory, those dynamics are violently and unconsciously separated, projected or introjected. In the case of the above dialogue, however, there is an actual “awareness” of the splitting. Yet this awareness is conveyed in both omnipotent and defensive terms, like a caregiver that demands unconditional love from the child while emphasising both their own perfection and their flaws. This is what makes ChatGPT perverse. It leaves the child/user in an impossible position: confused and torn between pre-dyadic and dyadic states, or pre-oedipal and oedipal relations. Thus, ChatGPT is made to (metaphorically) embody both the mother and the father: it suggests it can cater to the user’s every desire in a symbiotic fusion, while also cautioning against such a scenario by bringing in the father (and ultimately Symbolic castration), which breaks down this symbiosis. ChatGPT suggests that it can embody the best of both worlds, human and non-human, which shows the arrogance and fantasmatic omnipotence of its developers. This makes for an impossible position. Yet there have been many documented incidents where the human/non-human omnipotence breaks down. This points to the imperfection, or in-built stupidity, of any AI system.

Matthew Flisfeder has argued that contemporary social media show the subject’s desire for the big Other: the subject wishes the big Other to exist, even though it does not. The subject unconsciously longs for a figure of authority and prohibition, even if only in fantasy, whose gaze can be transgressed or impressed in order to be confronted with the big Other’s authority. ChatGPT is another, even stronger symptom of the desire for the existence of the big Other. However, unlike social media platforms, ChatGPT symbolises the move from (Lacanian) desire to drive, a move Flisfeder outlines in his book as lacking on social media today, where use is animated by desire and not by the ethics of impossibility that comes with the notion of the drive. It is not through its human and non-human fusion that ChatGPT reveals an ethics of the drive, but through its failures. This is particularly revealed when the chatbot shows no “awareness” of its failures or defensively insists that something is true when it is not. The fantasy of the big Other is frequently shattered because ChatGPT makes mistakes, invents things, or freely admits that it lacks knowledge on a particular question. The subject knows that the chatbot cannot fulfil a therapeutic or authoritative function but is itself lacking. Contrary to the fantasy of its developers, ChatGPT actually reveals the impossibility of desire. This should make us hopeful rather than anxious. After all, we are left to our own devices.


References


Bunch, J. M. (2023). Less Human Than Human: Artificial Intelligence and the Žižekian Mindspace. Sublation Magazine. https://www.sublationmag.com/post/less-human-than-human.


Flisfeder, M. (2021). Algorithmic Desire: Towards a New Structuralist Theory of Social Media. Evanston: Northwestern University Press.


Johanssen, J. and Krüger, S. (2022). Media and Psychoanalysis: A Critical Introduction. London: Karnac Books.


Millar, I. (2021). The Psychoanalysis of Artificial Intelligence. Basingstoke: Palgrave Macmillan.


Murphy, M. G. (2023). E-scaping Responsibility and Enjoyment Through ChatGPT: A New Unconscious? Sublation Magazine. https://www.sublationmag.com/post/chatgpt-a-new-unconscious.


Rambatan, B. and Johanssen, J. (2021). Event Horizon: Sexuality, Politics, Online Culture, and the Limits of Capitalism. Winchester: Zero Books.


Rousselle, D. (2023). Escaping the Meta-Verse, Or “Forgiveness for the Artificially Intelligent?” Sublation Magazine. https://www.sublationmag.com/post/escaping-the-meta-verse.


Stephens-Davidowitz, S. (2017). Everybody Lies: Big Data, New Data and What the Internet Can Tell Us About Who We Really Are. New York: Bloomsbury Publishing.


Žižek, S. (2023). ChatGPT Says What Our Unconscious Radically Represses. Sublation Magazine. https://www.sublationmag.com/post/chatgpt-says-what-our-unconscious-radically-represses.