Less Human Than Human


As public awareness of the latest iterations of ChatGPT has increased, we’ve seen a rise not only in popular press articles declaring the latest woo-woo threat from AI, but also in pieces from a certain segment of leftists, including Slavoj Žižek, in these very pages. In fact, I’m going to focus on Žižek’s recent article in Sublation Magazine, ‘ChatGPT Says What Our Unconscious Radically Represses’, and hope my criticisms extend to Rousselle, Murphy, and others who attribute far more influence and power to artificial intelligence applications than is warranted. AI, for example, has the potential to revolutionize the white-collar workplace in the same way machines have automated manual labor. This ought to generate some form of legitimate response from the left, if by “left” we mean people whose primary concern is democracy in the workplace and other efforts to improve the material conditions under which we all live. Instead, we’re treated by Žižek and friends to phantasmagorical handwringing over ChatGPT as a potential “unconscious.”

Consider this line from Žižek, when summarizing Rousselle’s and Murphy’s claims that ChatGPT is an unconscious:


New digital media externalizes our unconscious into AI machines, so that those who interact with AI are no longer split subjects, i.e. subjects subjected via symbolic castration that makes their unconscious inaccessible to them.


Žižek then puts this idea through his own Lacanian filter and arrives at:

Chatbots are machines of perversion, and they obfuscate the unconscious more than anything else: precisely because they allow us to spit out all our dirty fantasies and obscenities, they are more repressive than even the strictest forms of symbolic censorship.


I will neither begrudge Žižek his insistence on Lacan nor challenge his analysis, such as it is. I’m simply going to argue that such an analysis betrays a poor understanding of artificial intelligence and chatbots, and for that reason is misguided. Simply put, I don’t think chatbots are as smart, particularly in their ability to process language, as Žižek assumes. Given the role of language in the Lacanian enterprise, this seems critical to his analysis.

As an example, read the following modified statements and see if they ring as true as Žižek’s originals. They should, because they are no less technically accurate.


1. New digital media externalizes our unconscious into Google searches, so that those who interact with search results are no longer split subjects, i.e. subjects subjected via symbolic castration that makes their unconscious inaccessible to them.


2. Internet search engines are machines of perversion, and they obfuscate the unconscious more than anything else: precisely because they allow us to spit out all our dirty fantasies and obscenities, they are more repressive than even the strictest forms of symbolic censorship.


Was your first response to these statements that AI is not the same as a Google search? That AI is a human-like enigma with at least some human-like properties that make it fundamentally different from a Google search?

If so, keep reading.


A Brief Crash Course in the Technical History of AI


There are two distinctly different forms of software labeled artificial intelligence that currently seem to be intractably conflated in the popular press and on Twitter, and I think a lack of understanding of the history of AI research is to blame. First, we have the original vision of artificial intelligence, which emerged in the years immediately following WWII. I’m going to call this original AI. Unless someone has studied AI academically or professionally, this is probably the AI they know about. This is the AI in which researchers can create computer programs that grow ever more human-like, with the inevitable endpoint, even if in the distant future, of producing a man-in-the-machine.


Originally, this was very much what AI researchers tried to do. For your own edification, spend a little time reading about the work of Herbert Simon, Allen Newell, Marvin Minsky, and the many others who devoted high-profile academic careers to understanding what was possible in this area. The thinking was, simply, that we could discover the algorithms at work in the human mind and implement them in a computer program. In other words, once we understood how humans process language, how sensory perception works, and so on, we could program a computer to do the same thing.


This was, of course, a bonanza for science fiction writers. HAL from 2001: A Space Odyssey and Data from Star Trek: The Next Generation are what we would expect from this vision of AI, and it captured the public imagination. However, this vision of AI had exhausted itself in the laboratory by sometime in the 1990s and fell out of favor. The reason is simple: researchers could not discover the algorithms at work in the human mind. And if we don’t know how humans do it, we can’t write a program to make computers do it. It’s as simple as that. It doesn’t matter how fast or how powerful computer hardware becomes. That can’t make up for the lack of an algorithm.


Let’s consider the mind-as-machine metaphor itself, upon which original AI is based. There are two machines to consider here. The first is the human machine, in which the central nervous system plays the primary role with regard to language and information processing. Then there is the digital computer, which implements what’s generally referred to as a von Neumann architecture, meaning it has a central processing unit that follows instructions sent to it from a program stored in memory. The program is an algorithm, or a sequence of logical steps, leading from a beginning state to a goal state.
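To make that architecture concrete, here is a deliberately toy sketch in Python of a stored program being fetched and executed one instruction at a time; the three-instruction machine is invented purely for illustration:

```python
# A toy "stored program": a sequence of instructions held in memory.
# The instruction set (LOAD, ADD, PRINT) is invented for illustration.
program = [
    ("LOAD", 5),      # put 5 into the accumulator
    ("ADD", 3),       # add 3 to the accumulator
    ("PRINT", None),  # output the result
]

accumulator = 0
for opcode, operand in program:  # the fetch-execute cycle
    if opcode == "LOAD":
        accumulator = operand
    elif opcode == "ADD":
        accumulator += operand
    elif opcode == "PRINT":
        print(accumulator)       # prints 8
```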


The human machine doesn’t work that way. At all. It’s an information processing machine, in that it performs transformations on incoming sensory data and responds, but its mechanism of action does not involve a distinct CPU, a random access memory bank, separate long-term storage, and a set of coded instructions. It’s a computer, but a computer of an entirely different sort. The simplest example of this sort of computer would be an old-fashioned, mechanical speedometer. The mechanical speedometer uses an assembly of precisely sized gears to translate the rotations of a wheel into speed, defined by the driver as distance per unit time. Realize that within the speedometer there is no distinct measure or representation of either distance or time – measuring speed is just what the speedometer does.

However, we can pull apart the speedometer and see its gears – we can measure how they are connected, their sizes, their interactions – every physical feature of a mechanical speedometer can be discovered. We can create a computer algorithm to produce something essentially similar – measure distance between points A and B, measure the time it took to get from A to B, then divide distance by time and you have speed.
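Written as a digital algorithm, the same job might look like the following minimal Python sketch. The crucial difference from the gears is that distance and time are now explicitly represented and combined by an instruction:

```python
# Unlike the speedometer's gears, this version contains explicit,
# separate representations of distance and time.
def speed(distance_km: float, elapsed_hours: float) -> float:
    """Average speed from point A to point B, in km/h."""
    return distance_km / elapsed_hours

print(speed(120.0, 1.5))  # 80.0 km/h
```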


Now consider the human machine. It’s an analog computer like the mechanical speedometer, but one created by millions of years of biological evolution to include an enormous collection of features and functions. Language, sensory perception, and the rest happen because of its physical design, not because of software. There are many billions of individual neurons, for example, that interact with each other and with other systems in the body.


Trying to tease that apart as one would a mechanical speedometer has proven to be an extremely difficult undertaking, and one that, at best, we are only in the beginning phases of. So while we can speak of the nervous system and brain as a computer, it is a computer of a radically different sort than the one on which you are likely reading this essay. It follows an algorithm, but the algorithm is embedded within a physical design we don’t understand. As a substrate for “mind,” then, the human computer is a wholly different enterprise than the digital computer upon which original AI programs must run, and its information processing secrets have yet to be revealed. The upshot is that original AI all but died some 25 years ago. Or it’s at least in a state of suspended animation at present, waiting for more progress to be made by scientific psychology and neuroscience.


Let me repeat that – the type of AI that leads to HAL from 2001 is dead. ChatGPT and the latest generation of “AI” applications currently in the news are something altogether different. The label AI is unfortunate here and is in no small part the result of purposeful misrepresentation by corporate marketing departments successfully leveraging public misunderstanding.


I’m going to use the term machine learning rather than AI to refer to the class of applications to which ChatGPT and similar tools belong. Machine learning is a better term as it refers specifically to the collection of features that make these applications unique.

Machine learning applications make no attempt to “replicate” human cognition, and their ultimate theoretical outcome is NOT the creation of a man-in-the-machine. Machine learning makes no attempt to apply the algorithms of the human nervous system in a computer, and its association with original AI reflects a historical artifact more than a conceptual similarity. Rather than implement an algorithm to process data as humans do, machine learning looks for statistical relationships between inputs and outputs.
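At its simplest, “looking for statistical relationships” is no more exotic than the following Python sketch, which fits a line to a handful of invented input-output pairs and then “predicts” an unseen case:

```python
# Toy illustration: no theory of the process generating the data,
# just a statistical fit relating observed inputs to observed outputs.
xs = [1.0, 2.0, 3.0, 4.0]  # inputs
ys = [2.1, 3.9, 6.2, 7.8]  # observed outputs

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Ordinary least-squares slope and intercept.
num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
den = sum((x - mean_x) ** 2 for x in xs)
slope = num / den
intercept = mean_y - slope * mean_x

print(slope * 5.0 + intercept)  # a "prediction" for an unseen input
```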


There are some important implications of this. First, and perhaps most importantly, we know from the last 75 years of empirical research that this approach cannot produce, acquire, or comprehend language. And if it has no language capacity, how human can it be? How relevant can it possibly be to the Lacanian model in which critical elements of subjectivity are to be found in language itself?


This is not to say that machine learning isn’t powerful or revolutionary. It’s just that it’s powerful and revolutionary like a spreadsheet – a great tool for a lot of stuff, but it doesn’t challenge or rival any feature or element we would consider uniquely human. Remember, machine learning tools don’t have an algorithm for language, because we don’t know how to create one. What ChatGPT is doing is pure probabilistic association. It has no cognitive structure in which concepts can be organized and any form of semantics implemented.
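To give a sense of what “pure probabilistic association” means in practice, here is a drastically simplified sketch of next-word prediction by frequency alone, using an invented scrap of training text. Systems like ChatGPT operate at vastly larger scale and with far more sophisticated statistics, but the basic principle is the same, and nowhere in the model is there grammar, a concept, or a meaning:

```python
# Toy bigram model: predict the next word purely from how often
# word pairs co-occurred in the training text.
from collections import Counter, defaultdict
import random

training_text = "the cat sat on the mat the cat ate the fish".split()

bigrams = defaultdict(Counter)
for w1, w2 in zip(training_text, training_text[1:]):
    bigrams[w1][w2] += 1  # count which word follows which

def next_word(word: str) -> str:
    counts = bigrams[word]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

print(next_word("the"))  # "cat", "mat", or "fish", by frequency alone
```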

Here’s a simple example illustrating how these sorts of applications process text. Let’s say you wanted to write a computer program to organize a large, unorganized collection of text documents by topic and subtopic. Assume it’s a tremendous amount of text, far too much for a team of humans to work through in a month.


As we’ve discussed, we don’t have an algorithm for natural language processing, so that’s not an option. What you might do instead is first write a program that processed all of the text in, say, the Library of Congress, and simply counted the frequency with which each word occurred and the adjacency of one word to another (i.e., the number of words in between), storing the results in a database. As additional books were published and added to the Library of Congress, the application would read them and add their word frequencies and adjacencies to the database.
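A minimal Python sketch of that first program might look like this, with a few words standing in for the Library of Congress and immediate neighbors standing in for the fuller adjacency measure:

```python
# Toy corpus scanner: record word frequencies and adjacent-pair counts.
# In a real system these tables would be persisted to a database and
# updated as new documents arrived.
from collections import Counter

corpus = "congress shall make no law respecting an establishment".split()

word_freq = Counter(corpus)                   # how often each word occurs
adjacency = Counter(zip(corpus, corpus[1:]))  # how often each pair co-occurs

print(word_freq["law"], adjacency[("no", "law")])  # 1 1
```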


Then you’d write another application with access to the first one’s database. You could program it to read in new text, separate the individual words into nouns, verbs, adjectives, etc., count the frequency of each and the adjacency of one word to another, and compare these values to the values for the same words in the Library of Congress database. Words that appear in your new document more frequently than they do in the Library of Congress database, and with different adjacencies, are taken to be what the document is about.
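A correspondingly minimal sketch of that second program, with invented baseline rates standing in for the corpus database, might treat the words whose in-document rate most exceeds their corpus rate as the topic:

```python
# Toy topic detector: score each word by how much its rate in the new
# document exceeds its rate in the corpus. Baseline rates are invented.
baseline_rate = {"the": 0.05, "law": 0.0004, "tax": 0.0003}

new_doc = "the tax law changed and the tax rate rose".split()
doc_len = len(new_doc)

scores = {}
for word in set(new_doc) & set(baseline_rate):
    rate_in_doc = new_doc.count(word) / doc_len
    scores[word] = rate_in_doc / baseline_rate[word]  # lift over baseline

# Highest-scoring words are treated as the topic.
print(sorted(scores, key=scores.get, reverse=True)[:2])  # ['tax', 'law']
```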


This is as close to a semantic model as it gets. The only meaning comes from the human user’s interpretation of the results. You can add thousands of additional quantified variables, hundreds of millions of individual cases, and apply all manner of complex statistical analysis to improve the outcome for human users, but it doesn’t change the fundamental fact that this is not “meaning” as we understand it in humans, and it certainly isn’t language. In addition, as the above example implies, the functionality and “smartness” of ChatGPT depend upon the data it reads. These commercial-scale applications digest terabytes of data on massive collections of computer hardware in order to become ever more functional. Current estimates, such as that of Villalobos et al. (2022), are that high-quality language data will be exhausted within the next couple of years. Once that happens, the perceived “intelligence” of these applications will plateau, and further progress will slow significantly.


In summary, machine learning systems such as ChatGPT at best mimic human language like a RealDoll mimics a living human partner. As I’ve discussed in a previous essay, machine learning systems have no capacity to synthesize existing ideas and create new ones, or to create new, meaningful sentences not based on existing data. Again, this is by design, and it becomes more and more apparent as users gain experience with these tools. And the tools will soon stop getting smarter.


What Qualifies as a Sentence Dispenser?


To bring significant change to the personality of the individual, an AI application must surely meet some minimal criteria for acting as a “sentence dispenser,” capable of natural language processing at a proficiency level at which meaningful interaction with the individual can occur. In this case, Žižek assumes that chatbots are powerful enough sentence dispensers to become “machines of perversion.” As I’ve discussed above, that seems highly questionable.


However, perhaps the sentence dispenser need not be particularly smart. Maybe the probabilistic non-semantics of machine learning are enough. If Žižek is willing to make the same claim about a wide range of technology, say all interactive technology including social media, perhaps the “sentence dispenser” function of AI isn’t the critical feature. Perhaps it’s simply interactivity. In this case, there’s nothing unique about AI that brings it to bear on the subject any differently than Google’s search engine. And perhaps this is his intent – in the end, I won’t challenge the idea that something could function as a “machine of perversion” without requiring meaningful language capacity.


However, claims of AI as an unconscious seem dubious, as this would require a language capacity beyond that of machine learning. Either way, machine learning applications have real implications for the work lives of real people, and we need to educate ourselves about that. Understanding real AI beyond the realm of science fiction would be the best place to start.

The author wishes to thank Isik Baris Fidaner and Phil E. Cheslett for comments on an earlier draft, and Isik Baris Fidaner in particular for the “sentence dispenser” metaphor.