At the outset, I’m going to qualify the title of this essay and say that ChatGPT in fact might be your enemy. You can decide this for yourself by asking a few simple questions – do you create new things for a living? Does your vocation require you to synthesize existing ideas in order to create new ones? Are you an artist? A writer? A scientist? A Sublation Magazine editor? If you answered in the affirmative to any of these, ChatGPT is NOT your enemy. It is simply a tool you may find useful.
But what if you don’t create new things? What if your art isn’t art so much as kitsch – a creation that masquerades as art but is in fact simply a tired reworking of ideas created by others? Or what if your job is to pull together textual information purely for summary within a constrained knowledge domain, as is the job of many law clerks? If this describes what you do, ChatGPT may very well be your enemy. ChatGPT is a lot cheaper than you are, and your capitalist masters will replace you with it or its descendants. That seems inevitable, and it is nothing more than the ongoing process of industrialization we’ve been monitoring for… how many centuries now?
In other words, ChatGPT is like the robot in the automobile factory, capable of replacing many workers, improving efficiency, and saving the company money. But this time it’s white-collar workers in jeopardy, and that has a lot of white-collar people a little scared, as many will see career disruption and possible unemployment as a result.
My message is simply this – ChatGPT is not revolutionary AI. It’s not even AI in the science fiction sense that so many people seem to perceive. It is NOT an “unconscious” and is NOT capable of anything like “consciousness.”
Let’s take a quick look at the issue of “artificial intelligence” in order to put ChatGPT into the proper historical perspective. In the decade or so following WWII, there was a great deal of speculation and hope that the algorithms of the human mind could be discovered and implemented within a computer program. If such a thing were done, a true Artificial Intelligence might indeed be achieved.
But we’ve learned a lot since then. To this very day we have been unable to discover the algorithms at work in the mind. We also know that the human nervous system is a radically different sort of processing system than any permutation of a von Neumann architecture could be, so even if we could derive the algorithms of the human mind, implementing them on a contemporary computer system would be another challenge altogether.
A good example here is Chomsky’s universal grammar. Chomsky’s early claim to fame in psycholinguistics was his criticism of B.F. Skinner’s book Verbal Behavior, in which Skinner argued for a form of behaviorism holding that internal information-processing algorithms were irrelevant to understanding behavior, language included. Understanding the rules of language acquisition and use could follow from simply observing the acquisition and use of language in people. In other words, Skinner thought it would be possible to understand the algorithms of human language acquisition and production by observing and recording system inputs and outputs only.
Chomsky argued convincingly that language results from an algorithm within the nervous system, and that simply observing stimulus and response could not capture it. Among his foremost observations was that young children can easily use language to produce new sentences with new meanings, even in impoverished language environments. This can’t happen without an internal mechanism for language. While universal grammar itself has its detractors, and my Lacanian brethren will hear the name Chomsky and experience a little abdominal discomfort, no one seriously argues anymore against an internal algorithm for language.
The upshot is that we know now that we can’t derive the algorithm for human language simply by observing inputs and outputs. The same is true for sensory perception. Yet observing inputs and outputs is the sole “learning” mechanism of artificial intelligence. But wait a minute, I can hear the ChatGPT-is-magic crowd chanting, IT WRITES ESSAYS LIKE COLLEGE UNDERGRADUATES!!!! AND IT DRAWS PICTURES!!!!
Yep, both claims are true. Let’s first consider your typical undergraduate essay. What’s its purpose? Its purpose is to ensure that undergraduates, with little genuine interest in the subject matter of a course they’re taking, have done enough cognitive work to justify a passing grade. As such, it’s typically little more than a summary of presented course material. This is best considered an artifact of an inefficient educational process rather than a special human capacity, and ChatGPT is quite good at it. But read some of those ChatGPT essays carefully. Try to find any real synthesis of ideas. See if ChatGPT was able to propose something new based on what it’s summarized. You won’t find it, because it isn’t there.
But what about those pictures it draws? Pure kitsch. It isn’t art. It’s possibly a particularly industrial form of graphic design, but it’s not art, for the same reason the undergraduate essays aren’t real scholarship. It’s just a summary of text content rendered with visual design tropes. In the thousands of ChatGPT drawings on the Internet, have you ever seen one that creates a new visual idea? An actual artist might very well be able to use ChatGPT as a tool for the creation of new art, and I would bet we’ll see that happen. But again, this won’t be ChatGPT creating the art any more than a camera creates the art of the photographer.
So how does ChatGPT work, and why does it seem so… human-like? Let’s go back to the Skinner example. Skinner thought we could come to fully understand human psychology by looking only at inputs to and outputs from the system – this was how we would learn the rules of language, sensory processing, and everything else. The idea was that by recording all stimuli and all responses we could derive the rules or algorithms and predict future behavior. But the last several decades of research have demonstrated that we can’t.
Yet this is exactly how ChatGPT works. ChatGPT takes existing information and applies various statistical algorithms to it to make predictions. This is the way all machine learning works: it analyzes existing data and uses one of a collection of statistical processes to make predictions from it. The statistical processes are selected for the purpose by a human user, and no AI routine has any capacity for creating a new statistical process, nor any capacity to apply rules across knowledge or problem domains.
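To see what that means in practice, here is a minimal sketch of next-word prediction from raw frequency counts. The toy corpus and the predict_next function are my own inventions for illustration, and a real large language model uses a neural network trained on billions of tokens rather than bigram tallies, but the underlying logic is the same: predict the next token from the statistics of existing data.

```python
from collections import Counter, defaultdict

# A toy illustration of statistical next-word prediction.
# Real language models are vastly larger and use neural networks,
# but the principle is the same: estimate the probability of the
# next token from the frequencies observed in past data.

corpus = "the cat sat on the mat the cat ate the rat".split()

# Count how often each word follows each other word (bigrams).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most probable next word and its probability.

    There is no understanding here, only frequency: the model
    can never propose a continuation it has not already observed.
    """
    counts = following[word]
    if not counts:
        return None
    best, n = counts.most_common(1)[0]
    return best, n / sum(counts.values())

print(predict_next("the"))  # ('cat', 0.5) -- "cat" follows "the" half the time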
While we use the term “machine learning” to refer to the class of applications that includes ChatGPT, there’s no real learning here in the sense we would apply it to a human. It’s just a term that means predictions should theoretically become more accurate as we add more data. But they are always a probabilistic approximation to what a person would do.
So ChatGPT isn’t Skynet, it can’t become self-aware, and it doesn’t offer an alternative to human decision-making. It does offer industrial and corporate efficiencies that will likely begin to displace certain classes of workers, and that’s not trivial. However, it’s anything but new. If we’re committed to approaching ChatGPT as leftists, let’s abandon the sci-fi drama around it and deal with the fact that it will likely displace workers. Not new or sexy, but something about which we really should have some good ideas.