What’s in a Name? Cybernetics vs AI


The ability to bestow something with a name is a decisive act and an exercise of power. In the Judeo-Christian scriptures, for example, the main task given to Adam by his creator is that of naming the animals. And it is by the seemingly simple act of designation that the first man comes to establish humanity’s power and privilege over all of creation. Names, therefore, are anything but nominal. Nowhere is this more evident than in the field of “artificial intelligence” (AI). Right now, in fact, AI is embroiled in a kind of identity crisis, as leading voices in the field are beginning to ask whether the name is a misnomer and in need of replacement.


Artificial Intelligence


The term “artificial intelligence” was originally proposed and put into circulation by John McCarthy in the process of organizing a scientific meeting at Dartmouth College in the summer of 1956. The term had immediate traction. Despite its success, however, what “AI” designates has remained a bit murky and contentious. “AI people,” as Roger Schank famously wrote in 1990, “are fond of talking about intelligent machines, but when it comes down to it, there is little agreement on exactly what constitutes intelligence. And, it thus follows, there is very little agreement in AI about exactly what AI is and what it should be.”


Because there has been little agreement, even among experts in the field, about what AI is (or is not, for that matter), expectations for the technology are virtually unrestrained and prone to overinflated hyperbole. As a result, we now find ourselves discussing and debating all kinds of speculative questions: Is the Google LaMDA algorithm sentient? Do large language models, like OpenAI’s GPT-4, contain sparks of artificial intelligence? Can generative AI systems be said to hallucinate? For many researchers, scholars, and developers, these are not just the wrong questions; they are potentially dangerous lines of inquiry to the extent that they distract us from more urgent and important matters.


Since the source of the problem lies with the term “artificial intelligence,” one solution has been to find or fabricate better or more accurate signifiers for these particular innovations. And there has been a proliferation of new acronyms circulating and competing for attention: ML (machine learning), DL (deep learning), SALAMI (Systematic Approaches to Learning Algorithms and Machine Inferences), GEI (Generation from Extracted Information), GMD (Generation from Mined Data), and DGM (Data-Mined Generative Model). But formulating new names is not the only way to proceed. As the French philosopher Jacques Derrida pointed out, there are at least two different ways to designate a new concept: neologism (the fabrication of new names) and paleonymy (the reuse of old names).


Cybernetics


Fortunately, there was already another, older name readily available at the time of the Dartmouth meeting: cybernetics. This name, derived from the ancient Greek word kybernetes, which originally designated the helmsman of a boat, had been introduced and put into circulation by Norbert Wiener in 1948 to name what he called “the science of control and communication in the animal and the machine.”


Cybernetics has a number of advantages when it comes to rebranding what has been called AI. First, it focuses attention on decision-making processes and control mechanisms. It does not get diverted by and lost in speculation about “machine intelligence,” which unfortunately directs attention to all kinds of cognitive capabilities, like consciousness, sentience, reason, and understanding. Cybernetics is, by comparison, more modest. It is only concerned with communication signals and the effect they have on controlling decision-making outcomes. The principal example used throughout the literature is the seemingly mundane thermostat, which can regulate temperature without knowing anything about temperature or needing to be thought of as thinking.
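
To make the modesty of this picture concrete, here is a minimal sketch of such a negative-feedback controller in Python; the setpoint, the sensor readings, and the switching band are all invented for illustration, and no particular device is being modeled. The controller simply compares a signal against a reference and feeds the result back into its next decision.

```python
# A minimal sketch of a negative-feedback controller in the spirit of the
# cybernetic thermostat. The setpoint, sensor readings, and switching band
# are invented for illustration; no particular device is being modeled.

def thermostat_step(reading_c: float, setpoint_c: float, heater_on: bool) -> bool:
    """Decide whether the heater should be on, using only the error signal."""
    error = setpoint_c - reading_c
    if error > 0.5:        # too cold: switch the heater on
        return True
    if error < -0.5:       # too warm: switch the heater off
        return False
    return heater_on       # within the band: keep the current state

# The controller "knows" nothing about temperature as such; it compares a
# signal against a reference and feeds the result back into its next decision.
heater = False
for reading in [18.0, 19.2, 20.4, 21.3, 20.8]:   # hypothetical readings in degrees C
    heater = thermostat_step(reading, setpoint_c=20.5, heater_on=heater)
    print(f"reading={reading:.1f} heater={'on' if heater else 'off'}")
```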


Second, in focusing on communication and control, cybernetics avoids one of the main epistemological barriers that continually frustrates AI: the problem of other minds. For McCarthy and colleagues, one of the objectives of the Dartmouth meeting was to figure out “how to make machines use language.” This is because language use has been taken to be a sign of intelligence. But as John Searle argued by way of the Chinese Room thought experiment, the manipulation of linguistic tokens can transpire without knowing anything about the world outside (what linguists call “the referent”) or even the language that is being manipulated. Unlike AI, cybernetics neither looks for nor needs to posit intelligence. It attends to the phenomenon of communication without needing to resolve or even address the problem of other minds.


Finally, cybernetics already provides a better and more accurate description of what happens inside the black box of “machine learning” algorithms. The choice of the word “learning” has always been contentious and a significant obstacle to understanding, as machines do not really “learn” in the way that we typically understand and use this word. In the context of computer science, “learning” designates the process of adjusting the weighted connections between the artificial neurons that comprise a neural network through the process of backpropagation. Cybernetics already provides a more accurate and less confusing name for this process: feedback. “Feedback” not only circumvents the misunderstandings that accumulate around the word “learning,” but also situates these algorithms as little more than sophisticated homeostatic systems, a kind of thermostat on steroids.
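
The point can be made concrete with a deliberately toy sketch in Python; the data, the single weight, and the learning rate are all invented for illustration, and a real network would have many layers of such weights adjusted by backpropagation. Stripped down to one artificial neuron, however, the celebrated “learning” is just an error signal fed back to correct a weighted connection until the output settles on its target.

```python
# A deliberately toy sketch of "learning" as feedback: one artificial neuron
# whose single weighted connection is repeatedly corrected by the error signal
# fed back from its own output. Data, weight, and learning rate are invented.

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100)      # hypothetical inputs
y = 3.0 * x                   # the target relationship the neuron should track

w = 0.0                       # the single weighted connection
lr = 0.1                      # learning rate: how large each correction is

for step in range(50):
    y_hat = w * x             # forward pass: the neuron's current output
    error = y_hat - y         # feedback signal: deviation from the target
    grad = np.mean(error * x) # gradient of the mean squared error w.r.t. w
    w -= lr * grad            # the correction: error fed back into the weight

print(f"weight after feedback: {w:.3f} (target: 3.000)")
```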


Duty Now and for the Future


If the term “cybernetics” already provided a viable alternative, one has to ask how and why “artificial intelligence” became the privileged moniker in the first place. This inquiry returns us to where we began: with names and the politics of naming. As McCarthy admitted many years later: “One of the reasons for inventing the term ‘artificial intelligence’ was to escape association with cybernetics…I wished to avoid having either to accept Norbert Wiener as a guru or having to argue with him.” Thus, the term “artificial intelligence” was as much a political decision as it was a matter of scientific taxonomy. We got, and are now stuck with, the term AI because McCarthy actively sought to avoid both cybernetics and its progenitor.


I therefore have an idea for a story, an alternative history that could have saved us a lot of wasted time and effort. It is the summer of 1956. Norbert Wiener gets behind the wheel of his 1952 Chevrolet and drives from Cambridge, MA to Dartmouth College in New Hampshire. Since he was not invited to attend, Wiener dramatically bursts into the workshop, crashes the party, and directly confronts McCarthy. Convinced by Wiener’s passionate speech, Claude Shannon, the originator of the mathematical theory of communication and one of the invited participants, begins to see the light and makes a motion to table the name “artificial intelligence” and replace it with cybernetics. Though this was a missed opportunity in the past, it is not too late for us to make things right and change course for the future.