
The Rise of the Thinking Machines?


As I type these words, trying, with the help of a glass of wine, to maintain my composure, a debate of sorts is underway on Twitter. The ‘debate’ is between people who believe, or at least assert, that a large language model created by Google, called LaMDA (Language Model for Dialogue Applications), has reached ‘sentience’ – a word typically used by people besotted with science fiction tropes to mean thinking – and people who object to this idea. The opposition has many things on its side, chief among them the simple fact that large language models (or LLMs, as they’re known in the trade) are elaborate pattern-matching systems: computational parlor tricks, or, as researchers Emily M. Bender and Timnit Gebru call them, stochastic parrots. These systems consume huge amounts of computing power and apply statistical pattern matching to our own words – vacuumed from the Internet and fed back to us – to create the illusion of conversation.
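The “stochastic parrot” point can be made concrete with a toy sketch. The following bigram model is a deliberately crude illustration of my own devising – nothing like the scale or architecture of an actual LLM – but it shows the principle: the program “speaks” only by replaying, with weighted dice, the word-to-word statistics of the text it was fed.

```python
import random
from collections import defaultdict, Counter

def train_bigrams(text):
    """Count, for each word, how often each other word follows it."""
    words = text.split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def generate(follows, start, length, rng=random):
    """Emit words by sampling each successor in proportion to its observed frequency."""
    out = [start]
    for _ in range(length - 1):
        counts = follows.get(out[-1])
        if not counts:  # dead end: the last word never had a successor
            break
        words, weights = zip(*counts.items())
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)

corpus = "the machine speaks and the machine repeats what the people wrote"
model = train_bigrams(corpus)
print(generate(model, "the", 6))  # recombined fragments of the corpus; exact output varies
```

Every “utterance” the sketch produces is a recombination of sequences already present in its training text – probability and our own words fed back to us, with no understanding anywhere in the loop.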

In a saner world, the facts would win this debate. But we don’t live in such a world. In our world, a combination of techno-naivete, science fiction ideation, and, most importantly, deliberate misinformation is pushing a narrative: the age of thinking machines is here (or just a silicon kiss away), and everything we know, particularly about labor, is about to change.

The AI Myth and the Ownership Class

Artificial Intelligence (AI) has long been the subject of science fiction, and we are told we are closer than ever to a world where this fiction will become a reality. In much of the advanced capitalist world, it seems that not a day goes by without a journalist, business leader, or politician making some bold prediction about the way that AI is about to revolutionize how we live. But this is a lie. Indeed, it is more than that: it is a deliberate deception perpetrated by what we might term the AI Industrial Complex (AIIC).

It is important to note here that the AIIC should, of course, be seen as distinct from the work of researchers and practical technologists. Rather, it should be understood as a sophisticated propaganda campaign – realized through marketing promotion, media hype, and capitalist activity – designed to both obfuscate a lack of technical capacity and diminish the value of human labor and talent. More succinctly, it is an effort on the part of the ownership class to promote the idea that machine cognition is now, or soon will be, superior to human capabilities.

Stop Training Radiologists

There is nothing today that can be meaningfully called “artificial intelligence” – after all, how can we engineer a thing that we haven’t yet decisively defined? Moreover, at the most sophisticated levels of government and industry, the actually existing limitations of what is essentially pattern matching, empowered by (for now) abundant storage and computational power, are very well understood. The existence of university departments and corporate divisions dedicated to ‘AI’ does not mean AI exists. Rather, it’s evidence that there is a powerful memetic value attached to using the term, which has been aspirational since it was coined by computer scientist John McCarthy in 1956. Thus, once we filter out the hype generated by Silicon Valley hustlers in their endless quest to attract investment capital and gullible customers, we are left with propaganda intended to shape common perceptions about what’s possible with computing power.

As an example, consider the case of computer scientist Geoffrey Hinton’s 2016 declaration that “we should stop training radiologists now”. Since then, extensive research has shown this to have been premature, to say the least. It’s tempting to see this as a temporarily embarrassing bit of overreach by an enthusiastic field luminary. But let’s go deeper and ask questions about the political economy underpinning this messaging excess.

Radiologists are expensive and, in the US, very much in demand. Labor shortages typically lead to higher wages and better working conditions and form the material conditions that create what some call labor aristocracies. In the past, such shortages were addressed via pushes for training and incentives to workers such as the lavish perks that were common in the earlier decades of the tech era. If this situation could be bypassed via the use of automation, that would devalue the skilled labor performed by radiologists, solving the shortage problem while increasing the power of owners over the remaining staff.

The promotion of the idea of automated radiology – regardless of actually existing capabilities – is attractive to the ownership class because it holds the promise of weakening labor’s power and increasing profitability via workforce cost reduction and greater scalability. I say promotion because there is a large gap between what algorithmic systems are marketed as capable of and what they can actually do. This gap is unimportant to the larger goal of convincing the general population that their work can be replaced by machines. The most important outcome isn’t thinking machines – which seem to be a remote goal, if achievable at all – but a demoralized population, subjected to a maze of crude automated systems that are described as being better than the people forced to navigate life through them.

Where is the True Danger?

A collection of thinkers, tech media pundits, and industry figures ranging in quality from the late Stephen Hawking to the regrettably prominent Musk are sounding an alarm: humanity is in danger from the imminent arrival of superior machine intelligence. This intelligence, the faithful insist, will replace us, sweeping across the globe, like Skynet in the Terminator series, but in the service of capitalism: driving our cars and trucks, replacing doctors and short-order cooks, and even offering a synthetic form of intimate companionship for the dispossessed.

But this is a false narrative. The real threat can be discerned by borrowing from an insight Žižek offered in his 2003 essay, “The Iraq War: Where is the True Danger?” Writing about the relationship between European right-wing discourse and the left’s response, he stated:

The true danger can be best exemplified by the actual role of the populist Right in Europe: to introduce certain topics (the foreign threat, the necessity to limit immigration, etc.) which were then silently taken over not only by the conservative parties, but even by the de facto politics of the “Socialist” governments.

The AI propaganda narrative presents us with a similar situation: the tech industry, via its relentless promotion of the story that it is creating intelligence, seeks to shift the entire field of debate – from a discussion of the political economy of computation to a science fiction saga of burgeoning machine autonomy. It’s vital that we resist this fiction, keeping our eyes sharply focused on the industry’s real goals.

The stakes are too high for illusion.