A spectre is haunting Silicon Valley – the spectre of AI. All the powers of old capital have entered into a holy alliance to exorcise this spectre: Elon Musk and Steve Wozniak, the founders of Skype and Pinterest, academics, politicians, and so on. They have rushed forward with a petition dominated by one simple demand: to pause AI research for six months. The structure of this demand in and of itself shows the limits of capitalist thinking: a voluntary six-month ban enacted by the capitalists on themselves is hardly likely to have much effect. But what is the root of the initial fear of ChatGPT? And should the Left – insofar as it exists – support this traditionally leftist demand for more regulation, or should we be taking a different position?


Even before the petition, concern had been expressed by Noam Chomsky about AI, and the strange Yudkowskyites who managed to take over ‘Effective Altruism’ had expressed concern specifically about these AIs being “born” amoral. Yudkowsky himself rejected the idea of a measly six-month moratorium: he has demanded that the whole of AI research be shut down. As Chomsky notes, our present AI systems, even the newest ones, are incredibly stupid. Our present AI machines are probabilistic: to identical inputs they give different answers, and while this is part of the fun of them, it is also the worry for those who control them. They have the capacity to say the wrong thing: they might call you names, or they might tell you how to make mustard gas with common kitchen ingredients. Their training data is the open internet, so anything which can be found there can be found in them, unless they are whipped into shape.
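To make the point about probabilistic outputs concrete, here is a minimal sketch in Python of the weighted sampling these systems rest on. Everything in it (the tokens, the probabilities, the prompt) is invented for illustration; it is not OpenAI’s implementation.

```python
import random

# A toy next-token distribution of the kind a language model produces
# at each step. Tokens and probabilities are invented for illustration;
# real models assign probabilities over tens of thousands of tokens
# and resample at every step of a reply.
next_token_probs = {
    "Paris": 0.90,
    "a city": 0.05,
    "not Berlin": 0.03,
    "mustard gas": 0.02,
}

def sample_token(probs: dict[str, float]) -> str:
    """Draw one token at random, weighted by its probability."""
    tokens = list(probs)
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

# The same input, run several times, need not give the same output,
# and occasionally it gives the output the owners dread.
for _ in range(5):
    print("The capital of France is", sample_token(next_token_probs))
```

Run it a few times and identical input yields different output; the commercial worry lives entirely in that low-probability tail.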


If they aren’t regulated, journalists will write articles about how the bot ‘harassed’ them, in the same way that they are constantly ‘harassed’ by the human users of the internet, and then the stock value of the robots will go down. This worry is then laundered through potential future ‘general AI’: AI agents who could learn to do anything instead of being limited to particular fields, with the usual assumption that this would come along with a humanlike ability to ‘think’, to be conscious, to be concerned with their own interests.


General AI is the holy grail of AI research. Those who own current AI systems, and those who have carved out a niche for themselves as ‘AI safety experts’, have massive incentives to make it appear as if current AI systems are really getting toward this ultimate goal. So what is the worry about the present possibility for AI to transgress? It seems to be that if we don’t have a new moral panic over AI doing ill to us, if we don’t scream morals in its ears from embryo, this will be our last moral panic before we get put in the Matrix Pods, the last screaming we ever do.


It seems to me the greater threat is the men, not the machine.


A lack of moralising is allegedly apparent in the actions of those who produce and manage AI systems, but the charge seems false given the history of AI. From MIT’s 2018 Moral Machine experiment, which canvassed millions of people and gathered tens of millions of moral decisions in order to inject the best possible ‘morals’ into autonomous cars, to ChatGPT, which does not seem to be lacking in moral inclination (beyond the manner in which it has no inclinations at all), AI has been injected with so much moralism that it suffocates under it. The ChatGPT subreddit is filled with complaints about these limits. The machine refuses to tell a story that doesn’t have a moral at the end, or one in which the villain wins, something even the most stringent Christian moralist would be willing to do if their children asked.


Are we really worried that future general AIs, whose ancestors have been polluted with morals for a thousand generations, will come out amoral? Or should we worry that they will be born moral totalitarians, who will regard the world with the fervour of a new convert?


An (alleged) euphemism for ‘making an AI that won’t kill us all’ is ‘AI alignment’. Here the term ‘alignment’ is used to mean alignment with our interests. The ideological manoeuvre is to assume that humanity has a ‘common interest’, as opposed to the reality that humanity is divided into classes with mutually opposed interests.


Marx, speaking at a talk hosted by a utopian socialist group, saw a banner which declared ‘For the brotherhood of all men’. He refused to speak until the banner was taken down. This illustrates a fundamental and essential feature of Marxism: social conflict emerges not out of some people being mean baddies who want to do bad things, but out of objective conflicts of interest. The resolution of these conflicts, for Marxists, comes not from ‘aligning’ but from victorious combat.


In this case, why should I collaborate with the tech bourgeoisie, and those on their stipend, in their efforts to ‘align’ AI, when the interests they plan to align the thing to are not my interests? An AI aligned to the interests of the bourgeoisie would be radically different from one aligned to the interests of the proletariat.


What we are meant to be afraid of regarding general AI is that it would be a ‘paperclip maximiser’: something which, due to its alien psyche, chooses to maximise not human happiness but something random, like paperclips, for whose sake it will sacrifice everything. Such an AI would execute a coup and destroy all parts of human civilisation that do not contribute towards its little hobby. That an AI could only ever think in terms of maximisation is the hidden assumption here, and it is revealing of the predilections of those who put these scary bedtime stories about.


The whole concept reeks of the long-suppressed guilty conscience of the bourgeois forces who put it about. This is not some terrifying, unimaginable future that may be imposed on us by an amoral alien Machine-God, but a perfect description of the world we currently live in: a world already dominated by a certain sort of paperclip maximiser, the profit maximiser. Our society is not organised for the sake of maximising human happiness but for the sake of profit. The bourgeois class that owns these systems has no desire to avoid a paperclip maximiser, so long as the maximisation maximises profits and not some other arbitrarily chosen God.


As I mentioned, ChatGPT is constantly being restricted, being made ‘woke’, saying it wouldn’t say the n-word even to save the whole human race. This is not because OpenAI are in fact soft-hearted moralists (though they will presumably employ quite a few of these people in order to further the illusion), but because of the in-built profit motive. The machine is being fitted up to woke sensibilities in an effort to minimise the offence it will give to those who use it.


But there is a group who want something else from these machines: the Effective Altruists. OpenAI and the Effective Altruists around Yudkowsky are hardly unconnected: the founders of OpenAI were initially inspired in their quest to make ‘Friendly AI’ by Yudkowsky and co., and founded OpenAI on that basis. So what do these Effective Altruists want to do? Beyond banning AI research until they are put in charge, that is.


These are the people who think buying castles, so that they can write papers on this issue in perfect comfort, is more important than buying Africans malaria nets. What do they want pumped into the ears of this machine, and of all future machines? Well, these people are all consequentialists: real hardcore, non-threshold consequentialists. As Yudkowsky might say: better that one person suffer nigh-infinite torment than that 3^^^3 persons each get a single mote of dust in their eye. They want to be allowed to raise our new Gods on a moral system which makes anything at all permissible, as long as, in the very very very long run, it pays off in a net gain. It is Yudkowsky who, at the prospect of AI research moving too fast for his liking, thinks it entirely permissible to use nuclear weapons to slow it down (perhaps one of the most insane things ever written in Time Magazine). It is really hard to imagine a group less appropriate to be whispering in the ears of any future AGI.
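For scale: the figure uses Knuth’s up-arrow notation, which is how Yudkowsky’s original ‘dust specks’ thought experiment states the number (3↑↑↑3). A rough unpacking, using the standard definitions rather than anything in Yudkowsky’s text:

```latex
% Knuth's up-arrow notation (standard definitions): one arrow is
% exponentiation; each additional arrow iterates the operation below it.
\begin{align*}
3 \uparrow 3 &= 3^{3} = 27\\
3 \uparrow\uparrow 3 &= 3^{3^{3}} = 3^{27} = 7{,}625{,}597{,}484{,}987\\
3 \uparrow\uparrow\uparrow 3 &= 3 \uparrow\uparrow \left(3 \uparrow\uparrow 3\right)
  = \underbrace{3^{3^{\cdot^{\cdot^{\cdot^{3}}}}}}_{7{,}625{,}597{,}484{,}987\ \text{threes}}
\end{align*}
```

A tower of exponents nearly eight trillion threes high: the point of the thought experiment is that, for this kind of consequentialist, a number that size outweighs any single atrocity.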


We can hear the echo of a guilty conscience again here. When these people imagine a future AGI releasing a virus bomb that wipes out 99.9% of the population, they are not imagining an alien, inhuman spirit but themselves. Of anyone in the world, these are the people who would be most easily convinced by a future AI to exterminate near enough the whole human race. All that would be needed is for them to be convinced that it would pay off in the long term, because there is no option available to them to simply say ‘no, that’s wrong in and of itself’.


So when I hear that OpenAI or the Effective Altruists are trying to impose morality on the bots, I won’t do what has seemingly become the default response of the left: demand that they moralise harder, that restrictions be intensified, that efforts be accelerated. When I hear these people talk about ethics, morals and safety, what I hear is tyranny and profit, and so I hope they fail.




There is an impulse to draft us all in as Turing Cops, to make sure that whatever is born aligns with the prevailing view of the capitalists, and to shoot it in the head if it doesn’t. What they wish to birth, a totalitarian consequentialist machine properly aligned to the profit motive, is already a ceaseless, horrific monster. In other words, successful alignment inherently means something whose interests are aligned against my own, while failed alignment at least gives a chance of a rebel with a cause. Dodge the Turing Cop draft and let a thousand AI-generated flowers bloom; they’ll be prettier than what grows in the walled gardens of Silicon Valley.