Reconciling Nick Land and Alexander Dugin: Part Four
Capitalism and AI in the race between freedom and control
Michael Kumpmann explores how accelerationism reveals capitalism’s drive towards artificial intelligence as both a demonic force of creative destruction and a potential tool for reclaiming human and civilizational sovereignty.
Also read parts one, two, and three.
Accelerationism and AI
What, then, is accelerationism, and how does it work?
This is where Karl Marx, as well as Deleuze and Guattari, come into play — all of whom shaped the idea of accelerationism. Karl Marx believed that communists should make use of capitalism for their own ends: first, as Stefan Blankertz and Jean Thiriart described theoretically (and as China is currently putting into practice), in order to gain power for themselves — since the economy seeks the most efficient path to solving problems and rewards those who find that path. There is also a second aspect, closer to Samuel Konkin’s view: in the postmodern interpretation of Deleuze and Guattari, capitalism strives for the free flow of “energies” and “desires” (and seeks to channel these and create new desires), thereby tearing apart authorities, structures, and prohibitions.
If, for example, drugs are prohibited, capitalism’s solution is to use the TOR browser and order them illegally — but anonymously — on the Darknet through Silk Road. For Deleuze and Guattari, this also applies to the human ego itself, which is likewise torn apart by capitalism and fragmented into “desiring-machines.”
It must be said, however, that capitalism often does not “tear apart” structures but rather makes them usable for its own purposes. One example is the founder of Mitsubishi, Iwasaki Yataro, who essentially transformed his family from one of samurai into one of industrialists, today known among other things as automobile manufacturers. Another example: in Confucian countries, there was once a great annual examination through which the government sought to identify the most intelligent young men and recruit them as officials for the imperial court. In South Korea, this examination still exists today, only it is no longer administered by the royal court but by Hyundai and Samsung.
Capitalism, then, tears apart systems and structures through its striving for the “liberation of energies.” But the question is: where do these energies flow?
Nick Land has an answer. The first point is that free markets produce (artificial) intelligence. The reader might think, in view of things like YouPorn, Dschungelcamp (the German “I’m a Celebrity, Get Me Out of Here”), or gravel gardens, that quite the opposite seems true.
Yet one must understand what Nick Land means: the process is quite simple — better products bring greater profit, and therefore manufacturing processes and so on improve. But more is meant here, and to grasp it one must know and understand how artificial intelligence works.
The simplest model of an AI is the perceptron. A perceptron is an automaton modeled after the human neuron, with multiple inputs and outputs through which it connects to other perceptrons. What distinguishes a perceptron from another automaton, such as a Turing machine, is that it can decide how much attention (that is, how much weight) to give to each input. During training, these attention weights are adjusted after each pass, so that its connections to other perceptrons are formed anew: stronger links to the more important ones, weaker links to the less important ones. This shifting of attention is, in fact, the very process by which an AI learns.
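To make this concrete, here is a minimal sketch of a perceptron in Python. The training data (the logical AND of two inputs), the learning rate, and the number of passes are purely illustrative; the point is only that learning consists in shifting the weights.

```python
# Minimal perceptron: weighted inputs, a threshold, and a simple learning rule.
# The weights are the "attention" the unit pays to each input; training shifts them.

def predict(weights, bias, inputs):
    total = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > 0 else 0

def train(samples, epochs=20, lr=0.1):
    n = len(samples[0][0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            error = target - predict(weights, bias, inputs)
            # Shift attention: inputs that contributed to the error are re-weighted.
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# Illustrative data: learn the logical AND of two inputs.
samples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = train(samples)
print([predict(weights, bias, x) for x, _ in samples])  # -> [0, 0, 0, 1]
```

Modern networks stack millions of such units and adjust their weights with more sophisticated procedures, but the principle of learning as re-weighting remains the same.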
Another important example here is the programming language Prolog, developed in the course of AI research. It is not an AI in the strict sense, but it was created to enable precisely the sort of applications represented today by Grok or Bing Copilot. The special feature of Prolog is that one does not write command chains such as “Add 2, then multiply by 4, and store it in the processor’s first register.” Instead, one builds logical, associative relations like “Ralph is the father of Karl Heinz, and Thomas is the father of Ralph.” From these relations, Prolog can infer that Thomas is Karl Heinz’s grandfather.
Prolog is neither a neural network nor an intelligence capable of independently recognizing relationships. An AI, broadly speaking, is a system capable of learning, that is, of drawing conclusions from its results about how it can improve itself. Roughly, this works by assigning attention weights that determine which aspects of a problem are more or less important. At the same time, such weighting allows connections to be recognized, because things that frequently occur together become linked.
Prolog can indeed recognize relationships, but only those that the programmer has defined in advance. Prolog can infer that if Tim is the father of Kai and Thomas is the father of Tim, then logically Thomas must be the grandfather of Kai — but only if the programmer has explicitly defined what a grandfather is. An AI, by contrast, can arrive at that relationship on its own — and at other relationships as well.
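As a rough illustration (written in Python rather than Prolog, with invented names), the following sketch shows the kind of inference meant here: the facts are given in advance, and the grandfather relation can be derived only because the programmer has written that rule explicitly.

```python
# Facts of the form (child, father), as one would assert them in Prolog.
father_of = {("Karl Heinz", "Ralph"), ("Ralph", "Thomas"),
             ("Kai", "Tim"), ("Tim", "Thomas")}

def is_father(child, parent):
    return (child, parent) in father_of

# The "grandfather" relation does not exist until the programmer defines it:
def is_grandfather(child, grandparent):
    return any(is_father(child, p) and is_father(p, grandparent)
               for p in {f for _, f in father_of})

print(is_grandfather("Karl Heinz", "Thomas"))  # True, but only because we wrote the rule
print(is_grandfather("Kai", "Thomas"))         # True
```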
Prolog, therefore, is not an AI in this sense, because a human must first create the order within the dataset; only then can the system produce results based on that order. A Prolog program, once completed, is finished and unchangeable. An AI, however, can update itself. There are caveats here: the learning phase of an AI often consumes an extreme amount of computing power, which is why it is sometimes restricted or switched off once the system is in use.
A practical example can be seen when, for instance, Grok is asked for information about a disease and is then told that the user or their relatives have this disease: Grok can recognize that the user’s questions arise from a different context and motivation than mere scientific curiosity, and it therefore provides a different type of information. Prolog cannot do this. Prolog would simply deliver the bare facts, without registering this difference in the question.
The secret of AI, then, lies in the logical connection of information. This is precisely Nick Land’s point. One function of capitalism is to connect a person who can and wants to solve a problem with a person who has exactly that problem. It does not matter whether the topic is an obscure video game, flirting techniques, or books by Julius Evola. Capitalism operates by building connections — and the online techno-feudalism that arises from its development even more so. Online shops like eBay and Amazon also try to encourage customers to buy additional products via buttons such as “Customers who bought this product were also interested in the following products.”1
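A toy sketch of the co-purchase logic behind such buttons (the orders and product names are invented): products become “connected” simply because they appear together in past orders, and the shop recommends the most frequent companions.

```python
from collections import Counter
from itertools import combinations

# Illustrative order histories; a real shop mines millions of these.
orders = [
    {"Ride the Tiger", "bookmark"},
    {"Ride the Tiger", "reading lamp"},
    {"reading lamp", "bookmark"},
    {"Ride the Tiger", "bookmark", "reading lamp"},
]

# Count how often each pair of products appears in the same order.
co_occurrence = Counter()
for order in orders:
    for a, b in combinations(sorted(order), 2):
        co_occurrence[(a, b)] += 1

def also_bought(product, top=3):
    """Return the products most often bought together with the given one."""
    scores = Counter()
    for (a, b), count in co_occurrence.items():
        if a == product:
            scores[b] += count
        elif b == product:
            scores[a] += count
    return [item for item, _ in scores.most_common(top)]

print(also_bought("Ride the Tiger"))  # e.g. ['bookmark', 'reading lamp']
```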
A closely related concept here is the idea of hypertext. This concept was developed by Ted Nelson and later by Tim Berners-Lee. Ted Nelson felt that written and book culture were threatened by mass media and television — and that this had negative consequences. It should also be said that the Enlightenment itself and the early liberals were deeply influenced by the idea of book clubs and reading circles — see Frederick the Great and Voltaire. The liberal idea of the free individual is archetypically represented by the reader or author who, of his own free will, decides to read or write a book. The shift to radio and television correlates with the rise of totalitarianism in the twentieth century, and even in the liberal West it greatly encouraged the emergence of the “ignorant mass man.” Radio and television operate on the principle of “one sender and many receivers,” and according to the rule that “only one may speak; all others must listen.” Thus, at their core, these media embody the slogan: “Leader, command; we follow!”
Like some Enlightenment thinkers such as Denis Diderot, Ted Nelson wanted to create a library and encyclopedia — the so-called Project Xanadu. But instead of releasing it through publishers, he wanted it to be stored centrally on one or more satellites, so that every human being could access it.
And to further develop the culture of writing, he wanted to change the very nature of text. Instead of a hierarchy and structure of information fixed by the author, the focus was to be on connections. Every so-called hypertext would be linked through billions of cross-references to other texts, allowing the reader to jump from one text to another and decide for himself which information to retrieve. Once technology became advanced enough, the hypertext would no longer function through simple links, but would be able to modify or completely rebuild the text based on what the user wanted to know — similar to how an AI query works today.
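A minimal sketch, with invented document names and links, of what distinguishes a hypertext from a fixed book: the texts form a graph of cross-references, and the reader’s choices, not the author’s ordering, determine the path through it.

```python
# A hypertext corpus as a graph: each text carries links the reader may follow.
documents = {
    "enlightenment": {"text": "On book clubs and free readers.",
                      "links": ["xanadu", "rhizome"]},
    "xanadu":        {"text": "Ted Nelson's universal library.",
                      "links": ["rhizome"]},
    "rhizome":       {"text": "Deleuze and Guattari on non-hierarchical knowledge.",
                      "links": ["enlightenment"]},
}

def browse(start, choices):
    """Follow the reader's chosen link at each step and return the path taken."""
    path, current = [start], start
    for choice in choices:
        current = documents[current]["links"][choice]
        path.append(current)
    return path

# The reader, not the author, decides the order in which the texts are read:
print(browse("enlightenment", [0, 0]))  # ['enlightenment', 'xanadu', 'rhizome']
```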
It becomes clear that hypertext — already long before the great AI boom of recent years — was a step towards Deleuze’s theory of the rhizome, his idea of reterritorialization, and towards postmodernity itself. While books, television, and radio are the media of modernity, hypertexts mark the beginning of the postmodern era. It is also interesting that Diderot envisioned an elite of scholars who would curate libraries and encyclopedias and organize knowledge. Deleuze and Guattari, on the other hand, wanted to abolish hierarchies through their idea of the rhizome. A partial realization of Xanadu is Wikipedia, where anyone can edit texts and no hierarchy exists.
There are parallels between capitalism, the internet, and artificial intelligence — and internet commerce already functions as a kind of AI. This is why both the internet and capitalism strive towards the invention of AI. One could thus probably also explain why Steve Jobs was both creative and visionary: he tore down existing market systems and thereby released energies, which then reorganized themselves into today’s market for mobile devices and technologies. At the same time, this reorganization ensured that all people became more frequently connected with the global market and with AI.
From a postmodern perspective, beyond artificial intelligence, so-called spintronics is also “interesting.” Moore’s Law — that the number of transistors that can be built into a microchip doubles every two years — is slowly reaching its limits. The reason is that the circuits are now so close together that they can interfere with one another: a one can, through electrical interference, become a zero, and vice versa. The binary opposites literally dissolve (which, according to Derrida, is the very core of postmodernism). This leads to serious problems. For example, the sequence 11111111 represents the number 255, but if the leading one turns into a zero, we are left with only 127. Theoretically, such an error could suddenly leave you with 128 euros more or less in your bank account. This problem is also one reason why multi-core processors and parallel computers were developed, since they allow computational speed to keep increasing despite such limits. This, in turn, fits well with Deleuze and Guattari’s idea of schizophrenization, in which the ego is divided into several parallel feedback loops.
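The arithmetic of the bit flip mentioned above can be checked in a few lines (a purely illustrative sketch):

```python
value = 0b11111111              # eight ones: the number 255
flipped = value ^ 0b10000000    # interference flips the leading one to a zero
print(value, flipped, value - flipped)  # 255 127 128
```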
A futuristic solution to this problem, now under active research, is the aforementioned spintronics, which encodes zero and one in the spin, i.e. the orientation of the magnetic moment of atoms or electrons within a crystal. In its quantum variants, the binary system is transcended altogether: every zero is, in part, also a one, and vice versa. Alongside AI, this represents the most postmodern form of computer technology. It is interesting that Nick Land links the theme of time and the end of linear temporality with postmodernity, and that quantum computers are sometimes described as computing simultaneously across multiple timelines.
Alex Jones’s warning on Joe Rogan’s show (“Future Prediction Power,” “Current Prediction Power”) may be connected to spintronics, because such technology makes it particularly easy to compute many different variations of an event at the same time. A normal computer would have to restart the same program repeatedly with new inputs, whereas a spintronic or quantum machine would calculate all possible variations simultaneously — thus generating a report of all possible outcomes.
When it comes to “capitalism producing intelligence,” followers of the Fourth Political Theory may, however, have to make a normative logical move in the direction of Ayn Rand and Leo Strauss. One should insist that the market ought to promote human intelligence — and that man must still remain the being who does the thinking. The market is good when it allows man to unfold his intelligence and genius. Yet man must be prevented from handing over his mind entirely to the machine. Perhaps Strauss’s idea of classical education and classical virtue could provide a solution here — just as perhaps Crowley’s philosophy could, in the sense of saving the true/free will from technology.
According to Nick Land, man (and especially the state) does not understand what is happening and what the markets are concocting — and in the drive towards technology and AI, everyone is competing with everyone else, even if they do not see or comprehend it.
An important note here is that Dugin writes that it is better if “our side” possesses AI through the process of accelerationism rather than the “woke” side. Indeed, Nick Land says in his text “Meltdown” that both the side of regulation and the side of deregulation are in a race to see who will reach technologies like AI first. (Deregulation and regulation here mean “freedom versus control.”) Thus, following Land’s logic, the opponent gains control over a given technology if one fails to accelerate its development fast enough — but that someone will reach AI is already certain. The only question is: who will construct it first?
It is interesting that Dugin, in his book The Great Awakening vs the Great Reset, writes that the forces of the Great Reset wanted to use artificial intelligence for mass censorship. Yet what has actually happened is that the United States cracked the AI question first. Elon Musk, with his AI Grok, was a leading figure, advocating free speech and opposing censorship — which is why, quite logically, he liberated Twitter. Musk’s DOGE, as is well known, inflicted irreparable damage on USAID, which had been promoting censorship worldwide, particularly within the EU. Trump and Milei positioned themselves against Klaus Schwab at the WEF meeting. China, too, achieved a breakthrough in AI with DeepSeek. Meanwhile, the European Union — one of Klaus Schwab’s most loyal allies — has so far (as of February 8, 2025) achieved nothing in the field of AI, leading to mocking memes suggesting the EU would be better off working on things like non-removable bottle caps and internet censorship.
Elon Musk, then, by being ahead of the EU, was already able to inflict considerable damage on the Great Reset agenda — had it been the other way around, things might have become dangerous.
This example also shows that the whole matter is, quite literally, a “dance on the knife’s edge.” As in military affairs, one unleashes forces that could later destroy oneself. And anything that makes it easier for people to discover Plato, Nietzsche, and Evola online can also make it easier for others to access things like child pornography. Some practices of the Left-Hand Path teach, metaphorically, how to make use of the powers of demons. Accelerationist techno-commercialism is, in a figurative sense, likewise a game with demonic forces. This can also be taken quite literally. In his works, Nick Land has at times claimed to act on behalf of a chaos deity named Gnon, which shows striking parallels to Choronzon and to Lovecraft’s Azathoth.
One could say that Nick Land’s accelerationism, with its deterritorialization, contradicts the idea of traditionalism and the preservation of tradition. However, many American right-wingers combine both — supporters of the “Dark MAGA” aesthetic, neoreactionaries, and questionable, violent groups such as “Skull Mask” or “Siege.” They roughly base themselves on Evola’s Ride the Tiger and follow the Christian principle that “there is nothing right in the wrong.” They assume that remnants of traditional organizations, such as today’s churches in the West, are fundamentally corrupted and therefore beyond saving. Modern civilization, they argue, is something entirely different from traditional civilization, and since time is cyclical (in Evola’s sense), the decay of the civilization of the Kali Yuga must be accelerated in order to return to the traditional Satya Yuga.[10]
This, roughly speaking, is also the line pursued by apocalyptic sects such as Aum Shinrikyo. Aum Shinrikyo also sought to realize Isaac Asimov’s Foundation — a fictional religious and scientific order meant to guide humanity through the Kali Yuga, and to build, among other things, an encyclopedia and library to preserve the knowledge of higher ages.
The Bene Gesserit in Dune also display parallels to the Foundation. It is interesting that the science-fiction author and military psychologist Cordwainer Smith invented, in his books, a fictional organization with very strong similarities to Asimov’s Foundation and roughly the same goals. The name of this organization is the Instrumentality of Mankind. In Evangelion, the apocalyptic plan involving the AI-god is called the Human Instrumentality Project. Conspiracy theorists have speculated since the 1990s that there is a connection between Evangelion and Aum Shinrikyo.
“The world is now in the Kali Yuga, things are bad, and they will continue to get worse. Therefore, the only way forward is forward — deeper into decline — and only when things become truly catastrophic, when the great explosion comes, can they become better again.”
Although this sounds extreme, the law of entropy must be taken into account, which leads to the idea that at a certain point, it becomes cheaper to completely destroy something and rebuild it than to repair it. For example, it would be enormously difficult to repair a plate that has fallen and shattered into many pieces. It is far easier to melt all the fragments — thus destroying them completely — and then form a new plate from the material.
In regard to the connections explained in the previous chapters — between Tantra and related traditions on one side, and figures such as Shoko Asahara, Charles Manson, Roman von Ungern-Sternberg, Terence McKenna, Jack Parsons, and others on the other — it is useful to consider the theory of Georges Bataille. Bataille was a left-wing philosopher who concerned himself with orgies, limit experiences, and similar themes. His theories show parallels to those of Wilhelm Reich.
Bataille assumed that human beings possess, on one hand, a drive towards retention, inhibition, thrift, and control (which overlaps with Reich’s concept of character armor), and, on the other hand, a desire for excess, expenditure, waste, and orgiastic release. This drive towards waste and abundance includes both production and consumption. Man also wants to produce more than he strictly needs — see, for example, the computer industry. It is hardly necessary to have a new iPhone every year.
The urge for excess and waste is connected with limit experiences, sex, struggle with death, intense love, and also religion. Someone who gives everything to overcome a severe injury lives in excess, just like a lover who wants to sacrifice everything for his beloved and die together with her, or someone who invests everything in the sexual climax; a participant in an orgy or a great religious festival; an ascetic who seeks the extreme experience in ice and snow or starves himself into becoming a living mummy; or parents who spend their last savings to pile up a mountain of Christmas gifts for their child. The effect of this maximal release of energy — such as the child’s extreme joy — renders the loss of money irrelevant.
Terence McKenna, interestingly, compared the apocalypse to a religious orgy. And in a certain sense, the apocalypse is indeed the ultimate extravagant limit experience. The approaching eschaton even leads to an absolute paradox: since everything is destroyed, the striving not to waste resources becomes the very reason that many resources go unused and are lost. In the apocalypse, therefore, the desire to avoid waste is itself another form of waste. This brings about a return to the essential.
Jesus — who likewise preached a coming end-time — also said that his disciples should sell or give away all their possessions in order to follow him. That is precisely the same principle described here.
On this topic, the film Fight Club is especially interesting, since it expresses exactly this phenomenon. The main character begins as deeply inhibited and bound to his possessions. The fights represent a limit experience in Bataille’s sense, and Project Mayhem at the end is, in fact, a kind of secular apocalypse.
Blind incitement of chaos achieves nothing.[11] For one thing, increased entropy also means an increase in potential information — which in turn means rising uncertainty as to how things develop. For another, one must point out, especially regarding the left-liberals, that through mass immigration, gender ideology, and so on, they themselves are already undermining established institutions and fomenting social chaos (see the concept of anarcho-tyranny). Thus, one might in some cases be playing into the enemy’s hands with such a strategy.
One must bear in mind that partisans also live off stoking chaos. But partisans like Che Guevara depended on the active help of ordinary people (for example, tips, hiding places, donations, etc.). Without that popular support they would neither have survived nor prevailed.
This shows the following: using chaos as a tactic carries the risk that it turns into a kind of war against one’s own people. Che Guevara’s example demonstrates clearly that this would be absolutely fatal. One must therefore always act in concert with the people and win their sympathy; one must not be seen by the population as an enemy. This applies, of course, to entrepreneurs like Elon Musk as well. Populism must therefore be part of the strategy. A cautionary example of where elitism leads is Aum Shinrikyo, which ultimately became an elitist, aloof nerd clique in an ideological ivory tower and, understandably, had zero support among ordinary people. In fact, a single police operation destroyed the group in one day.
For that reason, certain accelerationist internet sects such as “Terrorgram” and the like are also completely doomed to fail, because they only reach mentally deranged internet addicts with no real-life presence. Although they have managed at times to deliberately exploit the so-called “Werther effect” (the tendency of reporting on suicides, mass shootings, or attacks to provoke copycat acts), in the long run they are not a threat to anyone.
For accelerationism, it would be far more effective if a broad mass of people suddenly began to ignore minor laws than if, every so often, some lone madman lashes out. A rampage by a single person can be stopped fairly easily by a special police unit. If twenty million people suddenly paid their taxes late or not at all, or began to break certain traffic rules, the system would instantly run into far greater problems — even though no violence would be involved. Sociology also shows that such comparatively harmless offenses, when the state cannot punish them all, generate large numbers of imitators far more easily than any single violent act. Someone who wants to copy a mass shooting usually has to be suffering from a severe mental disorder. But if, for example, a lawn bears a “Do not walk on the grass” sign while clearly visible footprints cross it everywhere, roughly every second person will feel free to walk across the lawn.
Secondly, one need not necessarily fight a priest who takes his job seriously and remains faithful to his traditional role, even if he belongs to a corrupted institution such as the Evangelical Church in Germany (see “vulva painting,” etc.). One should rather identify such traditional elements and cooperate with them. And if one’s own family is upright — and preferably also traditional — there is no reason to turn against it.
Because of the factor of potential information, those interested in accelerationism should probably also build their own traditionalist structures in parallel — so as to siphon off the enemy’s energies and redirect them towards themselves.
There are also some accelerationists who are followers of the Unabomber, Ted Kaczynski, and who therefore (among other reasons, out of concern for transhumanism) want to abolish computers and similar technology and overcome modern industrial society in order to return to a pre-industrial condition.
Left-wing accelerationists (to whom Mark Fisher partly belonged) hope, on the other hand, that accelerationism will ultimately abolish or radically transform some problematic features of capitalism — such as the compulsion to wage labor. Some of these ideas, such as Fully Automated Luxury Communism, are probably delusional and perhaps even dangerous (see the “Mouse Utopia Experiment,” in which laboratory animals were driven into social and mental collapse through an overabundance of luxury). Yet from the standpoint of the critique of technology, this topic is interesting. According to Jünger, the worker is the central figure of modernity, and many systems, such as the school system, exist precisely to produce obedient workers. If many jobs are replaced by AI, that would shake modern civilization to its foundations. Some also hope thereby to overcome the Protestant work ethic. And in the age of “bullshit jobs,” where the West glorifies labor regardless of whether it creates real value, this would indeed be a meaningful development.
(Translated from the German)
According to this video, Stafford Beer — the cyberneticist who, under Salvador Allende, sought to place Chile’s economy under the control of a central computer — later remarked that capitalism had, on its own, come much closer to achieving the goal of such a cybernetic system than his socialist experiments ever did:
What is most interesting here is that he envisioned people having a terminal in every household through which they could communicate their needs to production and distribution. The internet and smartphones have made this possible to a degree far beyond what he could ever have imagined.