The Philosophical Front of AI
Technology needs thought.
Alexander Dugin argues that Vladimir Putin’s new AI commission must confront the deeper philosophical question of what intelligence itself truly means.
Vladimir Putin has signed a decree establishing a commission on the development of artificial intelligence (AI). The commission will operate under the President of Russia. Yet the issue of AI is not merely—or even primarily—a technical matter. It is a philosophical and conceptual problem. It calls into question rationality itself, the very human capacity for thought.
Because we are the species Homo sapiens—the rational being—this development calls into question humanity as such. Accordingly, in my view, if a commission on AI development is to be created (and it has now been created at the highest level), it must include a philosophical dimension.
What is referred to as AGI (Artificial General Intelligence) or the so-called technological singularity is, broadly speaking, a prospect of the very near future. It implies the replacement of humanity as such by artificial intelligence. This is a subject that demands extremely serious reflection, and technological development in this field cannot proceed in complete isolation from its philosophical implications.
Dmitry Grigorenko and Maksim Oreshkin, who have been appointed to head the newly formed Commission on Artificial Intelligence Technologies—along with the other talented and effective technocratic administrators who serve on it—are not philosophers (with the exception of Defense Minister Andrey Belousov). Yet in my view, the commission must include a philosophical component, because without it any action in this sphere becomes extraordinarily dangerous.
Today, artificial intelligence has become a domain of top-level global competition whose stakes are at least as high as those of nuclear weapons, perhaps even higher.
Of course, a sovereign civilization-state such as Russia must develop its own sovereign technologies in this sphere. Yet even here—at the level of sovereign AI—the civilizational and philosophical dimension reappears.
The subject of artificial intelligence is, first and foremost, philosophical. Adapting AI to a sovereign civilization-state—to Russia—requires an additional philosophical effort. Yet we often display a pathological disregard for thought. When we rush towards purely technical solutions, we gradually begin to fall behind even there, because technology is nourished by science, and science in turn is nourished by philosophy.
Let me emphasize: thought, theoretical vision, and answers to the most pressing questions—questions that properly belong to philosophy—are what inspire and propel science forward, and science in turn determines technological decisions. Philosophy cannot be replaced by science, nor science by technology. This proper hierarchy must be established at every level of state governance, especially in matters as inherently philosophical as intelligence itself.
How can we speak of intelligence—artificial or natural—when “thinking about thinking” is precisely what philosophy is? Aristotle defined philosophy in exactly these terms: it is that which thinks about thinking, about how we think. The philosophical dimension is therefore indispensable. Yet today it is almost entirely absent from our society. Within our social, technological, and administrative systems, the philosophical dimension is missing. This is deeply regrettable.
For example, Aleksey Chadayev¹ today proposes a number of insightful and well-conceived philosophical frameworks for logistics, including trade. Philosophy can certainly be applied there as well. Even more so in spheres that are philosophical by nature—worldview, geopolitics, civilization, sovereignty in its deepest foundations, strategies for the future, and, of course, high technology and artificial intelligence.
In my view, the neglect of philosophy in our society has now reached a critical stage. This cannot continue. Nothing functions properly in this direction because many people assume philosophy is entirely unnecessary. In reality, it is the one thing we truly need at this moment. And not only we.
(Translated from the Russian)
¹ Aleksey Chadayev is a Russian political strategist and public intellectual known for his work on state governance, ideology, and civilizational sovereignty.

The Murder of an OpenAI Top Engineer and the True Dangers of Artificial Intelligence
On November 22, 2024, 26-year-old former OpenAI engineer Suchir Balaji was brutally murdered in his San Francisco apartment.
Authorities ruled his death a suicide.
Suchir Balaji was a brilliant American IT engineer of Indian descent.
At the age of 22, he was hired by OpenAI as a top talent and played a key role in the development of ChatGPT.
In addition to his exceptional intelligence, he possessed a strong sense of justice and unwavering ethical principles.
It is therefore not surprising that he came to disagree with OpenAI's business practices and with the conduct of his boss, Sam Altman, toward whom he grew increasingly critical.
Altman is notorious within the company for his lies and power plays; Balaji had no tolerance for this and was ultimately disgusted by his behavior.
He also witnessed OpenAI's transformation from a non-profit, open-source project into a for-profit, closed-source company.
It's important to understand that the development of ChatGPT was only possible by feeding and training the AI with gigantic amounts of data, including vast quantities of copyrighted material.
OpenAI was only able to use this data free of charge and without the permission of the copyright holders because the company presented itself as a non-profit project.
Under US fair-use doctrine, the use of copyrighted material is more likely to be considered permissible when it serves non-commercial research in the public interest rather than profit.
In retrospect, it is clear that OpenAI deliberately exploited this situation. The billions in profits the company now generates are largely due to OpenAI's free access to this data during its non-profit phase.
For Suchir Balaji, this practice was completely unacceptable.
Suchir left the company in the summer of 2024, having made crucial contributions to the development of ChatGPT during his four years there.
In the months leading up to his violent death, he was preparing to launch his own startup and wrote a scientific paper on the future of large language models (LLMs) like ChatGPT.
In this work, which unfortunately remained unfinished, he refuted the so-called scaling hypothesis, championed by OpenAI and most other AI companies.
This hypothesis states that the intelligence of AI models can be developed indefinitely as long as they are fed enough data. It forms the basis for the grandiose promises of AI companies.
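The scaling hypothesis described above is usually formalized as a power law in model size and training data. The following minimal Python sketch is not from Balaji's paper; the functional form and fitted constants follow the widely cited "Chinchilla" parameterization and are included purely for illustration. It shows the diminishing returns built into such laws: loss falls toward, but never below, an irreducible floor.

```python
# Illustrative sketch: the scaling hypothesis as a power law,
#   L(N, D) = E + A / N**alpha + B / D**beta,
# where N is parameter count, D is training tokens, and E is an
# irreducible error floor. Constants are the Chinchilla paper's fit,
# used here only as an example of the functional form.

def scaling_loss(n_params: float, n_tokens: float,
                 E: float = 1.69, A: float = 406.4, B: float = 410.7,
                 alpha: float = 0.34, beta: float = 0.28) -> float:
    """Predicted loss under a Chinchilla-style power law."""
    return E + A / n_params**alpha + B / n_tokens**beta

# Diminishing returns: each tenfold increase in data shaves off less
# loss, and the loss can never fall below the floor E.
for tokens in (1e9, 1e10, 1e11, 1e12):
    print(f"{tokens:.0e} tokens -> predicted loss "
          f"{scaling_loss(1e9, tokens):.3f}")
```

Under any law of this shape, each additional decade of data buys a strictly smaller improvement, which is the kind of built-in limit that arguments about poor data efficiency point to.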
The imminent achievement of artificial general intelligence (AGI) has been announced for years.
AI models are supposedly about to develop superhuman intelligence (ASI = Artificial Super Intelligence), replace all kinds of jobs, cure diseases, create wealth for everyone, and so on.
In his unfinished essay, Suchir Balaji demonstrated, in an impressive yet easily understandable way, that contrary to the claims of AI companies, large language models can never reach the level of human-like general intelligence.
He predicted that the technology's abysmal data efficiency, a fundamental limitation, will inevitably slow the further development of AI models and bring it to a standstill long before AGI is achieved.
This is an inconvenient truth for the AI industry, which it is trying to conceal to protect its business model.
Suchir Balaji was also slated to testify as a key witness in a lawsuit against OpenAI, which involved, among other things, massive copyright infringements.
In the months leading up to his death, Suchir was in good spirits and looking forward to launching his own AI company.
On November 22, 2024, he had just returned from a short vacation with his closest friends.
According to the investigation by a private investigator hired by Suchir's parents, Suchir had ordered food that evening, listened to music, and worked on his laptop. According to the investigator's reconstruction, he ...
Read the full article for free on Substack:
https://truthwillhealyoulea.substack.com/p/the-murder-of-an-openai-top-engineer?utm_source=share&utm_medium=android&r=4a0c9v