
As artificial intelligence (AI) seeps into our daily lives, its impact on our thinking capacities is becoming increasingly clear. AI is replacing jobs, expanding government and corporate surveillance, and luring vulnerable internet users into rabbit holes of loneliness and psychosis. Large language models (LLMs) such as ChatGPT, in particular, have rapidly become part of our daily routines. Workers use LLMs every day to write emails, summarise reports or brainstorm ideas. More and more people form opinions on all kinds of topics by discussing them with an LLM. Many use ChatGPT and its equivalents as personal assistants to coordinate their calendars, as secretaries to draft their emails or student essays, or even as 24/7 therapists now that access to mental health care is becoming scarcer and more expensive.
Meanwhile, the impact of LLMs on our cognitive skills is also becoming increasingly clear: the more people outsource argumentation and critical thinking to technology, the harder they find it to practise those thinking skills without technological support. Through so-called ‘cognitive offloading’, thinking tasks such as synthesising information, forming and structuring arguments and remembering data are entrusted to technology, which thus takes over crucial functions of our brain. In time, a doomsday scenario looms in which people are only capable of critical thinking if they plug their brains into machines. This scenario has recently received the ominous name “brain rot”: the more we offload critical thinking onto online applications, the more our brains atrophy until nothing but an empty husk full of online conspiracy theories and hate speech remains. If the German philosopher Immanuel Kant once summed up the message of the Enlightenment as ‘Dare to think’, today the light box of our computer screen says, ‘Please, leave the thinking to us.’
European policymakers are not blind to the impact of LLMs. The European Commission has developed a European Approach to Artificial Intelligence that combines public investment in AI with an emphasis on ethical AI policies. These documents address some of the big challenges for European governance today, such as European citizens’ right to privacy or the dangers deepfakes pose to democracy. The issue of skills, however, mainly comes up in the field of labour policy. The Commission warns of the dangers AI poses to employment and therefore advocates reskilling potentially replaceable workers: the government should invest in teaching new skills to workers at risk of losing their jobs to AI systems. But does this go far enough? Can we simply let our brains rot as long as the EU funds some workshops to teach us new skills, which we can gladly exercise under the panoptic gaze of our newly coded machinic brains? Peter Thiel, figurehead of the far-right turn in Silicon Valley, sees the impact of AI as far more sweeping. Back in 2010, Thiel said in a speech:
The basic idea was that we could never win an election on getting certain things because we were in such a small minority but maybe you could actually unilaterally change the world without having to constantly convince people and beg people and plead with people who are never going to agree with you through technological means. And this is where I think technology is this incredible alternative to politics.
Thiel said the quiet part out loud: for tech business, democracy is an obstacle to profit and progress, and technology is a means to circumvent democracy. Technologies like AI should cement policy into the software that runs our lives, so that we ourselves have less and less to decide. The fact that systems like ChatGPT impair our ability to think is thus not a bug but a feature. Those who control the funding and design of the technological underpinnings of our thought need not bother with democratic debate. Why enforce anti-discrimination law for job applications if AI-driven CV-screening software automatically turns applicants down? Why organise elections if citizens ask ChatGPT for whom they should vote? Why tackle online hate speech if Elon Musk’s Grok can spit out thousands of anti-Semitic tweets in seconds? Thiel, meanwhile, is known as one of Donald Trump’s most rabid supporters. He decisively rejects democracy and the welfare state in favour of a dictatorship of tech entrepreneurs. Merely retraining people who have lost their jobs to AI will not avert this. LLMs are the vanguard of a political coup.
This problem is not new, by the way. As Matteo Pasquinelli writes in The Eye of the Master, AI has been a project of absorbing and atrophying human skills from its inception. In the 19th century, artisans protested the industrialisation of factories because the machines made their thinking work redundant and centralised all control of the factory with the bosses. The assembly line mimicked the human labour process so that workers themselves no longer needed to understand the overall production process. They could let their artisanal brains rot because machines had taken over their vital functions. Specialised craftsmen thereby became replaceable by unskilled labour, because the brains of the factory now resided in the machinery: each worker only had to perform his small tasks on the assembly line, while the overall coordination of the factory was in the hands of the machines and the managers who operated them. Technological innovation was a means of centralising control over the labour process and moving it from workers to management. One might have argued for retraining artisans then too, but the deeper political problem was the power relations on the shop floor itself. Technology shapes those power relations but is itself rarely the subject of democratic political debate. After all, factory bosses unilaterally decide which technologies to invest in and how to implement them.
Especially at a historical pivot point like today, when more and more US tech entrepreneurs are taking extreme right-wing positions, we need to think about how much power we want to give AI technology and LLMs. Elon Musk’s Grok recently caused a scandal with its sudden turn toward anti-Semitism and conspiracy theories about white genocide. Yet Musk himself has entertained these ideas for a long time, so it comes as no surprise that his technologies have come to mimic his own rotten thoughts. Don’t these systems form a Trojan horse in our society? We have enthusiastically brought in these technologies to make our (thinking) work easier, while the far-right views of their owners sneak in through the back door. We gladly let ChatGPT write our emails, but do we realise that by doing so we are outsourcing our critical thinking skills to companies with potentially evil intentions? Previously, only our computers could be infected by viruses, but if we link our brains directly to far-right software, those viruses might well nestle in our own brains. Do we want our reasoning controlled by people who openly claim that racism is OK, that trade unions are dangerous and that gender equality is too ‘woke’?
A European policy for reskilling humanity in the age of AI must therefore go beyond retraining the unemployed. The problem is not merely one of employment but of democracy. As long as private companies with fickle yet unaccountable CEOs own the basic technologies that shape our daily lives, we are at the mercy of their arbitrary power. We need more democratic governance of AI itself before it is too late for our rotting brains. That does not necessarily mean banning or abolishing AI, but we do need to rethink the institutions that decide on the funding, development and implementation of new technologies. Who decides which AI systems are built at all, and for what purposes? A democratic society worthy of the name claims these powers for itself instead of letting tech entrepreneurs and their financiers decide what our futures are going to look like.
Allow me to make one suggestion. In the early 1970s, the Chilean government launched Project Cybersyn, a rudimentary AI system that was to steer the Chilean economy in real time. When the socialist president Salvador Allende was elected, the US cut off all technology exports to Chile. Allende, however, saw this as an opportunity to develop technological sovereignty in the Global South: publicly owned socialist technology as an alternative to capitalist technology under private ownership. The priorities were different accordingly. Instead of mimicking human communication, as LLMs do, the Chileans were more interested in the exchange of information between factories. In the Soviet Union – but also among anti-Soviet socialists like Allende – AI development was never framed as an intelligence arms race with humanity. The point of AI was never to outperform human beings at tasks like communication or artistic creativity. Socialists were more interested in AI as a tool for the complex calculation problems of economic planning. They aimed not to let our human brains rot but to use AI for cognitive tasks human brains could never have performed in the first place. The top-down planned economy of the Soviet Union, however, was terribly inefficient because the central government lacked reliable information to make efficacious economic predictions; by the time information reached the government, it was often already out of date. Project Cybersyn was supposed to solve this problem by putting factories directly into contact with each other – an option the Soviets eschewed because it would have weakened the central party bureaucracy. With real-time information exchange, the Cybersyn system would let factories tailor the economy to the needs of the population.
Large logistics companies like Amazon or Walmart use versions of this kind of software today to manage their supply chains. But here again, technology mainly serves to centralise control in the hands of one private company. Cybersyn’s intention, by contrast, was to let factories negotiate production quotas and distribution chains among themselves. AI provided only the computational support for drawing up realistic plans. Critical thinking was still necessary, but it could count on reliable calculations of production statistics to conduct informed negotiations and make efficacious decisions. With Cybersyn, the human brain was not to rot but to draw on calculative capacities beyond its own powers. Factories would decide for themselves at what rate they would work and how much they would produce, while the AI would calculate the economic impact of those decisions and suggest how to align them with the rest of the economy. Due to General Pinochet’s far-right coup d’état in 1973, the project was never completed. But the idea was beautiful: a publicly owned AI system that gave the population a direct say in the technological underpinnings shaping their working conditions. Instead of yet another update of ChatGPT, perhaps it is time for Cybersyn 2.0.
Tim Christiaens, assistant professor of economic ethics, Tilburg University. Bluesky: @timchristiaens.bsky.social
