The rapid development of so-called artificial intelligence has sparked a debate about the impacts of this technology on our economies, our political systems, our media and – less talked about – our prospects of surviving the severe crisis of life on Earth that our civilization has triggered. In these debates, technological development is often portrayed as a quasi-natural force, driven by innocent human curiosity and inventiveness. The one and only direction of that force is, according to this narrative, called progress. Its road is paved by milestones like the invention of writing, the printing press, the steam engine and eventually digital technologies. What can be done will be done. Societies and the planet will have to adapt.
What I am going to show is that, on the contrary, technological choices are highly dependent on power structures, property relations, economic and military interests and ideologies. They are by no means “natural” and one-directional. This also means that if we want to change course and steer ourselves through the existential challenges we are facing, we have to deconstruct the mythology that technological development is wrapped in.
The Steam Engine and the Automotive System
At any given moment, people invent stuff. We are a creative species. The question is which inventions are developed further, promoted and put to large-scale use. And this depends on social systems and power relations. I will give just two general examples.
The first one is the steam engine. The basic principle of the steam engine was invented 2,000 years ago by Heron of Alexandria. In the social system of those days, however, it was of little use, because ancient societies were not based on growth. At the end of the 18th century, in a time of full-fledged capitalism, the situation was quite different. The system was already about to hit limits to growth: there was a shortage of energy, especially of wood. Wood was crucial for many things, including smelting, especially for military production, which was key to political power. In Britain it became particularly scarce around the metallurgical centers, so prices rose sharply. In order to keep its empire running, Britain had to turn to a new energy source, one which had in fact been known for a long time: coal. So whether this technology was developed and used on a large scale depended on the power structures and vested interests of the time, not on the genius of inventors like Thomas Newcomen or James Watt.
Another example is the transport sector. In the early 20th century, there were two competing concepts of transport: one was public transport, the other was based on private cars. Car, oil and tire companies like GM, Standard Oil and Firestone bought up rail systems in 45 U.S. cities, including New York City and Los Angeles. And what did they do with this infrastructure? Did they run public transport? No, they destroyed it to make way for roads and highways. So public transport was crushed, and the U.S. became addicted to fossil fuel infrastructures. This story is also very instructive for our current situation when it comes to digital infrastructures: private monopolies tend to make us addicted to destructive systems.
From the Invention of Writing to Modern Mass Media
Now let's come back to information technologies and the question “Who owns the truth?”, which is the motto of this conference.1 Let's go back 5,000 years to the time when writing was invented. What was writing used for originally? Was it used for poetry or rational debate? It was first of all used as a logistical tool to organize trade and slavery. In those days, the first highly militarized city-states emerged in Mesopotamia and elsewhere, and with them the first permanent class formation in history, including systematic slavery.2 These societies, based on centralized power, needed bureaucracies – and therefore writing.
Later, in the third and second millennia BC, writing was also used to propagate the ideologies of the upper classes: their worldview, their religion, their thoughts about the universe. If you have writing at your disposal, you have more power to propagate your views. In the first millennium BC, however, we also see writing used as a tool of counter-power, for example by the early biblical prophets, who fervently criticized kings, property relations and the abuse of power. So there is a dialectic of technology. Large-scale technologies tend to be used at first predominantly by the powers that be. But some of them can end up undermining these very powers.
Something similar happened with the printing press in early modernity. What were the most widespread printed texts in the 16th and 17th centuries, the era of early capitalism? Some of them, like the famous “Twelve Articles” of 1525, which played an important role in the German Peasants’ War, called for social reform and justice. But most of them were religious and political propaganda, including many writings of Martin Luther directed against the peasants. One of the most widely distributed texts was the “Hammer of the Witches”, with about 300,000 copies. The key question for the development and impact of a technology is again: Who owns it? Who has the money and infrastructure to use and control it?
Things changed dramatically in the 18th century, however. Printing became much cheaper and more widely available. An educated middle class and a culture of debate emerged. Consequently, printing was increasingly used as a tool of counter-power, which eventually contributed to the French Revolution and to the revolutions of the 19th century.
Things changed again in the 1870s, when the rotary printing press was introduced. The new technology was very expensive; only rich people or corporations could afford the investment. This led to a process of monopolization of the press and to the emergence of the great press moguls of the era, like William Randolph Hearst, who became the model for Orson Welles’s famous film Citizen Kane. In Britain it was the Harmsworth brothers, Alfred and Harold, who controlled much of the press, including The Times and the tabloids; in Germany it was Alfred Hugenberg, who controlled half of the German press and contributed massively to the rise of the Nazis.
These men had the power to manipulate public opinion in favor of their own interests and of private capital in general. They also spread quasi-religious propaganda promoting nationalism, in order to distract workers from the class struggle and to mobilize them for World War I and World War II.
The Systemic Challenge of Universal Suffrage and the Manufacturing of Consent
Around 1920, the capitalist world-system faced a major problem. After more than a century of fierce struggle, people in most Western countries had finally achieved universal and equal suffrage, including women, workers, black people and indigenous communities. This posed a problem for the economic and political elites, who were extremely afraid that these people would vote them out – and the whole system out of existence. It was not by chance that at this very time theories about what was later called “guided democracy” emerged. One of their major proponents was Walter Lippmann, one of the first directors of the Council on Foreign Relations and a major liberal intellectual of the time. In his book “Public Opinion” (1922), he claimed that humanity is divided into two types of people: an educated minority who know how the system works, and a majority – the “bewildered herd”, as Lippmann called them – who do not know how politics and economics work and must therefore be guided. Public opinion, according to Lippmann, has to be shaped and ultimately manipulated in order to “manufacture consent”, as he put it, and keep the system running. Edward Bernays, the inventor of public relations, developed similar theories around the same time.
This ideology and the concentration of ownership in the press were the basis for the development of mass media in the 20th century. There were important challenges to that system, for example with the emergence of radio and television and the upheavals of the 1960s and 70s. At the beginning of the 21st century, however, property relations in Western media were about the same as a century before. In Germany, for example, more than 60 percent of the printed press is controlled by a tiny number of billionaires. In France the situation is similar; in Britain and the U.S. it is even worse.
The internet then changed things dramatically. At the beginning, it was a decentralized structure that challenged established corporate and state media. But it was increasingly privatized and turned into proprietary products. Today it is dominated by the monopolies of Big Tech, which increasingly work in close collaboration with governments, for example when it comes to censorship or military contracts.
AI Mythologies and the Crisis of Truth
AI, of course, amplifies this concentration of power. But when we talk about AI, we first have to deconstruct the mythology surrounding it. When we hear buzzwords like “singularity” or “disruption”, which suggest that AI will eventually outdo humans, we have to be aware that artificial intelligence is in a way a misnomer. Computers and algorithms neither understand nor think, no matter how complex and powerful they are. As the philosopher John Searle rightly pointed out, they are just syntactic machines: they process signs without understanding their meaning. This does not change even if programs like ChatGPT pass the so-called Turing test, in which people are supposed to find out whether they are talking to a computer or to a human being. In fact, this kind of test tells us nothing about whether machines are conscious and understand what they are doing – only about how well they have been trained to simulate human behavior.
The mythology surrounding AI is used by Silicon Valley and others to distract us from what is really happening. One major problem is the manipulation of public discourse by opaque algorithms. Although AI doesn’t understand anything, it can seriously distort our understanding of reality. Geoffrey Hinton, sometimes referred to as the “godfather of AI”, remarked after quitting Google: “People will not be able to know what is true anymore.” Micro-targeted misinformation and disinformation powered by AI chatbots usher in a new era of manipulation, especially when people don’t know that they are being micro-targeted. As long as disinformation is spread in a shared public sphere, it can be challenged in that arena. But micro-targeting destroys this common sphere. Everyone is trapped in his or her own bubble of manufactured illusions. And again, those who own and control the technologies have an overwhelming influence on the content of these bubbles.
However, we have to face the fact that the crisis of truth did not start with social media or AI. When the New York Times claimed in 2003 that Iraq possessed weapons of mass destruction, it created a fake news story that contributed decisively to a war that left one million people dead. Major traditional media, whether liberal or right-wing, sometimes spread very dangerous and consequential fake news. It’s not bad AI versus our good old media – nor the other way around.
AI is also, as Naomi Klein has pointed out, perhaps the biggest theft in human history: a gigantic privatization of the work of billions of people and of public knowledge, used to train proprietary products without consent or permission. AI is also powering pervasive surveillance technology on a massive scale. And its military use, for example in autonomous weapons, is extremely dangerous as well.
AI and the Limits to Growth
At this point, however, I want to highlight another aspect that is less talked about: the ecological and climate impact. Digital technologies are the fastest growing energy consumer in the world, with an exponential growth rate of about six percent per year. That means they double their energy consumption roughly every 12 years and increase it tenfold in about 40 years. If we go on like that, in 58 years their hunger for energy would be equivalent to today’s total global energy production. It is clear that we cannot go on like this on a limited planet, where the necessary fast transition to renewables is already challenged by the huge energy demands of other sectors like transport and industry.
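A quick check of this arithmetic, assuming a constant growth rate of six percent per year: the doubling time is ln(2)/ln(1.06) ≈ 11.9 years, a tenfold increase takes ln(10)/ln(1.06) ≈ 39.5 years, and 58 years of such growth amounts to a factor of 1.06^58 ≈ 29 – that is, a roughly thirtyfold increase over the digital sector’s current energy consumption.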
Silicon Valley, of course, claims that AI will help us save the climate and make more rational choices. But in fact, what many of the CEOs of those companies are doing is preparing for the worst. Sam Altman, for example, the CEO of OpenAI, remarked in 2016: “I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur I can fly to.” That is the mindset of some of these Big Tech moguls. They know how close we are to ecological collapse. And they know that AI won’t save us.
Having said that, I think that digital technologies and AI could certainly play a role in an ecological transition – we need smart electricity grids, for example. But under the current economic framework, AI will likely have far more destructive consequences than benefits. So it’s all about changing the framework.
Breaking Up Big Tech: The Question of Public Ownership
Now, what can we do about all of this? How do we address the challenges of the crisis of life on Earth, of extreme inequality, and of the manipulation of public opinion? We have to address all of these issues at the same time, because they are closely interrelated.
The first thing we have to do is regain the ability to make choices. And this means demystifying technology. It is not a quasi-autonomous subject headed in a single direction called progress; it is designed by people who have interests. And it can be changed by people.
The next step is to organize democratic processes of public debate on choices about technological developments. I would like to propose just four guidelines for such a discussion. The first is saving the planet, which means drastically reducing our consumption of raw materials, energy, water and land. This is a must, a do-or-die choice: if we fail, there will be no civilization. The second is advancing social justice, which means reducing income and wealth gaps. The third is enhancing human well-being and health, and the fourth is enabling well-informed public debates on these issues, undistorted by power structures and vested interests.
The key question, then, is which technologies, which property structures and which decision-making processes could help bring all this about. I would like to focus on two aspects. The first is ownership. Let’s go back to the rotary printing press. When new tools are introduced that can be used for mass propaganda, it is not very wise to leave control over such means in the hands of a tiny elite. The same is true for Silicon Valley today. We finally need to break up the big tech companies. They have far too much power over far too dangerous tools. And they are driven by a single base motivation, which is often at odds with the common good: profit. We have to come up with concepts for public ownership of key digital infrastructures – something at times referred to as “digital socialism”. Public ownership, however, does not necessarily mean state ownership, which can be very problematic, especially when it comes to media. It would rather mean the creation of new commons beyond both corporate and state control. Such a shift would also include transparent algorithms and key regulations, like a ban on AI chatbots.
This also entails changing the business model of the tech sector. Digital tools should not be designed like addictive drugs, as they are now. The current business model of Silicon Valley is to keep people hooked on certain platforms in order to make more money from ads. This has already contributed to an enormous psychological crisis, especially among young people. We shouldn’t design – or allow – addictive digital drugs for the sake of profit.
On the Path to Digital Sufficiency
The second point is quantity. We live on a limited planet whose boundaries we have already far overshot. Moreover, we are close to decisive tipping points in the Earth system. If we pass them, the Earth could flip into a different state, called Hothouse Earth, leaving many parts of the planet uninhabitable. So we have to ask a very fundamental question: What do we really need in order to allow all human beings and other species to live a decent life, now and in the future? If we take this seriously, we have to make choices about where to employ raw materials, energy, work, creativity and money. We cannot have everything at the same time; that is a very anthropocentric, egocentric and childish idea. It is hard to accept, however, given the hubris embedded in our culture and in our economic system, which depends on eternal growth. Facing the limits to growth indeed challenges this very system at its core.
In order to make the necessary choices, we have to consider two things at the same time: the redistribution of wealth, and sufficiency – how much do we need for a decent life? For the social-ecological transition in general, there are many proposals. I will cite only the proposal for a Global Green New Deal by Robert Pollin and Noam Chomsky, which would cost about five percent of global GDP annually – a doable sum. It could be financed by taxing the rich and drastically reducing inflated military budgets. For this kind of transition, again, ownership is key. Take the energy sector: if decentralized renewable energy is owned by the public or by cooperatives, it can be produced to meet people’s needs. Such providers don’t have to grow, don’t have to sell more and more energy, as a capitalist enterprise must. The same is true for public transport systems instead of private cars.
We also have to talk about digital sufficiency. The digital sphere is no exception to the limits to growth or to the principle of sufficiency. So we have to ask again: What do we really need for the well-being of ourselves and of the planet? Any sincere answer to this twofold question would entail some sort of restriction on boundless digitalization. When we add social justice to the picture, it leads us to a concept of basic digital services for all, prioritized according to social and ecological criteria. 4K movies on our smartphones and self-driving cars would most probably not be part of these essential services. Given the existential crisis that humanity is in, we should focus on what we really need.
To sum up: AI could play a – probably modest – role in steering us toward a livable future, but only under very different economic and political conditions. This means we should engage in changing these conditions and start reorganizing the fundamental institutions of our civilization.
See also: Fabian Scheidler: Our Life in the Matrix: Technological Progress Mythologies and the Pitfalls of Digitalization
1 The keynote was held at the symposium “Who Owns the Truth?” at the Ars Electronica Festival in Linz, Austria.
2 See also Fabian Scheidler: The End of the Megamachine. A Brief History of a Failing Civilization, Washington D.C. 2020.