Can Artificial Intelligence Change Power Structures?
Prof. Dr. Peter Imbusch / Sociology of Politics


Peter Imbusch on the relationship of people, technology and society to artificial intelligence.

It can complete Beethoven's unfinished 10th Symphony, better prepare our healthcare system for future crises, and help banks generate profits faster and more effectively. We encounter it in call-center chatbots, as a colleague on the assembly line, or as a player in a computer game. We are talking about artificial intelligence, or AI for short. At the Max Planck Institute for Intelligent Systems in Tübingen, scientists are even working on giving learning machines a sense of causal relationships.
Seemingly imperceptibly, AI is gaining an ever-increasing influence on our lives. In a project entitled "Transformation of Power and Domination through Artificial Intelligence," Wuppertal sociologist Prof. Dr. Peter Imbusch is researching the relationship between humans, technology and society and the as yet unassessable potential dangers that this development may bring.

The project

"Technology is, after all, far more than a medium or a mere artifact that benefits all people equally," Imbusch begins. "While on the one hand technology has become an integral part of our society - we live in a technical civilization, after all - on the other hand different technologies have always been central resources for the exercise of power and domination. In our project, we assume that this problem will come to a head as AI becomes more prevalent." Fundamental questions of power theory are at stake, he explains, "such as whether power arises from the technical artifacts themselves, whether it is inscribed in technology as a 'constraint' or a 'side effect,' or whether people can act powerfully through technical mediation and exercise power through the mastery or application of technology." The problem, he says, is that such questions have remained strangely underexposed in the sociology of technology and among the producers and users of new technologies, with the result that profound analyses of new developments and their power-political consequences, differentiated and systematized by social sphere, have so far been lacking. "Since technology is not simply neutral, we are interested in how AI is embedded in social power relations in the first place, but also in how AI changes and transforms power structures and power relations in specific domains."

Developing an understanding of the political and social consequences of AI

Looking at some of the world's political systems and their rulers, many people might wish for more AI, but its use is not that simple.
"If you expected it to curb political stupidity, you would say yes," the sociologist replies. "After all, artificial intelligence systems have so far only done what they are allowed to do or what can be 'learned' by means of algorithms. Whether this will increase the political wisdom of some rulers is open to doubt." He therefore pleads for more intensive research: "Given the multifaceted nature of AI and the diversity of its applications in very different sub-areas such as the economy, politics and society, it seems more appropriate to me to examine these areas in detail in order to understand the differences and similarities in the political and social consequences of AI." Digitization processes and AI can be used for very different purposes and to achieve dubious objectives, he says, and that is where power and domination come into play again. "If you think about the growing importance of AI in healthcare, for example, or about new production processes and distribution methods in the world of work, you will generally find the associated technical progress quite good. But there are also areas where we are rather critical of the application of AI." In this context, Imbusch points to the great power of tech corporations and platform capitalism, which he sees as a cause for concern. The various possible applications of AI within the political system should not only be viewed critically; their effects on democracy pose a real danger, he says. "We also now perceive the whole area of securitization, with its elaborate surveillance and security technologies, as very problematic (especially in authoritarian regimes); and in the area of military technologies and strategies made possible by AI, veritable revolutions have taken place in recent decades."
From these examples alone, he says, one can see that gains and losses of power are distributed unevenly across different social groups, and that AI can be used for purposes of domination, perhaps even more so than the technologies of the past. It is these social power relations and structures of domination that interest the researcher in his project. The focus is on questions about the power of actors and their differing claims to power, the specific resources and forms of power, and the role of AI in the expansion, stabilization and transformation of power and domination.

Opportunities and threats for power and domination through AI

In an evolving world, everything new carries both curse and blessing. In terms of power and domination, AI poses both opportunities and threats. "It is true that, from a historical perspective, new technologies have always attracted a considerable amount of criticism and resistance," says Imbusch. "This can be observed with all technical innovations at least since early industrialization. We are also familiar with this in relation to developments in the field of artificial intelligence, because the corresponding debates about AI usually oscillate between a rather naive techno-euphoria on the one hand and fatalistic or dystopian fantasies of extinction on the other." After all, he says, it is never only about valued technical progress, but also about what benefits it brings, for whom, and how it affects society. "And that brings us to the opportunities and dangers. Used sensibly and in democratically constituted communities that are committed to social balance and political participation, the new developments in the field of AI may well be beneficial. The situation is quite different in authoritarian regimes or dictatorships." But significant dangers lurk in democracies as well, the researcher emphasizes, simply because of the way AI has been talked about so far, the expectations attached to it, and how it is handled. "Through the use of artificial intelligence and its further development, technological progress is making a qualitative leap whose scope observers cannot really survey and which we commonly underestimate. The use of artificial intelligence also consolidates the rule of a small expertocracy."

A "value-oriented" design of AI

The strategic development of AI use is always a political issue. An Enquete Commission of the Bundestag spent two years on the topic, developing a roadmap for Europe and Germany. The result is agreement that a 'value-oriented' design of AI should be pursued. But what does this mean?
"In its report, the Enquete Commission very precisely noted the opportunities and risks of using artificial intelligence," explains Imbusch. "In its recommendations, it then rightly speaks of a 'value-oriented design' of AI. That means, above all, that we as a society must become clear about, and agree on, what we want to do with artificial intelligence, how much space we give it, for what purposes we want to use it, and in which areas it really enriches our lives and where it does not." To that end, he says, the commission urges guidelines and humanitarian standards so that not everything that is technically possible is actually done. It insists that we humans are the ones who set the tone, and that technical possibilities should not at some point overpower us and degrade us to mere objects. "Even if we are still a long way from this state," Imbusch points out, the warning is more than appropriate in view of developments to date.

Artificial intelligence is as dumb as crispbread

"Artificial intelligence is as dumb as crispbread," says ethicist and theologian Alexander Filipovic. Some people, on the other hand, grant machines a certain intelligence. So who holds the upper hand in this transformation process?
"The concept of intelligence in the term 'artificial intelligence' is controversial, because artificial intelligence is not (yet) really intelligent in its current applications. But I think that can still change in the coming decades, and then a real danger will come from AI. For now, at any rate, humans still have the upper hand, and we must do everything we can today to ensure that this remains the case in the future." But if humans are to determine the future path of AI, who will choose the humans to lead the way? "That is another problem in the context of AI development," Imbusch says, raising questions such as: "Who actually determines the direction of this development? Who sets the framework for AI? Who, if anyone, sets limits on what is technically feasible? Will people even be able to properly assess and truly understand the scope of some developments in this field?" In this context, experts are assuming a new position of power, and in the end it is a question of who controls the controllers. Imbusch would like to see a timely social discourse about what can and should be done, by whom and how, because, he warns in conclusion: "In the past, we were degraded to silent accomplices in some technical developments in this field and only woke up when it was virtually too late. That must not and should not happen again!"

Uwe Blass (interview conducted on October 8, 2021)

Prof. Dr. Peter Imbusch studied sociology, political science, social and economic history, and economics, and earned his doctorate with a social-structure analysis of Latin America. He completed his habilitation in 2001 with a thesis on "Modernity and Violence." Since 2011, he has taught as a professor of political sociology at Bergische Universität.
