Prof. Dr. Tobias Matzner: Understanding the affect of algorithmic publics
News from Jul 15, 2021
Several forms of Western publics are increasingly structured by algorithms. Traditional forms of gatekeeping are replaced by algorithmic filters that compile “feeds”, search engine results, etc.
Increasingly, algorithms also create content, e.g. as chatbots disguised as human social media users. Algorithmically generated measures such as likes, retweets, or subscriber counts join established markers of visibility or relevance. Many of the recent debates regarding such developments and other implications of algorithms for publics concern affect. Those who follow Habermasian ideals of publics fear a shift towards the affective, populist, and partial through algorithms, perhaps most present in public discourse on filter bubbles, behavioral manipulation, dark patterns, and similar phenomena. Yet prominent adherents of affective publics, too, consider algorithms a threat, provoking wrong, detrimental, or defective forms of affectivity. Even where publics are not directly concerned, critical debates on algorithms often pit a rich, relational form of intersubjectivity against a more affective, even behavioral form of interaction brought about by algorithms.
Considering the history of thought on algorithms and technology, this connection of algorithms and affects seems surprising. For a long time, the algorithmic and the machinic have been associated with instrumental reason, “cold” pitiless logic, and the proverbial reduction of thinking and feeling beings to numbers or cogs in the machine.
Given that peculiar development, I want to argue that the combination of algorithms and algorithmic publics with affect is not based solely on efforts to describe recent effects of information technologies. Rather, it is the result of a particular contemporary way of supplementing thought and debate on algorithms with other discourses and theories.
For example, the potentials and threats of “nudging” stem from combining algorithmic possibilities with a view of cognitive dispositions taken from behavioral economics. Theories of “filter bubbles” and “echo chambers” arise when thought on algorithms is supplemented with concepts like “homophily” from empirical sociology. Cambridge Analytica and other attempts at psychological profiling are based on an algorithmic variant of the OCEAN model from psychological trait theory. Thus, the effects of algorithms on publics that are diagnosed hinge on these discursive or theoretical supplements.
The selection of these supplements is, however, not driven by descriptive accuracy alone. Rather, recent developments in machine learning and related fields afford the algorithmic implementation of certain theoretical ideas from the aforementioned fields (e.g. measuring homophily in very large social graphs). Thus, there is a structural parallel between certain characteristics of theories and algorithmic possibilities. The impressive algorithmic efficacy that such applications lend to these “implementations” of said theories makes them plausible candidates for understanding what is going on, leading to the particular combination of algorithms and affect regarding publics mentioned above.
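To make the homophily example concrete: one common operationalization, not taken from the talk itself, is edge homophily, the fraction of ties in a social graph whose endpoints share an attribute (e.g. a political leaning). The following is a minimal illustrative sketch; the function name, the toy graph, and the labels are all hypothetical.

```python
# Illustrative sketch of "edge homophily": the share of ties connecting
# nodes with the same attribute value. Data and names are hypothetical.

def edge_homophily(edges, attribute):
    """Return the fraction of edges whose endpoints share an attribute value."""
    if not edges:
        return 0.0
    same = sum(1 for u, v in edges if attribute[u] == attribute[v])
    return same / len(edges)

# Toy social graph: four users with a binary leaning and three ties.
attribute = {"a": "left", "b": "left", "c": "right", "d": "right"}
edges = [("a", "b"), ("b", "c"), ("c", "d")]

print(edge_homophily(edges, attribute))  # 2 of 3 ties are same-leaning: 0.666...
```

At the scale of platform social graphs, such counts become trivially computable over billions of edges, which is precisely the kind of algorithmic tractability that, on the argument above, makes a sociological concept like homophily an attractive supplement to thought on algorithms.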
It is clear that we cannot understand algorithms by their technical nature alone; some kind of supplement regarding their relation to subjects and societies is therefore necessary. However, strong and multifaceted critiques of publics from feminist thinkers and Southern theorists have demanded time and again that we pay attention to the social dynamics of publics and to their particularities for specific groups. Given such critique, we have to ask whether the supplements currently in use afford the necessary scope to encompass such factors.
I will conclude by suggesting a few ways in which algorithmic publics and their relation to affect could be understood so as to do justice to such critiques.