CfP: Automation, Control, and Incompetence – Socio-Technical Ecologies of Responsible Filtering in the 20th and 21st Centuries

How do practices of "filtering"—technical, racial, cultural, political, environmental, and otherwise—produce the ecologies of modern life? How might a theory of filtering encompassing technological and cultural operations better explain the forms of control, responsibility, and incompetence ordering modern life? We invite papers addressing these questions from fields including media studies, cultural studies, histories of science and technology, human-computer interaction, digital humanities and informatics.

We understand filtering as a fundamental technique that enables human and non-human actors to shape and create environments through differentiation. The environments emerging from processes of filtering are interconnected and encompass several dimensions: they are always material, but they also invoke the symbolic and the social (Cubasch et al. 2021; Razghandi and Yaghmaei 2020). We consider these filtered environments as powerscapes that are shaped by constellations of interests, entangled agencies and power relations. The notion of powerscapes addresses filtering not only as a fundamental technique of making environments but also as a technique that must be open to critique from all involved. Powerscapes point to the underlying power relations and hierarchical structures that emanate from filtering technologies as technologies, or “engines”, of differentiation (Irani and Philip 2018), in situations where different actors, entities and machines interact and share awareness. We raise the urgent question of how filtering can become responsible filtering, both as a design process and with regard to the resulting socio-technical environment.

Questions about the extent of automation and control (Endsley 2016; Shneiderman 2020) are at the center of many current and historical filtering technologies and the environments they create. This holds especially true for industrial production, workspaces, health care systems, forms of cultural production and circulation, as well as the sciences themselves, all of which were and are subject to the introduction of novel processes and design strategies. The operationalization of such automated settings requires programmed infrastructures involving planning and design, materials, people and meaning. A prominent example of such distributed agencies and acts of delegation is the operations room in military (see Galison 1994; Geoghegan 2019), transport or economic settings, where automated tasks and human control interact in ways that unsettle the cybernetic fantasies of computing and predicting everything.

By incompetence we refer not only to the mistakes or inabilities of human actors but also to the limitations of non-human actors. These limitations stand in a complex, interdependent relationship with the knowledge, skills and embodied experiences that are made possible or foreclosed within filtered environments. Humans are imbricated in socio-technical assemblages through particular interfaces and experiences (Bui 2020; Hassenzahl et al. 2021; Hassenzahl 2018). Automation may thus establish its operations by ascribing specific roles and sets of (delegated, ignored) practices, for instance “corporate programmers’ code review communication” (Bialski 2019), yet it also discontinues learned practices required for piloting an aircraft, driving a car or directing a vessel, thereby introducing new forms of human, machinic and automated incompetence. Moreover, by implementing such complex constellations of devices, designs, platforms and infrastructures, filtering technologies produce comprehensive cultural effects and socio-political controversies with wide-ranging consequences for the inclusion and exclusion of certain population groups (Irani 2015; Zarsky 2016); in short, they effectuate powerscapes. Practices of collaboration as well as acts of cooperation, delegation and coordination generate different social groups, often by drawing on biased categories, and thereby seem to foster and engineer globalized social inequalities.

Automation and control through powerscapes and their respective filtering technologies are hence not to be described in terms of a teleological development from early digitization to advanced formations of digital capture capitalism. Rather, thinking through algorithmically generated environments requires us to reflect on their encompassing effects and their productive yet problematic capacities for living together on a damaged planet (Tsing et al. 2017). The diversity of historical and contemporary filtered environments encourages us to engage the politico-ethical question of what needs to be automated rather than what can be automated (cf. Tedre 2006).

Furthermore, we are required to scrutinize the multi-faceted effects of algorithmic automation regarding their racial (Atanasoski and Vora 2019; Benjamin 2019), gender and disability dimensions (Whittaker et al. 2019; Goggin et al.; Stock, forthcoming) and to investigate them in terms of their political, social, economic and natural-cultural consequences. Hence the urgency to acknowledge the cultural, economic and political underpinnings, active materiality (Bennett 2010) and queer character (Spiel et al. 2019a, 2019b) of filtering technologies and forces that lie at the heart of human-computer interaction (HCI), and to critically frame such operations by instantiating novel critical perspectives and feminist knowledges (Bardzell 2010).

Such an interdisciplinary way of thinking about automation and HCI points to the uncertain boundaries between computer science, design, the sciences and the humanities (Bardzell and Bardzell 2016; Geoghegan 2020; Matzner 2019; Miyazaki 2019). Uniting different disciplines and perspectives allows us to provide new insights and to create awareness of the limits of automation and control. This could be key to recognizing the fallacies of ubiquitous computing in order to responsibly co-design future systems and emancipatory applications of filtering.

The goal of this workshop is a broad and interdisciplinary debate on automation, control and incompetence. We strive to elaborate the historical and cultural-political dimensions of automation processes, as well as the current challenges of HCI and technology design in the face of diverse societies. Building on the workshop’s discussion, we would like to compile recommendations on responsible filtering in a forthcoming position paper.

Hence, we aim to discuss questions of filtering, powerscapes, automation, control, and incompetence from historical and contemporary perspectives, bringing together disciplines such as HCI, Design Studies, Media Studies, Science and Technology Studies (STS), medical humanities, and History of Science and Technology. We invite contributions on the following topics within the described conceptual framework; the list should serve as a source of inspiration rather than a limitation:
history of science and knowledge perspectives, cultural and design histories, media archaeologies of automation, control and incompetence
conceptual histories of incompetence, deskilling and ignorance in the context of automation, HCI, AI
automated decision-making (ADM) systems and the transparency, explainability, and reproducibility of their designs
human control in human-machine interaction within complex socio-technical systems
epistemologies of automated processes
critical HCI design research and experimental design projects addressing issues of automation, control and incompetence
Abstract Submission

Individual paper submissions should take the form of abstracts of up to 150 words plus a list of references. Abstracts should state the paper’s main arguments, methods, and contributions to the workshop. The extended deadline for submissions is 28 August 2022 (see schedule below). Send your abstract to mario.cypko [at] fu-berlin.de.

Schedule

08 April 2022 Open Call for Papers
28 August 2022 Extended Deadline for Abstracts
15 October 2022 Final Program
01-02 December 2022 Workshop at the Cluster “Matters of Activity”, Berlin


References

Atanasoski, Neda; Vora, Kalindi (2019): Surrogate Humanity. Race, Robots, and the Politics of Technological Futures. Durham: Duke University Press (Perverse modernities).

Bardzell, Shaowen (2010): Feminist HCI. In: Elizabeth Mynatt, Don Schoner, Geraldine Fitzpatrick, Scott Hudson, Keith Edwards and Tom Rodden (eds.): Proceedings of the 28th International Conference on Human Factors in Computing Systems (CHI '10). Atlanta, Georgia, USA, 10–15 April 2010. New York: ACM Press, pp. 1301–1310.

Bardzell, Jeffrey; Bardzell, Shaowen (2016): Humanistic HCI. In: interactions 23 (2), pp. 20–29. DOI: 10.1145/2888576.

Benjamin, Ruha (2019): Race after technology. Abolitionist tools for the new Jim code. London: Polity.

Bennett, Jane (2010): Vibrant matter. A political ecology of things. Durham: Duke University Press.

Bialski, Paula (2019): Code Review as Communication. The Case of Corporate Software Developers. In: Paula Bialski, Finn Brunton and Mercedes Bunz (eds.): Communication. meson press, pp. 93–111.

Bui, Long (2020): Asian Roboticism: Connecting Mechanized Labor to the Automation of Work. In: Perspectives on Global Development and Technology 19 (1–2), pp. 110–126. DOI: 10.1163/15691497-12341544.

Cubasch, Alwin J.; Engelmann, Vanessa; Kassung, Christian (2021): Theorie des Filterns. Zur Programmatik eines Experimentalsystems. Available at https://www.researchgate.net/publication/351252610_Theorie_des_Filterns_Zur_Programmatik_eines_Experimentalsystems.

Endsley, Mica R. (2016): From Here to Autonomy: Lessons Learned From Human–Automation Research. In: Human Factors 59 (1), pp. 5–27. DOI: 10.1177/0018720816681350.

Galison, Peter (1994): The Ontology of the Enemy: Norbert Wiener and the Cybernetic Vision. In: Critical Inquiry 21 (1), pp. 228–266. Available at http://www.jstor.org/stable/1343893.

Geoghegan, Bernard Dionysius (2019): An Ecology of Operations: Vigilance, Radar, and the Birth of the Computer Screen. In: Representations 147 (Summer), pp. 59–95.

Geoghegan, Bernard Dionysius (2020): Orientalism and Informatics. Alterity from the Chess-Playing Turk to Amazon’s Mechanical Turk. In: Ex-position (43), p. 45.

Goggin, Gerard; Prahl, Andrew; Wong, Meng Ee; Calvo, Rafael A.: Designing AI to Stop Disability Bias. Available at https://www.ntu.edu.sg/nisth/the-nisth-story/research-projects/micron-ni..., last accessed 28 January 2022.

Hassenzahl, Marc; Burmester, Michael; Koller, Franz (2021): User Experience Is All There Is. In: i-com 20 (3), pp. 197–213. DOI: 10.1515/icom-2021-0034.

Hassenzahl, Marc (2018): A Personal Journey Through User Experience. In: Journal of Usability Studies 13 (4). Available at https://uxpajournal.org/wp-content/uploads/sites/7/pdf/JUS_Hassenzahl_August2018.pdf.

Irani, Lilly (2015): The Cultural Work of Microwork. In: New Media & Society 17 (5), pp. 720–739.

Irani, Lilly; Philip, Kavita (2018): Negotiating Engines of Difference. In: Catalyst: Feminism, Theory, Technoscience 4 (2), pp. 1–11.

Matzner, Tobias (2019): The Human Is Dead – Long Live the Algorithm! Human-Algorithmic Ensembles and Liberal Subjectivity. In: Theory, Culture & Society 36 (2), pp. 123–144. DOI: 10.1177/0263276418818877.

Miyazaki, Shintaro (2019): Take Back the Algorithms! A Media Theory of Commonistic Affordance. In: Media Theory 3 (1): Rethinking Affordance. Available at http://journalcontent.mediatheoryjournal.org/index.php/mt/article/view/89, last accessed 24 January 2022.

Pecune, Florian; Chen, Jingya; Matsuyama, Yoichi; Cassell, Justine (2018): Field Trial Analysis of Socially Aware Robot Assistant. In: Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems (AAMAS '18). Stockholm, Sweden: International Foundation for Autonomous Agents and Multiagent Systems, pp. 1241–1249.

Razghandi, Khashayar; Yaghmaei, Emad (2020): Rethinking Filter: An Interdisciplinary Inquiry into Typology and Concept of Filter, Towards an Active Filter Model. In: Sustainability 12 (18), p. 7284. DOI: 10.3390/su12187284.

Shneiderman, Ben (2020): Human-Centered Artificial Intelligence: Three Fresh Ideas. In: AIS Transactions on Human-Computer Interaction 12 (3), pp. 109–124. DOI: 10.17705/1thci.00131.

Spiel, Katta; Frauenberger, Christopher; Keyes, Os; Fitzpatrick, Geraldine (2019a): Agency of Autistic Children in Technology Research. A Critical Literature Review. In: ACM Transactions on Computer-Human Interaction 26 (6), pp. 1–40. DOI: 10.1145/3344919.

Spiel, Katta; Keyes, Os; Barlas, Pınar (2019b): Patching Gender. Non-binary Utopias in HCI. In: Stephen Brewster, Geraldine Fitzpatrick, Anna Cox and Vassilis Kostakos (eds.): Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems (CHI EA '19). Glasgow, Scotland, UK, 4–9 May 2019. New York: ACM, pp. 1–11.

Stock, Robert (forthcoming): Broken elevators, range anxiety and open data practices. How wheelchair mobility, social media activism and situated knowledge address access issues in public transport systems. In: Mobilities.

Suchman, Lucy A. (1987): Plans and situated actions. The problem of human-machine communication. Cambridge: Cambridge University Press.

Tedre, Matti (2006): What Should Be Automated? In: Daniel Gatica-Perez, Alejandro Jaimes and Nicu Sebe (eds.): Proceedings of the 1st ACM International Workshop on Human-Centered Multimedia (HCM '06). Santa Barbara, California, USA, 27 October 2006. New York: ACM Press.

Tsing, Anna Lowenhaupt; Swanson, Heather Anne; Gan, Elaine; Bubandt, Nils (eds.) (2017): Arts of Living on a Damaged Planet. Ghosts of the Anthropocene. Minneapolis, London: University of Minnesota Press.

Whittaker, Meredith; Alper, Meryl; Bennett, Cynthia L.; Hendren, Sara; Kaziunas, Liz; Mills, Mara et al. (2019): Disability, Bias, and AI. New York, NY: AI Now Institute. Available at https://ainowinstitute.org/disabilitybiasai-2019.pdf.

Zarsky, Tal (2016): The Trouble with Algorithmic Decisions. An Analytic Road Map to Examine Efficiency and Fairness in Automated and Opaque Decision Making. In: Science, Technology, & Human Values 41 (1), pp. 118–132. DOI: 10.1177/0162243915605575.