THESES ON THE COMPLEXITY OF (MIS)TRUST MACHINES
The complexity of the world cannot be represented by mathematical formulas and thus cannot be fully grasped by Artificial Intelligence/automated decision making systems.
The religion of Big Data has popularized the notion that mining ever more data will solve this problem of complexity and representation.
Users/Inhabitants of automated decision making systems are continually forced to put their trust in Black Boxes.
(Mis)trust machines are constantly working on the decomposition of meaning.
Users/Inhabitants of automated decision making systems are not only training the algorithms, they are also being trained by them: trained to limit their range of actions to those that computers can act upon and that conform to the logics of Big Data. Users/Inhabitants of automated decision making systems are autotuning their mindsets to filter out the complexities of the world/earth.
They thus get trapped in loops of asking the wrong questions, imagining that they have to create a vision of a perfect and manageable future instead of a livable present.
At the same time, social media forces users/inhabitants of automated decision making systems to constantly practice creating trust-building networks that can function as digital undercommons. Digital undercommons allow users/inhabitants of automated decision making systems to learn how to learn to navigate the (mis)trust machines.
Credits
The ‘Algorithmic Mindset’ workshop consisted of: Aslı Dinç, Dream Studio beta, Katrin M. Kämpf, Michal Kučerák, Julia Molin, and Cristina Pombo.
This project was conceived at the Berliner Gazette's 2022 annual conference, AFTER EXTRACTIVISM.
All text and images: Creative Commons Attribution 4.0 License (CC BY 4.0).