Hacking the Urban Backend

city image designed by Freepik

An Introduction

Our everyday activities and environments are increasingly mediated by self-learning algorithms. This mediation through algorithms and AI-driven decision-making often operates beyond our awareness. “Hacking the urban backend” poses questions about our present-day political agency within an AI-driven city. Three teams worked collectively to examine the phenomenon of AI from a variety of perspectives.
Two aspects were central to better understanding AI and our relation to it: first, the idea of a backend suggests processes that lie beyond our awareness or visibility; second, the idea of hacking suggests disrupting the smooth experience of AI and re-appropriating it.
The idea of ‘hacking’ started with a consideration of WHAT and HOW, which we randomly combined.
WHAT: Commercial | Traffic | Infrastructure | Social | Security | Services
HOW: Spoofing | Tampering | Information disclosure | Repudiation | Elevation of privilege | Denial of service
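The HOW list corresponds to the categories of Microsoft's STRIDE threat-modelling framework. The random WHAT × HOW pairing described above can be sketched in a few lines of Python; the function name and output phrasing are our own illustration, not part of the workshop materials:

```python
import random

# The two axes from the workshop brief; HOW follows the STRIDE
# threat-modelling categories (Spoofing, Tampering, ...).
WHAT = ["Commercial", "Traffic", "Infrastructure",
        "Social", "Security", "Services"]
HOW = ["Spoofing", "Tampering", "Information disclosure",
       "Repudiation", "Elevation of privilege", "Denial of service"]

def random_hack_prompt(rng=random):
    """Pick one item from each axis, as in the workshop exercise."""
    return f"{rng.choice(HOW)} of {rng.choice(WHAT)}"

if __name__ == "__main__":
    for _ in range(3):
        print(random_hack_prompt())
```

Running it yields prompts such as "Spoofing of Traffic" or "Denial of service of Infrastructure", which served as starting points for the three teams' scenarios.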

HAVOC Get Your Dream House In Two Clicks!

HAVOC Housing Association for Vanguard Orderly Communities

Welcome to HAVOC (Housing Association for Vanguard Orderly Communities), powered by Smart Urbanite of Berlin Administration (SUBA).

HAVOC is a Curated, community-led platform of Legend homes for the Sophisticated, Eclectic, Soulful Berliners! We allocate your House Fast so you do not have to deal with the Hassle!


Choose the easy way and find your new home in BERLIN within a few minutes!

You can [Apply here] for your housing allocation.

(Pro tip: If you get rejected, try changing the data you put in. Can you convince the algorithm to accept you?)



With the advances in artificial intelligence (AI), its promise of improving services in cities has attracted a great deal of attention from urban planners, researchers, and political actors. It has been touted as benefiting many services in the public (and private) sector, with the aims of “improving efficiency,” “reducing costs,” and “streamlining services,” in order to “transform work” and “increase citizen satisfaction.” The definition of AI is debatable and often murky, but broadly, it can be divided into weak (narrow) AI and strong (general) AI. In our case, we refer to computers simulating human abilities and performing tasks which humans usually do, utilizing technologies such as predictive analytics, robotic process automation, cognitive computing, and machine learning.

The service areas that could benefit range from improving housing allocations and facilitating benefit claims to identifying tax evasion and applying for identification documents, to name a few. AI-based online virtual assistants and chatbots are the most visible forms of this use. As Alexander Measure, an economist at the Bureau of Labor Statistics in the US, stated, the possible applications of AI technology in government are “too many to list” (The New Statesman, 2018).

The economic and social benefits appear to be massive. Across the web, many reports and white papers have been released by consultancy firms and technology companies championing the benefits of AI in the public sector. The consolidation and monopolization of data and services by Big Tech companies means that dependency on them has become critical in the development and deployment of AI, as public services often lack the necessary infrastructure and resources. Amazon Web Services has been the backbone of service delivery at Aylesbury Vale District Council in the UK, and Alphabet and Toronto have started an urban smart city project.

Yet discriminatory and machine biases in AI-based risk assessments have been well documented. Problematic data sets, false logic, and the prejudices of their programmers and creators mean that AI systems not only reproduce but amplify human biases. Further, public-private partnerships have increased over the years, often with limited regulation or safeguards. What does this mean for the citizen?


We call for a change in the way citizens experience AI, and in how AI is framed in the rhetoric addressed to citizens (i.e. “do you care about your privacy?” vs. “do you want Amazon to be in charge of your healthcare?”), in order to close the gap between citizens and AI applications.


    • We call for better scrutiny of data sets. Not merely transparency, but also an independent body or watchdog to vet data sets.
    • We call for responsibility and accountability. When individuals or workers within an organisation utilise AI, they need to be trained to interpret the results produced by these systems, to exercise their own judgment, and to take responsibility for the judgments made.
    • We call for citizens to be able to access their own data that is being used for (public) services, and for the ability to submit requests to amend and update that data.

6 Seconds

6 seconds

Core Idea

Hacking pervasive techno-narratives by blurring and overlapping human and non-human centric ontologies.


Hacking the language of populist A.I. by deconstructing / creating a dictionary that addresses human, machine, and computer perspectives.


Foto: Private

A series of mapping exercises took place, creating multiple perspectives on the 2018 Uber self-driving car accident through the eyes of investigators, Uber, the mass media, vehicle sensors, and algorithms. By illustrating both human and non-human perspectives of the accident, we hope to create a dictionary, followed by radical interpretations produced through the process of shifting perspectives.

Investigator’s perspective

On March 18, 2018 at 9:58 pm, an Uber self-driving vehicle struck a pedestrian in a fatal crash in Tempe, Arizona. The crash occurred as the pedestrian walked a bicycle east across Mill Avenue. As a result of the crash, the pedestrian died; the vehicle operator was not injured. According to data obtained from the self-driving system, the system first registered radar and LiDAR observations of the pedestrian about 6 seconds before impact, when the vehicle was traveling at 43 mph.

As the vehicle and pedestrian paths converged, the self-driving system software classified the pedestrian as a false positive: an unknown object with varying expectations of its future travel path. At 1.3 seconds before impact, the self-driving system determined that an emergency braking maneuver was needed to mitigate a collision. According to Uber, emergency braking maneuvers are not enabled while the vehicle is under computer control, in order to reduce the potential for erratic vehicle behavior. The vehicle operator is relied upon to intervene and take action, yet the system is not designed to alert the operator.
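To put those timestamps in spatial terms: assuming a constant 43 mph (the source gives the speed only at first detection, so this is a rough sketch, not the actual NTSB speed profile), the distances to impact work out as follows:

```python
# Rough arithmetic behind the timeline above: how far a vehicle
# travelling at a constant 43 mph covers between each event and impact.

MPH_TO_MS = 1609.344 / 3600  # metres per second per mph

def distance_to_impact(speed_mph: float, seconds_before_impact: float) -> float:
    """Distance (in metres) covered in the remaining time at constant speed."""
    return speed_mph * MPH_TO_MS * seconds_before_impact

first_detection = distance_to_impact(43, 6.0)   # radar/LiDAR registration
braking_needed  = distance_to_impact(43, 1.3)   # emergency-braking decision

print(f"First detection: ~{first_detection:.0f} m from impact")   # ~115 m
print(f"Braking decision: ~{braking_needed:.0f} m from impact")   # ~25 m
```

In other words, the system had roughly 115 metres of warning, but by the time it decided braking was needed, only about 25 metres remained.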

The limits of my language mean the limits of my world

(Wittgenstein, Tractatus Logico-Philosophicus, 5.6)

Uber car’s perspective

At 1.3 seconds before impact, the self-driving system determined that an emergency braking maneuver was needed to mitigate a collision (see figure 2). According to Uber, emergency braking maneuvers are not enabled while the vehicle is under computer control, in order to reduce the potential for erratic vehicle behavior. The vehicle operator is relied upon to intervene and take action, yet the system is not designed to alert the operator.

LiDAR’s perspective

Distance = (Speed of Light x Time of Flight) / 2
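A minimal sketch of that formula in Python: the product is halved because the laser pulse makes a round trip to the target and back (the function name is ours, for illustration only):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def lidar_distance(time_of_flight_s: float) -> float:
    """Distance = (Speed of Light x Time of Flight) / 2.

    Halved because the measured time covers the pulse's trip
    to the target and its reflection back to the sensor.
    """
    return C * time_of_flight_s / 2

# A pulse that returns after 1 microsecond implies a target ~150 m away.
print(lidar_distance(1e-6))
```

This is the entire world as the sensor knows it: a time interval converted into a distance, with no notion of "pedestrian" or "bicycle" at this level.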


The team felt at an early stage that our discussions rested on quite different understandings – implicit and explicit ones – of key terms. Throughout our iterative drafts we kept building a dictionary of these words. Reviewing the two intense days of our conversations, we agreed that the time and depth of reflection were too limited for a proper thesaurus. The AI dictionary is now a loose, redundant listing in which keywords act as headers for more or less specific questions that drove our critique of the “6 seconds” Uber accident.

For macOS users this dictionary is available as a screen saver – instigating reflections at unexpected moments, small openings in your daily routine, that may extend this year’s Berliner Gazette conference on Ambient Revolts into the indefinite future.

Download the AI_Dict Screen Saver

Further Readings

  1. Engineering Uber’s Self-Driving Car Visualization Platform for the Web



TRIVE: Many users, one identity, a new social media experience

Many users, one identity, a new social media experience

Trive lets you and your friends post content which is then shared by one account. You form a collective when you post as one.

Trive allows you to experience social media in a fun and unique way. Experience social media as different people, or switch users to suit your desires. Trive provides several possible identities.

What’s the point of Trive?

Trive is a service designed to reverse the black box

  • Prevent corporations from collecting data and profiling you
  • Be more free to express anything (don’t feel constrained) – research shows we censor ourselves if we think posts are attributed to us
  • You don’t have to worry about accounts being hacked — that’s the idea of this account (an account that can’t be hacked because it has no personal data)
  • Social media wants data: let’s give it a lot of it, let’s give it to them together!
  • Benefit from using social media as a collective intelligence / see things you wouldn’t have seen on your own
  • Make social media more social.

But doesn’t Facebook stop fake accounts?

People can ‘donate’ old, unused social media identities which Trive turns into new social media identities.

How does this relate to the urban in “hacking the urban backend”?

Social media accounts will increasingly be used to grant access to city infrastructure, e.g. logging into Facebook to access a shared bike network. We aim to challenge, and provide alternatives to, the received idea that individual accounts must be exclusively attached to individuals, since this one-to-one attachment allows for extreme personal profiling.

The extreme possibilities are demonstrated in China’s social credit system and seem to be the logical conclusion of digital expansion across the world.

Trive can become a platform for other identity obfuscation schemes

Participate in the exchange of public transport cards (like London’s Oyster Card), public library cards… And other identity-related accounts.

Credits and License

This project was conceived at the 2018 annual conference of the Berliner Gazette AMBIENT REVOLTS.

Guests: Zarinah Agnew, Tekla Aslanishvili, Marc Böhlen, Jose Miguel Calatayud, Ellen Koenig, Matthew Linares, Juliane Rettschlag, Nicolay Spesivtsev, Gabriele Schliwa, Andreas Schneider, Jill Toh, Niloufar Vadiati, Xin Xin, Dzina Zhuk. Moderators: Nina Pohler & Michael Prinzinger.

Licenses: All chapter images were taken at the AMBIENT REVOLTS conference by Norman Posselt and are licensed under CC BY 4.0.

All texts and videos were created by the workshop group and are licensed under CC BY 4.0.