BIG DATA IN OUR HANDS?

Introduction

I. As the wide-ranging powers enjoyed by states and corporations continue to grow, an increasing number of people feel powerless and estranged from political as well as economic decision-making. This situation has been compounded by the rise of state bodies and corporations with bombastic and, indeed, authoritarian ambitions to dominate our increasingly digital age.

NSA representatives have sworn to use any digital means necessary to save the world from evil. Meanwhile, Google CEO Eric Schmidt has announced that “we want to change the world” through the mobilization of technological innovation, flexible corporate culture, political influence and raw economic power. This attitude dismisses and undermines any criticism (of Google’s or the NSA’s data model, for instance) by condemning it as criticism of progress and security in general. To be against Google or the NSA is to be against the common good itself.

Can the commons serve as an alternative to the corporate or state domination of big data, the lifeblood of this dawning digital world? Can it serve as the foundation for collective empowerment? Can it provide a means of criticizing the IT corporations’ perception of themselves?

II. Today the importance of “big data” cannot be overstated, nor can the power relations that circulate through it be ignored or downplayed. Data processing is becoming a key means by which our society, economy and social lives are being shaped.

However, the mass collection, storage and analysis of private data sets are organised by companies, governments and secret service agencies in mostly non-transparent, unregulated ways, lacking effective democratic control. This undermines fundamental individual and civil rights and the power of the people in general.

Therefore we need to put things into perspective: We, the users, produce more than 75% of the data that make up our digital universe. However, we do not think of those data as the product of our collective labour and therefore as something we should own. Instead, large corporations and powerful states own ‘our’ big data. So we need to ask: Is there a way to turn big data into our digital commons?

III. Both individual and common privacy rights as well as autonomy are fundamental human rights and also common goods that need to be consciously and intentionally cultivated and protected. Therefore, these principles ought to be applied to big data and guide its use and governance.

We, as individuals and collectively, produce data; therefore we should claim, and fight for, the rights as well as the capacities to govern and control it. Big data should be a common good. Everyone should have the power to make decisions about big data as a common good and about how it should be organized within the commons.

In order to make this happen, the project “Big Data In Our Hands?” proposes to work on five potential solutions. Before going into the details, please look at the CONTENTS section below to get an overview.

CONTENTS

Big Data Commons
Imagining and constructing the big data commons in order to renegotiate the role and value of data in our post-digital societies and to create something like commons data.

Governance of the Data Commons
Creating an international, multi-stakeholder governance group that defines the scope of the issues, develops narratives around big data and the digital commons, and provides guidance as well as advisories to all relevant stakeholders.

New Data Infrastructures
Creating new data infrastructures that rely on local networks, community self-hosting, non-profit ISPs and community-owned data centers; that are based on net neutrality and end-to-end encryption throughout all data channels; and that use free and open source software and hardware, open protocols and encryption tools.

Commons Data Centers
Creating new data centers for big data that make the collection and use of big data democratically accountable and that administer data in the public interest.

Big Data Enlightenment
Creating tools for big data enlightenment and education that demystify what big data is all about, that empower us, the people, to use big data and the data commons in a way that is friendly, approachable and engaging for the average user, and that make visible the economic value of big data as well as how it is being used by states and corporations.

BIG DATA COMMONS

The commons in the anthropocene

The industrial or post-industrial age is increasingly being referred to as the “anthropocene,” an era in which humans are “one of the most important factors that influence the biological, geological and atmospheric processes of the earth” (Wikipedia). Must our understanding of the commons shift as a result? Are our ideas and responsibilities actually evolving together with this shift?

We have tended to associate the idea of the commons with the collective and community-based protection and cultivation of “natural” phenomena, from the lands that peasants tended and harvested together in the medieval period to today’s rivers and lakes that social movements seek to protect as commons. But can we expand the idea of commons to those ephemeral “second nature” products of the anthropocene like data itself? Such questions have been posed for many years by free/libre and open source campaigners and others concerned about intellectual property regimes. Drawing on these and other traditions of thought and activism, we want to ask: can big data be understood and (re)claimed as a commons? By whom and under what circumstance? And with what consequences?

From top-down to bottom-up big data

Wikipedia reports that “big data is a broad term for data sets so large or complex that traditional data processing applications are inadequate. Challenges include analysis, capture, data curation, search, sharing, storage, transfer, visualization, and information privacy. The term often refers simply to the use of predictive analytics or certain other advanced methods to extract value from data, and seldom to a particular size of data set.”

In addition to standard definitions of the technology, politics and history of big data, there has developed, over the last few years, a critical discourse that looks at the concept from the perspective of ideology, hegemony, and social and economic power. In this discourse, big data is often viewed, amongst other things, as the “oil” of the 21st century, as a post-democratic means of surveillance and control or as a theory of everything. We take our cue from this critical discourse and aim to forge a new dimension, building upon the notion that big data are a “phenomenon of co-production” (Jeanette Hofmann) and therefore demand other, more democratic modes of governance.

From open data to commons data

This issue not only concerns data that are generated by institutions and are made publicly accessible by them (as an “act of generosity”), but it also looks at the following potential development:

– focus on all types of data that are generated (including personal data and data produced by individuals) and
– assign them the status of a democratically administered and regulated good, access to which can be negotiated in a democratic manner.

One general issue here, and in particular with respect to the above, is whether individuals should have their “right to be forgotten” extended to also cover big data and search engine data, i.e. whether it is necessary to store a certain type of data in the first place. Not storing such data would make it unnecessary later to administer these data (regardless of who would administer them).

It needs to be clear what part of the big data pool is “private”. This private part should remain private property that belongs to an individual’s personal realm and is fenced in by data protection rules. Examples of “private data” are: the content of e-mails and other personal communications; data on which web pages an individual has accessed; records of the data that have been transmitted to websites; and personal data on social networks that are only meant to be shared with friends.

Unlike private data, “public data” have been published on purpose. Examples of public data are: an individual’s website, public profiles on social networks (published by artists or journalists for instance), or blogs. An intermediate position is occupied by individuals who appear in public, but use an alias in doing so in order to escape persecution for their political beliefs, for instance.

From this, it follows that the remaining data (i.e. non-private data) may be used and classified as “commons data”. Before elaborating on this, please note the following:

The line between public and private

We think it is best to say that the question of the line between public and private must be debated and determined by the same authority-yet-to-come that will govern and manage the data commons, rather than suggested before the fact by “us.” Just a brief example: what about consumer data? What we buy online can reveal a great deal. And what about the “private” data of public figures whose “private” communications have an impact on the public good? We should discuss whether the criterion here is to be a suggestion or a rule.

The legal dimension

Data protection has historically been understood to be an individual right, i.e. the law defines it as a right that an individual is entitled to. However, in the wake of the NSA surveillance scandal a broad debate has broken out over the question of whether data protection and the right to informational self-determination are in fact part of a government’s obligation to protect its citizens, i.e. whether data protection is a collective right that the state must actively protect. The concept of socializing big data, or of looking at big data from a collective perspective, takes its cue from this debate.

Ownership of data

The question of ownership with regard to data remains completely open. Data protection was established as a right of protection, not as a proprietary right. From a legal perspective we are in uncharted waters here. German law provides for only one area where a public good is produced according to predefined values, more or less on a trust basis: broadcasting, which is governed by the Interstate Broadcasting Treaty (the “Rundfunkstaatsvertrag”). On the basis of this treaty, the parliaments of the German Länder (federal states) commission various bodies to determine how to deal with the public good that is “broadcasting”. This treaty might function as a model for all kinds of legal and practical aspects.

The value of data

We must finally clarify what value data actually have, because the demand for a commons inherently implies a shift from private property to common governance and responsibility. This leads us to try to resolve the following issues: 1) what belongs, or should belong, to whom; 2) what is actually worth how much to people; and 3) what something (in this case, data) is actually worth, and how we should assess its value.

Today, there is a rapidly growing industry dedicated to measuring, monetizing and commodifying data for use by corporations. Here its value is determined solely by markets. We suggest that the assessment of the value of data should instead depend on a dynamic, continuous reassessment of “our” own values as they correspond to social realities. In other words, instead of (or in addition to) the economic valuation of data, we must also assess its value in terms of how it can serve principles like democracy, health and wellness, freedom, environmental sustainability and peace.

Assessments by critical thinkers, advertising professionals or terror prevention analysts have one thing in common: the big data discourse is primarily concerned with data that are personal or have been produced by individuals. After all, we, the users, produce more than 75% of the data that make up our digital universe. Questions such as “what value do data actually have?”, “who owns the data, and who has the power to process and cross-reference it?” and “how can we transform big data into commons data?” are precisely about this type of personal data.

In order to lay a basis for a bottom-up approach to big data, we propose to introduce a strategic distinction between personalized and anonymous data. The question is: Which data are necessarily personalized? Which data can be anonymized?

We do not consider anonymized data a solution to the overall problem. Rather, we see the creation of an anonymized big data pool as the basis for constructing an area of the big data commons which can be opened to certain practices of data analytics by third parties. After all, a big data commons can only be considered a dynamic and flourishing commons if various forms of commons-based peer production are enabled, for instance data commons-based peer production in the field of research and science, as proposed by Jane Bambauer in her paper “Tragedy of the Data Commons” (2011).

Against this backdrop, one of the main objectives here should be to develop a standard, used by default, which stipulates that data are to be collected anonymously.

Personalized data

The problem with personalized data is that their mere existence can pose a risk to individuals. For instance, many people in the former German Democratic Republic in the 1960s and 1970s unexpectedly did not receive a university place or a promotion they had been counting on. Once the archives of the state security service were opened in the 1990s, they discovered why: information on them gathered by the security service had found its way to other branches of the state, such as universities or state-owned companies. Often, this information was plainly false or falsely construed, but because the people concerned were unaware of these secretly collected data, they could not lodge any objection.

To take another example: In countries that had a central register of Jews, the Nazis were able to locate and deport most of the Jews once they had occupied the country in question. In countries that had no such records, the Nazis were only able to locate very few Jews. It is clear that no matter who owns the data or who has control over personalized data (personal profiles), these data can pose a significant risk to individuals.

Note: This is a good historical example, but as they teach in rhetoric: never reach for the Nazi example. It is considered unnecessarily alarmist and often alienates sceptical readers. So it would be great if we could come up with an alternative example.

Today, there are concerns that companies that specialize in tracking personalized data can create highly tailored and specific lists, based on everything from Amazon purchases to GPS tracking, that can identify, for instance, gay, lesbian and bisexual people, or people with otherwise marginalized sexualities. While it is bad enough that these data will be sold to marketers, they could also feasibly be used by governments or reactionary groups to target individuals.

Anonymized data

Anonymized data are a solution to this problem. The demand for data is growing, and it can probably not be contained by moral, social, political or educational measures alone. But the demand can easily be satisfied by anonymized data. Data collected in such a way deliver the same insights (e.g. someone who buys product A will also be interested in product B) without the data being attributable to a specific person. It is important in this case, however, to collect and store each data record separately, because an individual about whom everything except his or her actual identity is known is not really anonymous.
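
To make this concrete, here is a minimal sketch in Python (with entirely hypothetical purchase records) of how the co-purchase insight mentioned above can be derived from data that carry no user identifier at all:

from collections import Counter
from itertools import combinations

# Hypothetical purchase records: each record is just a set of product names,
# collected without any user identifier (anonymized at the point of collection).
anonymized_baskets = [
    {"coffee", "filter"},
    {"coffee", "mug"},
    {"coffee", "filter", "mug"},
    {"tea", "mug"},
]

# Count how often two products appear together across all baskets.
pair_counts = Counter()
for basket in anonymized_baskets:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

# "Someone who buys product A will also be interested in product B":
# the co-purchase signal survives although no basket is tied to a person.
for (a, b), count in pair_counts.most_common(3):
    print(f"{a} + {b}: bought together {count} times")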

Commons data

Once big data has been subdivided in this way, the data that are to become “commons data” must to a large extent be anonymized. Anonymization means that (a) the data are no longer private or personal and therefore constitute socialized data, and (b) the data are to a certain degree protected by the anonymization process, i.e. third parties should be prevented by regulation from misusing them; commercial and government bodies should, for instance, be prevented from using them for profiling or surveillance purposes.

The challenge of creating regulations for data protection is great: after all, computing power is such today that anonymization is easily circumvented by cross-referencing many different datasets. Anonymization is held out as a solution, but a clever algorithm can fairly precisely identify an individual based on anonymous data. Here is a simplified example: one could feasibly triangulate an individual from (a) anonymized Uber user data (frequent trips to one location, which must be their home); (b) anonymized Amazon data (shipments to the same postal code); and (c) a variety of invisible data traces such as computer type, browser, etc.
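
A minimal sketch in Python of the triangulation just described (all records and field names are hypothetical): two “anonymized” datasets are joined on shared quasi-identifiers, and a single person is singled out without any name or user ID appearing anywhere.

# "Anonymized" ride records: no names, but postal code and device fingerprint remain.
ride_data = [
    {"postal_code": "10115", "fingerprint": "FF-Linux-1920x1080", "frequent_destination": "cluster A"},
    {"postal_code": "10245", "fingerprint": "Chrome-macOS-1440x900", "frequent_destination": "cluster B"},
]

# "Anonymized" shop records: again no names, only postal code and fingerprint.
shop_data = [
    {"postal_code": "10115", "fingerprint": "FF-Linux-1920x1080", "order": "book on trade unions"},
    {"postal_code": "10115", "fingerprint": "Safari-iOS-390x844", "order": "headphones"},
]

# Cross-referencing the two datasets on the quasi-identifiers is enough
# to link the records of one and the same person.
matches = [
    (ride, shop)
    for ride in ride_data
    for shop in shop_data
    if ride["postal_code"] == shop["postal_code"]
    and ride["fingerprint"] == shop["fingerprint"]
]
for ride, shop in matches:
    print("Re-identified without any 'personal' field:", ride, shop)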

According to the current state of the art, in addition to the anonymization of “commons data” it is also possible to anonymize and/or encrypt personal data. The anonymity network Tor, for instance, enables users to visit a website without generating data that would permit a third party to establish a relationship between the individual and his or her access to the website. Also, the content of any personal digital communication may be encrypted so that only the sender and the recipient can read it. Some social networks additionally use technology to ensure that the personal data an individual has published are only shared with the individual’s personal contacts.
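
As an illustration of such end-to-end encryption, here is a minimal sketch using the PyNaCl library (one possible tool among many; keys and message are hypothetical): the message can only be read by the sender and the intended recipient.

from nacl.public import PrivateKey, Box

# Each party generates a key pair; only the public keys are ever exchanged.
sender_key = PrivateKey.generate()
recipient_key = PrivateKey.generate()

# The sender encrypts with their private key and the recipient's public key.
sending_box = Box(sender_key, recipient_key.public_key)
ciphertext = sending_box.encrypt(b"meet at the usual place")

# Only the recipient's private key (together with the sender's public key)
# can decrypt the message; any intermediary sees only ciphertext.
receiving_box = Box(recipient_key, sender_key.public_key)
print(receiving_box.decrypt(ciphertext))  # b'meet at the usual place'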

Standards for data collection

The provision and collection of data in an anonymized way can generally be achieved by introducing technical standards. One example is settings in the internet browser that enable the user to decide which personal data he or she wants to disclose and which data are to be transmitted anonymously. Once such a standard has been established, it could be transformed into political and social demands. The “political demand” would advocate the passing of statutory requirements for data collection that implement these standards. The “social demand” would come in the form of pressure on commercial companies to implement these standards (under pressure from the community, for example, WhatsApp, an instant messaging app for smartphones, now implements encryption in its software).
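
A minimal sketch in Python of what such a standard could look like on the user’s side (the categories and field names are hypothetical): the user declares, per data category, whether data may be sent in personalized form, only anonymously, or not at all, and the client enforces this before anything leaves the device.

# The user's declared policy, e.g. set via browser or device settings.
DATA_POLICY = {
    "language":  "personalized",   # may be sent as-is
    "purchases": "anonymous",      # identifiers are stripped first
    "location":  "anonymous",
    "contacts":  "never",          # never transmitted
}

IDENTIFYING_FIELDS = {"user_id", "email", "device_id"}

def apply_policy(category, payload):
    """Filter an outgoing payload according to the user's declared policy."""
    rule = DATA_POLICY.get(category, "never")
    if rule == "never":
        return None                                   # nothing leaves the device
    if rule == "anonymous":
        return {k: v for k, v in payload.items() if k not in IDENTIFYING_FIELDS}
    return payload                                    # "personalized"

print(apply_policy("purchases", {"user_id": "u42", "item": "coffee"}))
# -> {'item': 'coffee'}: the purchase is shared, the identifier is not.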

Implications

– At the individual level: users no longer readily surrender data.
– At the regulatory level: the monopolization of data can be challenged. (This would be a completely new development. Normally, the issue of monopolization is debated from a totally different perspective).
– At the social level: the socialization of data becomes a possibility, which has cognitive, political and economic implications, i.e. people are able to participate (a) in an existential part of the environment, (b) in political processes (decision-making on rules, distribution, etc.), and (c) in economic processes (in which “my” data become a potential economic resource which I am able to exploit myself, or I can have it exploited by third parties).

Conclusion

By constructing commons data and by democratically negotiating and assessing the value of data, we can initiate a shift from a top-down to a bottom-up narrative on big data. Eventually, we will lay the theoretical and political basis for the big data commons.

GOVERNANCE OF THE DATA COMMONS

Vision

Democracy, transparency, individual rights and autonomy are fundamental human rights and also common goods that need to be consciously and intentionally cultivated and protected. Therefore, these principles ought to be applied to big data and guide its use and governance.

We, as individuals and collectively, produce data, therefore we have the right to govern and control it, both as individuals and collectives (a delicate balance between individual and collective rights).

Data processing is becoming a key means by which our society, economy and lives are being organized and shaped. Therefore we should have a key say in how it is used.

We envision the creation of a Global Forum on Big Data and the Commons, of and for the commons, that will be fiercely autonomous from state and corporate power.

Composition

The form should be a multi-stakeholder network of activists and advocates, open to groups, individuals and organizations, that excludes the participation of governments and corporate actors. We will not provide a plan, but lay out the values and principles that ought to guide the establishment of a Global Forum on Big Data and the Commons: broad civil society participation and consultation; inclusive and global; capacity and enthusiasm for action and activism; participatory, horizontal and democratic; diverse in perspectives and participants; conscious intent towards active outreach to and inclusion of those who are excluded but impacted.

Mandate

– Create a robust roadmap to a common data future, perhaps a “Constitution for Common(s) Data”.
– Contribute to the creation of a common digital and data infrastructure.
– Develop narratives, definitions and education on commons data and define the scope and impacts of the issue.
– Develop, advocate and fight for policy and regulation.
– Foster research and collaboration on the risks and possibilities of big data to and for the commons.
– Collaborate with other interested and impacted groups towards this end.
– Always maintain autonomy from corporations and state organizations.

Process

– How will it start?
– The initiators will become ambassadors for the WORKING GROUP FOR BIG DATA COMMONS and will continue to build it out.
– Create a Slack platform for collaboration and to build and share tools, materials and a “starter kit”.
– Develop a Listserv (only for announcements).
– Plan a global meet-up at next year’s BG conference to establish the Global Forum on Big Data and the Commons.
– In the next 12 months, encourage local groups for discussion and collaboration, as well as smaller closed digital groups/communities who can participate.
– Establish a provisional core team who can manage and plan the Listserv, the Slack platform and next year’s gathering, and who will also act as regional ambassadors and organize meet-ups. Meet/check in once per month and/or as needed.

– How will it be funded?
– For now, we will all volunteer.
– We may crowdfund or ask for donations for costs.

Operations

Look at:
– WSIS (World Summit on the Information Society)
– IGF (Internet Governance Forum)

Lessons learned:
– Defining the „we“
– What is „civil society“?
– Who is included/excluded?
– How is it funded?
– Pressure from the for-profit actors

NEW DATA INFRASTRUCTURES

Restitution

In order to manage data in favor of the common good, ensuring autonomous control and non-commercial use of citizen data as well as equitable access, sustainability and the protection of fundamental and collective rights, we envision the following infrastructure.

Most people live with a ‘privacy paradox’: we know that data are collected about us, and we are bothered by the fact that this means we have little privacy. But paradoxically, we fail to act to protect this privacy; instead (for the most part) we continue to use products and services that make us feel our privacy is violated. Why? Because these products and services generate ‘network effects’: participating generates social and economic value, and if you don’t participate, you miss out. Thus the privacy paradox is a game that we can’t win.

We do not propose to solve this paradox or to offer a way to win the game, but to change the game itself. Our vision includes proposals for alternative data management structures that operate at various scales, linked into networked infrastructure (internet infrastructure with its standards and protocols), and that have the potential to forestall or challenge the collection of personal or collective information without consent.

Some of the features that we propose exist within technologies that have already been built or proposed, whether historically or in recent years. Others have not yet been assembled and require us to imagine solutions beyond the obvious ones.

Software and devices

The devices through which we share our data and receive information, such as phones, laptops and other connected objects, are the entry points. These data infrastructures should rely on free/open source software and hardware and on open protocols, and should give access to easy end-to-end encryption and anonymization tools to preserve digital anonymity and privacy.

Network infrastructure

We will have internet/network/phone/energy infrastructures that are owned by us, or in any case run in our interest. For this we need net neutrality so that there are no discriminatory tiers of service, and access is equal. The internet consists of nodes, optical fibre cables, ISPs, points of internet exchange and satellites. Currently the large majority of that entire infrastructure is privately owned. In practice our vision would mean that (parts of) that infrastructure would either be public infrastructure or commons/autonomous infrastructure.

Examples of what would likely be public are internet exchange points (the backbone), optical fiber cables and ISPs; on the other hand, what could be owned commonly are networks, but also ISPs. We will also have commons/community-driven infrastructure in the form of local networks, community/self-hosted solutions and non-profit ISPs. There will be expanding self-hosted local community networks that are meshed with a (more basic) public infrastructure.

Data storage

We imagine networked (local) data commons, with both databases and data repositories, governed in the common interest of the community providing the data. These data banks and repositories are either commonly hosted or publicly hosted. On the other hand, there is also a place for private data banks, although what we share with these is up to each individual.

– public analytics
– common analytics
– private analytics

Services

We imagine offline services at a community level, caching services (from an ecological point of view), and also community alternatives to commercial services (Gmail, Google Docs, Skype, Doodle, Dropbox, Google’s search engine, Facebook, …). The governance of these alternative services and their funding remain open questions.

Protocols

We will invent and negotiate common protocols that allow online and offline digital activity and let us deploy discriminating tools for exchanging data (encryption/filters), considering what information we want to exchange with whom, thus allowing direct, partial or complete connection to a database. We will also invent and implement new protocols for banking, shopping, exchanging cultural content and discussion; the Tor network and blockchain technology can be considered starting points.
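
A minimal sketch in Python of such a partial connection (field names and audiences are hypothetical): each field of a record is tagged with the audiences allowed to read it, and the exchange protocol releases only the matching subset of the record.

# One record in a community database; "share_with" lists the permitted audiences.
RECORD = {
    "energy_usage_kwh": {"value": 243, "share_with": {"community", "research"}},
    "address":          {"value": "Example St. 1", "share_with": {"community"}},
    "payment_details":  {"value": "DE00 ...", "share_with": set()},  # never shared
}

def partial_view(record, audience):
    """Return only the fields the given audience is allowed to see."""
    return {name: field["value"]
            for name, field in record.items()
            if audience in field["share_with"]}

print(partial_view(RECORD, "research"))   # {'energy_usage_kwh': 243}
print(partial_view(RECORD, "community"))  # energy usage and address, but no payment details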

Agenda

– define structure and aims of alternative data infrastructures,
– define social protocols/governance,
– define economic model,
– drawing exercise: draw internet, draw your data.

Infrastructure

What it is:
– Enabler of data communication (collection) and processing,
– socio-technological (needs to be governed/organized),
– hardware (servers, cables, routers) + software (protocols, libraries, frameworks, algorithms) + devices

What we need:
– reliable access to a neutral internet,
– common autonomous infrastructure (hardware & software) to do data analysis,
– an organisation to drive the analysis.

Protocols and services

What we want:
– being able to use the internet in an anonymous way,
– anonymous mail & encrypted communication,
– transparent algorithms,
– public back doors.

Enablers:
– open source hardware,
– open source software,
– open formats.

Software

– encryption,
– the ability to choose to whom you open your data (linked to encryption, key management, etc.),
– education on digital security, anonymity and privacy.

COMMONS DATA CENTERS

We wish to engage in a thought experiment: what if big data, which has so much potential but also poses so many threats, could be governed and owned collectively by the public, rather than by corporations or state agencies? What would such commons data centers look like, and how could they be brought into reality?

It will be important here to define what is meant by “public”, as the word has very different meanings in different contexts. Typically, we understand it as “in the trust of the state”, which is problematic in this case: fears of surveillance and of complicity with corporate interests have taught us not to trust the state with our data (though there are important differences between states and governments).

Beyond that, we think there is a very important analytic and political distinction to be made between the ideas of the “public” and the “commons”. Related to this is also the discussion about the nature of “rights”, such as the right to privacy. When we talk about rights, we often implicitly imagine that some authority, usually the state, can or should enforce and protect them. But the same problem applies as above: can we trust it to do so?

One of the most important questions or challenges in this respect is how we would be able to prevent a new administrative elite from establishing itself in this area. We would also need to determine which principles of knowledge transfer, information security, and information sharing might be applied.

A further challenge is the fact that “decentralizing big data” by organizing it in a multitude of small “commons administration units” would not automatically guarantee that big data would not be used in a detrimental way or made available to third parties for their private interests.

Directives

– If the data are administered in the commons, we must make sure that they are only utilized “in the public interest” and that no individual freedoms are infringed. This will mean, among other things:

– There will always have to be a democratic and transparent decision-making process to determine what to do with certain data in individual cases.

– There will have to be clear statutory rules that stipulate what constitutes an admissible use of the “data” and what does not. These rules must also form the basis for the debates that the commons administration units hold for decision-making.

– The members of the administration units must be elected. The example of the Berlin Round Table on Energy (Berliner Energietisch) and its plan for the public administration of a future public utility in Berlin (Berliner Stadtwerke) shows how this can be done according to a model of grass-roots democracy.

– The specific organizational structure of a “commons administration unit” must be based on the principle that the individuals actually generating or producing the data on the internet must also be able to share democratically in the decision-making on how these data are used. This would take the form of a “producer-user democracy” in the field of data.

BIG DATA ENLIGHTENMENT

One fundamental issue in the context of the big data commons concerns strategies of education. Since it is of importance to all of us, how can we make sure that the issue of big data is not exclusively understood and discussed by corporate and state elites? How can we make sure that all citizens understand the implications and are able, if necessary, to take countermeasures? What might these measures consist of? And, last but not least, how can one carry out a discussion of this issue so that it has a broad impact? What public places and media should be used?

Demystifying the status quo

– Data grabbing
– Industrial Data Stock Farming
– Data invaders
– Data colonization
– Digital Natives

Campaigning

– Data-giving pledge
– Data Evader Register
– Google employee of the month
– Data evader
– Hacking Star Wars: “Data Vader”, “Join the right force”, “Data Master”, “Luke Filewalker”

Data grabbing in our hands! Or: re-grabbing!

– Data Mass Index (DMI),
– Big Data in our hands in real time (App showing which data are being collected from the phone),
– Show me the data, show me the money!

Consumer information system

– Data Traffic-Light System (Evaluation)
– „Stiftung Datentest“
– Data instructions for use
– Fair trade data

Open questions

Most of the ideas above go in the direction of raising individual awareness and offering individual solutions. This is a good beginning, but it still misses the collective aspect of the whole topic: the answer to the current status of big data needs to include a common approach if it is not to reproduce the individualistic patterns that are part of the problem.

Challenges

Finding catchy images and narratives for the “common” aspect of the whole project.

SOURCES

Journalism and scholarship

* Tragedy of the Data Commons
by Jane Bambauer (formerly Jane Yakowitz), Harvard Journal of Law and Technology, Vol. 25, 2011.

* Is Data the Oil of the 21st Century or a Commons? (German-language)
by Krystian Woznicki, Berliner Gazette, 2011.

* Big data and genetic material should be commons (German-language)
Social anthropologist Shalini Randeria, Rector of the Institute for Human Sciences (IWM) in Vienna, advocates the rediscovery of the concept of “commons” that are beyond the control of governments and commercial companies.

* The temptations of Big Data (German-language)
by Jeanette Hofmann. Die Versuchungen von Big Data. In: Markus Beckedahl and Andre Meister (eds.): Jahrbuch Netzpolitik 2012. Von A wie ACTA bis Z wie Zensur. Berlin: epubli, pp. 74-79, 2012.

* Socialize the Data Centres!
by Evgeny Morozov. New Left Review 91, January-February 2015.

* Big data from the bottom up
by Nick Couldry and Alison Powell. Big Data & Society, 1 (2). ISSN 2053-9517, 2014.

* Data is not an asset, it’s a liability
by Marko Karppinen, September 2015.

* Alternatives to the controlling corporations (German-language)
by Ayad Al-Ani. Alternativen zu den kontrollierenden Konzernen. Netzpiloten, February 2015.

* Big Data’s Radical Potential
by Pankaj Mehta. Today, big data is used to boost profits and spy on civilians. But what if it was harnessed for the social good? JacobinMag, March 2015.

* Data as commodities (German-language)
by Herbert Zech. Daten als Wirtschaftsgut – Überlegungen zu einem „Recht des Datenerzeugers“. Gibt es für Anwenderdaten ein eigenes Vermögensrecht bzw. ein übertragbares Ausschließlichkeitsrecht? CR, No. 3/2015.

* Revised data retention sought by Merkel cabinet
Germany’s cabinet has adopted revised legislation on data retention for police probes into severe crimes. Telecoms would log phone and Internet usage for 10 weeks. Privacy advocates and publishers are strongly opposed. May 2015.

* Controlling the future. Edward Snowden and the new era on Earth
by Elmar Altvater. Original in German. Translation by Ben Tendler. First published in Blätter für deutsche und internationale Politik 4/2014

* Engineering the public: Big data, surveillance and computational politics
by Zeynep Tufekci. First Monday, Volume 19, Number 7 – 7 July 2014.

* Politics of Data – Between Post-Democracy and Commons
by Felix Stalder. Lecture at the conference Data Traces, July 3-4, 2015. This is an unedited, live-written English translation created by Felix Gerloff, an impressive on-the-spot summary. July 2015.

Politics

* International Surveillance: A New French Bill to Collect Data Worldwide!
After the French Constitutional Council censored measures on international surveillance in the Surveillance Law, the government fired back with a new bill. La Quadrature du Net strongly rejects its unacceptable clauses.

* EU-USA Umbrella Agreement on Data Protection
The agreement puts in place a comprehensive high-level data protection framework for EU-US law enforcement cooperation. The agreement covers all personal data (for example names, addresses, criminal records) exchanged between the EU and the U.S.

* Heiko Maas, give us data sovereignty! #freeyourdata (German-language)
This petition is addressed to Heiko Maas, the German Federal Minister for Justice and Consumer Protection. Quote: “We are supplying the oil of the 21st century for free with no charge for delivery. But what about us?”

* Startups and data protection (German-language)
Hearing in the “Digital Agenda” Committee of the German Parliament. The issue of “open data” was strongly emphasized as an unused resource for innovation. Experts Stephan Noller and Hermann Weiß spoke of “commons data”.

* Data Protection Newsletter (German-language)
An annual German report about data protection. It shows what has been done with citizens’ data. Transparency without consequences?

* Position paper on commons (German-language)
In 2013 the German Bündnis 90/Grüne (Green Party) parliamentary group adopted a position paper on commons. The paper touched on many areas (including the internet and data) and discussed the social dimension of the commons.

Tools

* Environmental Justice Atlas
Impressive use of lots of data, mapping and open-source infrastructure to map out the connections between global environmental justice struggles.

* Inside Airbnb
A project showing how big data could be used by social movements. It uses the publicly accessible data from Airbnb to map and chart its impact on property prices.

* Mailpile
Mailpile is an e-mail client, a search engine and a personal webmail server. A project to rescue our personal lives from the proprietary cloud.

* New Cloud Atlas
The New Cloud Atlas is a global effort to map each data place that makes up the cloud in an open and accountable way.

* You Broke The Internet
Theory and Practice of a completely encrypted and obfuscated new Internet stack, enabling us to unfold a carefree digital living.

* User Data Manifesto 2.0
Defining basic rights for people to control their own data in the internet age: control over user data access, and knowledge of how the data is stored.

Institutions

* Council for Big Data, Ethics, and Society
The Council brings together researchers from diverse disciplines to provide critical social and cultural perspectives on big data initiatives.

* Max Planck Institute for Research on Collective Goods
Research focused on antitrust, regulation and financial stability.

* Open Media Canada
A community-based organization that safeguards the possibilities of the open Internet.

* P2P Foundation
Studies the impact of peer to peer technology and thought on society and aims to be a pluralist network.

* Research Group: Ethics of Big Data
The aim of this interdisciplinary group is to develop concrete resources for scholars conducting big data research.

CREDITS

“Big Data in Our Hands. Re-Claiming the Oil of the 21st Century” is a long-term project by Berliner Gazette in collaboration with civil society actors.

The project started in autumn 2012 at the Digital Backyards conference in Berlin and is scheduled to continue until 2022. Focusing on the commoning of big data, the project most recently culminated in a Berliner Gazette workshop at the UN|COMMONS conference, which took place October 22-24, 2015 in Berlin. This document is the preliminary result of the project.

The chapters „Big Data Commons“ and „Commons Data Centers“ were collaboratively produced before UN|COMMONS in order to prepare and kick off the creative processes at the conference. The chapters „Governance of the Data Commons“, „New Data Infrastructures“ and „Big Data Enlightenment“ are the immediate results of the UN|COMMONS conference.

The Berliner Gazette intends to continue and expand the project in dialogue with a great diversity of civil society actors. Hence the sharing of this document is very welcome, as are suggestions and questions on the material at hand as well as on the potential next steps. Please contact us at info(at)berlinergazette.de

People who have participated in this Berliner Gazette initiative so far include Bangi Abdul (tokyo-ritual.jp), Avantika Banerjee (wiredandnetworked.com), Zeljko Blace (Multimedia Institute), Sean Bonner (Safecast.org), Sophie Bloemen (commonsnetwork.eu), Benjamin Cadon (labomedia.org), Martin A. Ciesielski (Medienmosaik), Benjamin Diedrichsen (OPENMEDiAID), Christian Franz (cpc-analytics.com), Max Haiven (Nova Scotia College of Art and Design), Ted Han (Documentcloud.org), Harlo Holmes (New York Times), Hiroyuki Ito (Crypton Future Media), Joi Ito (MIT Media Lab), Ela Kagel (Supermarkt), Anna Magdalena Kedzierska (code4sa.org), Florian Kosak (berlinergazette.de), Tomislav Medak (mi2.hr), Annette Mühlberg (ver.di), Kazushi Mukaiyama (Future University Hakodate), Taketo Oguchi (shift.jp.org), Junichi Oguro (43d), Chris Piallat (berlinergazette.de), Nina Pohler (Hafen City Universität Hamburg), Alison Powell (London School of Economics), Michael Prinzinger (berlinergazette.de), Annika Richterich (Maastricht University), Jaron Rowan (Xnet), Andreas Schneider (Institute for Information Design Japan), Christopher Senf (berlinergazette.de), Lukas Stolz (European Alternatives), Mitsuhiro Takemura (Avec Lab), Keiko Tanaka (Kyoto College of Graduate Studies for Informatics), Edward Viesel (berlinergazette.de) and André Wilkens (Analog ist das neue Bio).

Concept, project coordination and editing: Magdalena Taube and Krystian Woznicki (berlinergazette.de)

The Berliner Gazette is a nonprofit and nonpartisan team of journalists, researchers, artists and coders who analyze and test emerging cultural as well as political practices. For more than 15 years we have been publishing berlinergazette.de under a Creative Commons license – with more than 900 contributors from all over the world – and also organizing annual conferences and editing books. Visit Berliner Gazette.

The chapter images stem from the documentation of Berliner Gazette conferences including UN|COMMONS at Volksbühne in Berlin, SLOW POLITICS at Supermarkt in Berlin, SLOW POLITICS at Porto in Sapporo. They were taken by Norman Posselt, Andi Weiland and Krystian Woznicki. All contents (text, images, etc.) are licensed under Creative Commons CC BY NC SA.